Commit 265d9d1: Staging successfully deployed
unkcpz committed Feb 14, 2024 (1 parent 01c1be7)
Showing 3 changed files with 82 additions and 31 deletions.
35 changes: 35 additions & 0 deletions .github/workflows/sync-staging.yml
@@ -0,0 +1,35 @@
---
name: sync-staging

on:
  push:
    branches:
      - main

jobs:
  # after main is updated (e.g. by a merge from staging), sync staging with main
  sync-staging:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          persist-credentials: false
          fetch-depth: 0
      - name: Configure git user
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
      - name: Update staging branch
        run: |
          git fetch origin
          git checkout staging
          git merge origin/main
      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: staging
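The merge step performed by this workflow can be exercised locally before relying on CI. The following sketch reproduces the same behaviour in a throwaway repository (a local `main` branch stands in for `origin/main`; requires git >= 2.28 for `--initial-branch`):

```bash
# Dry-run of the sync logic in a throwaway repository.
set -eu
tmp=$(mktemp -d)
cd "$tmp"
git init --quiet --initial-branch=main
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
echo "base" > file.txt
git add file.txt && git commit --quiet -m "initial commit"
git branch staging                  # staging starts at the same commit
echo "new work" > file.txt
git commit --quiet -am "update main"
git checkout --quiet staging
git merge --quiet main              # in CI this is: git merge origin/main
cat file.txt                        # staging now carries the change from main
```

If `staging` has diverged with conflicting commits, the merge fails and the workflow run fails with it, which is the signal to resolve the conflict manually.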
76 changes: 46 additions & 30 deletions README.md
@@ -25,16 +25,16 @@ ssh-keygen -f ssh-key-aiidalab-demo-server
## Create an auto-scaling Kubernetes cluster

```bash
-az group create --name aiidalab_demo_server_marvel --location=switzerlandnorth --output table
+az group create --name aiidalab-demo-server-rg --location=switzerlandnorth --output table
```

-- `aiidalab_demo_server_marvel` is the name of the resource group.
+- `aiidalab-demo-server-rg` is the name of the resource group.

Create a virtual network (VNet) so that the pods can communicate with each other and with the internet.

```bash
az network vnet create \
---resource-group aiidalab_demo_server_marvel \
+--resource-group aiidalab-demo-server-rg \
--name aiidalab-vnet \
--address-prefixes 10.0.0.0/8 \
--subnet-name aiidalab-subnet \
@@ -45,12 +45,12 @@ We will now retrieve the resource IDs of the VNet and subnet we just created

```bash
VNET_ID=$(az network vnet show \
---resource-group aiidalab_demo_server_marvel \
+--resource-group aiidalab-demo-server-rg \
--name aiidalab-vnet \
--query id \
--output tsv)
SUBNET_ID=$(az network vnet subnet show \
---resource-group aiidalab_demo_server_marvel \
+--resource-group aiidalab-demo-server-rg \
--vnet-name aiidalab-vnet \
--name aiidalab-subnet \
--query id \
@@ -68,7 +68,7 @@ SP_PASSWD=$(az ad sp create-for-rbac \
--output tsv)
SP_ID=$(az ad app list \
--filter "displayname eq 'aiidalab-sp'" \
---query [0].appId \
+--query "[0].appId" \
--output tsv)
```

@@ -77,7 +77,7 @@ Time to create the Kubernetes cluster, and enable the auto-scaler at the same time.
```bash
az aks create \
--name demo-server \
---resource-group aiidalab_demo_server_marvel \
+--resource-group aiidalab-demo-server-rg \
--ssh-key-value ssh-key-aiidalab-demo-server.pub \
--node-count 3 \
--node-vm-size Standard_D2s_v3 \
@@ -95,21 +95,48 @@ az aks create \
--output table
```

```bash
CLUSTER_ID=$(az aks show \
--resource-group aiidalab-demo-server-rg \
--name demo-server \
--query id \
--output tsv)
```

Update the service principal so that it also has access to the cluster.

```bash
SP_PASSWD=$(az ad sp create-for-rbac \
--name aiidalab-sp \
--role Contributor \
--scopes $CLUSTER_ID $VNET_ID \
--query password \
--output tsv)
```

```bash
az aks update-credentials \
--resource-group aiidalab-demo-server-rg \
--name demo-server \
--reset-service-principal \
--service-principal $SP_ID \
--client-secret $SP_PASSWD
```

The auto-scaler will scale the number of nodes in the cluster between 3 and 6, based on the CPU and memory usage of the pods.
It can be updated later with the following command:

```bash
az aks update \
--name demo-server \
---resource-group aiidalab_demo_server_marvel \
+--resource-group aiidalab-demo-server-rg \
--update-cluster-autoscaler \
--min-count <DESIRED-MINIMUM-COUNT> \
--max-count <DESIRED-MAXIMUM-COUNT> \
--output table
```

### Customizing the auto-scaler

The auto-scaler can be customized to scale based on different metrics, such as CPU or memory usage.
@@ -124,16 +151,6 @@ These are two rules applied to the VMSS:
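The two rules themselves are collapsed in this diff view. As a general illustration only (the resource and profile names below are placeholders, not taken from this setup), autoscale rules of this shape are attached to a VMSS with `az monitor autoscale`:

```bash
# Illustrative sketch: attach an autoscale profile to the node VMSS
az monitor autoscale create \
--resource-group <node-resource-group> \
--resource <vmss-name> \
--resource-type Microsoft.Compute/virtualMachineScaleSets \
--name <autoscale-profile-name> \
--min-count 3 --max-count 6 --count 3

# Scale out by one instance when average CPU exceeds 70% over 5 minutes
az monitor autoscale rule create \
--resource-group <node-resource-group> \
--autoscale-name <autoscale-profile-name> \
--condition "Percentage CPU > 70 avg 5m" \
--scale out 1
```

Note that the AKS cluster autoscaler configured with `--enable-cluster-autoscaler` scales on pod scheduling pressure; VMSS metric rules like the above are a separate mechanism.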

The setup above is in general done once.
But make sure the [Pre-requisites](#pre-requisites) are completed before proceeding, so that the `az` command is available.
-If the cluster is already created, the ssh-key can be update to the cluster with the following command:
-
-```bash
-az aks update \
---name demo-server \
---resource-group aiidalab_demo_server_marvel \
---ssh-key-value <your-pub-ssh-key>.pub
-```
-
-The command will update the key on all node pools.

The following steps are for administrators/maintainers of the cluster to configure on their local machines.

@@ -149,7 +166,7 @@ Get credentials from Azure for kubectl to work:
```bash
az aks get-credentials \
--name demo-server \
---resource-group aiidalab_demo_server_marvel \
+--resource-group aiidalab-demo-server-rg \
--output table
```

@@ -191,8 +208,8 @@ jinja2 --format=env basehub/values.yaml.j2 > basehub/values.yaml
The following environment variables must be set:

* `K8S_NAMESPACE`: The namespace where the JupyterHub will be installed, e.g. `production`, `staging`.
-* `GITHUB_CLIENT_ID`: The client ID of the GitHub app.
-* `GITHUB_CLIENT_SECRET`: The client secret of the GitHub app.
+* `OAUTH_CLIENT_ID`: The client ID of the GitHub app.
+* `OAUTH_CLIENT_SECRET`: The client secret of the GitHub app.
* `OAUTH_CALLBACK_URL`: The callback URL of the GitHub app.

We use the GitHub OAuthenticator, so users can log in with their GitHub accounts.
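A typical invocation looks like the following; the values shown are placeholders for illustration (the client ID and secret come from the GitHub app's settings page), not real credentials:

```bash
export K8S_NAMESPACE=staging
export OAUTH_CLIENT_ID=<github-app-client-id>
export OAUTH_CLIENT_SECRET=<github-app-client-secret>
export OAUTH_CALLBACK_URL=https://<your-domain>/hub/oauth_callback
jinja2 --format=env basehub/values.yaml.j2 > basehub/values.yaml
```

Keep the generated `basehub/values.yaml` out of version control, since it contains the rendered secret.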
@@ -206,6 +223,12 @@ To deploy the JupyterHub, run the following command:

If the namespace does not exist, it will be created.

The IP address of the `proxy-public` service can be retrieved with the following command:

```bash
kubectl get svc proxy-public -n <namespace>
```
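For scripting, the external IP can be extracted directly once the LoadBalancer has provisioned an address (`<namespace>` is a placeholder; the jsonpath expression is standard kubectl):

```bash
kubectl get svc proxy-public -n <namespace> \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

The command prints an empty string while the address is still being provisioned, so retry after a minute if the output is empty.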


## For maintainers

@@ -221,10 +244,3 @@ On the GitHub repository, the secrets are set for `production` and `staging` environments.
The `aiidalab-sp` was only assigned the Contributor role on the VNet; it has not yet been assigned to the resource group. This avoids giving the service principal too much access to the resources.

To fetch the kube credentials, the `aiidalab-sp` must also be granted a role on the `demo-server` cluster.
-
-```bash
-az ad sp create-for-rbac \
---name aiidalab-sp \
---role Contributor \
---scopes /subscriptions/<subscription-id>/resourcegroups/aiidalab_demo_server_marvel/providers/Microsoft.ContainerService/managedClusters/demo-server $VNET_ID \
-```
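A narrower way to grant this access is a separate role assignment, reusing the `$SP_ID` and `$CLUSTER_ID` values captured earlier. This is a sketch, assuming the built-in AKS Cluster User role (which permits `az aks get-credentials`) fits your needs:

```bash
az role assignment create \
--assignee $SP_ID \
--role "Azure Kubernetes Service Cluster User Role" \
--scope $CLUSTER_ID
```

Scoping the assignment to the cluster resource keeps the service principal's permissions limited to what credential fetching requires.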
2 changes: 1 addition & 1 deletion basehub/values.yaml.j2
@@ -55,11 +55,11 @@ jupyterhub:
    default_url: /lab
    environment:
      JUPYTERHUB_SINGLEUSER_APP: jupyter_server.serverapp.ServerApp

  hub:
    db:
      pvc:
        storageClassName: default
        storage: 1Gi
    extraConfig:
      00-logo: |
        c.JupyterHub.logo_file = "/usr/local/share/jupyterhub/static/external/aiidalab-wide-logo.png"