NOTES:
Currently, we do not have a proper package to deploy the cluster operator and its dependent operators in a simple and unified way, so the guideline shown here is a temporary solution. We're working on providing a formal deployment solution later.
- A VM with linux OS (MEM: 4G+, DISK: 50GB+)
- Docker installed (Version: v19.03.12+)
- kubectl installed (Version: v1.18+)
- kustomize installed (Version: v3.1.0+)
- kind installed (Version: v0.8.1+)
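You can verify the installed tool versions with:
docker --version
kubectl version --client
kustomize version
kind version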
Use the following configuration to create a kind cluster with multiple worker nodes:
# a cluster with 1 control-plane node and 3 workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker
- role: worker
Execute command:
kind create cluster --name myk8s --config kind.yaml
Check kind cluster:
kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:43415
KubeDNS is running at https://127.0.0.1:43415/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
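You can also confirm that all nodes have joined:
# expect one control-plane node and three worker nodes in Ready state
kubectl get nodes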
Deploy nginx ingress controller with the command shown below:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
Check if it is ready:
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=90s
As an optional step, you can deploy the sample apps and verify the ingress routes.
Deploy sample apps:
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml
Verify that the ingress works:
# should output "foo"
curl localhost/foo
# should output "bar"
curl localhost/bar
Follow the guide shown here to deploy cert-manager into the kind cluster.
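For a quick setup, the usual pattern is to apply the cert-manager release manifest directly. The version below (v1.1.0) is only an example; check the cert-manager guide for the recommended release and make sure it still serves the cert-manager.io/v1alpha2 API used later in this guide:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.1.0/cert-manager.yaml
# wait until the cert-manager pods are ready
kubectl wait --namespace cert-manager --for=condition=ready pod --all --timeout=120s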
Follow the installation guide shown here to install the PostgreSQL operator. With kubectl 1.14 or newer, it can be installed as easily as:
kubectl apply -k github.com/zalando/postgres-operator/manifests
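Then check that the operator came up; the deployment name postgres-operator below is assumed from the default Zalando manifests, and the resources land in the current namespace unless you customized them:
kubectl get deployment postgres-operator
kubectl get pods | grep postgres-operator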
Follow the deployment guide shown here to deploy Redis operator to the kind cluster.
A simple way is:
kubectl create -f https://raw.githubusercontent.com/spotahome/redis-operator/master/example/operator/all-redis-operator-resources.yaml
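Similarly, verify that the Redis operator pod is running. The example manifest creates the resources in the current namespace; adjust the filter below if your deployment names differ:
kubectl get pods | grep -i redis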
NOTES:
If you encounter RBAC permission issues, a simple workaround is to grant elevated privileges to the default service account in the deployment namespace. Try:
# <NAMESPACE> is a placeholder for the namespace the operator is deployed in
kubectl create clusterrolebinding rds-admin-binding --clusterrole=cluster-admin --serviceaccount=<NAMESPACE>:default
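For illustration only, if the Redis operator were deployed into the default namespace, the binding would look like this:
# example only: grants cluster-admin to the default service account of the "default" namespace
kubectl create clusterrolebinding rds-admin-binding --clusterrole=cluster-admin --serviceaccount=default:default
# confirm the binding exists
kubectl get clusterrolebinding rds-admin-binding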
Deploy the operator:
kubectl apply -k github.com/minio/operator
or use the command shown below after applying the overlay patching:
kustomize build | kubectl apply -f -
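Afterwards, check that the MinIO operator pod is running. The minio-operator namespace below is assumed to be the default from the upstream kustomization; adjust it if your overlay uses a different one:
kubectl get pods -n minio-operator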
Deploy the core Harbor operator from source code.
Clone the repo:
git clone https://github.com/goharbor/harbor-operator.git
Build the controller image:
cd harbor-operator
# the controller image name defaults to IMG ?= goharbor/harbor-operator:dev in the Makefile
make docker-build
Load the image into kind cluster nodes:
# myk8s is the cluster name
kind load docker-image goharbor/harbor-operator:dev --name myk8s
Deploy the operator:
make deploy
Deploy the cluster operator from source code.
Clone the repo:
git clone https://github.com/goharbor/harbor-cluster-operator.git
Build the controller image:
cd harbor-cluster-operator
export IMG=goharbor/harbor-cluster-operator:dev
make docker-build
Load the image into kind cluster nodes:
# myk8s is the cluster name
kind load docker-image goharbor/harbor-cluster-operator:dev --name myk8s
Deploy the operator:
make deploy
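At this point both controllers should be running. A quick, namespace-agnostic check (the exact namespaces depend on each operator's kustomize configuration):
kubectl get pods --all-namespaces | grep -E 'harbor-operator|harbor-cluster-operator'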
Create a sample namespace:
kubectl create ns sample
Create an admin secret with the manifest like:
cat <<EOF | kubectl apply -f -
# A secret holding the Harbor admin password.
# The password is base64 encoded ("Harbor12345" here).
apiVersion: v1
kind: Secret
metadata:
  name: admin-secret
  namespace: sample
data:
  password: SGFyYm9yMTIzNDU=
type: Opaque
EOF
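To use a different password, encode it with base64 first; you can also decode the stored value to double-check it:
# encode a password of your own (MyOwnPassword is just a placeholder)
echo -n 'MyOwnPassword' | base64
# decode what is stored in the secret (prints Harbor12345 for the sample above)
kubectl get secret admin-secret -n sample -o jsonpath='{.data.password}' | base64 -d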
Create a self-signed issuer:
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: sample
spec:
  selfSigned: {}
EOF
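Once cert-manager has processed it, the issuer should report Ready:
# the READY column should show True
kubectl get issuer selfsigned-issuer -n sample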
Here is a sample manifest for deploying a Harbor cluster with all in-cluster services (using sample.goharbor.io as the public URL and notary.goharbor.io as the Notary public URL):
cat <<EOF | kubectl apply -f -
apiVersion: goharbor.io/v1alpha1
kind: HarborCluster
metadata:
  name: sz-harbor-cluster
  namespace: sample
spec:
  redis:
    kind: "inCluster"
    spec:
      server:
        replicas: 1
        resources:
          requests:
            cpu: "1"
            memory: "2Gi"
        storage: "10Gi"
      sentinel:
        replicas: 1
      schema: "redis"
  adminPasswordSecret: "admin-secret"
  certificateIssuerRef:
    name: selfsigned-issuer
  tlsSecret: public-certificate
  database:
    kind: "inCluster"
    spec:
      replicas: 2
      resources:
        requests:
          cpu: "1"
          memory: "2Gi"
        limits:
          cpu: "1"
          memory: "2Gi"
  publicURL: "https://sample.goharbor.io"
  replicas: 2
  notary:
    publicUrl: "https://notary.goharbor.io"
  disableRedirect: true
  jobService:
    workerCount: 10
    replicas: 2
  storage:
    kind: "inCluster"
    options:
      provider: minIO
    spec:
      replicas: 4
      version: RELEASE.2020-01-03T19-12-21Z
      volumeClaimTemplate:
        spec:
          storageClassName: standard
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
      resources:
        requests:
          memory: 1Gi
          cpu: 500m
        limits:
          memory: 1Gi
          cpu: 1000m
  version: 1.10.0
EOF
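Provisioning everything (database, Redis, MinIO, and the Harbor components) takes a few minutes; you can watch the pods in the sample namespace while the cluster converges:
kubectl get pods -n sample -w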
After a while, the Harbor cluster (HarborCluster) should be ready:
kubectl get HarborCluster -n sample -o wide
You should see output like:
NAME VERSION PUBLIC URL SERVICE READY CACHE READY DATABASE READY STORAGE READY
sz-harbor-cluster 1.10.0 https://sample.goharbor.io Unknown True True True
As an easy and quick way, add host mappings to the /etc/hosts file of the host that's used to access the newly deployed Harbor:
<KIND_HOST_IP> sample.goharbor.io
<KIND_HOST_IP> notary.goharbor.io
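For example, on a Linux host (replace <KIND_HOST_IP> with the IP address of the VM running the kind cluster):
echo "<KIND_HOST_IP> sample.goharbor.io notary.goharbor.io" | sudo tee -a /etc/hosts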
Try the API server first:
curl -k https://sample.goharbor.io/api/systeminfo
There will be JSON output like:
{"with_notary":true,"with_admiral":false,"admiral_endpoint":"NA","auth_mode":"db_auth","registry_url":"sample.goharbor.io","external_url":"https://sample.goharbor.io","project_creation_restriction":"everyone","self_registration":false,"has_ca_root":false,"harbor_version":"v1.10.0-6b84b62f","registry_storage_provider_name":"memory","read_only":false,"with_chartmuseum":false,"notification_enable":true}
Try to push images:
docker login sample.goharbor.io -u admin -p <PASSWORD>
docker tag nginx:latest sample.goharbor.io/library/nginx:latest
docker push sample.goharbor.io/library/nginx:latest
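To confirm the push succeeded, remove the local copy and pull it back from Harbor:
docker rmi sample.goharbor.io/library/nginx:latest
docker pull sample.goharbor.io/library/nginx:latest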
Open a browser and navigate to https://sample.goharbor.io to access the Harbor web UI.