
Run all controllers in a source cluster, nothing in target clusters #52

Open
hyorigo opened this issue Aug 7, 2020 · 10 comments
Labels: documentation, enhancement

Comments

@hyorigo

hyorigo commented Aug 7, 2020

Hi guys,

TL;DR

Three questions:

  1. Is it really okay to not install cert-manager and multicluster-scheduler on the target cluster, and install both only on the source cluster?
  2. Apart from kubectl get pods -n admiralty and kubectl get nodes -o wide, is there any way to ensure all the clusters are set up and connected correctly?
  3. The nginx demo didn't work: the pods stayed Pending, and I got no virtual pods or delegate pods. How can I debug or resolve this?

Here's the lengthy context 😢 :

I'm a newcomer to multicluster-scheduler, and I want to deploy Argo Workflows across clusters while changing the target cluster as little as possible.

So I started with the multicluster-scheduler installation guide and followed the instructions twice:

  1. used Helm to install cert-manager v0.12.0 and multicluster-scheduler 0.10.0-rc.1 only on the source cluster (as per the doc, the "If you can only access one of the two clusters" section says it's okay to not install on the target cluster);
  2. installed cert-manager v0.12.0 and multicluster-scheduler 0.10.0-rc.1 on both the source and target clusters (roughly the commands sketched below).
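
Roughly what I ran for attempt 2 on the source cluster, reconstructed from memory (the cert-manager chart/repo names are the standard Jetstack ones; the multicluster-scheduler command is the same one quoted in full further down):

❯ # cert-manager v0.12.0 (its CRDs applied first, per the cert-manager docs)
❯ helm repo add jetstack https://charts.jetstack.io
❯ helm install --name cert-manager jetstack/cert-manager \
    --kube-context "$CLUSTER1" \
    --namespace cert-manager \
    --version v0.12.0

❯ # multicluster-scheduler 0.10.0-rc.1 from its chart directory
❯ helm install --name multicluster-scheduler . \
    --kube-context "$CLUSTER1" \
    --namespace admiralty \
    --set clusterName=earth \
    --set targetSelf=true \
    --set targets[0].name=moon

(and the same against "$CLUSTER2" with the cluster names swapped)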

Both attempts looked great after the installation part:

  • kubectl get nodes -o wide lists the virtual nodes for both clusters;
  • kubectl get pods -n admiralty shows all 3 pods on the source cluster are running;

Here are the details after installing cert-manager and multicluster-scheduler on both clusters:

❯ export CLUSTER1=dev-earth
❯ export CLUSTER2=dev-moon

❯ kubectl --context "$CLUSTER1" get pods -n admiralty
NAME                                                          READY   STATUS    RESTARTS   AGE
multicluster-scheduler-candidate-scheduler-79f956995c-9kvbz   1/1     Running   0          125m
multicluster-scheduler-controller-manager-6b694fc496-vvhlw    1/1     Running   0          125m
multicluster-scheduler-proxy-scheduler-59bb6c778-45x54        1/1     Running   0          125m

❯ kubectl --context "$CLUSTER1" get nodes -o wide
NAME                               STATUS   ROLES     AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
admiralty-earth                    Ready    cluster   3h5m             <none>          <none>        <unknown>            <unknown>           <unknown>
admiralty-moon                     Ready    cluster   125m             <none>          <none>        <unknown>            <unknown>           <unknown>
host-10-198-21-72-10.198.21.72     Ready    <none>    24d    v1.14.1   10.198.21.72    <none>        Ubuntu 16.04.1 LTS   4.4.0-130-generic   docker://18.9.9
host-10-198-22-176-10.198.22.176   Ready    <none>    24d    v1.14.1   10.198.22.176   <none>        Ubuntu 16.04.1 LTS   4.4.0-130-generic   docker://18.9.9
host-10-198-23-129-10.198.23.129   Ready    master    24d    v1.14.1   10.198.23.129   <none>        Ubuntu 16.04.1 LTS   4.4.0-130-generic   docker://18.9.9

❯ kubectl --context "$CLUSTER2" get pods -n admiralty
NAME                                                          READY   STATUS    RESTARTS   AGE
multicluster-scheduler-candidate-scheduler-6cff566db6-77l9h   1/1     Running   0          59m
multicluster-scheduler-controller-manager-d857466cd-bvxht     1/1     Running   1          59m
multicluster-scheduler-proxy-scheduler-7d8bb6666d-568tc       1/1     Running   0          59m

❯ kubectl --context "$CLUSTER2" get nodes -o wide
NAME                 STATUS     ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
bj-idc-10-10-18-73   NotReady   <none>   15d   v1.15.9   10.10.18.73   <none>        Ubuntu 16.04.6 LTS   4.15.0-45-generic   docker://18.9.7
bj-idc-10-10-18-74   Ready      master   15d   v1.15.9   10.10.18.74   <none>        Ubuntu 16.04.1 LTS   4.4.0-184-generic   docker://19.3.8

But the nginx demo never works: I always got 10 Pending pods on the source cluster, no virtual pods, and no pods on the target cluster.

Here's the kubectl get pods output (I ran the demo three times; the Pending pods turned into everlasting Terminating after forced deletion with kubectl delete pods nginx-6b676d6776-XXX --grace-period=0 --force):

❯ kb get pods
NAME                                      READY   STATUS        RESTARTS   AGE
camera-pose-service-6c79665bbb-lvcsv      1/1     Running       0          3d22h
counter                                   1/1     Running       0          20h
importing-agent-754b64c55b-xhnsx          1/1     Running       0          17d
myapp                                     1/1     Running       0          3d22h
nginx-6b676d6776-25ztl                    0/1     Terminating   0          96m
nginx-6b676d6776-2ph7q                    0/1     Terminating   0          96m
nginx-6b676d6776-6b2qb                    0/1     Terminating   0          28m
nginx-6b676d6776-7bssr                    0/1     Pending       0          14m
nginx-6b676d6776-7jsgt                    0/1     Terminating   0          28m
nginx-6b676d6776-7z5c5                    0/1     Pending       0          14m
nginx-6b676d6776-9zrnm                    0/1     Pending       0          14m
nginx-6b676d6776-bw4ck                    0/1     Pending       0          14m
nginx-6b676d6776-cvdkx                    0/1     Terminating   0          28m
nginx-6b676d6776-ghkwk                    0/1     Terminating   0          28m
nginx-6b676d6776-hv9nz                    0/1     Pending       0          14m
nginx-6b676d6776-kk5ng                    0/1     Terminating   0          96m
nginx-6b676d6776-kwtx5                    0/1     Terminating   0          28m
nginx-6b676d6776-lbqjv                    0/1     Terminating   0          96m
nginx-6b676d6776-mc2nr                    0/1     Terminating   0          28m
nginx-6b676d6776-mnmlr                    0/1     Terminating   0          96m
nginx-6b676d6776-n7p47                    0/1     Pending       0          14m
nginx-6b676d6776-p4pkj                    0/1     Pending       0          14m
nginx-6b676d6776-pql85                    0/1     Terminating   0          96m
nginx-6b676d6776-q89px                    0/1     Terminating   0          96m
nginx-6b676d6776-r7frt                    0/1     Terminating   0          28m
nginx-6b676d6776-rfv9d                    0/1     Terminating   0          96m
nginx-6b676d6776-rwpn5                    0/1     Terminating   0          28m
nginx-6b676d6776-trt64                    0/1     Pending       0          14m
nginx-6b676d6776-txdvs                    0/1     Terminating   0          96m
nginx-6b676d6776-tzb9l                    0/1     Terminating   0          28m
nginx-6b676d6776-v9swj                    0/1     Pending       0          14m
nginx-6b676d6776-xbcx4                    0/1     Terminating   0          96m
nginx-6b676d6776-z9qlr                    0/1     Terminating   0          28m
nginx-6b676d6776-zzjmx                    0/1     Pending       0          14m
pose-wrapper-6446446694-hbvtj             1/1     Running       0          3d22h
sfd-init-minio-bucket-job-svfxq           0/1     Completed     0          23d
simple-feature-db-proxy-6c795967d-d5xkx   1/1     Running       0          3d6h
simple-feature-db-set-0-worker-0          1/1     Running       0          3d6h
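
(Side note: I assume the multicluster.admiralty.io finalizer is what keeps the force-deleted pods stuck in Terminating; if so, clearing the finalizers should get rid of them, e.g.:)

❯ kubectl patch pod nginx-6b676d6776-25ztl --type=merge -p '{"metadata":{"finalizers":null}}'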

kubectl describe pod for a pending pod:

❯ kubectl describe pod nginx-6b676d6776-7bssr
Name:           nginx-6b676d6776-7bssr
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=nginx
                multicluster.admiralty.io/has-finalizer=true
                pod-template-hash=6b676d6776
Annotations:    multicluster.admiralty.io/elect:
                multicluster.admiralty.io/sourcepod-manifest:
                  apiVersion: v1
                  kind: Pod
                  metadata:
                    annotations:
                      multicluster.admiralty.io/elect: ""
                    creationTimestamp: null
                    generateName: nginx-6b676d6776-
                    labels:
                      app: nginx
                      pod-template-hash: 6b676d6776
                    namespace: default
                    ownerReferences:
                    - apiVersion: apps/v1
                      blockOwnerDeletion: true
                      controller: true
                      kind: ReplicaSet
                      name: nginx-6b676d6776
                      uid: 83c74b66-d885-11ea-bff2-fa163e6c431e
                  spec:
                    containers:
                    - image: nginx
                      imagePullPolicy: Always
                      name: nginx
                      ports:
                      - containerPort: 80
                        protocol: TCP
                      resources:
                        requests:
                          cpu: 100m
                          memory: 32Mi
                      terminationMessagePath: /dev/termination-log
                      terminationMessagePolicy: File
                      volumeMounts:
                      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
                        name: default-token-m962m
                        readOnly: true
                    dnsPolicy: ClusterFirst
                    enableServiceLinks: true
                    priority: 0
                    restartPolicy: Always
                    schedulerName: default-scheduler
                    securityContext: {}
                    serviceAccount: default
                    serviceAccountName: default
                    terminationGracePeriodSeconds: 30
                    tolerations:
                    - effect: NoExecute
                      key: node.kubernetes.io/not-ready
                      operator: Exists
                      tolerationSeconds: 300
                    - effect: NoExecute
                      key: node.kubernetes.io/unreachable
                      operator: Exists
                      tolerationSeconds: 300
                    volumes:
                    - name: default-token-m962m
                      secret:
                        secretName: default-token-m962m
                  status: {}
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/nginx-6b676d6776
Containers:
  nginx:
    Image:      nginx
    Port:       80/TCP
    Host Port:  0/TCP
    Requests:
      cpu:        100m
      memory:     32Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-m962m (ro)
Volumes:
  default-token-m962m:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-m962m
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/network-unavailable
                 virtual-kubelet.io/provider=admiralty
Events:          <none>

kubectl describe pod for a terminating pod:

❯ kubectl describe pod nginx-6b676d6776-rwpn5
Name:                      nginx-6b676d6776-rwpn5
Namespace:                 default
Priority:                  0
Node:                      <none>
Labels:                    app=nginx
                           multicluster.admiralty.io/has-finalizer=true
                           pod-template-hash=6b676d6776
Annotations:               multicluster.admiralty.io/elect:
                           multicluster.admiralty.io/sourcepod-manifest:
                             apiVersion: v1
                             kind: Pod
                             metadata:
                               annotations:
                                 multicluster.admiralty.io/elect: ""
                               creationTimestamp: null
                               generateName: nginx-6b676d6776-
                               labels:
                                 app: nginx
                                 pod-template-hash: 6b676d6776
                               namespace: default
                               ownerReferences:
                               - apiVersion: apps/v1
                                 blockOwnerDeletion: true
                                 controller: true
                                 kind: ReplicaSet
                                 name: nginx-6b676d6776
                                 uid: 9766036b-d883-11ea-bff2-fa163e6c431e
                             spec:
                               containers:
                               - image: nginx
                                 imagePullPolicy: Always
                                 name: nginx
                                 ports:
                                 - containerPort: 80
                                   protocol: TCP
                                 resources:
                                   requests:
                                     cpu: 100m
                                     memory: 32Mi
                                 terminationMessagePath: /dev/termination-log
                                 terminationMessagePolicy: File
                                 volumeMounts:
                                 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
                                   name: default-token-m962m
                                   readOnly: true
                               dnsPolicy: ClusterFirst
                               enableServiceLinks: true
                               priority: 0
                               restartPolicy: Always
                               schedulerName: default-scheduler
                               securityContext: {}
                               serviceAccount: default
                               serviceAccountName: default
                               terminationGracePeriodSeconds: 30
                               tolerations:
                               - effect: NoExecute
                                 key: node.kubernetes.io/not-ready
                                 operator: Exists
                                 tolerationSeconds: 300
                               - effect: NoExecute
                                 key: node.kubernetes.io/unreachable
                                 operator: Exists
                                 tolerationSeconds: 300
                               volumes:
                               - name: default-token-m962m
                                 secret:
                                   secretName: default-token-m962m
                             status: {}
Status:                    Terminating (lasts 63m)
Termination Grace Period:  0s
IP:
IPs:                       <none>
Controlled By:             ReplicaSet/nginx-6b676d6776
Containers:
  nginx:
    Image:      nginx
    Port:       80/TCP
    Host Port:  0/TCP
    Requests:
      cpu:        100m
      memory:     32Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-m962m (ro)
Volumes:
  default-token-m962m:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-m962m
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/network-unavailable
                 virtual-kubelet.io/provider=admiralty
Events:          <none>

Btw, kubectl describe node says both virtual nodes have no resources; is this expected?

❯ kubectl describe node admiralty-earth
Name:               admiralty-earth
Roles:              cluster
Labels:             alpha.service-controller.kubernetes.io/exclude-balancer=true
                    kubernetes.io/role=cluster
                    type=virtual-kubelet
                    virtual-kubelet.io/provider=admiralty
Annotations:        node.alpha.kubernetes.io/ttl: 0
CreationTimestamp:  Fri, 07 Aug 2020 13:44:47 +0800
Taints:             virtual-kubelet.io/provider=admiralty:NoSchedule
Unschedulable:      false
Conditions:
  Type    Status  LastHeartbeatTime                 LastTransitionTime                Reason  Message
  ----    ------  -----------------                 ------------------                ------  -------
  Ready   True    Fri, 07 Aug 2020 17:27:27 +0800   Fri, 07 Aug 2020 14:44:08 +0800
Addresses:
System Info:
 Machine ID:
 System UUID:
 Boot ID:
 Kernel Version:
 OS Image:
 Operating System:
 Architecture:
 Container Runtime Version:
 Kubelet Version:
 Kube-Proxy Version:
PodCIDR:                     10.244.3.0/24
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----    ------------  ----------  ---------------  -------------  ---
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
Events:              <none>

❯ kubectl describe node admiralty-moon
Name:               admiralty-moon
Roles:              cluster
Labels:             alpha.service-controller.kubernetes.io/exclude-balancer=true
                    kubernetes.io/role=cluster
                    type=virtual-kubelet
                    virtual-kubelet.io/provider=admiralty
Annotations:        node.alpha.kubernetes.io/ttl: 0
CreationTimestamp:  Fri, 07 Aug 2020 14:44:08 +0800
Taints:             virtual-kubelet.io/provider=admiralty:NoSchedule
Unschedulable:      false
Conditions:
  Type    Status  LastHeartbeatTime                 LastTransitionTime                Reason  Message
  ----    ------  -----------------                 ------------------                ------  -------
  Ready   True    Fri, 07 Aug 2020 17:27:36 +0800   Fri, 07 Aug 2020 14:44:08 +0800
Addresses:
System Info:
 Machine ID:
 System UUID:
 Boot ID:
 Kernel Version:
 OS Image:
 Operating System:
 Architecture:
 Container Runtime Version:
 Kubelet Version:
 Kube-Proxy Version:
PodCIDR:                     10.244.4.0/24
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----    ------------  ----------  ---------------  -------------  ---
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
Events:              <none>

Many thanks!

@adrienjt
Contributor

adrienjt commented Aug 7, 2020

Hi @hyorigo, are you the same person as "repositorywhoo" in our community chat?

To answer your TL;DR questions:

  1. No, it's not okay. Someone does have to install cert-manager and multicluster-scheduler in the target clusters and give you the necessary service account tokens. We'll rephrase the doc to avoid the confusion.
  2. Those are the first checks. Multicluster-scheduler is decentralized, so no cluster can see whether all clusters are configured properly. That being said, we do need a more user-friendly way to determine for each cluster if it is configured properly, and if not, what's not working.
  3. At the moment, your best bet is to look at the logs of multicluster-scheduler's pods in the admiralty namespace (example commands below).
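
For example (deployment names inferred from the pod names you posted; adjust if yours differ):

kubectl -n admiralty logs deploy/multicluster-scheduler-controller-manager
kubectl -n admiralty logs deploy/multicluster-scheduler-proxy-scheduler
kubectl -n admiralty logs deploy/multicluster-scheduler-candidate-scheduler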

Thank you very much for providing so many details. However, given (1), I haven't taken the time to analyze them yet. Please let me know whether things work after you or someone else has installed cert-manager and multicluster-scheduler in the target cluster. It looks like you did install them in the target cluster, but the virtual nodes indicate that no resources are available, and that's not right. Could you please check the logs of the multicluster-scheduler controller manager?

adrienjt added the documentation and question labels Aug 7, 2020
@hyorigo
Author

hyorigo commented Aug 7, 2020

Thanks for your quick response! Yes, it's me in the chat.

Apart from multicluster-scheduler, can you please let me know why cert-manager is a must as well?

About the pod logs you asked for, I've uploaded them to a GitHub Gist; please take a look:

$ kb get pods -nadmiralty
NAME                                                          READY   STATUS    RESTARTS   AGE
multicluster-scheduler-candidate-scheduler-79f956995c-9kvbz   1/1     Running   0          11h
multicluster-scheduler-controller-manager-6b694fc496-vvhlw    1/1     Running   0          11h
multicluster-scheduler-proxy-scheduler-59bb6c778-45x54        1/1     Running   0          11h

$ kb logs -nadmiralty multicluster-scheduler-candidate-scheduler-79f956995c-9kvbz > candidate.log

$ kb logs -nadmiralty multicluster-scheduler-controller-manager-6b694fc496-vvhlw > controller.log

$ kb logs -nadmiralty multicluster-scheduler-proxy-scheduler-59bb6c778-45x54 > proxy.log

Here are the logs:

@adrienjt
Contributor

adrienjt commented Aug 8, 2020

From the logs, I see two issues:

  1. The error message in candidate.log and proxy.log about csinodes indicates that you're using a version of Kubernetes pre-1.17. multicluster-scheduler requires Kubernetes v1.17+, unless you recompile on a fork of Kubernetes as described here: candidate-scheduler and proxy-scheduler: Forbidden: cannot list resource "csinodes" #19 (comment)
  2. It seems that the clustersummaries and podchaperons CRDs aren't installed in the target cluster. Did Helm complain during the installation? Try to install them now: kubectl apply -f https://github.com/admiraltyio/multicluster-scheduler/releases/download/v0.10.0-rc.1/admiralty.crds.yaml (a quick way to check is sketched below).
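
A quick way to check whether they're there (grepping on the CRD names, which should be in the multicluster.admiralty.io API group):

kubectl --context "$CLUSTER2" get crds | grep -e clustersummaries -e podchaperons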

@hyorigo
Author

hyorigo commented Aug 8, 2020

Thanks for your time! Let me see if I can resolve the Kubernetes version issue in the company clusters.

Helm said nothing about this, even in debug mode (sorry, I can't find the old log now):

helm install --name multicluster-scheduler . \
  --kube-context "$CLUSTER1" \
  --namespace admiralty \
  --wait \
  --set clusterName=earth \
  --set targetSelf=true \
  --set targets[0].name=moon \
  --debug

@adrienjt
Contributor

adrienjt commented Aug 8, 2020

I forgot to answer this question:

Apart from multicluster-scheduler, can you please let me know why cert-manager is a must as well?

cert-manager is required to provision certificates for multicluster-scheduler's mutating pod admission webhook. Technically, the webhook is only used in source clusters, but it does run in target clusters too. We could make cert-manager an optional dependency, but we haven't found a compelling reason to do so yet.
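
If you want to sanity-check the webhook plumbing, something like this should show the cert-manager Certificate/secret and the webhook configuration (exact resource names depend on the chart, so grep loosely):

kubectl --context "$CLUSTER1" -n admiralty get certificates,secrets
kubectl --context "$CLUSTER1" get mutatingwebhookconfigurations | grep -i -e admiralty -e multicluster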

@hyorigo
Author

hyorigo commented Aug 9, 2020

@adrienjt OK, thanks! So if I resolve the Kubernetes version issue, maybe I can give it a shot and not install anything else on new target clusters for the deployment? Or should I just forget about it?

@adrienjt
Contributor

adrienjt commented Aug 9, 2020

I'm not sure I understand your requirement. You don't want anything installed in the target clusters (why?), but you do have access to them? You could potentially run instances of multicluster-scheduler "out-of-cluster", i.e., anywhere, including the source cluster. There must be one instance per cluster, but it doesn't matter where the containers run. However, there are no easy-to-follow instructions for that kind of setup.

@hyorigo
Author

hyorigo commented Aug 10, 2020

@adrienjt Let me explain the scenario. We're a platform team: the source cluster is our testbed/internal deployment, and the target clusters are other teams' development/staging/production clusters. We are helping them run smoke/integration/performance tests in their clusters from our cluster with Argo Workflows, so:

  1. their clusters may not be running Kubernetes 1.17+;
  2. our permissions may be limited to a specific namespace, and we may not be allowed to create another one automatically;
  3. we may not be allowed to install anything extra that isn't used in production.

adrienjt changed the title from "All nginx pods is pending and got no virtual pods after followed the instructions" to "Run all controllers in a source cluster, nothing in target clusters" Aug 10, 2020
adrienjt added the enhancement label and removed the question label Aug 10, 2020
@adrienjt
Contributor

Your setup requires a few code changes:

  • optionally disabling the pod admission webhook
  • depending on permissions:
    • optionally restricting the pod chaperon controller to a single namespace
    • optionally disabling the cluster summary controllers
    • optionally disabling the candidate scheduler (and bypassing the corresponding filter in the proxy scheduler), assuming that you're okay with scheduling based on local virtual node names or labels, rather than remote node labels.

It also requires changes to the Helm chart, or at least a custom manifest, where kubeconfigs are used instead of service accounts.
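
For illustration only, the chart interface could end up looking something like this; the kubeconfigSecret flag is hypothetical, not part of the current chart:

helm install --name multicluster-scheduler . \
  --namespace admiralty \
  --set clusterName=earth \
  --set targets[0].name=moon \
  --set targets[0].kubeconfigSecret=moon-kubeconfig  # hypothetical: point the source-side controllers at the target via a kubeconfig secret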

@adrienjt
Contributor

adrienjt commented Feb 8, 2022

cf. #136 for a similar yet different use case: running the controllers for a source cluster outside of that source cluster
