Run all controllers in a source cluster, nothing in target clusters #52
Hi @hyorigo, are you the same person as "repositorywhoo" in our community chat? To answer your TL;DR questions:
Thank you very much for providing so many details.
Thanks for your quick response! Yes, it's me in the chat. Apart from multicluster-scheduler, can you please let me know why cert-manager is a must as well?

About the pod logs you asked for, I've uploaded them to a GitHub Gist, please take a look:

$ kb get pods -nadmiralty
NAME                                                          READY   STATUS    RESTARTS   AGE
multicluster-scheduler-candidate-scheduler-79f956995c-9kvbz   1/1     Running   0          11h
multicluster-scheduler-controller-manager-6b694fc496-vvhlw    1/1     Running   0          11h
multicluster-scheduler-proxy-scheduler-59bb6c778-45x54        1/1     Running   0          11h
$ kb logs -nadmiralty multicluster-scheduler-candidate-scheduler-79f956995c-9kvbz > candidate.log
$ kb logs -nadmiralty multicluster-scheduler-controller-manager-6b694fc496-vvhlw > controller.log
$ kb logs -nadmiralty multicluster-scheduler-proxy-scheduler-59bb6c778-45x54 > proxy.log

Here's the log:
From the logs, I see two issues:
Thanks for your time! Let me see if I can resolve the Kubernetes version issue in the company clusters. Helm said nothing about this even with debug mode (sorry, I can't find the old log now):

helm install --name multicluster-scheduler . \
  --kube-context "$CLUSTER1" \
  --namespace admiralty \
  --wait \
  --set clusterName=earth \
  --set targetSelf=true \
  --set targets[0].name=moon \
  --debug
I forgot to answer this question:
cert-manager is required to provision certificates for multicluster-scheduler's mutating pod admission webhook. Technically, the webhook is only used in source clusters, but it does run in target clusters. We could make cert-manager an optional dependency, but haven't found a compelling reason to do so yet.
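As a quick sanity check (just a sketch, assuming cert-manager and the chart were installed into the admiralty namespace; the exact resource names depend on the chart version), you can confirm that the webhook's certificate was issued and that the webhook is registered:

$ kubectl get certificates -n admiralty
$ kubectl get mutatingwebhookconfigurations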
@adrienjt OK, thanks! So if I resolve the Kubernetes version issue, maybe I can give it a shot and not install anything else on new target clusters for the deployment? Or should I just forget about it?
I'm not sure I understand your requirement. You don't want anything installed in the target clusters (why?), but you do have access to them? You could potentially run instances of multicluster-scheduler "out-of-cluster", i.e., anywhere, including the source cluster. There must be one instance per cluster; it doesn't matter where the containers run. However, there are no easy-to-follow instructions for that kind of setup.
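To illustrate the idea (just a sketch; "$SOURCE_CTX" and "$TARGET_CTX" are hypothetical kubeconfig context names, not chart values): any machine or pod that holds kubeconfig contexts for the clusters has the API access the per-cluster instances need, regardless of where it physically runs:

$ kubectl --context "$SOURCE_CTX" get nodes
$ kubectl --context "$TARGET_CTX" get nodes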
@adrienjt Let me explain the scenario. As a platform team, the source cluster is our testbed/internal deployment, and the target clusters are the development/staging/production clusters of other teams. We are helping them run smoke/integration/performance tests in their clusters from our cluster with Argo Workflows, so ---
Your setup requires a few code changes:
It also requires changes to the Helm chart, or at least a custom manifest, where kubeconfigs are used instead of service accounts.
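As a rough illustration of what such a manifest would rely on (the secret name and kubeconfig path below are hypothetical, not part of the current chart), the credentials for a target cluster could be packaged as a secret in the cluster where the controllers actually run, then mounted into the controller pods instead of an in-cluster service account:

$ kubectl create secret generic moon-kubeconfig \
    --from-file=kubeconfig="$HOME/.kube/moon-config" \
    --namespace admiralty \
    --context "$CLUSTER1"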
cf. #136 for a similar yet different use case: running the controllers for a source cluster outside of that source cluster.
Hi guys,
TL;DR
Three questions:
Apart from kubectl get pods -n admiralty and kubectl get nodes -o wide, is there any way to ensure all the clusters are set up and connected correctly?

Here's the lengthy context 😢:
I'm a newcomer to multicluster-scheduler, and I want to deploy Argo Workflows across clusters while avoiding changing the target clusters too much.
So I started with the multicluster-scheduler installation guide first and followed the instructions twice: once installing on both clusters, and once installing only on the source cluster (the "If you can only access one of the two clusters" section says it's okay not to install on the target cluster).

Both attempts looked great after the installation part: kubectl get nodes -o wide returns two clusters, and kubectl get pods -n admiralty shows all 3 pods on the source cluster running.

Here are the details after installing cert-manager and multicluster-scheduler on both clusters:
But the nginx demo never works: I always got 10 pending pods on the source cluster, and no virtual pods, nor any pods on the target cluster.

Here's the kubectl get pods output (I ran the demo three times, and Pending pods turned into everlasting Terminating after the forced deletion with kubectl delete pods nginx-6b676d6776-XXX --grace-period=0 --force):

kubectl describe pod for a pending pod:

kubectl describe pod for a terminating pod:

Btw, kubectl describe node says both nodes/clusters have no resources; is this expected?

Many thanks!