Need help installing Admiralty 0.14.1 on OpenShift 4.7 #128
Comments
cert-manager 1.6 stopped serving alpha and beta APIs: https://github.com/jetstack/cert-manager/releases/tag/v1.6.0
Using apiVersion: cert-manager.io/v1 instead of the removed alpha/beta versions should work. Please feel free to submit a PR to implement the conversions in the chart (so that helm install works again). We haven't upgraded to cert-manager 1.6 yet on our side, so we haven't had an urgent need for the conversion.
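A minimal sketch of what the converted chart output could look like against the cert-manager.io/v1 API; the resource names, namespace, DNS name, and issuer below are placeholders, not the chart's actual values.

```yaml
# Hypothetical Certificate using the cert-manager.io/v1 API, which cert-manager
# 1.6 still serves (the alpha/beta versions it replaced are no longer served).
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: admiralty-webhook-cert         # placeholder name
  namespace: admiralty
spec:
  secretName: admiralty-webhook-tls    # placeholder secret name
  dnsNames:
    - admiralty.admiralty.svc          # placeholder webhook service DNS name
  issuerRef:
    name: admiralty-self-signed        # placeholder Issuer
    kind: Issuer
```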
@adrienjt Thanks. I also just found out how to get around the helm install issue using the "helm template" route. Things seem to be working fine now.
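For anyone else hitting this, a rough sketch of what that workaround might look like, assuming the standard Admiralty chart repository and a release named admiralty (adjust names and namespaces to your setup):

```sh
# Render the chart locally instead of running helm install, then apply the
# manifests with kubectl. The rendered cert-manager resources may still need
# their apiVersion bumped to cert-manager.io/v1 by hand before applying.
helm repo add admiralty https://charts.admiralty.io
helm template admiralty admiralty/multicluster-scheduler \
  --namespace admiralty \
  --version 0.14.1 > admiralty.yaml
kubectl apply -n admiralty -f admiralty.yaml
```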
Everything works fine out of the box on plain Kubernetes clusters. However, there are quite a few things that users need to change in order to get it working on OpenShift (e.g. clusterroles). Now I am facing an issue with the virtual node that represents the workload cluster:
Any idea why the resources (cpu/memory) on the virtual node are all 0? I am using a service account for authentication between the target and workload clusters. It works fine on K8s but not on OpenShift.
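A quick way to check what the virtual node is actually advertising on the source cluster; the node name below is a placeholder (Admiralty derives virtual node names from the target, so substitute yours):

```sh
# Print the virtual node's allocatable resources as seen by the source cluster.
kubectl get node admiralty-default-ocp-target -o jsonpath='{.status.allocatable}'
kubectl describe node admiralty-default-ocp-target | grep -A 5 'Allocatable:'
```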
I was able to figure out how to set up a kubeconfig secret for OpenShift clusters. Everything works beautifully. Love the tool!
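The general shape of that setup is Admiralty's usual kubeconfig-secret flow; the names, namespace, and file path below are assumptions rather than details from this thread:

```sh
# On the source cluster: store the target cluster's kubeconfig in a secret,
# then create a Target that points at it. All names/paths are placeholders.
kubectl create secret generic ocp-target \
  --namespace default \
  --from-file=config=./ocp-target-kubeconfig.yaml

cat <<EOF | kubectl apply -f -
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Target
metadata:
  name: ocp-target
  namespace: default
spec:
  kubeconfigSecret:
    name: ocp-target
EOF
```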
Hi @hfwen0502, I'm glad you were able to figure this out. Would you care to contribute how to set up a kubeconfig secret for OpenShift clusters to the Admiralty documentation? (PR under docs/)
Of course, I'd be happy to contribute the documentation. Could it be based on the IKS and ROKS services on IBM Cloud? I am working in the hybrid cloud organization at IBM Research.
Yes, no problem.
Could you contribute the RBAC changes to the Helm chart?
A PR has been submitted which includes both the RBAC and doc changes: #134
Things only work in the default namespace on OpenShift. There are issues related to SCCs (security context constraints) when we set up Admiralty in a non-default namespace. Errors are shown below:
When Admiralty is installed in a non-default namespace, and/or when Sources/Targets are set up (and pods created) in a non-default namespace? Which SCC are you expecting to apply?
@adrienjt Sorry, I should have made myself clear. Admiralty is always installed in the admiralty namespace. The SCC issue occurs when sources/targets are set up in a non-default namespace. Let's assume sources/targets are in the hfwen namespace. In the annotations of the proxy pod at the source, we have the following:
On the target cluster, the PodChaperon object has this:
This is a problem because the target cluster actually expects an SCC in the hfwen namespace with the following:
Any idea how to resolve this? When sources/targets are in the default namespace, the securityContext stays empty, which is why we did not hit this problem. I have also tried adjusting the SCC on the service account, which did not work.
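The snippets referenced above are not reproduced in this capture, but for readers hitting the same mismatch, the pattern usually looks roughly like the following, with every value illustrative rather than copied from this issue: OpenShift's SCC admission on the source injects a securityContext derived from the source namespace's UID range, and that securityContext travels with the PodChaperon to a target namespace whose range differs.

```yaml
# Illustrative only: what the restricted SCC typically injects on the source,
# based on the source namespace's openshift.io/sa.scc.uid-range annotation.
metadata:
  annotations:
    openshift.io/scc: restricted
spec:
  containers:
    - name: app
      securityContext:
        runAsUser: 1000640000   # valid for the source cluster's hfwen range
# On the target cluster, the hfwen namespace carries a different uid-range
# annotation, so its SCC admission expects a different runAsUser and rejects
# the delegate pod created from the PodChaperon.
```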
OpenShift always creates three service accounts in a namespace by default.
Adding the privileged SCC to the default service account in my hfwen namespace (on both the source and the target) seems to fix the SCC issue.
@adrienjt Is this something you had in mind? Is this good practice, or the only way to resolve it?
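Concretely, that grant amounts to something like the following, run against both clusters (the namespace is from the thread; the command is standard oc tooling):

```sh
# Allow the default service account in the hfwen namespace to use the
# privileged SCC, on both the source and the target cluster.
oc adm policy add-scc-to-user privileged -z default -n hfwen
```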
OK, I found a better solution. The OpenShift clusters on IBM Cloud come with other preconfigured SCCs, so we can use a less privileged one instead of privileged.
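A sketch of swapping in a narrower SCC; anyuid is only an example here, since the thread doesn't say which preconfigured SCC was actually used:

```sh
# Replace the privileged grant with a less privileged SCC on the default
# service account (pick whichever SCC satisfies the workload's requirements).
oc adm policy remove-scc-from-user privileged -z default -n hfwen
oc adm policy add-scc-to-user anyuid -z default -n hfwen
```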
I am trying to explore the capabilities that Admiralty can offer on an OCP cluster provisioned on IBM Cloud. Below is the info about the OCP cluster and the cert-manager version installed there:
However, when trying to install Admiralty, I encountered the issues shown below:
Any idea how to fix this?
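For context, the install was presumably attempted roughly along the lines of the Admiralty quick start; the commands below are that standard flow, not copied from the elided output, and with cert-manager 1.6 or later the final step fails as discussed earlier in this thread.

```sh
# Standard Helm install of Admiralty 0.14.1 (cert-manager must already be
# installed in the cluster for the webhook certificates).
helm repo add admiralty https://charts.admiralty.io
helm repo update
helm install admiralty admiralty/multicluster-scheduler \
  --namespace admiralty --create-namespace \
  --version 0.14.1
```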