Best way to create a certificate for each pod in a deployment #4
Comments
Hi @etfeet, to be clear, are you asking for different SANs for each replica in a deployment resource?
Yes, i.e.: …
We're a little confused here. Do you have a different service per pod (I really don't know how to do that in k8s)?
I have a single service. However, we have legal requirements where all of our transport traffic needs to be TLS/SSL terminated (including the backends). I'm trying to deploy a single Elasticsearch cluster with 3 nodes (replicas) and have the transport traffic between the replicas be HTTPS only. However, I need to be able to deploy an SSL certificate to each replica that contains the name of that specific replica. In my case the deployment has 3 pods with the following names: …

With autocert I can deploy a certificate per service. However, to encrypt the replica transport I need to be able to deploy an SSL certificate for each replica in the cluster.
Right now each replica will have its own cert, so the traffic will be encrypted; the only thing is that the SAN will be the same. In any case, you're pointing to a really good scenario, and we should think about how we can add support for this. It's not ideal, but right now you can try to create 3 different deployments, or perhaps use step in a different way; details are coming.
The autocert webhook admission controller gets called by Kubernetes on pod creation, so it'll get called per pod, not per deployment. If there's some way to create the pods outside of a deployment with the appropriate annotation, that should work. It sounds like that's what you were trying to do in your first comment, but had trouble. I'm not sure if that's possible either, but if someone who knows more about Kubernetes sees this and has an idea how to do that, that'd be helpful. This feels like something that should be possible without changing autocert.

Another way to work around this, potentially, would be to allow a name template in the autocert annotation. We'd only be able to use information the webhook admission controller has readily available, though, and I'm not sure off the top of my head what that'd be. I don't know if the pod name is available (e.g., …).

Another option for a workaround is to not use autocert for this use case (at least for now). The init container and sidecar for obtaining and renewing a certificate are actually really simple. The big thing that …

Happy to answer any questions you have about setting up the ACME stuff if you decide to try that out.
Would it be possible to expose the pod name and cluster domain to the bootstrap container? Looking at the controller code, it looks like there are only a couple of things that need to be changed to facilitate that: …

…and in the patch function: …

From what I can tell, the cert common name and pod namespace already are exposed. If the pod name and cluster name were also exposed, I could tweak the bootstrap image to issue the certificate.
Sure, it makes sense to add POD_NAME, CLUSTER_DOMAIN, and NAMESPACE.
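As a side note, a pod can already receive its own (generated) name and namespace without any autocert changes, via the standard Kubernetes downward API; a minimal sketch of the container env (the variable names are just the ones discussed above):

```yaml
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name       # resolves to the generated pod name at runtime
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```

This sidesteps the "name is blank at admission time" problem below, since the kubelet fills in the field values after the pod actually exists.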
Hi @etfeet, I've been testing this, and the problem with the pod name is that in a deployment it is usually blank; if it is blank, Kubernetes also defines the GeneratedName. But this is not totally unique. For example, for the deployment from here but with 3 replicas: …
The only option to get a unique name is to use the HOSTNAME environment variable that is already present; in my example they look like …

You might be able to extract the cluster domain and namespace from existing environment variables. But if you want, I can add the environment variables CLUSTER_DOMAIN and NAMESPACE. At least for a deployment I don't see any benefit to a POD_NAME, but I can add it too (setting it to the name, or the generated name if the other is blank). What do you think is the best solution for your scenario? Did you look into the ACME solution proposed above?
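Since HOSTNAME is unique per replica, a bootstrap script could assemble a per-pod name from values every pod already has; a minimal sketch, assuming the conventional Kubernetes mount paths (the fallbacks exist only so the snippet runs outside a cluster, and the composed DNS shape is illustrative):

```shell
# Build a per-pod name from values already present in a standard pod.
# POD_NAME: HOSTNAME is set to the (unique) generated pod name.
POD_NAME="${HOSTNAME:-example-pod}"
# NAMESPACE: readable from the mounted service account (fallback for local runs).
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace 2>/dev/null || echo default)
# CLUSTER_DOMAIN: usually the last entry of the DNS search path inside a pod.
CLUSTER_DOMAIN=$(awk '/^search/ {print $NF; exit}' /etc/resolv.conf 2>/dev/null)
CLUSTER_DOMAIN="${CLUSTER_DOMAIN:-cluster.local}"

SAN="${POD_NAME}.${NAMESPACE}.svc.${CLUSTER_DOMAIN}"
echo "$SAN"
```

Note that pods behind a headless service (e.g. a StatefulSet) actually resolve as `<pod>.<service>.<namespace>.svc.<domain>`, so the template would need adjusting to whatever name Elasticsearch expects for transport.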
I think we should be able to get it working using HOSTNAME. I did look at ACME; however, due to the Java keystore/truststore, it's easier to just modify the autocert init images to also create the keystore files. To use ACME we'd have to rework a lot of the public Helm charts we use. We're not opposed to doing that, but it would be a lot less work to just have the init container do it, and then we don't need to change any of the Helm charts.
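As a sketch of that extra init-container step, the PEM pair could be wrapped into a PKCS#12 keystore that the Java stack reads directly; the file names and password are illustrative, and the self-signed pair here only stands in for the certificate the bootstrapper would have written:

```shell
# Stand-in for the certificate/key the autocert bootstrapper produced
# (self-signed here only so the snippet is self-contained):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-pod" -keyout site.key -out site.crt 2>/dev/null

# Bundle into a PKCS#12 keystore; Elasticsearch (and Java's keytool) can
# consume PKCS#12 directly, so no JKS conversion is strictly required:
openssl pkcs12 -export -in site.crt -inkey site.key \
  -name transport -passout pass:changeit -out keystore.p12

ls -l keystore.p12
```

The same commands would run fine in an init container image that already ships openssl.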
By default, we generate elliptic curve certificates, but if you integrate directly with step-certificates instead of using autocert, you can get an RSA certificate signed. You can do it with the …

Adding custom annotations to support different types of keys would be something to think about. I'll create an issue for that.
Would doing this inside the init container work to get an RSA key instead of EC as a workaround?
As you are not going to be able to interact with the command, you will need to split it in two:

```sh
TOKEN=$(step ca token --provisioner {provisioner-name} --password-file /var/run/secrets/password.txt $HOSTNAME)
step ca certificate --kty RSA --token $TOKEN $HOSTNAME test.crt test.key
```

or just:

```sh
step ca certificate --kty RSA --token $(step ca token --provisioner {provisioner-name} --password-file /var/run/secrets/password.txt $HOSTNAME) $HOSTNAME test.crt test.key
```

You might need … The renew command can also send signals or execute a script to force Elasticsearch to re-read the certificate if this is required; see …
I wanted to propose another approach, which I prototyped locally and it worked well. Instead of hoping the webhook can get a stable hostname or pod IP (impossible before a pod is scheduled), the token generation can be initiated by the bootstrapper against the controller via an endpoint, say …

This eliminates the need for secret creation/cleanup too, and it also reduces the initial mutation time by not generating the token up front.
@jack4it interesting approach, I'll bring this to our open-source triage meeting. |
Hi @jack4it, after talking with the team, we don't think the …
What if we secure the …

Thx @maraino
But the ACME approach is also not bad, as long as the IP address support is implemented. I was actually looking at that option too.
That's an interesting idea (with reference to the service account token volume projection)!
The projected token feature is widely available at this point, I think. Tried on AKS, works. EKS/GKE have it as well based on quick googling; it's a stable feature already.

Yes, in the admission hook we need to add a projected volume to request a token that will be put into the container at a specified location, with a specific audience, etc. The token looks like this:

```json
{
  "aud": [
    "autocert"
  ],
  "exp": 1621447051,
  "iat": 1621446451,
  "iss": "\"aks-__8<__.hcp.westus2.azmk8s.io\"",
  "kubernetes.io": {
    "namespace": "__8<__",
    "pod": {
      "name": "token-client-ddffd6489-prsb5",
      "uid": "f28ac1fa-5527-49eb-b17f-0624b522d3da"
    },
    "serviceaccount": {
      "name": "default",
      "uid": "df85fdfd-aa89-4c9d-89d2-59a83dbc5928"
    }
  },
  "nbf": 1621446451,
  "sub": "system:serviceaccount:__8<__:default"
}
```

Notice the namespace and pod name claims are present.
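For reference, a token like the one above comes from Kubernetes service account token volume projection; a minimal sketch of what the admission hook would inject (the volume name, mount path, and expiry are illustrative, while `audience: autocert` matches the `aud` claim shown above):

```yaml
volumes:
  - name: autocert-token
    projected:
      sources:
        - serviceAccountToken:
            audience: autocert        # becomes the token's "aud" claim
            expirationSeconds: 600
            path: token
containers:
  - name: autocert-bootstrapper
    volumeMounts:
      - name: autocert-token
        mountPath: /var/run/autocert  # token lands at /var/run/autocert/token
        readOnly: true
```

The kubelet rotates the projected token automatically, so the bootstrapper can just re-read the file when it needs a fresh one.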
How would I go about creating per-pod certificate annotations?

Looking through the Kubernetes/Helm docs, I'm not seeing an easy way. From what I can see, I can only add the annotations after the pod has been created, but then the certificates won't get created/injected, since the init container for the pod will have already run. I can create an annotation for the deployment, but that just creates a single certificate for the cluster/workload and not for each individual pod.

Would it be possible to add an `autocert.step.sm/enabled` annotation so that, if `autocert.step.sm/name` is not set, it defaults to creating and injecting a certificate for each pod in the deployment/statefulset, etc.? Or, alternatively, issue a SAN cert that has the CN for each pod in the deployment?
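For context, the name annotation discussed in this thread goes on the pod template, so every replica currently receives the same identity; a minimal sketch (the service name shown is illustrative):

```yaml
spec:
  template:
    metadata:
      annotations:
        autocert.step.sm/name: elasticsearch.default.svc.cluster.local
```

Because the annotation value is fixed in the template, all three Elasticsearch replicas would get certificates with this same SAN, which is exactly the limitation being discussed.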