Replies: 2 comments
-
@lennart Try unsetting the version of traefik and running:
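For anyone following along, a sketch of what "unsetting the version" could look like. This assumes the chart version is pinned via a variable in kube.tf (the variable name shown is illustrative, not confirmed from this thread):

```shell
# In kube.tf, remove or comment out the pinned chart version, e.g.:
#   traefik_version = "x.y.z"   # <- hypothetical pin; delete this line
# Then re-initialize and re-apply so the latest chart is picked up:
terraform init -upgrade
terraform apply
```

With the pin removed, the helm install job is free to install whatever chart version the module currently defaults to.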
-
Thanks for the reply, I will try it and report back. (I had problems with some kustomizations depending on a certain version of the traefik helm chart, so I locked the version; I guess I can now lift this restriction again.)
-
Description
When restoring from an etcd snapshot (as described in the docs), the terraform apply step hangs while waiting for the load balancer IP:
I SSHed into one of the control plane nodes and could confirm that the etcd snapshot is in fact restored. But since the traefik service was removed during the restore (for good reasons), I am wondering how the apply step is supposed to finish. This looks like a deadlock: the rest of the deployment waits for the load balancer IP, but it will never get one, because the traefik service (which carries the annotations that connect it to the Hetzner load balancer) was removed.
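To make the deadlock visible, the missing piece can be checked like this (service name and namespace are the k3s defaults and may differ in your setup):

```shell
# Check whether the traefik Service still exists after the restore.
# kube-system/traefik are the k3s defaults; adjust for your cluster.
kubectl -n kube-system get svc traefik -o wide

# Print the assigned load balancer IP, if any. Empty output means the
# Hetzner load balancer has no target, so terraform apply keeps polling.
kubectl -n kube-system get svc traefik \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

If the Service is gone entirely, the first command returns NotFound, which matches the hang described above.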
I understand that not deleting the traefik service might alter existing load balancers, which is unwanted, but is there something I am missing about how the traefik service is eventually restored? I can see from the deploy logs that the post_install kustomizations are applied and that the traefik helm chart is obviously unchanged. In order to recreate the traefik service, I had to:
This re-ran the helm install job and recreated the service. The terraform apply step could then finish, as the load balancer now contained targets (it did not before the service was recreated).
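For anyone hitting the same situation, a hypothetical sketch of how such a re-run can be triggered on k3s. These are not necessarily the exact steps taken above; the resource names are the k3s defaults and may differ:

```shell
# Delete the bundled helm install job so it can be re-run.
# helm-install-traefik in kube-system is the k3s default name.
kubectl -n kube-system delete job helm-install-traefik

# If the k3s helm controller does not recreate the job on its own,
# restarting k3s on a server node re-applies the bundled manifests:
sudo systemctl restart k3s

# Watch for the Service to reappear with its load balancer annotations:
kubectl -n kube-system get svc traefik -w
```

Once the service exists again, the Hetzner load balancer picks up targets and the waiting terraform apply can complete.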
I guess this is not how a restore is supposed to work, so I would like to know where I went wrong, or whether the restore process works differently from what I assume...
Kube.tf file
Screenshots
No response
Platform
linux