Mysql-k8s breaks after scale down to 0 #409
Comments
I assume there is no update on this? I just lost a database once again in a Kubernetes environment.
@swasenius There are some details in the discussion, but support for scaling up from zero will come in the linked PR.
Can you give details on why you needed to scale it down?
Coming back to this, since I faced it again. If you cannot scale down a database pod, then something is not quite right. Say your Kubernetes cluster needs an upgrade and it drains the worker node, or there is a problem in your cluster and a node goes down: the way I see it, there is no way to salvage your database unless you have taken a manual dump. (For the manual dump you need to have the MySQL password saved, since you cannot fetch it while the database is down; it will be "null" if you try.) I would argue that regardless of the reason, you should be able to scale between 0 and 1. I once had a support ticket open where I was asked to stop all apps, and doing so broke all the MySQL databases. The cause is still that the relation gets broken, e.g. between kfp-mysql-db <-> kfp-api.
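For reference, a minimal sketch of taking such a manual dump while the unit is still up, assuming the charm's get-password action and Juju 3.x run syntax; the action name, username parameter, container name, and namespace placeholder are assumptions, not details taken from this report.

```shell
# Assumed: the mysql-k8s charm exposes a get-password action (Juju 3.x syntax)
juju run mysql-k8s/leader get-password username=root

# Assumed: pod name mysql-k8s-0, workload container "mysql", namespace = Juju model name.
# Dump all databases before scaling down, using the password returned above.
kubectl -n <model-name> exec mysql-k8s-0 -c mysql -- \
  mysqldump -u root -p'<password-from-action>' --all-databases > backup.sql
```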
Hi @paulomach,
Issue description:
Using the Juju and charm versions below:
After scaling mysql-k8s down to 0 and back up to 1, mysql-k8s is broken on the Juju side, while the Kubernetes side shows the pod running:
I was able to reproduce this issue with an app related to the database, e.g. mysql-test-app-0; see the sketch below.
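For what it's worth, a minimal reproduction sketch, assuming Juju 3.x CLI syntax and the app names mentioned above; the channel and deploy options are assumptions, since the exact versions are not shown here.

```shell
# Assumed channel/options; deploy the database and a test app, then relate them
juju deploy mysql-k8s --channel 8.0/stable --trust
juju deploy mysql-test-app
juju integrate mysql-test-app mysql-k8s

# Scale the database down to zero and back up to one
juju scale-application mysql-k8s 0
juju scale-application mysql-k8s 1

# Juju then reports the unit as broken, while Kubernetes shows the pod running
juju status mysql-k8s
kubectl -n <model-name> get pods
```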
DEBUG LOG:
So we need a way to fix the Juju status.