Install PodDisruptionBudgets when replica count is greater than 1 #1509
base: main
Conversation
@Vad1mo ready for review
I like it. It would be beneficial to make it optional:
Is setting the minimum amount really necessary? Currently at least one pod is necessary for the component to fulfill its tasks, so I think we should be fine with the fixed value.
Thanks for the implementation, it's really cool! I quite agree with @Vad1mo and I understand your point of view, but hardcoding values is never very good; some users may have use cases that push them to change this number. (If, for example, the Harbor instance has a lot of simultaneous users, perhaps two replicas of a certain service is the bare minimum required to maintain the desired quality of service when operating on Kubernetes nodes.) In issue #1181 robinkb proposed a rather elegant solution from the oauth2-proxy chart. I think it would be better to set the default value to 1 and leave it up to the user to choose what best suits their needs. (ref helm doc) It might also be cool to have the same logic with maxUnavailable fields. (ref #1182)
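A minimal sketch of that approach, loosely modeled on the oauth2-proxy chart pattern: the PDB is only rendered when more than one replica is requested, and minAvailable defaults to 1 but stays overridable. The values keys (core.replicas, core.podDisruptionBudget.minAvailable) and the harbor.core helper are illustrative assumptions, not necessarily this chart's actual names:

```yaml
{{- if gt (int .Values.core.replicas) 1 }}
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: {{ include "harbor.core" . }}
spec:
  # Default to keeping one pod available; users can override in values.yaml.
  minAvailable: {{ .Values.core.podDisruptionBudget.minAvailable | default 1 }}
  selector:
    matchLabels:
      app: harbor
      component: core
{{- end }}
```

A maxUnavailable field could be wired the same way; note that the Kubernetes API accepts only one of minAvailable and maxUnavailable per PDB, so the template would have to render exactly one of the two.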
@rgarcia89 I have pushed a version of the above changes in my fork. Can we merge my version into yours?
…_HTTP_CLIENT_TIMEOUT Signed-off-by: Shengwen Yu <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
Signed-off-by: Gene Liu <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
Signed-off-by: Shengwen Yu <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
Signed-off-by: Cyril Jouve <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
Signed-off-by: Rafał Boniecki <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
Signed-off-by: OrlinVasilev <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
Based on multiple discussions and questions from the community (https://cloud-native.slack.com/archives/CC1E09J6S/p1682520074518419), it would make sense to update the readme and clarify the use of username/password. Signed-off-by: Raul Garcia Sanchez <[email protected]>
The JSON string set in core.configureUserSettings will be added to the core secret and loaded into the environment variable CONFIG_OVERWRITE_JSON (a values sketch follows this commit list). Signed-off-by: Philip Nelson <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
Signed-off-by: Shengwen Yu <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
…Y_CACHE Signed-off-by: Shengwen Yu <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
Signed-off-by: Mitsuru Kariya <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
Signed-off-by: Mitsuru Kariya <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
the comment above the existing secret field specified the wrong information for what the key of the secret should be Signed-off-by: Arjun Gandhi <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
Signed-off-by: Hein-Jan Vervoorn <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
Co-authored-by: Stephan Mauermann <[email protected]> Co-authored-by: Stephan Mauermann <[email protected]> Signed-off-by: Diogo Guerra <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
Signed-off-by: Andy Suderman <[email protected]> Signed-off-by: Raul Garcia Sanchez <[email protected]>
Signed-off-by: Raul Garcia Sanchez <[email protected]>
Signed-off-by: Raul Garcia Sanchez <[email protected]>
Signed-off-by: Raul Garcia Sanchez <[email protected]>
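To illustrate the core.configureUserSettings commit above: the JSON payload would live in the chart values and end up in the CONFIG_OVERWRITE_JSON environment variable of the core pod. The settings shown here are placeholder examples, not required keys:

```yaml
core:
  # Rendered into the core secret and exposed to the core pod
  # as the CONFIG_OVERWRITE_JSON environment variable.
  configureUserSettings: |
    {
      "auth_mode": "oidc_auth",
      "oidc_name": "example-provider"
    }
```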
Looks good to me. I have merged your fork.
I forgot to sign my commits… What do we do?
Yeah, I just signed them one by one. Should be fine now.
@Vad1mo I think this PR is ready to be merged.
There is a typo in all the PDBs; I will push a fix to your fork. It misses …
Signed-off-by: Marc Henriot <[email protected]>
Signed-off-by: Marc Henriot <[email protected]>
Fix pdb configuration and add pdb to exporter
@Vad1mo any idea when this can be merged?
@zyyw can you please review this PR? Thank you!
@MinerYang @Kajot-dev maybe you can have a look here too
Would be great to get this in - at the moment we're manually adding it for our Harbor setup.
That would be great to have! We're currently adding these alongside the Helm chart.
@zyyw please review and approve this PR. It is pending just a single approval.
Hi, we just had a MAJOR issue on our setup due to the lack of a PDB on Harbor. This feature is definitely needed ASAP, and I'm quite perplexed that this has not been in the default Helm release since the beginning. Please approve and add this to the next release ASAP. Thanks a lot for this amazing piece of software.
@Vad1mo I think that's good to go
It looks like the only failing workflow integration test for this PR is due to an outdated workflow action (…).
All cleared and ready to be merged.
Currently no PDB is deployed for components where the replica count is set greater than 1. This can lead to an outage of those components during a cluster update. The added PDB definitions are created automatically once the replica count is greater than 1 and ensure that at least one replica remains ready while a pod is being drained.
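For illustration, the PDB rendered for a component with more than one replica would look roughly like the following; the resource name and labels are placeholders, not the chart's exact output:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: harbor-core
spec:
  # Keep at least one core pod running during voluntary disruptions,
  # e.g. node drains triggered by a cluster update.
  minAvailable: 1
  selector:
    matchLabels:
      app: harbor
      component: core
```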