This repository has been archived by the owner on Mar 23, 2020. It is now read-only.
pg_num is not properly set for our pool #99
I think we need some help from the OCS folks /cc @mykaul

Is the intent to use the auto-scaler [1] here @mykaul?
[1] https://docs.ceph.com/docs/master/rados/operations/placement-groups/

Yes.

Can this be done automatically by the operator somehow?

I wonder if you could set that with a config map.
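The auto-scaler discussed above can also be enabled by hand from the toolbox pod. A minimal sketch, assuming a Ceph release that ships the `pg_autoscaler` mgr module (Nautilus or later) and the `openshift-pool` pool named elsewhere in this issue; adjust the pool name for your deployment:

```shell
# Enable the pg_autoscaler mgr module (required on Nautilus; enabled by
# default in later releases).
ceph mgr module enable pg_autoscaler

# Let Ceph manage pg_num for this pool automatically.
ceph osd pool set openshift-pool pg_autoscale_mode on

# Show what the autoscaler thinks pg_num should be for each pool.
ceph osd pool autoscale-status
```

An operator could issue the same commands when it creates the pool, which would avoid needing a config map at all.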
@karmab in your setups, this deploys ceph, but if you `oc rsh <toolbox>` and run `ceph -s`, is ceph in HEALTH_WARN due to the number of PGs? This could easily be fixed by just setting the pg count for the openshift-pool to 1024. Or do we want to enable the auto-scaler for pg count?
Originally posted by @jtaleric in #29 (comment)