This repository has been archived by the owner on Mar 23, 2020. It is now read-only.

pg_num is not properly set for our pool #99

Open
jtaleric opened this issue Sep 9, 2019 · 6 comments

Comments

@jtaleric
Contributor

jtaleric commented Sep 9, 2019

@karmab in your setups this deploys Ceph, but if you oc rsh into the toolbox pod and run ceph -s, does Ceph report HEALTH_WARN because of the number of PGs?

This could easily be fixed by setting the PG count for the openshift pool to 1024, or we could enable the PG autoscaler instead; see the sketch below.

Originally posted by @jtaleric in #29 (comment)
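For reference, a rough sketch of the manual option from the toolbox pod. The pod name is a placeholder, and 1024 should be sanity-checked against the OSD count before applying:

$ oc rsh -n openshift-storage rook-ceph-tools-<pod>   # placeholder; use the actual toolbox pod name
sh-4.2# ceph osd pool set openshift-storage-cephblockpool pg_num 1024
sh-4.2# ceph osd pool set openshift-storage-cephblockpool pgp_num 1024   # pgp_num should follow pg_num so data actually rebalances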

@e-minguez
Member

I think we need some help from the OCS folks /cc @mykaul

@jtaleric
Contributor Author

jtaleric commented Sep 9, 2019

Is the intent to use the autoscaler [1] here, @mykaul?

[1] https://docs.ceph.com/docs/master/rados/operations/placement-groups/
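For context, turning the autoscaler on would look roughly like this from the toolbox, assuming the cluster runs Nautilus or later where the pg_autoscaler mgr module is available:

sh-4.2# ceph mgr module enable pg_autoscaler
sh-4.2# ceph osd pool set openshift-storage-cephblockpool pg_autoscale_mode on
sh-4.2# ceph config set global osd_pool_default_pg_autoscale_mode on   # default for pools created later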

@mykaul

mykaul commented Sep 9, 2019

Yes.

@e-minguez
Member

Can this be done automatically by the operator somehow?
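If the Rook release in use is new enough that the CephBlockPool CRD exposes a spec.parameters map (arbitrary key/value pairs applied to the pool), a patch along these lines might work; the CR name is assumed to match the pool name, and this is only a sketch:

$ oc -n openshift-storage patch cephblockpool openshift-storage-cephblockpool \
    --type merge -p '{"spec":{"parameters":{"pg_autoscale_mode":"on"}}}'   # assumes the operator does not revert manual changes to the CR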

@e-minguez
Member

$ oc rsh -n openshift-storage rook-ceph-tools-5f5dc75fd5-fwwq7
sh-4.2# ceph osd pool autoscale-status
 POOL                                        SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
 openshift-storage-cephblockpool           104.0G                3.0        20106G  0.0155                 1.0       8              on
 openshift-storage-cephfilesystem-metadata  1536k                3.0        20106G  0.0000                 1.0       8              on
 openshift-storage-cephfilesystem-data0         0                3.0        20106G  0.0000                 1.0       8              on
 .rgw.root                                      0                3.0        20106G  0.0000                 1.0       8              on
sh-4.2# ceph status
  cluster:
    id:     0b23199a-52a3-4e0c-9e14-9769055e6026
    health: HEALTH_WARN
            too few PGs per OSD (10 < min 30)

  services:
    mon: 3 daemons, quorum a,b,c (age 3h)
    mgr: a(active, since 2h)
    mds: openshift-storage-cephfilesystem:1 {0=openshift-storage-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 9 osds: 9 up (since 2h), 9 in (since 2h)

  data:
    pools:   4 pools, 32 pgs
    objects: 9.37k objects, 36 GiB
    usage:   114 GiB used, 20 TiB / 20 TiB avail
    pgs:     32 active+clean

  io:
    client:   16 KiB/s rd, 37 MiB/s wr, 10 op/s rd, 28 op/s wr
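Worth noting from the output above: autoscale mode is already on for all four pools, but with only ~36 GiB of data in a 20 TiB cluster the autoscaler sees no reason to grow past 8 PGs per pool, which is why the warning persists (32 PGs x 3 replicas / 9 OSDs is about 10 per OSD). One possible workaround is to tell the autoscaler how much of the cluster the block pool is expected to consume so it sizes PGs up front, e.g.:

sh-4.2# ceph osd pool set openshift-storage-cephblockpool target_size_ratio 0.8   # expect ~80% of capacity in this pool; the ratio is an assumption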

@schmaustech
Contributor

I wonder if you could set that with a config map.
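If that route works, the usual Rook hook is the rook-config-override ConfigMap, whose config key is merged into ceph.conf for the daemons. A sketch, assuming the openshift-storage namespace; the setting only affects pools created afterwards, and if the ConfigMap already exists it should be edited rather than re-created:

$ cat <<EOF | oc -n openshift-storage apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: openshift-storage
data:
  config: |
    [global]
    osd_pool_default_pg_autoscale_mode = on
EOF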
