The Ceph cluster in the test environment reports a warning, shown below:


[root@node241 ceph]# ceph -s
    cluster 3b37db44-f401-4409-b3bb-75585d21adfe
     health HEALTH_WARN
            too many PGs per OSD (652 > max 300)       <== the warning
     monmap e1: 1 mons at {node241=192.168.2.41:6789/0}
            election epoch 1, quorum 0 node241
     osdmap e408: 5 osds: 5 up, 5 in
      pgmap v23049: 1088 pgs, 16 pools, 256 MB data, 2889 objects
            6100 MB used, 473 GB / 479 GB avail
                1088 active+clean
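
A quick sanity check of where the 652 figure comes from (a rough sketch that assumes all 16 pools use 3-way replication; the per-pool size and pg_num can be confirmed with the command below):

# PG copies per OSD = total PGs x replica size / number of OSDs
#                   = 1088 x 3 / 5 ≈ 652
ceph osd dump | grep '^pool'        # lists pg_num and replica size for every pool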



The root cause is that the cluster has only a few OSDs, while a large number of pools were created during testing, and every pool consumes PGs (its pg_num/pgp_num). Ceph warns once the number of PGs per OSD exceeds a default threshold (300 in this release, as the message shows). The threshold can be adjusted, but an unreasonably high or low PG count per OSD hurts cluster performance. Since this is a test environment and the goal is simply to clear the warning quickly, the fix is to raise the warning threshold. Add the following to the ceph.conf on the mon node:

[global]

.......

mon_pg_warn_max_per_osd = 1000

Then restart the service:

/etc/init.d/ceph restart mon
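
Alternatively, if restarting the monitor is undesirable, the same value can usually be injected into the running monitor (a sketch; the mon name node241 is taken from the monmap above, and injectargs changes do not survive a restart, so keep the ceph.conf entry as well):

ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 1000'
ceph daemon mon.node241 config get mon_pg_warn_max_per_osd    # confirm the running value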


Verify:

[root@node241 ceph]# ceph -s
    cluster 3b37db44-f401-4409-b3bb-75585d21adfe
     health HEALTH_OK
     monmap e1: 1 mons at {node241=192.168.2.41:6789/0}
            election epoch 1, quorum 0 node241
     osdmap e408: 5 osds: 5 up, 5 in
      pgmap v23201: 1088 pgs, 16 pools, 256 MB data, 2889 objects
            6101 MB used, 473 GB / 479 GB avail
                1088 active+clean

The warning is resolved.
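
For anything beyond a test environment, the better fix is to size pg_num appropriately instead of raising the warning threshold. The usual rule of thumb from the Ceph documentation targets roughly 100 PGs per OSD in total (a sketch of the arithmetic for this cluster, assuming 3-way replication):

# target total PGs ≈ (OSDs x 100) / replica size, rounded up to a power of two
#   (5 x 100) / 3 ≈ 167  ->  256 PGs in total, then divided across the pools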