health HEALTH_WARN too few PGs per OSD (16 < min 30)
Running ceph -s shows that the cluster status is not OK; the details are as follows:
ceph -s
cluster 257faba1-f259-4164-a0f9-1726bd70b05a
health HEALTH_WARN
too few PGs per OSD (16 < min 30)
monmap e1: 1 mons at {bdc217=192.168.13.217:6789/0}
election epoch 2, quorum 0 bdc217
osdmap e50: 8 osds: 8 up, 8 in
flags sortbitwise
pgmap v119: 64 pgs, 1 pools, 0 bytes data, 0 objects
715 MB used, 27550 GB / 29025 GB avail
64 active+clean
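
To see how many PGs each OSD holds directly, one quick check (assuming a Ceph release new enough to include the PGS column in this output) is:
ceph osd df
The rightmost PGS column lists the current PG count per OSD.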

Check how many pools exist:
ceph osd lspools
0 rbd,
1 vms
2 …

Checking further, the rbd pool's pg_num is 64:
$ sudo ceph osd pool get rbd pg_num
pg_num: 64
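
The 3-replica configuration referred to below can be confirmed the same way (a standard pool attribute query, shown here only as a cross-check):
$ sudo ceph osd pool get rbd size
size: 3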

pg_num is 64. With a 3-replica configuration and 8 OSDs, each OSD ends up with roughly 64 × 3 / 8 = 24 PGs on average, which is below the minimum of 30 PGs per OSD, hence the warning above.
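
As a sanity check on the target value, the warning threshold (mon_pg_warn_min_per_osd, 30 by default) implies total_pgs × replicas / osds ≥ 30, so the pool needs at least 30 × 8 / 3 PGs, rounded up to a power of two. A minimal sketch of that arithmetic, using the 8 OSDs and 3 replicas from this cluster:
echo $(( 30 * 8 / 3 ))
80
Rounding 80 up to the next power of two gives 128, the value used below.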

Solution: raise pg_num on the default rbd pool.
ceph osd pool set rbd pg_num 128
set pool 0 pg_num to 128
It turns out pgp_num must be raised as well. By default pg_num and pgp_num are the same size (both 64 here), so both are set to 128:
ceph osd pool set rbd pgp_num 128
set pool 0 pgp_num to 128
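
As an optional sanity check, both values can be read back the same way they were queried earlier:
$ sudo ceph osd pool get rbd pg_num
pg_num: 128
$ sudo ceph osd pool get rbd pgp_num
pgp_num: 128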

Right after the pg_num change, and while pgp_num still lags behind at 64, ceph -s shows the new PGs being created:
ceph -s
cluster 257faba1-f259-4164-a0f9-1726bd70b05a
health HEALTH_WARN
64 pgs stuck inactive
64 pgs stuck unclean
pool rbd pg_num 128 > pgp_num 64
monmap e1: 1 mons at {bdc217=192.168.13.217:6789/0}
election epoch 2, quorum 0 bdc217
osdmap e52: 8 osds: 8 up, 8 in
flags sortbitwise
pgmap v121: 128 pgs, 1 pools, 0 bytes data, 0 objects
715 MB used, 27550 GB / 29025 GB avail
64 active+clean
64 creating
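
The newly created PGs need a short while to peer. Their progress can be followed by re-running ceph -s, or by streaming status updates (a generic way to watch any Ceph cluster, nothing specific to this setup):
ceph -w
Press Ctrl-C to stop watching once all PGs report active+clean.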

Finally, checking the cluster status again shows HEALTH_OK; the problem is resolved:
ceph -s
cluster 257faba1-f259-4164-a0f9-1726bd70b05a
health HEALTH_OK
monmap e1: 1 mons at {bdc217=192.168.13.217:6789/0}
election epoch 2, quorum 0 bdc217
osdmap e54: 8 osds: 8 up, 8 in
flags sortbitwise
pgmap v125: 128 pgs, 1 pools, 0 bytes data, 0 objects
718 MB used, 27550 GB / 29025 GB avail
128 active+clean
