When data is unevenly distributed across a Ceph cluster, the PG data needs to be rebalanced across the OSDs.
1: Check whether the data distribution is balanced

# Show per-OSD usage (osd num, PGS, %USE):
# ceph osd df tree

# Extract a compact per-OSD summary (name, status, utilization):
# ceph osd df tree | awk '/osd\./{print $NF" "$(NF-1)" "$(NF-3)}'
osd.0 up 0.92
osd.3 up 1.02
osd.1 up 0.90
osd.4 up 1.23
osd.2 up 0.95
osd.5 up 1.03
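To put a number on how uneven the cluster is, the last column of the awk extract above can be reduced to its min, max and spread. A minimal sketch using the sample figures from the listing (on a live cluster, pipe in the real `ceph osd df tree` output instead of the `printf`):

```shell
# Compute min, max and spread of per-OSD utilization from "osd.N up USE" lines.
# The sample data below mirrors the listing above, so this runs without a cluster.
printf '%s\n' \
  'osd.0 up 0.92' 'osd.3 up 1.02' 'osd.1 up 0.90' \
  'osd.4 up 1.23' 'osd.2 up 0.95' 'osd.5 up 1.03' |
awk '{
  use = $3
  if (NR == 1 || use < min) min = use
  if (NR == 1 || use > max) max = use
}
END { printf "min=%.2f max=%.2f spread=%.2f\n", min, max, max - min }'
# -> min=0.90 max=1.23 spread=0.33
```

Here the spread of 0.33 (osd.4 at 1.23 vs osd.1 at 0.90) is what motivates the reweight steps below.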
2: reweight-by-pg — adjust OSD weights by placement-group distribution
# ceph osd reweight-by-pg
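The intuition behind reweight-by-pg is that an OSD carrying more PGs than its fair share gets its weight nudged down in proportion, and an underloaded OSD gets nudged up. The sketch below shows only that raw proportional target (avg_pgs / osd_pgs) with hypothetical PG counts; it is an illustration of the principle, not Ceph's exact algorithm, which also clamps each step by max_change:

```shell
# Illustrative only: raw proportional weight target avg_pgs / osd_pgs
# for two hypothetical OSDs, one overloaded (154 PGs) and one underloaded (127),
# against a cluster average of 140.5 PGs per OSD.
printf '%s\n' 'osd.0 154' 'osd.3 127' |
awk -v avg=140.5 '{ printf "%s target_weight=%.3f\n", $1, avg / $2 }'
# -> osd.0 target_weight=0.912
#    osd.3 target_weight=1.106
```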
3: reweight-by-utilization — adjust OSD weights by utilization
# ceph osd reweight-by-utilization
moved 10 / 843 (1.18624%)
    # 10 PGs will be migrated
avg 140.5
    # each OSD carries 140.5 PGs on average
stddev 8.69387 -> 12.339 (expected baseline 10.8205)
    # after this adjustment the standard deviation goes from 8.69387 to 12.339
min osd.3 with 127 -> 127 pgs (0.903915 -> 0.903915 * mean)
    # the least loaded OSD is osd.3 with 127 PGs; it will still carry 127 after the adjustment
max osd.0 with 154 -> 154 pgs (1.09609 -> 1.09609 * mean)
    # the most loaded OSD is osd.0 with 154 PGs; it will still carry 154 after the adjustment
oload 120
max_change 0.05
max_change_osds 4
average_utilization 0.0904
overload_utilization 0.1084
osd.4 weight 0.9500 -> 0.9000

Check the balancing status:
# ceph -s
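The `avg` and `stddev` lines in the report are just the mean and standard deviation of the per-OSD PG counts. The arithmetic can be sketched with hypothetical PG counts (chosen so that six OSDs sum to the 843 PGs and avg 140.5 shown above; using the sample standard deviation here is an assumption about Ceph's formula, not taken from its documentation):

```shell
# Mean and sample standard deviation of hypothetical per-OSD PG counts.
# On a live cluster the counts come from the PGS column of "ceph osd df tree".
printf '%s\n' 154 127 140 141 138 143 |
awk '{ sum += $1; sumsq += $1 * $1; n++ }
END {
  avg = sum / n
  # sample standard deviation (divide by n - 1) -- assumed formula
  stddev = sqrt((sumsq - n * avg * avg) / (n - 1))
  printf "avg=%.1f stddev=%.2f\n", avg, stddev
}'
# -> avg=140.5 stddev=8.69
```

A falling stddev means the PGs are spreading more evenly; the report above warns it would rise to 12.339, which is why the result should be checked with `ceph -s` before going further.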
4: Restore the weights once the data is balanced
# List osd num and REWEIGHT:
[root@node-10 ~]# ceph osd df tree | awk '/osd\./{print $NF" "$4}'
osd.0 1.00000
osd.3 1.00000
osd.1 1.00000
osd.4 0.90002
osd.2 1.00000
osd.5 1.00000
# Reset each adjusted OSD's weight back to the default of 1.0, one at a time:
# ceph osd reweight {id} {weight}
# Note: the osd reweight value ranges from 0 to 1
# ceph osd reweight 5 1.0
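Rather than typing one reweight command per OSD, the REWEIGHT column can drive the reset. The sketch below only prints the commands it would run, using the sample listing above as input; on a live cluster, pipe in the real awk extract and pass the printed lines to a shell once they look right:

```shell
# Print a "ceph osd reweight" command for every OSD whose REWEIGHT is not 1.0.
# Sample input mirrors the listing above; on a live cluster use:
#   ceph osd df tree | awk '/osd\./{print $NF" "$4}'
printf '%s\n' \
  'osd.0 1.00000' 'osd.3 1.00000' 'osd.1 1.00000' \
  'osd.4 0.90002' 'osd.2 1.00000' 'osd.5 1.00000' |
awk '$2 != 1.0 {
  id = $1; sub(/^osd\./, "", id)   # strip the "osd." prefix to get the numeric id
  printf "ceph osd reweight %s 1.0\n", id
}'
# -> ceph osd reweight 4 1.0
```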