Prerequisites
A Ceph cluster must already be deployed.
Check the cluster status:
[root@ceph01 ~]# ceph -s
cluster 41732856-b7e0-4d00-969d-bbbaf9f2b187
health HEALTH_OK
monmap e1: 3 mons at {ceph01=192.168.229.114:6789/0,ceph02=192.168.229.121:6789/0,ceph03=192.168.229.115:6789/0}
election epoch 1296, quorum 0,1,2 ceph01,ceph03,ceph02
osdmap e237: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v636530: 64 pgs, 1 pools, 0 bytes data, 0 objects
24705 MB used, 251 GB / 275 GB avail
64 active+clean
Create a Ceph storage pool:
ceph osd pool create k8s-volumes 64 64
Check the replica count:
ceph osd pool get k8s-volumes size
size: 2
The PG count should be chosen according to the following formula:
Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count
Round the result up to the nearest power of 2. For example, if the total number of OSDs is 2 and the replication count is 3
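The formula and the round-up-to-a-power-of-2 rule can be sketched in a short script. The function name is hypothetical; the values plugged in (3 OSDs, replica size 2, 1 pool) come from the `ceph -s` output above:

```python
import math

def recommended_pg_count(num_osds, max_replication, pool_count):
    """Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count,
    rounded up to the nearest power of 2."""
    raw = (num_osds * 100 / max_replication) / pool_count
    # Round up to the next power of two: 2 ** ceil(log2(raw))
    return 2 ** math.ceil(math.log2(raw))

# Cluster above: 3 OSDs, replica size 2, 1 pool -> raw = 150, rounded up to 256
print(recommended_pg_count(3, 2, 1))
```

Note that the `k8s-volumes` pool above was created with only 64 PGs; on a small test cluster a value below the formula's result is common to keep per-OSD PG counts low.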