Creating storage pools in Ceph
A summary of Ceph pool operations
A Ceph cluster can contain many pools. Each pool is a logical unit of isolation, and different pools can handle data in completely different ways, e.g. replica size, placement groups, CRUSH rules, snapshots, and ownership.
Creating a pool
Before creating a pool, you usually want to override the default pg_num. The official recommendations are:
- Fewer than 5 OSDs: set pg_num to 128
- 5 to 10 OSDs: set pg_num to 512
- 10 to 50 OSDs: set pg_num to 4096
- More than 50 OSDs: calculate it with the formula total_PGs = (OSDs * 100) / pool_size
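As a worked example of the formula above, here is a small shell sketch. The OSD count (20) and pool size (3) are made-up values, and rounding the result up to a power of two is a common convention rather than part of the formula itself:

```shell
# Hypothetical cluster: 20 OSDs, replicated pools with size 3 (assumed values).
osds=20
pool_size=3

# total_PGs = (OSDs * 100) / pool_size  (integer division)
total_pgs=$(( osds * 100 / pool_size ))

# Round up to the next power of two, a common convention for pg_num.
pg_num=1
while [ "$pg_num" -lt "$total_pgs" ]; do
    pg_num=$(( pg_num * 2 ))
done

echo "total_PGs=$total_pgs pg_num=$pg_num"
```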
The syntax for creating a pool is:
# replicated pool
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] [crush-ruleset-name] [expected-num-objects]
# erasure-coded pool
ceph osd pool create {pool-name} {pg-num} {pgp-num} erasure [erasure-code-profile] [crush-ruleset-name] [expected-num-objects]
Create a pool named k8sgpu with a pg_num of 8192:
# ceph osd pool create k8sgpu 8192
For better initial performance on pools expected to store a large number of objects, consider supplying the expected-num-objects parameter when creating the pool.
Modifying quotas
Set the maximum allowed number of objects to 2000:
# ceph osd pool set-quota k8sgpu max_objects 2000
set-quota max_objects = 2000 for pool k8sgpu
Set the maximum allowed capacity to 1 TiB:
# ceph osd pool set-quota k8sgpu max_bytes $((1024*1024*1024*1024))
set-quota max_bytes = 1099511627776 for pool k8sgpu
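The byte count above is just 1 TiB expressed as 1024^4, which a quick shell check confirms (the `ceph osd pool get-quota` line is shown only as a comment, since it needs a live cluster):

```shell
# 1 TiB in bytes: 1024 * 1024 * 1024 * 1024
max_bytes=$(( 1024 * 1024 * 1024 * 1024 ))
echo "$max_bytes"

# On a live cluster, the current quotas can be inspected with:
#   ceph osd pool get-quota k8sgpu
```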
Deleting a pool
Deleting a pool destroys all of its data, so it is very dangerous. Ceph therefore requires the pool name to be typed twice, together with --yes-i-really-really-mean-it (recent releases also require mon_allow_pool_delete to be set to true on the monitors):
ceph osd pool delete k8sgpu k8sgpu --yes-i-really-really-mean-it
Viewing pool status information
rados df
Creating and removing snapshots
Create a snapshot:
ceph osd pool mksnap k8sgpu k8sgpu-snapshot
Remove a snapshot:
ceph osd pool rmsnap k8sgpu k8sgpu-snapshot
ceph osd pool rmsnap <poolname> <snap>
Setting/getting pool parameters
Pool metadata can be set and read with the following syntax:
ceph osd pool set {pool-name} {key} {value}
ceph osd pool get {pool-name} {key}
For example, to set the pool's replica count to 3 (setting size to 1 is not recommended, as it makes data loss likely):
ceph osd pool set k8sgpu size 3
Set the pool's pg_num (after increasing pg_num, also raise pgp_num to the same value so that data actually rebalances):
ceph osd pool set k8sgpu pg_num 8192
ceph osd pool set k8sgpu pgp_num 8192
Get the pool's current configuration parameters:
ceph osd pool get k8sgpu pg_num
ceph osd pool get k8sgpu size
Create a user for k8s (here client.dep01, with read access to the monitors and rwx access to pool dep01):
# ceph auth add client.dep01 mon 'allow r' osd 'allow rwx pool=dep01'
Get the user's key (base64-encoded), e.g. for use in a Kubernetes secret:
# ceph auth get-key client.dep01 | base64
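The base64 output is typically pasted into a Kubernetes Secret. A minimal sketch, assuming a Secret named ceph-client-dep01 in the default namespace (both names made up here), with KEY_B64 standing in for the output of the command above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-client-dep01   # hypothetical name
  namespace: default
type: kubernetes.io/rbd     # type used by the legacy in-tree RBD provisioner
data:
  key: KEY_B64              # paste the base64 output of `ceph auth get-key ... | base64`
```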