Common Ceph pool operations

List pools

$ sudo ceph osd lspools
0 rbd,1 metadata,2 data,5 rbdpool,6 sysimg,8 company_5_del,10 satapool,11 ssdcache,12 pcie_del,13 satacache,31 opms,32 bs_sata,33 bs_ssd,34 bs_pcie_del,35 tiersata,36 tierssd,37 tieropms,38 sata.rgw.buckets.data,39 sata.rgw.buckets.non-ec,40 .rgw.root,41 ssd.rgw.control,42 ssd.rgw.meta,43 ssd.rgw.log,44 ssd.rgw.buckets.index,45 ssd.rgw.buckets.data,46 ssd.rgw.buckets.non-ec,47 sata.rgw.buckets.index,

Create a pool

$ ceph osd pool create sasta 100    # 100 here is the number of placement groups (PGs)
pool 'sasta' created

Set a quota on a pool

$ ceph osd pool set-quota sasta max_objects 10000
set-quota max_objects = 10000 for pool sasta
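
To confirm the quota took effect, the current limits can be read back:

$ ceph osd pool get-quota sasta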

Delete a pool

$ ceph osd pool delete sasta sasta --yes-i-really-really-mean-it    # the pool name must be given twice
pool 'sasta' removed
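
Note that recent releases refuse pool deletion unless mon_allow_pool_delete is true on the monitors; a minimal sketch of enabling it temporarily before retrying the delete (adjust to however you manage ceph.conf):

$ ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'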

View pool usage details

$ rados df
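
For per-pool usage together with quotas and object counts, ceph df can be used as well; which one is more convenient depends on the columns you need:

$ ceph df detail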

Create a snapshot of a pool

$ ceph osd pool mksnap sasta date-snap
created pool sasta snap date-snap
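
The snapshots that exist on a pool can be listed with rados:

$ rados -p sasta lssnap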

Delete a pool snapshot

$ ceph osd pool rmsnap sasta date-snap
removed pool sasta snap date-snap

View the pg_num of a pool

$ ceph osd pool get sasta pg_num
pg_num: 100

Set the replica count of a pool

$ ceph osd pool set sasta size 3
set pool 18 size to 3

View the replica count of a pool

$ ceph osd pool get sasta size
size: 3

Set the minimum replicas for writes

Set the minimum number of replicas required for a write operation to 2

$ ceph osd pool set sasta min_size 2
set pool 18 min_size to 2

View the replica size of every pool in the cluster

$ ceph osd dump | grep 'replicated size'
pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2048 pgp_num 2048 last_change 5493 lfor 0/187 flags hashpspool stripe_width 0 application rbd
pool 2 'test_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 1575 lfor 0/227 flags hashpspool stripe_width 0 application cephfs

Get the pg_num of every pool (the same osd dump output also lists pg_num and pgp_num)

$ ceph osd dump | grep 'replicated size'
pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2048 pgp_num 2048 last_change 5493 lfor 0/187 flags hashpspool stripe_width 0 application rbd
pool 2 'test_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 1575 lfor 0/227 flags hashpspool stripe_width 0 application cephfs

Set the pg_num of a pool

$ ceph osd pool set sasta pg_num 100
specified pg_num 100 <= current 100

$ ceph osd pool get sasta pg_num
pg_num: 100
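
The "specified pg_num 100 <= current 100" message above is shown because, on the release this was written against, pg_num can only be increased. When raising it, pgp_num is normally raised to the same value afterwards (see the next section) so that data actually rebalances. A minimal sketch, assuming the pool is allowed to grow to 128 PGs:

$ ceph osd pool set sasta pg_num 128
$ ceph osd pool set sasta pgp_num 128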

Set the pgp_num of a pool

$ ceph osd pool set sasta pgp_num 100
set pool 18 pgp_num to 100

$ ceph osd pool get sasta pgp_num
pgp_num: 100

Set the pool application type

$ ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
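
The applications enabled on a pool can be checked afterwards:

$ ceph osd pool application get rbd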

Set the CRUSH rule of a pool

$ ceph osd pool set {pool_name} crush_ruleset {ruleset_id}
$ ceph osd pool set ssd crush_ruleset 4
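
The available CRUSH rules and their IDs can be listed first, to find the right ruleset id for the command above:

$ ceph osd crush rule ls
$ ceph osd crush rule dump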

Get the CRUSH rule of a pool

$ ceph osd pool get {pool_name} crush_rule
$ ceph osd pool get test_pool crush_rule
crush_rule: replicated_rule

Get the pool -> PG -> OSD mapping

$ ceph osd getmap -o om    # export the current OSD map to the file 'om'

$ ceph osd getcrushmap -o cm    # export the current CRUSH map to the file 'cm'

$ osdmaptool om --import-crush cm --test-map-pgs-dump --pool {pool_id}    # dump the PG -> OSD mappings of the given pool
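
On a live cluster the same relationship can also be queried directly, without exporting the maps; a minimal sketch, where the object name is only an example:

$ ceph pg ls-by-pool sasta    # list the PGs of the pool together with their acting OSD sets
$ ceph osd map sasta some_object    # show which PG and OSDs a given object name maps to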
