Ceph Storage Pool Operations

Pool management mainly consists of creating, listing, renaming, and deleting pools. These operations are performed with the subcommands and arguments of ceph osd pool, such as create/ls/rename/rm.

Common pool management commands

#Command format for creating a pool:
ceph osd pool create <poolname> pg_num pgp_num {replicated|erasure}
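A minimal example, assuming a new replicated pool named test-pool and an erasure-coded pool named test-ec-pool (both names are for illustration only):

$ ceph osd pool create test-pool 32 32 replicated #replicated pool with 32 PGs and 32 PGPs
$ ceph osd pool create test-ec-pool 32 32 erasure #erasure-coded pool using the default erasure-code profile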

#List pools:
[cephadmin@ceph-deploy ceph-cluster]$ ceph osd pool ls [detail] #without pool IDs
mypool
myrdb1
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
cephfs-metadata
cephfs-data

[cephadmin@ceph-deploy ceph-cluster]$ ceph osd lspools #with pool IDs
1 mypool
2 myrdb1
3 .rgw.root
4 default.rgw.control
5 default.rgw.meta
6 default.rgw.log
7 cephfs-metadata
8 cephfs-data

#Get pool statistics:
[cephadmin@ceph-deploy ceph-cluster]$ ceph osd pool stats mypool
pool mypool id 1
nothing is going on
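If no pool name is given, ceph osd pool stats reports the statistics of every pool in the cluster:

$ ceph osd pool stats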

#Rename a pool:
$ ceph osd pool rename old-name new-name
$ ceph osd pool rename myrbd1 myrbd2

Show pool usage information:
$ rados df

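Besides rados df, ceph df summarizes per-pool usage together with the cluster's remaining raw capacity, and ceph df detail prints additional per-pool columns (including quotas):

$ ceph df
$ ceph df detail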

Deleting a pool

Deleting a pool destroys all of the data stored in it, so Ceph provides two safeguards against accidental deletion.
The first is the pool's NODELETE flag, which must be false for deletion to be possible; it is false by default.

$ ceph osd pool create mypool2 32 32
pool 'mypool2' created #create a test pool

$ ceph osd pool get mypool2 nodelete
nodelete: false

If it is set to true, the pool cannot be deleted; use the set subcommand to change it back to false:
[cephadmin@ceph-deploy ceph-cluster]$ ceph osd pool set mypool2 nodelete true
set pool 9 nodelete to true
[cephadmin@ceph-deploy ceph-cluster]$ ceph osd pool set mypool2 nodelete false
set pool 9 nodelete to false
[cephadmin@ceph-deploy ceph-cluster]$ ceph osd pool get mypool2 nodelete
nodelete: false

The second safeguard is the cluster-wide option mon_allow_pool_delete, which defaults to false, meaning the monitors do not allow pools to be deleted. When needed, it can be temporarily set to true with the tell command, and set back to false once the target pool has been removed.

$ ceph tell mon.* injectargs --mon-allow-pool-delete=true
mon.ceph-mon1: injectargs:mon_allow_pool_delete = 'true'
mon.ceph-mon2: injectargs:mon_allow_pool_delete = 'true'
mon.ceph-mon3: injectargs:mon_allow_pool_delete = 'true'

$ ceph osd pool rm mypool2 mypool2 --yes-i-really-really-mean-it
pool 'mypool2' removed

$ ceph tell mon.* injectargs --mon-allow-pool-delete=false
mon.ceph-mon1: injectargs:mon_allow_pool_delete = 'false'
mon.ceph-mon2: injectargs:mon_allow_pool_delete = 'false'
mon.ceph-mon3: injectargs:mon_allow_pool_delete = 'false'
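On releases with the centralized configuration database (Mimic and later), the same option can also be changed without injectargs; a sketch of the equivalent workflow:

$ ceph config set mon mon_allow_pool_delete true
$ ceph osd pool rm mypool2 mypool2 --yes-i-really-really-mean-it
$ ceph config set mon mon_allow_pool_delete false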

Pool quotas

A pool supports two quotas that limit what it can store: a maximum amount of data in bytes (max_bytes) and a maximum number of objects (max_objects).

$ ceph osd pool get-quota mypool
quotas for pool 'mypool':
max objects: N/A #objects are not limited by default (object count after files larger than 4 MB are split into objects)
max bytes : N/A #storage space usage is not limited by default

[cephadmin@ceph-deploy ceph-cluster]$ ceph osd pool set-quota mypool max_objects 1000
set-quota max_objects = 1000 for pool mypool #limit the pool to at most 1000 objects

[cephadmin@ceph-deploy ceph-cluster]$ ceph osd pool set-quota mypool max_bytes 10737418240 #limit the pool to at most 10737418240 bytes
set-quota max_bytes = 10737418240 for pool mypool

[cephadmin@ceph-deploy ceph-cluster]$ ceph osd pool get-quota mypool
quotas for pool 'mypool':
max objects: 1 k objects #at most 1000 objects
max bytes : 10 GiB #at most 10 GiB of space
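To remove a quota again, set its value back to 0, which Ceph interprets as unlimited:

$ ceph osd pool set-quota mypool max_objects 0
$ ceph osd pool set-quota mypool max_bytes 0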

Available pool parameters

size: the number of object replicas in the pool; the default is 3 replicas (one primary plus two copies).
$ ceph osd pool get mypool size
size: 3

$ ceph osd pool get mypool min_size
min_size: 2

# min_size: the minimum number of replicas required for the pool to serve I/O. If size is 3 and min_size is also 3, then losing one OSD that holds a replica for the pool makes the pool unavailable. With min_size set to 2 the pool keeps serving I/O after such a failure, and with min_size set to 1 a single surviving replica is enough to serve I/O.
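Both values can be changed with ceph osd pool set. The numbers below are only an example; lowering them reduces data redundancy and safety:

$ ceph osd pool set mypool size 2
$ ceph osd pool set mypool min_size 1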

pg_num: the number of placement groups (PGs) in the pool; query the current value:
$ ceph osd pool get mypool pg_num
pg_num: 32

crush_rule: the CRUSH rule assigned to the pool
$ ceph osd pool get mypool crush_rule
crush_rule: replicated_rule #the default rule for replicated pools
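The available rules can be listed with ceph osd crush rule ls, and a pool can be switched to another existing rule with ceph osd pool set (replicated_rule here is just the default rule name shown above):

$ ceph osd crush rule ls
$ ceph osd pool set mypool crush_rule replicated_rule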


nodelete: controls whether the pool can be deleted; deletion is allowed by default
$ ceph osd pool get mypool nodelete
nodelete: false


nopgchange: controls whether the pool's pg_num and pgp_num can be changed
$ ceph osd pool get mypool nopgchange
nopgchange: false


$ ceph osd pool set mypool pg_num 64 #change the number of PGs for the given pool
set pool 1 pg_num to 64
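On releases before Nautilus, pgp_num must be raised to the same value as well before data is actually rebalanced; newer releases adjust pgp_num automatically:

$ ceph osd pool set mypool pgp_num 64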


nosizechange: controls whether the pool's size can be changed
$ ceph osd pool get mypool nosizechange
nosizechange: false #changing the pool size is allowed by default
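For example, setting nosizechange to true locks the current replica count, and setting it back to false unlocks it again:

$ ceph osd pool set mypool nosizechange true
$ ceph osd pool set mypool nosizechange false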


$ ceph osd pool get-quota mypool
quotas for pool 'mypool':
max objects: 1 k objects
max bytes : 10 GiB

$ ceph osd pool set-quota mypool max_bytes 21474836480
set-quota max_bytes = 21474836480 for pool mypool

$ ceph osd pool set-quota mypool max_objects 1000
set-quota max_objects = 1000 for pool mypool

$ ceph osd pool get-quota mypool
quotas for pool 'mypool':
max objects: 1 k objects
max bytes : 20 GiB

noscrub and nodeep-scrub: control whether regular (light) scrubbing and deep scrubbing are disabled for the pool; disabling them can be a temporary workaround for high I/O load
$ ceph osd pool get mypool noscrub
noscrub: false #check whether regular scrubbing is disabled; by default it is not disabled, i.e. scrubbing is enabled

$ ceph osd pool set mypool noscrub true
set pool 1 noscrub to true #set noscrub to true on a specific pool so that regular scrubbing is no longer performed
$ ceph osd pool get mypool noscrub
noscrub: true #regular scrubbing is now disabled

$ ceph osd pool get mypool nodeep-scrub
nodeep-scrub: false #check whether deep scrubbing is disabled; by default it is not disabled, i.e. deep scrubbing is enabled

$ ceph osd pool set mypool nodeep-scrub true
set pool 1 nodeep-scrub to true #set nodeep-scrub to true on a specific pool so that deep scrubbing is no longer performed

$ ceph osd pool get mypool nodeep-scrub
nodeep-scrub: true #deep scrubbing is now disabled
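Because disabling scrubbing is meant only as a temporary measure, both flags should be set back to false afterwards:

$ ceph osd pool set mypool noscrub false
$ ceph osd pool set mypool nodeep-scrub false
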
scrub_min_interval: the minimum interval between scrubs for this pool. It is not set by default; in that case the cluster-wide osd_scrub_min_interval option from the configuration applies.
$ ceph osd pool get mypool scrub_min_interval
Error ENOENT: option 'scrub_min_interval' is not set on pool 'mypool'

scrub_max_interval: the maximum interval between scrubs for this pool. It is not set by default; in that case the cluster-wide osd_scrub_max_interval option applies.
$ ceph osd pool get mypool scrub_max_interval
Error ENOENT: option 'scrub_max_interval' is not set on pool 'mypool'

deep_scrub_interval: the interval between deep scrubs for this pool. It is not set by default; in that case the cluster-wide osd_deep_scrub_interval option applies.
$ ceph osd pool get mypool deep_scrub_interval
Error ENOENT: option 'deep_scrub_interval' is not set on pool 'mypool'

#default configuration on a ceph node:
[root@ceph-node1 ~]# ll /var/run/ceph/
total 0
srwxr-xr-x 1 ceph ceph 0 Nov 3 12:22 ceph-osd.3.asok
srwxr-xr-x 1 ceph ceph 0 Nov 3 12:22 ceph-osd.6.asok
srwxr-xr-x 1 ceph ceph 0 Nov 3 12:23 ceph-osd.9.asok

[root@ceph-node1 ~]# ceph daemon osd.3 config show | grep scrub
"mds_max_scrub_ops_in_progress": "5", "mon_scrub_inject_crc_mismatch": "0.000000", "mon_scrub_inject_missing_keys": "0.000000", "mon_scrub_interval": "86400", "mon_scrub_max_keys": "100", "mon_scrub_timeout": "300", "mon_warn_not_deep_scrubbed": "0", "mon_warn_not_scrubbed": "0", "osd_debug_deep_scrub_sleep": "0.000000", "osd_deep_scrub_interval": "604800.000000", #定义深度清洗间隔,604800 秒=7 天
"osd_deep_scrub_keys": "1024", "osd_deep_scrub_large_omap_object_key_threshold": "200000", "osd_deep_scrub_large_omap_object_value_sum_threshold": "1073741824", "osd_deep_scrub_randomize_ratio": "0.150000", "osd_deep_scrub_stride": "524288", "osd_deep_scrub_update_digest_min_age": "7200", "osd_max_scrubs": "1", #定义一个 ceph OSD daemon 内能够同时进行 scrubbing 的操作数
"osd_op_queue_mclock_scrub_lim": "0.001000", "osd_op_queue_mclock_scrub_res": "0.000000", "osd_op_queue_mclock_scrub_wgt": "1.000000", "osd_requested_scrub_priority": "120",
"osd_scrub_auto_repair": "false", "osd_scrub_auto_repair_num_errors": "5", "osd_scrub_backoff_ratio": "0.660000", "osd_scrub_begin_hour": "0", "osd_scrub_begin_week_day": "0", "osd_scrub_chunk_max": "25", "osd_scrub_chunk_min": "5", "osd_scrub_cost": "52428800", "osd_scrub_during_recovery": "false", "osd_scrub_end_hour": "24", "osd_scrub_end_week_day": "7", "osd_scrub_interval_randomize_ratio": "0.500000", "osd_scrub_invalid_stats": "true", #定义 scrub 是否有效
"osd_scrub_load_threshold": "0.500000", "osd_scrub_max_interval": "604800.000000", #定义最大执行 scrub 间隔,604800 秒=7天
"osd_scrub_max_preemptions": "5", "osd_scrub_min_interval": "86400.000000", #定义最小执行普通 scrub 间隔,86400 秒=1天
"osd_scrub_priority": "5", "osd_scrub_sleep": "0.000000",

Pool snapshots

$ ceph osd pool ls
#Method 1: ceph osd pool mksnap {pool-name} {snap-name}
$ ceph osd pool mksnap mypool mypool-snap
created pool mypool snap mypool-snap
#Method 2: rados -p {pool-name} mksnap {snap-name}
$ rados -p mypool mksnap mypool-snap2
created pool mypool snap mypool-snap2

Verify the snapshots

$ rados lssnap -p mypool
1 mypool-snap 2024.8.03 16:12:56
2 mypool-snap2 2024.8.03 16:13:40
2 snaps

Rolling back to a snapshot
Test: upload a file, create a snapshot, delete the file, and then restore it from the snapshot; the rollback is performed per object.
rados rollback <obj-name> <snap-name>   #roll back an object to the given snapshot

#upload a file
[cephadmin@ceph-deploy ceph-cluster]$ rados -p mypool put testfile /etc/hosts
#verify the file
[cephadmin@ceph-deploy ceph-cluster]$ rados -p mypool ls
msg1
testfile
my.conf
#create a snapshot
[cephadmin@ceph-deploy ceph-cluster]$ ceph osd pool mksnap mypool mypool-snapshot001
created pool mypool snap mypool-snapshot001
#verify the snapshots
[cephadmin@ceph-deploy ceph-cluster]$ rados lssnap -p mypool
3 mypool-snap 2020.11.04 14:11:41
4 mypool-snap2 2020.11.04 14:11:49
5 mypool-conf-bak 2020.11.04 14:18:41
6 mypool-snapshot001 2020.11.04 14:38:50
4 snaps
#delete the file
[cephadmin@ceph-deploy ceph-cluster]$ rados -p mypool rm testfile
#after the file is deleted, deleting it again fails with a message that the file does not exist
[cephadmin@ceph-deploy ceph-cluster]$ rados -p mypool rm testfile
error removing mypool>testfile: (2) No such file or directory
#restore the object from the snapshot
[cephadmin@ceph-deploy ceph-cluster]$ rados rollback -p mypool testfile mypool-snapshot001
rolled back pool mypool to snapshot mypool-snapshot001
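Before deleting it again, the restored object can be read back to confirm its contents (the local output path below is only an example):
[cephadmin@ceph-deploy ceph-cluster]$ rados -p mypool get testfile /tmp/testfile.restored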
#running the delete again now succeeds
[cephadmin@ceph-deploy ceph-cluster]$ rados -p mypool rm testfile

Delete a pool snapshot

[cephadmin@ceph-deploy ceph-cluster]$ rados lssnap -p mypool
3 mypool-snap 2020.11.04 14:11:41
4 mypool-snap2 2020.11.04 14:11:49
5 mypool-conf-bak 2020.11.04 14:18:41
6 mypool-snapshot001 2020.11.04 14:38:50
4 snaps
[cephadmin@ceph-deploy ceph-cluster]$ ceph osd pool rmsnap mypool mypool-snap
removed pool mypool snap mypool-snap
[cephadmin@ceph-deploy ceph-cluster]$ rados lssnap -p mypool
4 mypool-snap2 2020.11.04 14:11:49
5 mypool-conf-bak 2020.11.04 14:18:41
6 mypool-snapshot001 2020.11.04 14:38:50
3 snaps
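A snapshot can also be removed with the rados command; for example, to delete the remaining mypool-snap2 snapshot:
[cephadmin@ceph-deploy ceph-cluster]$ rados -p mypool rmsnap mypool-snap2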