Ceph in Practice: RBD

1. Create a pool

ses01:~ # ceph osd pool create test_pool 10240 10240
pool 'test_pool' created
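The two numeric arguments are pg_num and pgp_num, the placement-group counts for the pool. To read the value back after creation, which should report the number passed above:

ses01:~ # ceph osd pool get test_pool pg_num
pg_num: 10240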

2. View and change a pool's replica count

ses01:~ # ceph osd dump | grep 'replicated size'
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 2048 pgp_num 2048 last_change 424 flags hashpspool stripe_width 0
pool 1 'cephfs_metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 175 flags hashpspool stripe_width 0
pool 2 'cephfs_data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 464 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 4 'test_pool' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 10240 pgp_num 10240 last_change 11562 flags hashpspool stripe_width 0

ses01:~ # ceph osd pool set test_pool size 3
set pool 4 size to 3

ses01:~ # ceph osd dump | grep 'replicated size'
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 2048 pgp_num 2048 last_change 424 flags hashpspool stripe_width 0
pool 1 'cephfs_metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 175 flags hashpspool stripe_width 0
pool 2 'cephfs_data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 464 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 4 'test_pool' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 10240 pgp_num 10240 last_change 11564 flags hashpspool stripe_width 0
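The same set subcommand tunes min_size, the minimum number of live replicas required before the pool serves I/O. A sketch, assuming we want writes to require two replicas (the value 2 is an illustrative choice):

ses01:~ # ceph osd pool set test_pool min_size 2
set pool 4 min_size to 2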

3. Delete a pool

ses01:~ # ceph osd pool delete test_pool test_pool --yes-i-really-really-mean-it
pool 'test_pool' removed
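The pool name must be typed twice, together with --yes-i-really-really-mean-it, as a safety check. On newer Ceph releases the monitors additionally refuse deletion until mon_allow_pool_delete is enabled; one way to allow it temporarily (version-dependent, so treat this as an assumption for your cluster):

ses01:~ # ceph tell mon.* injectargs '--mon-allow-pool-delete=true'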

4. Create and map an RBD image
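The log below goes straight to mapping, so the image must already exist. A minimal creation step, assuming a 10 GB image named test_rbd in the rbd pool (the size, given in MB, is an illustrative assumption; on older kernel clients you may also need to restrict features with --image-feature layering):

ses01:~ # rbd create test_rbd --pool rbd --size 10240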

ses01:~ # rbd map test_rbd --pool rbd --id admin
/dev/rbd0

5. View mappings

ses01:~ # rbd showmapped
id pool image    snap device
0  rbd  test_rbd -    /dev/rbd0
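To inspect an image's size, object layout, and features before or after mapping, rbd info works (a quick check, not part of the original log):

ses01:~ # rbd info test_rbd --pool rbd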

6. Unmap the image

ses01:~ # rbd unmap /dev/rbd0
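Note that steps 7 and 8 operate on /dev/rbd0, so if you unmapped it as above, map the image again first (same command as step 4):

ses01:~ # rbd map test_rbd --pool rbd --id admin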

7. Format the device and create a mount point

ses01:~ # mkfs.ext4 -q /dev/rbd0
ses01:~ # mkdir -p /mnt/ceph-rbd0
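-q keeps mkfs quiet; as a quick sanity check, blkid can confirm the filesystem that was written:

ses01:~ # blkid /dev/rbd0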

8. Mount the device

ses01:/mnt/ceph-rbd0 # mount /dev/rbd0 /mnt/ceph-rbd0
ses01:/mnt/ceph-rbd0 # df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 131642164 8 131642156 1% /dev
tmpfs 131650892 144 131650748 1% /dev/shm
tmpfs 131650892 1968928 129681964 2% /run
tmpfs 131650892 0 131650892 0% /sys/fs/cgroup
/dev/sdg2 32900924 8505272 22717676 28% /
/dev/sdg1 151380 4612 146768 4% /boot/efi
/dev/sdg5 32900924 49172 31173776 1% /var/backup
/dev/sdg4 153428 0 153428 0% /var/backup/boot/efi
/dev/sdf1 11242668012 2618768324 8623899688 24% /var/lib/ceph/osd/ceph-5
/dev/sdc1 11242668012 2927767068 8314900944 27% /var/lib/ceph/osd/ceph-2
/dev/sdb1 11242668012 2295717280 8946950732 21% /var/lib/ceph/osd/ceph-1
/dev/sda1 11242668012 3100207472 8142460540 28% /var/lib/ceph/osd/ceph-0
/dev/sde1 11242668012 2510867344 8731800668 23% /var/lib/ceph/osd/ceph-4
/dev/sdd1 11242668012 2356968620 8885699392 21% /var/lib/ceph/osd/ceph-3
tmpfs 26330180 16 26330164 1% /run/use
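To make the mapping and mount persist across reboots, the rbdmap service that ships with Ceph can map images at boot. A sketch, assuming the standard /etc/ceph/rbdmap file, the default admin keyring path, and a systemd host (all of these are assumptions to adapt):

# /etc/ceph/rbdmap -- one image per line: pool/image followed by rbd map options
rbd/test_rbd id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab -- noauto so systemd does not try to mount before rbdmap maps the device
/dev/rbd/rbd/test_rbd /mnt/ceph-rbd0 ext4 noauto,noatime 0 0

ses01:~ # systemctl enable rbdmap.service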
