Ceph: creating different types of pools on different OSD disks

Copyright notice: This is an original post by the author, licensed under CC 4.0 BY-SA. When reposting, please include the original source link and this notice.
Original link: https://blog.csdn.net/zuopiezia/article/details/102394022

### View the CRUSH map


 
 
```
[root@ceph01 ~]# ceph osd getcrushmap -o crushmap.txt
97
[root@ceph01 ~]# crushtool -d crushmap.txt -o crushmap-decompile
[root@ceph01 ~]# ls
anaconda-ks.cfg  crushmap-decompile  crushmap.txt
```
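If the decompiled map is edited by hand, it has to be recompiled and injected back into the cluster. A minimal sketch of that round trip, assuming the file names used above (`crushmap-compiled` is just a new output name):

```bash
# Recompile the edited text map and load it back into the cluster.
crushtool -c crushmap-decompile -o crushmap-compiled
ceph osd setcrushmap -i crushmap-compiled
```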


In the decompiled map, the `# types` section lists the 11 bucket types:
 


 
 
```
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root
```
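For orientation, the bucket definitions themselves appear further down in the same decompiled file. The stanza below is an illustrative example of a host bucket; the ids and weights are hypothetical, not taken from this cluster:

```
# Illustrative host bucket from a decompiled CRUSH map
# (ids and weights are hypothetical).
host ceph01 {
        id -3           # do not change unnecessarily
        alg straw2
        hash 0          # rjenkins1
        item osd.0 weight 0.049
        item osd.1 weight 0.049
}
```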

 

### View OSD device classes


 
 
```
[root@ceph01 ~]# ceph osd crush class ls-osd hdd
0
1
2
3
4
5
6
7
```
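A single OSD's class can also be queried directly on Luminous and later releases; at this point `osd.0` should still report `hdd`:

```bash
# Print the device class of one OSD (Luminous and later).
ceph osd crush get-device-class osd.0
```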


### Remove the device class from some OSDs


 
 
```
[root@ceph01 ~]# ceph osd crush rm-device-class 0
done removing class of osd(s): 0
[root@ceph01 ~]# ceph osd crush rm-device-class 2
done removing class of osd(s): 2
[root@ceph01 ~]# ceph osd crush rm-device-class 4
done removing class of osd(s): 4
[root@ceph01 ~]# ceph osd crush rm-device-class 6
done removing class of osd(s): 6
```


### Set a new device class on the OSDs


 
 
```
[root@ceph01 ~]# ceph osd crush set-device-class ssd 0
set osd(s) 0 to class 'ssd'
[root@ceph01 ~]# ceph osd crush set-device-class ssd 2
set osd(s) 2 to class 'ssd'
[root@ceph01 ~]# ceph osd crush set-device-class ssd 4
set osd(s) 4 to class 'ssd'
[root@ceph01 ~]# ceph osd crush set-device-class ssd 6
set osd(s) 6 to class 'ssd'
```
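The remove-and-set sequence above can be scripted in one pass. A minimal sketch, assuming the same four OSD ids; the old class is removed first because `set-device-class` refuses to overwrite an existing class:

```bash
# Reclassify OSDs 0, 2, 4 and 6 as ssd.
for id in 0 2 4 6; do
    ceph osd crush rm-device-class "$id"
    ceph osd crush set-device-class ssd "$id"
done
```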

### Verify that the OSD classes changed
 


 
 
```
[root@ceph01 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       0.39038 root default
-3       0.09760     host ceph01
 1   hdd 0.04880         osd.1       up  1.00000 1.00000
 0   ssd 0.04880         osd.0       up  1.00000 1.00000
-5       0.09760     host ceph02
 3   hdd 0.04880         osd.3       up  1.00000 1.00000
 2   ssd 0.04880         osd.2       up  1.00000 1.00000
-7       0.09760     host ceph03
 5   hdd 0.04880         osd.5       up  1.00000 1.00000
 4   ssd 0.04880         osd.4       up  1.00000 1.00000
-9       0.09760     host ceph04
 7   hdd 0.04880         osd.7       up  1.00000 1.00000
 6   ssd 0.04880         osd.6       up  1.00000 1.00000
[root@ceph01 ~]#
```

### The class list has gone from hdd alone to two classes


 
 
```
[root@ceph01 ~]# ceph osd crush class ls
[
    "hdd",
    "ssd"
]
```


### Create a bucket of type root
 


 
 
```
[root@ceph01 ~]# ceph osd crush add-bucket ceph-ssd root
added bucket ceph-ssd type root to crush map
```

### Create a rule

This creates a rule named `ssd` that places replicas on distinct hosts under the `ceph-ssd` root; the pool created at the end references it by name.
 

```
[root@ceph01 ~]# ceph osd crush rule create-simple ssd ceph-ssd host firstn
```
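On Luminous and later there is also a class-aware shortcut that builds an equivalent rule against the existing `default` root, with no hand-made bucket tree needed. A sketch; `ssd-rule` is a hypothetical name:

```bash
# Replicated rule that selects only ssd-class devices under the
# default root, using host as the failure domain (Luminous+).
ceph osd crush rule create-replicated ssd-rule default host ssd
```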
 
 


### Create four buckets of type host
 


 
 
```
[root@ceph01 ~]# ceph osd crush add-bucket ceph01-ssd host
added bucket ceph01-ssd type host to crush map
[root@ceph01 ~]# ceph osd crush add-bucket ceph02-ssd host
added bucket ceph02-ssd type host to crush map
[root@ceph01 ~]# ceph osd crush add-bucket ceph03-ssd host
added bucket ceph03-ssd type host to crush map
[root@ceph01 ~]# ceph osd crush add-bucket ceph04-ssd host
added bucket ceph04-ssd type host to crush map
```


### Move the four buckets under the root bucket
 


 
 
```
# ceph osd crush move {bucket-name} {bucket-type}={bucket-name} [...]
[root@ceph01 ~]# ceph osd crush move ceph04-ssd root=ceph-ssd
moved item id -35 name 'ceph04-ssd' to location {root=ceph-ssd} in crush map
[root@ceph01 ~]# ceph osd crush move ceph03-ssd root=ceph-ssd
moved item id -34 name 'ceph03-ssd' to location {root=ceph-ssd} in crush map
[root@ceph01 ~]# ceph osd crush move ceph02-ssd root=ceph-ssd
moved item id -33 name 'ceph02-ssd' to location {root=ceph-ssd} in crush map
[root@ceph01 ~]# ceph osd crush move ceph01-ssd root=ceph-ssd
moved item id -32 name 'ceph01-ssd' to location {root=ceph-ssd} in crush map
[root@ceph01 ~]#
```
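Both the bucket creation above and these moves can be collapsed into one loop. A minimal sketch, assuming the same four host names:

```bash
# Create one SSD host bucket per node and place it under ceph-ssd.
for h in ceph01 ceph02 ceph03 ceph04; do
    ceph osd crush add-bucket "${h}-ssd" host
    ceph osd crush move "${h}-ssd" root=ceph-ssd
done
```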

### Move all ssd-class OSDs under their matching ceph0X-ssd buckets
 


 
 
```
# ceph osd crush move osd.x {bucket-type}={bucket-name} [...]
[root@ceph01 ~]# ceph osd crush move osd.0 host=ceph01-ssd
moved item id 0 name 'osd.0' to location {host=ceph01-ssd} in crush map
[root@ceph01 ~]# ceph osd crush move osd.2 host=ceph02-ssd
moved item id 2 name 'osd.2' to location {host=ceph02-ssd} in crush map
[root@ceph01 ~]# ceph osd crush move osd.4 host=ceph03-ssd
moved item id 4 name 'osd.4' to location {host=ceph03-ssd} in crush map
[root@ceph01 ~]# ceph osd crush move osd.6 host=ceph04-ssd
moved item id 6 name 'osd.6' to location {host=ceph04-ssd} in crush map
[root@ceph01 ~]#
```
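These per-OSD moves can likewise be driven from an explicit mapping. A sketch (bash 4+ for the associative array) with this cluster's osd-to-bucket mapping hard-coded; adjust it to your own layout:

```bash
# Move each ssd-class OSD into its per-host SSD bucket.
declare -A ssd_bucket=(
    [osd.0]=ceph01-ssd
    [osd.2]=ceph02-ssd
    [osd.4]=ceph03-ssd
    [osd.6]=ceph04-ssd
)
for osd in "${!ssd_bucket[@]}"; do
    ceph osd crush move "$osd" host="${ssd_bucket[$osd]}"
done
```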

### View the resulting OSD distribution
 


 
 
```
[root@ceph01 ~]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME           STATUS REWEIGHT PRI-AFF
-31       0.19519 root ceph-ssd
-32       0.04880     host ceph01-ssd
  0   ssd 0.04880         osd.0           up  1.00000 1.00000
-33       0.04880     host ceph02-ssd
  2   ssd 0.04880         osd.2           up  1.00000 1.00000
-34       0.04880     host ceph03-ssd
  4   ssd 0.04880         osd.4           up  1.00000 1.00000
-35       0.04880     host ceph04-ssd
  6   ssd 0.04880         osd.6           up  1.00000 1.00000
 -1       0.19519 root default
 -3       0.04880     host ceph01
  1   hdd 0.04880         osd.1           up  1.00000 1.00000
 -5       0.04880     host ceph02
  3   hdd 0.04880         osd.3           up  1.00000 1.00000
 -7       0.04880     host ceph03
  5   hdd 0.04880         osd.5           up  1.00000 1.00000
 -9       0.04880     host ceph04
  7   hdd 0.04880         osd.7           up  1.00000 1.00000
[root@ceph01 ~]#
```


### Create a pool backed by the ssd rule


 
 
```
[root@ceph01 ~]# ceph osd pool create pool_ssd 32 replicated ssd
pool 'pool_ssd' created
```
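To confirm the pool really uses the ssd rule, read the rule back and map a test object; `testobj` below is just a hypothetical object name:

```bash
# Show which CRUSH rule the new pool uses.
ceph osd pool get pool_ssd crush_rule

# Map a hypothetical object; the listed OSDs should all be
# ssd-class (0, 2, 4 or 6 in this cluster).
ceph osd map pool_ssd testobj
```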

### List the pools
 


 
 
```
[root@ceph01 ~]# ceph osd pool ls
images
images2
images3
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
cephfs_data
cephfs_metadata
pool_ssd
[root@ceph01 ~]#
```


Reference: https://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-map-bucket-types
