Learning Ceph Together 08: Ceph CRUSH

Ceph CRUSH

Environment

192.168.126.101 ceph01
192.168.126.102 ceph02
192.168.126.103 ceph03
192.168.126.104 ceph04
192.168.126.105 ceph-admin

192.168.48.11 ceph01
192.168.48.12 ceph02
192.168.48.13 ceph03
192.168.48.14 ceph04
192.168.48.15 ceph-admin
### All nodes require a kernel version of 4.5 or above
uname -r
5.2.2-1.el7.elrepo.x86_64
[cephadm@ceph-admin ceph-cluster]$ ceph -s
  cluster:
    id:     231d5528-bab4-49fa-9d68-d5382d2e9f6c
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 11m)
    mgr: ceph04(active, since 11m), standbys: ceph03
    mds: cephfs:2 {0=ceph02=up:active,1=ceph01=up:active} 1 up:standby
    osd: 8 osds: 8 up (since 11m), 8 in (since 17h)
    rgw: 1 daemon active (ceph01)
 
  data:
    pools:   9 pools, 352 pgs
    objects: 251 objects, 14 MiB
    usage:   8.1 GiB used, 64 GiB / 72 GiB avail
    pgs:     352 active+clean

CRUSH: a hash-based data distribution algorithm

  • Input: x (a value derived from the object), the CRUSH map, and a placement rule; the object is first mapped to a PG (obj -> PG -> pg_id)
  • Output: the set of OSDs holding that PG (PG -> OSD); see the example after this list
  • Replicated pools
  • Erasure-coded pools
  • Failure domains
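
You can observe the obj -> PG -> OSD mapping directly with the ceph osd map command. A minimal sketch; the pool name mypool and object name myobject are placeholders, not objects from the cluster above:

ceph osd map mypool myobject
# Prints the pg_id the object hashes to and the up/acting OSD set for that PG.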

placement rule

  • take: selects the entry point, i.e. the bucket where the rule starts descending
  • select: picks OSDs that satisfy the rule
    • replicated pools: firstn
    • erasure-coded pools: indep (see the rule sketch after this list)
  • emit: outputs the selected set
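
The replicated_rule decompiled below uses firstn. For contrast, an erasure-coded rule selects with indep, so a failed OSD is replaced in place instead of shifting the rest of the sequence. A minimal sketch of such a rule; it is not part of this cluster's map, and the id and retry values are illustrative:

rule erasure_rule {
	id 1
	type erasure
	min_size 3
	max_size 6
	step set_chooseleaf_tries 5
	step set_choose_tries 100
	step take default
	step chooseleaf indep 0 type host
	step emit
}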

cluster map

  • monitor map
  • osd map
  • pg map
  • crush map: the list of storage devices, the failure-domain tree, and the rules describing how data placement uses that tree
  • mds map
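
Each of these maps can be inspected from the CLI with standard commands:

ceph mon dump        # monitor map
ceph osd dump        # osd map
ceph pg dump         # pg map (verbose)
ceph osd crush dump  # crush map, as JSON
ceph fs dump         # file system / mds map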

Getting the current CRUSH map

[cephadm@ceph-admin ceph-cluster]$ ceph osd getcrushmap -o crushmap.bin
17
[cephadm@ceph-admin ceph-cluster]$ crushtool -d crushmap.bin -o crushmap.txt
[cephadm@ceph-admin ceph-cluster]$ cat crushmap.txt 
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class hdd
device 7 osd.7 class hdd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 zone
type 10 region
type 11 root

# buckets
host ceph01 {
	id -3		# do not change unnecessarily
	id -4 class hdd		# do not change unnecessarily
	# weight 0.018
	alg straw2
	hash 0	# rjenkins1
	item osd.0 weight 0.009
	item osd.4 weight 0.009
}
host ceph02 {
	id -5		# do not change unnecessarily
	id -6 class hdd		# do not change unnecessarily
	# weight 0.018
	alg straw2
	hash 0	# rjenkins1
	item osd.1 weight 0.009
	item osd.5 weight 0.009
}
host ceph03 {
	id -7		# do not change unnecessarily
	id -8 class hdd		# do not change unnecessarily
	# weight 0.018
	alg straw2
	hash 0	# rjenkins1
	item osd.2 weight 0.009
	item osd.6 weight 0.009
}
host ceph04 {
	id -9		# do not change unnecessarily
	id -10 class hdd		# do not change unnecessarily
	# weight 0.018
	alg straw2
	hash 0	# rjenkins1
	item osd.3 weight 0.009
	item osd.7 weight 0.009
}
root default {
	id -1		# do not change unnecessarily
	id -2 class hdd		# do not change unnecessarily
	# weight 0.070
	alg straw2
	hash 0	# rjenkins1
	item ceph01 weight 0.018
	item ceph02 weight 0.018
	item ceph03 weight 0.018
	item ceph04 weight 0.018
}

# rules
rule replicated_rule {
	id 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}

# end crush map
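
Before editing the map, crushtool can also simulate placements against the compiled binary. A sketch; the input range and replica count are arbitrary here:

# Map inputs x=0..9 through rule 0 with 3 replicas and print the resulting OSD sets
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --min-x 0 --max-x 9 --show-mappings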

Modifying a rule and injecting the new map

[cephadm@ceph-admin ceph-cluster]$ vim crushmap.txt
......
# rules
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 15    ## modified
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

[cephadm@ceph-admin ceph-cluster]$ crushtool -c crushmap.txt -o crushmapnew.bin
[cephadm@ceph-admin ceph-cluster]$ ceph osd setcrushmap -i crushmapnew.bin 
18
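
The number printed by setcrushmap (here 18, up from 17) is the new CRUSH map version. To confirm the change took effect, you can dump the active rule and check max_size:

ceph osd crush rule dump replicated_rule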