Ceph: customizing the CRUSH map

Using a custom CRUSH map in Ceph to achieve storage isolation


I. Updating the rules

1. Get the current rule information

ceph osd getcrushmap -o ./crushmap.bin
crushtool -d crushmap.bin -o ./crushmap.txt

2. Modify the rules

Below we define rules for the hdd and ssd device classes, which can later be used for cache tiering.
(1) hdd

rule hdd_rule {
	id 2                                   
	type replicated
	min_size 1
	max_size 10
	step take default class hdd
	step chooseleaf firstn 0 type host
	step emit
}

(2) ssd

rule ssd_rule {
	id 1
	type replicated
	min_size 1
	max_size 10
	step take default class ssd
	step chooseleaf firstn 0 type host
	step emit
}

3. Rule syntax explanation

rule <rulename> {
id <id>                   [integer rule id]
type [replicated|erasure] [rule type: for a replicated pool or an erasure-coded pool]
min_size <min-size>       [if the pool has fewer replicas than this value, the rule is not applied to that pool]
max_size <max-size>       [if the pool has more replicas than this value, the rule is not applied to that pool]
step take <bucket-name>   [the bucket this rule starts from, normally default; a device class can be appended, e.g. step take default class ssd]
step [chooseleaf|choose] [firstn] <num> type <bucket-type>
# num == 0             select N buckets (N = the pool's replica count)
# num > 0 and num < N  select num buckets
# num < 0              select N - |num| buckets
step emit
}
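
Before injecting a modified map, a rule can be sanity-checked offline with crushtool's test mode. A minimal sketch (the rule id and replica count are examples; point -i at the compiled map, e.g. the crushmap-new.bin produced in step (3) of section 4 below):

### simulate which OSDs rule id 2 would pick for 3 replicas
crushtool -i crushmap-new.bin --test --rule 2 --num-rep 3 --show-mappings
### summarize how many PGs would land on each OSD
crushtool -i crushmap-new.bin --test --rule 2 --num-rep 3 --show-utilization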

The rules above are configured on top of the device classes shown below. By default, the crush class of every OSD is hdd; if you want to define additional rules, you can change the crush class of the disks.

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class hdd
device 9 osd.9 class hdd
device 10 osd.10 class hdd
device 11 osd.11 class hdd
device 12 osd.12 class hdd
device 13 osd.13 class hdd
device 14 osd.14 class hdd
device 15 osd.15 class hdd
device 16 osd.16 class hdd
device 17 osd.17 class hdd
device 18 osd.18 class hdd
device 19 osd.19 class hdd
device 20 osd.20 class hdd
device 21 osd.21 class hdd
device 22 osd.22 class hdd
device 23 osd.23 class hdd
device 24 osd.24 class ssd
device 25 osd.25 class ssd
device 26 osd.26 class ssd

4. Changing the OSD crush class
The tree below shows the cluster after the classes have already been changed:

[root@k8s-node-2 ~]# ceph osd tree
ID CLASS WEIGHT    TYPE NAME           STATUS REWEIGHT PRI-AFF 
-1       171.38699 root default                                
-3        57.12900     host k8s-node-1                         
 0   hdd   7.12900         osd.0           up  1.00000 1.00000 
 1   hdd   7.12900         osd.1           up  1.00000 1.00000 
 2   hdd   7.12900         osd.2           up  1.00000 1.00000 
 3   hdd   7.12900         osd.3           up  1.00000 1.00000 
 4   hdd   7.12900         osd.4           up  1.00000 1.00000 
 5   hdd   7.12900         osd.5           up  1.00000 1.00000 
 6   hdd   7.12900         osd.6           up  1.00000 1.00000 
 7   hdd   7.12900         osd.7           up  1.00000 1.00000 
24   ssd   0.09799         osd.24          up  1.00000 1.00000 
-5        57.12900     host k8s-node-2                         
 8   hdd   7.12900         osd.8           up  1.00000 1.00000 
11   hdd   7.12900         osd.11          up  1.00000 1.00000 
12   hdd   7.12900         osd.12          up  1.00000 1.00000 
15   hdd   7.12900         osd.15          up  1.00000 1.00000 
16   hdd   7.12900         osd.16          up  1.00000 1.00000 
17   hdd   7.12900         osd.17          up  1.00000 1.00000 
18   hdd   7.12900         osd.18          up  1.00000 1.00000 
19   hdd   7.12900         osd.19          up  1.00000 1.00000 
25   ssd   0.09799         osd.25          up  1.00000 1.00000 
-7        57.12900     host k8s-node-3                         
 9   hdd   7.12900         osd.9           up  1.00000 1.00000 
10   hdd   7.12900         osd.10          up  1.00000 1.00000 
13   hdd   7.12900         osd.13          up  1.00000 1.00000 
14   hdd   7.12900         osd.14          up  1.00000 1.00000 
20   hdd   7.12900         osd.20          up  1.00000 1.00000 
21   hdd   7.12900         osd.21          up  1.00000 1.00000 
22   hdd   7.12900         osd.22          up  1.00000 1.00000 
23   hdd   7.12900         osd.23          up  1.00000 1.00000 
26   ssd   0.09799         osd.26          up  1.00000 1.00000 

View the crush classes:

[root@k8s-node-2 ~]# ceph osd crush class ls
[
    "hdd",
    "ssd"
]

The operations to reproduce this are as follows.
(1) Remove all ssd OSDs from the hdd class:

for i in 24 25 26; 
do 
	ceph osd crush rm-device-class osd.$i;
done

(2) Add the OSDs that were just removed to the ssd class:

for i in 24 25 26; 
do 
	ceph osd crush set-device-class ssd osd.$i;
done 
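
To verify that the class change took effect, the members of a class can be listed; assuming the loops above succeeded, this should print the ids 24, 25 and 26:

ceph osd crush class ls-osd ssd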

(3) Re-export the crush map, modify it, and inject the update

### export
ceph osd getcrushmap -o ./crushmap.bin
crushtool -d crushmap.bin -o ./crushmap.txt
### edit
vim crushmap.txt
### compile and inject the new map
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
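
As an alternative to decompiling and editing the map by hand, a replicated rule bound to a device class can also be created directly from the CLI. A sketch equivalent to the hdd_rule/ssd_rule defined above (the rule names are just the ones used in this article):

### ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <device-class>
ceph osd crush rule create-replicated hdd_rule default host hdd
ceph osd crush rule create-replicated ssd_rule default host ssd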

(4) Create a pool that uses the rule. Here hdd_msdk is the rule name (use whichever rule name actually exists in your crush map, e.g. the hdd_rule/ssd_rule defined above) and rbd_msdk is the pool name:
ceph osd pool create rbd_msdk 32 32 hdd_msdk

At this point, these three OSDs have been isolated from the rest of the cluster.
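
To confirm the isolation, check which rule the pool uses and where its PGs are acting (names as used above):

### the pool should report the crush rule it was created with
ceph osd pool get rbd_msdk crush_rule
### the acting OSDs of every PG in the pool should belong to the expected class
ceph pg ls-by-pool rbd_msdk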

II. Data migration between rbd pools

Setting up the Kubernetes ceph-csi environment

1. Initialize the pool

rbd pool init rbd_msdk

2. Create a new user for rbd_msdk and ceph-csi

ceph auth get-or-create client.msdk mon 'allow r' osd 'allow rwx pool=rbd_msdk' -o ceph.client.kubernetes.keyring

[root@k8s-master1 kubernetes]# cat ceph.client.kubernetes.keyring 
[client.msdk]
	key = AQCMaxdgQz8UBhAAm5XY3/OCZQU+xxxxxxx==
  • Note: the key shown above is only an example; in your own configuration use the key actually returned by the command.
  • This user key will be needed in the configuration below; if the keyring file is misplaced, the key can be re-read from the cluster as shown after this list.
  • For a Ceph Nautilus (N) cluster the command should be: ceph auth get-or-create client.msdk mon 'profile rbd' osd 'profile rbd pool=rbd_msdk' mgr 'profile rbd pool=kubernetes'
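
A quick way to re-read the key for this user (assuming the user was created as above):

ceph auth get-key client.msdk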

3. Generate the ceph-csi ConfigMap for Kubernetes

[root@k8s-master1 kubernetes]# ceph mon dump
dumped monmap epoch 3
epoch 3
fsid e808ab4d-c7c8-xxxxxxxxxxx
last_changed 2020-10-20 17:01:35.808185
created 2020-10-15 20:28:28.739282
min_mon_release 14 (nautilus)
0: [v2:x.x.x.x:3300/0,v1:x.x.x.x:6789/0] mon.k8s-node-2
.....
  • Note: two pieces of information are needed here. The first is the fsid (which can be thought of as the cluster ID); the second is the monitor line 0: [v2:x.x.x.x:3300/0,v1:x.x.x.x:6789/0] mon.k8s-node-2 (there may be several monitors; only one was configured in this experiment).
  • Also, the current ceph-csi only supports the v1 protocol, so only the monitors' v1 IP and port can be used.
[root@k8s-master1 kubernetes]# cat csi-config-map.yaml 
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "e808ab4d-c7c8-xxxxxxxxxxx",
        "monitors": [
          "ip:6789",
          "ip:6789",
          "ip:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config

On the Kubernetes cluster, apply this ConfigMap:

kubectl apply -f csi-config-map.yaml

4. Generate the cephx Secret for ceph-csi

cat <<EOF > csi-rbd-secret-msdk.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret-msdk
  namespace: default
stringData:
  userID: msdk
  userKey: AQCMaxdgQz8UBhAAm5XY3/OCZQU+xxxxxx==
EOF

This is where the ID and key of the user created earlier (client.msdk) are used.
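
The Secret manifest generated above still needs to be applied to the cluster:

kubectl apply -f csi-rbd-secret-msdk.yaml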

5. Create the StorageClass

$ cat <<EOF > csi-rbd-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: csi-rbd-sc-msdk
provisioner: rbd.csi.ceph.com
parameters:
   clusterID: e808ab4d-c7c8-xxxxxxxxxxx
   imageFeatures: layering
   pool: rbd_msdk
   csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret-msdk
   csi.storage.k8s.io/provisioner-secret-namespace: default
   csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret-msdk
   csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
mountOptions:
   - discard
EOF
$ kubectl apply -f csi-rbd-sc.yaml

6. Test: create a PVC


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: msdk-demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc-msdk
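
Assuming the manifest above is saved as msdk-demo-pvc.yaml (the file name is an assumption), apply it with:

kubectl apply -f msdk-demo-pvc.yaml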

## Output showing the PVC was bound successfully
[root@k8s-master1 msdk]# kubectl get pvc | grep msdk-demo-pvc
msdk-demo-pvc                    Bound     pvc-7668c7c3-fab5-4327-8778-90e65d2d5414   1Gi        RWO            csi-rbd-sc-msdk     89s
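
Because the PVC uses volumeMode: Block, a pod that consumes it must attach it via volumeDevices rather than volumeMounts. A minimal sketch in the heredoc style used earlier (the pod name, image and device path are assumptions):

cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: Pod
metadata:
  name: msdk-demo-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:
        - name: data            # raw block device exposed inside the container
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: msdk-demo-pvc
EOF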
