How to configure the Ceph RBD storage type?


Configuring a Ceph RBD StorageClass on a kubeadm-deployed Kubernetes cluster

A Kubernetes StorageClass enables dynamic provisioning and binding of PVCs.

First, install Ceph; see https://blog.csdn.net/qq_42006894/article/details/88424199 for a deployment walkthrough.

1. Deploy the rbd-provisioner

git clone https://github.com/kubernetes-incubator/external-storage.git
cd external-storage/ceph/rbd/deploy
NAMESPACE=ceph
sed -r -i "s/namespace: [^ ]+/namespace: $NAMESPACE/g" ./rbac/clusterrolebinding.yaml ./rbac/rolebinding.yaml
kubectl -n $NAMESPACE apply -f ./rbac
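As an optional sanity check (not in the original article; object names assume the upstream manifests were applied unmodified), you can list the RBAC objects that were just created:

kubectl -n ceph get serviceaccount,role,rolebinding | grep rbd-provisioner
kubectl get clusterrole,clusterrolebinding | grep rbd-provisioner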

2. Create the secrets and the pool

### Create the admin secret
ceph auth get client.admin 2>&1 | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/key
kubectl create secret generic ceph-admin-secret --from-file=/tmp/key --namespace=ceph --type=kubernetes.io/rbd

### Create the Ceph OSD pool
ceph osd pool create kube 128 128
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
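On Ceph Luminous and later, a freshly created pool should also be tagged and initialized for RBD use before images can be created in it. This extra step is not in the original article, but a minimal sketch would be:

ceph osd pool application enable kube rbd
rbd pool init kube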

### Create the user secret
ceph auth get-key client.kube > /tmp/key1
kubectl create secret generic ceph-secret --from-file=/tmp/key1 --namespace=ceph --type=kubernetes.io/rbd
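To confirm both secrets landed in the ceph namespace with the expected type (a quick check, not part of the original article):

kubectl -n ceph get secret ceph-admin-secret ceph-secret
kubectl -n ceph get secret ceph-secret -o jsonpath='{.type}{"\n"}'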

3. Create the StorageClass

cat ceph-sc-ceph.yml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-ceph
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ceph.com/rbd
parameters:
  monitors: 172.16.13.198:6789,172.16.13.199:6789,172.16.13.200:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: ceph
  pool: kube
  userId: kube
  userSecretName: ceph-secret
  userSecretNamespace: ceph
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
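Note that pool and userId must match the pool (kube) and cephx user (client.kube) created in step 2, and userSecretName/userSecretNamespace must point at the secret holding that user's key. The article does not show applying the StorageClass explicitly; a minimal sketch is:

kubectl apply -f ceph-sc-ceph.yml
kubectl get storageclass ceph-rbd-ceph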

4. Configure RBAC for the rbd-provisioner

cd external-storage/ceph/rbd/deploy

Change the default namespace inside these manifests to ceph; they can then be applied as-is.

cat clusterrolebinding.yaml

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner1
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner1
    namespace: ceph
roleRef:
  kind: ClusterRole
  name: rbd-provisioner1
  apiGroup: rbac.authorization.k8s.io

cat clusterrole.yaml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner1
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

cat deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-provisioner1
  namespace: ceph
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner1
    spec:
      containers:
        - name: rbd-provisioner1
          image: "quay.io/external_storage/rbd-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: rbd-provisioner1
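extensions/v1beta1 Deployments only work on older clusters; that API was removed in Kubernetes 1.16. On a newer cluster you would need the apps/v1 form, which additionally requires a selector. A sketch, assuming the same names and image as above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner1
  namespace: ceph
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: rbd-provisioner1
  template:
    metadata:
      labels:
        app: rbd-provisioner1
    spec:
      serviceAccountName: rbd-provisioner1
      containers:
        - name: rbd-provisioner1
          image: "quay.io/external_storage/rbd-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd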

cat rolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner1
  namespace: ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner1
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner1
    namespace: ceph

cat role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner1
  namespace: ceph
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

cat serviceaccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner1
  namespace: ceph

kubectl apply -f .
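Before moving on, it is worth confirming that the provisioner pod comes up; these are standard kubectl commands, with the pod label taken from the Deployment above:

kubectl -n ceph get pods -l app=rbd-provisioner1
kubectl -n ceph logs deployment/rbd-provisioner1 | tail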

5. Create a PVC

cat ceph-pv-ceph.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim-ceph
  namespace: ceph
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd-ceph
  resources:
    requests:
      storage: 10Gi

kubectl apply -f ceph-pv-ceph.yml

Check the PVC; if its status is Bound, provisioning succeeded.

# kubectl get pvc    (sample output; the claim name, size and StorageClass will match what you created)
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim1   Bound    pvc-ff3e450b-4629-11e9-9740-080027a073ff   1Gi        RWO            rbd            51m
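Once the PVC is Bound it can be consumed like any other claim. A minimal pod sketch (the pod name and busybox image are illustrative; the claim name matches the PVC created above):

apiVersion: v1
kind: Pod
metadata:
  name: rbd-test-pod
  namespace: ceph
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-claim-ceph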

If something goes wrong, troubleshoot as follows:

kubectl describe pvc/claim1

kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
mysql-wayne-8744474d7-kz29c        1/1     Running   0          5d19h   10.244.6.85   node2
rbd-provisioner-67b4857bcd-8vsb8   1/1     Running   0          63m     10.244.3.7    node04

# The rbd-provisioner pod is running on node04, so check the container log on node04:
docker logs -f 6ed00b76cb55         # 6ed00b76cb55 is the container ID (find it with docker ps)
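If you would rather not log in to the node (or the cluster does not use Docker as its runtime), the same log can usually be read through kubectl, using the pod name from the output above; add -n ceph if the provisioner runs in the ceph namespace:

kubectl logs -f rbd-provisioner-67b4857bcd-8vsb8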

6. Pitfalls encountered

error retrieving resource lock default/ceph.com-rbd: endpoints “ceph.com-rbd” is forbidden: User “system:serviceaccount:default:rbd-provisioner” cannot get endpoints in the namespace “default”

Fix: in rbd-provisioner.yaml, append the following rule to the end of the ClusterRole:

  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

For further reference, see https://blog.csdn.net/qq_34857250/article/details/82562514

7. Reference: https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd

PS: to set the default StorageClass:

kubectl patch storageclass ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
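The StorageClass name in the patch (ceph-rbd) should be replaced with the one actually created, ceph-rbd-ceph in this article. Only one class should be the default, so if another class already carries the annotation it can be cleared the same way; <old-default> below is a placeholder:

kubectl patch storageclass ceph-rbd-ceph -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl patch storageclass <old-default> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl get storageclass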

Error message:

MountVolume.WaitForAttach failed for volume "pvc-23555713-da9e-11e9-931b-000c29158017" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: ()

Fix: run the following on every Kubernetes node (the kubelet needs the rbd client binary, shipped with ceph-common, to map the image):

yum install -y ceph-common
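After installation, a quick check that the rbd client is now available on the node (its version only needs to be compatible with the Ceph cluster):

which rbd
rbd --version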
