Storage Quota and Allocation Management with Kubernetes and Ceph

Contents

1. How Ceph Persistent Volume Quotas Work
2. Cluster Planning
3. Software Versions
4. Setting Up Ceph Persistent Volumes
4.1 Preparing the Base Environment
4.2 CephFS Provisioner Configuration
4.3 RBD Provisioner Configuration
4.4 Testing Ceph Persistent Volume Quotas
4.4.1 Testing the CephFS Persistent Volume Quota
4.4.2 Testing the RBD Persistent Volume Quota
4.5 Ceph Persistent Volume Clients
5. Common Issues
5.1 CephFS Persistent Volume Quota Not Taking Effect
5.2 RBD Persistent Volumes Do Not Support Multi-Node Mounting

Setting Up Quota-Enabled Ceph Persistent Volumes on Kubernetes

1. How Ceph Persistent Volume Quotas Work

- CephFS uses user-space quotas, which depend on ceph-fuse and attr. (Note: kernel-space quota enforcement requires kernel >= 4.17; since upgrading the kernels of the cluster machines would be costly, it was not adopted.)

Example:

setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir (note: the directory must already be mounted via ceph-fuse)

- RBD quotas are enforced through the size of the underlying image.

Example:

rbd create mypool/myimage --size 102400
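To confirm that a quota is actually in place, both mechanisms can be inspected with standard Ceph tooling; this is a minimal sketch using the placeholder names from the examples above.

# Read back the CephFS directory quota (requires attr and a ceph-fuse mount)
getfattr -n ceph.quota.max_bytes /some/dir

# Show the RBD image whose size acts as the quota, and grow it later if needed
rbd info mypool/myimage
rbd resize mypool/myimage --size 204800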

2. Cluster Planning

Storage mount mapping

Mount type: RBD
- Ceph remote path: /dev/rbd0 (the backing image name can be looked up from the PV's rbd image field)
- Kubernetes node host path: /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-image-{RBD_IMAGE}

Mount type: CephFS
- Ceph remote path: ceph-mon:/{ROOT_DIR}/{CEPHFS_PATH} ({CEPHFS_PATH} can be looked up from the PV's cephfs path field)
- Kubernetes node host path: /var/lib/kubelet/pods/{POD_ID}/volumes/kubernetes.io~cephfs/{PV_ID}

Note: cephfs clients should operate on the remote directory, while rbd clients should operate on the Kubernetes node host directory.

3. Software Versions

Operating system: CentOS Linux release 7.9.2009 (Core)

Kubernetes version: 1.13.5

Kernel version: 4.14.49

Ceph version: 13.2.10
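These versions can be double-checked on each node with standard commands; an optional sketch, not part of the original procedure:

cat /etc/redhat-release
uname -r
kubectl version --short

# requires the Ceph client packages installed in section 4.1
ceph --version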

4. Setting Up Ceph Persistent Volumes

4.1 Preparing the Base Environment

a. Prepare the Ceph admin key

Run the following command on the Ceph cluster:

ceph auth get-key client.admin | base64

b. Initialize the Ceph base environment

Run the following commands on the Ceph cluster:

# Initialize cephfs

ceph osd pool create cephfs_data 2 2

ceph osd pool create cephfs_metadata 2 2

ceph osd pool application enable cephfs_data cephfs

ceph osd pool application enable cephfs_metadata cephfs

ceph fs new cephfs cephfs_metadata cephfs_data

# Initialize rbd

ceph osd pool create rbd_data 2 2

ceph osd pool application enable rbd_data rbd
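Before moving on, the pools and the new filesystem can be sanity-checked with read-only queries on the Ceph cluster (output formats vary slightly between releases):

# Confirm cephfs exists and uses cephfs_metadata/cephfs_data
ceph fs ls

# Confirm the pools exist and carry the expected application tags
ceph osd pool ls detail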

c. Initialize the Kubernetes base environment

Run the following commands on every Kubernetes node to install the required Ceph client tools:

# Install cephfs dependencies

yum install -y ceph-fuse attr

# Install rbd dependencies

yum install -y ceph-common
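A quick, optional check that each node actually has the required client tools:

# cephfs user-space client and xattr tools
ceph-fuse --version
which setfattr getfattr

# rbd client shipped with ceph-common
rbd --version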

4.2 CephFS Provisioner Configuration

Create the RBAC resources

kind: ClusterRole

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: cephfs-provisioner

  namespace: cephfs

rules:

  - apiGroups: [""]

    resources: ["persistentvolumes"]

    verbs: ["get", "list", "watch", "create", "delete"]

  - apiGroups: [""]

    resources: ["persistentvolumeclaims"]

    verbs: ["get", "list", "watch", "update"]

  - apiGroups: ["storage.k8s.io"]

    resources: ["storageclasses"]

    verbs: ["get", "list", "watch"]

  - apiGroups: [""]

    resources: ["events"]

    verbs: ["create", "update", "patch"]

  - apiGroups: [""]

    resources: ["services"]

    resourceNames: ["kube-dns","coredns"]

    verbs: ["list", "get"]

---

kind: ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: cephfs-provisioner

subjects:

  - kind: ServiceAccount

    name: cephfs-provisioner

    namespace: cephfs

roleRef:

  kind: ClusterRole

  name: cephfs-provisioner

  apiGroup: rbac.authorization.k8s.io

---

apiVersion: rbac.authorization.k8s.io/v1

kind: Role

metadata:

  name: cephfs-provisioner

  namespace: cephfs

rules:

  - apiGroups: [""]

    resources: ["secrets"]

    verbs: ["create", "get", "delete"]

  - apiGroups: [""]

    resources: ["endpoints"]

    verbs: ["get", "list", "watch", "create", "update", "patch"]

---

apiVersion: rbac.authorization.k8s.io/v1

kind: RoleBinding

metadata:

  name: cephfs-provisioner

  namespace: cephfs

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: Role

  name: cephfs-provisioner

subjects:

- kind: ServiceAccount

  name: cephfs-provisioner

---

apiVersion: v1

kind: ServiceAccount

metadata:

  name: cephfs-provisioner

  namespace: cephfs

Create the secret

apiVersion: v1

kind: Secret

metadata:

  name: ceph-admin-secret

  namespace: cephfs

#type: "kubernetes.io/rbd"

data:

  # ceph auth get-key client.admin | base64

  key: QVFCS0ZnZGV4UjFQRkJBQXhFemVGbUZsVUMvajV6aWl3cVRxZXc9PQ==
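If a host has both the Ceph admin keyring and kubectl access, the same Secret can instead be generated directly from the cluster key; a hedged alternative to the manifest above (kubectl performs the base64 encoding itself, and the cephfs namespace must already exist):

kubectl -n cephfs create secret generic ceph-admin-secret \
  --from-literal=key="$(ceph auth get-key client.admin)"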

Create the cephfs-provisioner Deployment

apiVersion: apps/v1

kind: Deployment

metadata:

  name: cephfs-provisioner

  namespace: cephfs

spec:

  replicas: 1

  selector:

    matchLabels:

      app: cephfs-provisioner

  strategy:

    type: Recreate

  template:

    metadata:

      labels:

        app: cephfs-provisioner

    spec:

      containers:

      - name: cephfs-provisioner

        image: "172.31.205.29/k8s/cephfs-provisioner:latest"

        env:

        - name: PROVISIONER_NAME

          value: ceph.com/cephfs

        - name: PROVISIONER_SECRET_NAMESPACE

          value: cephfs

        command:

        - "/usr/local/bin/cephfs-provisioner"

        args:

        - "-id=cephfs-provisioner-1"

        - "-enable-quota=true"

        - "-disable-ceph-namespace-isolation=true"

      serviceAccount: cephfs-provisioner

Create the StorageClass

kind: StorageClass

apiVersion: storage.k8s.io/v1

metadata:

  name: cephfs

  namespace: cephfs

provisioner: ceph.com/cephfs

parameters:

    monitors: ceph-mon

    adminId: admin

    adminSecretName: ceph-admin-secret

    adminSecretNamespace: cephfs

    claimRoot: /volumes/kubernetes
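Putting section 4.2 together, a typical rollout and sanity check might look like the following; the manifest file names are placeholders chosen here, not files from the original article:

kubectl create namespace cephfs

# Apply the RBAC, secret, provisioner deployment and storage class shown above
kubectl apply -f cephfs-rbac.yaml -f cephfs-secret.yaml -f cephfs-provisioner-deploy.yaml -f cephfs-storageclass.yaml

# The provisioner pod should reach Running and the storage class should be listed
kubectl -n cephfs get pods -l app=cephfs-provisioner
kubectl get storageclass cephfs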

4.3 RBD Provisioner Configuration

Create the RBAC resources

kind: ClusterRole

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: rbd-provisioner

rules:

  - apiGroups: [""]

    resources: ["persistentvolumes"]

    verbs: ["get", "list", "watch", "create", "delete"]

  - apiGroups: [""]

    resources: ["persistentvolumeclaims"]

    verbs: ["get", "list", "watch", "update"]

  - apiGroups: ["storage.k8s.io"]

    resources: ["storageclasses"]

    verbs: ["get", "list", "watch"]

  - apiGroups: [""]

    resources: ["events"]

    verbs: ["create", "update", "patch"]

  - apiGroups: [""]

    resources: ["services"]

    resourceNames: ["kube-dns","coredns"]

    verbs: ["list", "get"]

  - apiGroups: [""]

    resources: ["endpoints"]

    verbs: ["get", "list", "watch", "create", "update", "patch"]

---

kind: ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: rbd-provisioner

subjects:

  - kind: ServiceAccount

    name: rbd-provisioner

    namespace: default

roleRef:

  kind: ClusterRole

  name: rbd-provisioner

  apiGroup: rbac.authorization.k8s.io

---

apiVersion: rbac.authorization.k8s.io/v1

kind: Role

metadata:

  name: rbd-provisioner

rules:

- apiGroups: [""]

  resources: ["secrets"]

  verbs: ["get"]

- apiGroups: [""]

  resources: ["endpoints"]

  verbs: ["get", "list", "watch", "create", "update", "patch"]

---

apiVersion: rbac.authorization.k8s.io/v1

kind: RoleBinding

metadata:

  name: rbd-provisioner

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: Role

  name: rbd-provisioner

subjects:

- kind: ServiceAccount

  name: rbd-provisioner

  namespace: default

---

apiVersion: v1

kind: ServiceAccount

metadata:

  name: rbd-provisioner

Create the secret

Same as for cephfs, except that the ceph-admin-secret Secret must be created in the default namespace to match adminSecretNamespace and userSecretNamespace in the StorageClass below.

Create the rbd-provisioner Deployment

apiVersion: apps/v1

kind: Deployment

metadata:

  name: rbd-provisioner

spec:

  replicas: 1

  selector:

    matchLabels:

      app: rbd-provisioner

  strategy:

    type: Recreate

  template:

    metadata:

      labels:

        app: rbd-provisioner

    spec:

      containers:

      - name: rbd-provisioner

        image: "172.31.205.29/k8s/rbd-provisioner:latest"


        env:

        - name: PROVISIONER_NAME

          value: ceph.com/rbd

      serviceAccount: rbd-provisioner

Create the StorageClass

kind: StorageClass

apiVersion: storage.k8s.io/v1

metadata:

  name: rbd

provisioner: ceph.com/rbd

parameters:

  monitors: ceph-mon

  pool: rbd_data  # pool created in section 4.1

  adminId: admin

  adminSecretNamespace: default

  adminSecretName: ceph-admin-secret

  #userId: kube

  userSecretNamespace: default

  userSecretName: ceph-admin-secret

  imageFormat: "2"

  imageFeatures: layering
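The rollout mirrors the cephfs case, except that everything here lives in the default namespace. Again, the file names are placeholders:

kubectl apply -f rbd-rbac.yaml -f rbd-secret.yaml -f rbd-provisioner-deploy.yaml -f rbd-storageclass.yaml

kubectl get pods -l app=rbd-provisioner
kubectl get storageclass rbd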

4.4 Testing Ceph Persistent Volume Quotas

4.4.1 Testing the CephFS Persistent Volume Quota

Create the cephfs PVC

kind: PersistentVolumeClaim

apiVersion: v1

metadata:

  name: cephfs-claim

  namespace: cephfs

spec:

  storageClassName: cephfs

  accessModes:

    - ReadWriteMany

  resources:

    requests:

      storage: 3Mi

Create a test pod that uses the cephfs PVC

kind: Pod

apiVersion: v1

metadata:

  name: test-cephfs

  namespace: cephfs

spec:

  containers:

  - name: test-cephfs

    image: 172.31.205.29/k8s/busybox:1.27

    command:

    - "/bin/sh"

    args:

    - "-c"

    - "sleep 100h"

    #- "touch /mnt/SUCCESS && exit 0 || exit 1"

    volumeMounts:

    - name: pvc

      mountPath: "/data"

  restartPolicy: "Never"

  volumes:

  - name: pvc

    persistentVolumeClaim:

      claimName: cephfs-claim

Check the cephfs persistent volume mount inside the pod (note: the reported size may deviate slightly from the requested quota).

Running dd if=/dev/zero of=test bs=10M count=1 in the mounted directory fails once the 3Mi quota is exceeded, confirming that the cephfs quota is enforced.
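The same test can be driven from a workstation with kubectl; a small sketch, not part of the original run:

# The PVC should be Bound and the test pod Running
kubectl -n cephfs get pvc cephfs-claim
kubectl -n cephfs get pod test-cephfs

# Inspect the mounted size, then try to write past the 3Mi quota
kubectl -n cephfs exec test-cephfs -- df -h /data
kubectl -n cephfs exec test-cephfs -- sh -c "cd /data && dd if=/dev/zero of=test bs=10M count=1"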

4.4.2 Testing the RBD Persistent Volume Quota

Create the rbd PVC

kind: PersistentVolumeClaim

apiVersion: v1

metadata:

  name: rbd-claim

spec:

  accessModes:

    - ReadWriteOnce

  storageClassName: rbd

  resources:

    requests:

      storage: 10Mi

Create a test pod that uses the rbd PVC

kind: Pod

apiVersion: v1

metadata:

  name: test-rbd

spec:

  containers:

  - name: test-rbd

    image: 172.31.205.29/k8s/busybox:1.27

    command:

    - "/bin/sh"

    args:

    - "-c"

    - "sleep 100h"

    #- "touch /mnt/SUCCESS && exit 0 || exit 1"

    volumeMounts:

    - name: pvc

      mountPath: "/data"

  restartPolicy: "Never"

  volumes:

  - name: pvc

    persistentVolumeClaim:

      claimName: rbd-claim

Check the rbd persistent volume mount inside the pod (note: the reported size may deviate slightly from the requested quota).

After running dd if=/dev/zero of=test bs=10M count=1, the filesystem reports as full and no further data can be appended.
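Because the RBD quota is simply the image size, it can also be checked on the Ceph side. The jsonpath expressions are the same ones used in section 4.5; rbd_data is the pool configured in the StorageClass:

# Resolve the dynamically provisioned image name (run where kubectl is configured)
pv_name=$(kubectl get pvc rbd-claim -o jsonpath='{.spec.volumeName}')
rbd_image=$(kubectl get pv $pv_name -o jsonpath='{.spec.rbd.image}')

# Run on the Ceph cluster: the image size should match the 10Mi request
rbd info rbd_data/${rbd_image}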

4.5 Ceph Persistent Volume Clients

Files can be operated on via the cephfs remote directory, or, for rbd, via the corresponding host directory on the Kubernetes node.

a. Get the host directory where the rbd volume is mounted

# Arguments: $1 = namespace, $2 = PVC name
namespace=$1
pvc_name=$2

# Resolve the PV bound to the PVC, then the RBD image backing it
pv_name=`kubectl get pvc -n $namespace $pvc_name -o jsonpath='{.spec.volumeName}'`
rbd_image=`kubectl get pv $pv_name -o jsonpath='{.spec.rbd.image}'`

# Kubelet mount path on the node where the image is mapped
rbd_path="/var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-image-${rbd_image}"
echo $rbd_path

b. Get the cephfs remote directory

# Arguments: $1 = namespace, $2 = PVC name
namespace=$1
pvc_name=$2

# Resolve the PV bound to the PVC
pv_name=`kubectl get pvc -n $namespace $pvc_name -o jsonpath='{.spec.volumeName}'`

# pv_id corresponds to the {PV_ID} segment of the kubelet host path listed in section 2
pv_id=`kubectl get pv $pv_name -o jsonpath='{.metadata.name}'`

# Remote address: monitor list plus the path inside cephfs
ceph_mon=`kubectl get pv $pv_name -o jsonpath='{.spec.cephfs.monitors}'`
path=`kubectl get pv $pv_name -o jsonpath='{.spec.cephfs.path}'`
cephfs_path="$ceph_mon:$path"
echo $cephfs_path
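A hypothetical usage of the two snippets above, assuming they are saved as get_rbd_path.sh and get_cephfs_path.sh (file names chosen here for illustration):

bash get_rbd_path.sh default rbd-claim        # host directory on the node where the RBD image is mapped
bash get_cephfs_path.sh cephfs cephfs-claim   # monitor:path address usable by a cephfs client such as ceph-fuse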

5. Common Issues

5.1 CephFS persistent volume quota not taking effect

If ceph-fuse and attr are missing on the node, the quota set on the volume directory is not enforced and writes can exceed the requested size.

Solution: pre-install ceph-fuse and attr on every Kubernetes node (see section 4.1).

5.2 RBD persistent volumes do not support multi-node mounting

An RBD-backed volume can only be mounted read-write on a single node at a time, which is why the PVC in section 4.4.2 uses ReadWriteOnce. When a volume must be shared by pods on multiple nodes, use a cephfs persistent volume with ReadWriteMany instead.
