Integrating k8s with Ceph


I. Creating a Ceph file system

Install the MDS

1. On the target machine, create the MDS data directory:

mkdir -p /var/lib/ceph/mds/ceph-0

2. Generate the MDS keyring and write it to /var/lib/ceph/mds/ceph-0/keyring:

ceph auth get-or-create mds.0 mon 'allow rwx' osd 'allow *' mds 'allow' -o /var/lib/ceph/mds/ceph-0/keyring
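
If you want to double-check, the registered key and its caps can be printed with:

ceph auth get mds.0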

3. Install ceph-mds and start the MDS daemon, pointing it at the monitor:

yum install ceph-mds
ceph-mds --cluster ceph -i 0 -m 10.20.20.160:6789
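
To confirm the daemon started and joined the cluster, check the cluster status; the new MDS will appear as standby until a file system exists:

ceph -s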

Create the pools

CephFS needs two pools, one for data and one for metadata:

$ ceph osd pool create cephfs_data
$ ceph osd pool create cephfs_metadata
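
On older Ceph releases, the placement-group count may need to be given explicitly when creating a pool; the values below are only illustrative and should be sized for your cluster:

$ ceph osd pool create cephfs_data 64
$ ceph osd pool create cephfs_metadata 16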

Create the file system

ceph fs new cephfs cephfs_metadata cephfs_data    # use the pools just created

Once the file system is created, the MDS can enter the active state:

$ ceph mds stat
cephfs-1/1/1 up {0=a=up:active}

II. Integrating k8s with Ceph

YAML resources
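
All of the manifests below place their resources in a cephfs namespace, so create it before applying anything:

kubectl create namespace cephfs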

01-clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]

02-clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io

03-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        image: "quay.io/external_storage/cephfs-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        - name: PROVISIONER_SECRET_NAMESPACE
          value: cephfs
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
      serviceAccountName: cephfs-provisioner
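
Note that the numbering skips 04: the RoleBinding in 05-rolebinding.yaml below references a Role named cephfs-provisioner, which must exist for the binding to work. A 04-role.yaml along these lines fills the gap; it is a sketch modeled on the upstream external-storage RBAC manifests (the provisioner stores per-volume secrets and uses endpoints for leader election):

04-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]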

05-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
- kind: ServiceAccount
  name: cephfs-provisioner
  namespace: cephfs

06-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs

07-ceph-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 10.20.20.160:6789
  adminId: admin
  adminSecretName: ceph-secret  # must match the name of the Secret in 08-ceph-secret.yaml
  adminSecretNamespace: cephfs

08-ceph-secret.yaml

Fetch the Ceph admin key and base64-encode it:

ceph auth get-key client.admin | base64

Paste the resulting value into the key field of the YAML file:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: cephfs   # must match adminSecretNamespace in the StorageClass
data:
  key: QVFE****************NveUVnZkYzaFE9PQ==   # base64-encoded admin key
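
Assuming all eight manifests sit in a single directory (a layout assumption, not something the listing above specifies), apply them and confirm the provisioner pod comes up:

kubectl apply -f .
kubectl get pods -n cephfs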

Testing and verification

Create a PVC and check whether a PV is dynamically provisioned and bound to it.

Create a PVC with the following YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim
  namespace: cephfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: cephfs
  resources:
    requests:
      storage: 1Gi
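
Assuming the manifest is saved as claim.yaml (the file name is only illustrative):

kubectl apply -f claim.yaml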

Check the PVC status:

kubectl get pvc -n cephfs
NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim   Bound    pvc-24686041-6273-4d79-a481-2f1de2f2a83d   1Gi        RWX            cephfs         15s

Success!
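
As an optional end-to-end check, a throwaway pod can mount the claim and write to it; the pod name, image, and mount path below are illustrative, not part of the original setup:

apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test        # illustrative name
  namespace: cephfs
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data     # illustrative mount path
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: claim     # the PVC created above

Once the pod is Running, kubectl exec -n cephfs cephfs-test -- cat /data/hello should print hello.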
