Kubernetes v1.14.0: Deploying dynamic PVs with NFS, Ceph RBD, and CephFS

1. Preparation

Note: all pods required by the dynamic PV provisioners run in the clusterstorage namespace.
Install the dependencies required for dynamic PVs on every node:
yum -y install nfs-utils ceph-common
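An optional quick check that the packages and the rbd kernel module are actually available on a node:
rpm -q nfs-utils ceph-common
modprobe rbd && lsmod | grep rbd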

2. Create the namespace

vi clusterstorage-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: clusterstorage
Apply clusterstorage-namespace.yaml:
kubectl apply -f clusterstorage-namespace.yaml
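Confirm the namespace exists:
kubectl get namespace clusterstorage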

3. Deploy the NFS dynamic provisioner

Note: for setting up the NFS server itself, refer to other documentation online.
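As a minimal sketch only (the exact setup depends on your environment), an NFS server on 192.168.2.220 exporting /apps/data, matching the values used in nfs-deployment.yaml below, could be prepared like this:
### On 192.168.2.220; a permissive export suitable for testing only
yum -y install nfs-utils
mkdir -p /apps/data
echo '/apps/data *(rw,sync,no_root_squash)' >> /etc/exports
systemctl enable nfs-server
systemctl start nfs-server
exportfs -rav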

3.1. Create the RBAC authorization

vi rbac.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: clusterstorage
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
   name: nfs-provisioner-runner
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: clusterstorage
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

3.2. Create the StorageClass

vi storageClass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage # referenced later when requesting dynamic volumes
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain   # reclaim policy
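Note: this provisioner image also understands an archiveOnDelete parameter; with a Delete reclaim policy it controls whether released directories are removed or renamed to archived-*. A variant of the class above (a sketch, not required for this walkthrough):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "true" # on PV deletion, rename the backing directory instead of removing it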

3.3. Create the nfs-deployment

vi nfs-deployment.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
   name: nfs-client-provisioner
   namespace: clusterstorage
spec:
   replicas: 1
   strategy:
     type: Recreate
   selector:
      matchLabels:
         app: nfs-client-provisioner
   template:
      metadata:
         labels:
            app: nfs-client-provisioner
      spec:
         serviceAccountName: nfs-provisioner
         containers:
            -  name: nfs-client-provisioner
               image: quay.io/external_storage/nfs-client-provisioner:latest
               volumeMounts:
                 -  name: nfs-client-root
                    mountPath:  /persistentvolumes
               env:
                 -  name: PROVISIONER_NAME
                    value: fuseim.pri/ifs
                 -  name: NFS_SERVER
                    value: 192.168.2.220 # NFS server address
                 -  name: NFS_PATH
                    value: /apps/data # exported path on the NFS server
         volumes:
           - name: nfs-client-root
             nfs:
               server: 192.168.2.220 # NFS server address
               path: /apps/data # exported path on the NFS server
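Before applying the deployment, it is worth confirming from a node that the export is reachable and writable (a quick manual test using the same values):
showmount -e 192.168.2.220
mount -t nfs 192.168.2.220:/apps/data /mnt
touch /mnt/.rw-test && rm -f /mnt/.rw-test
umount /mnt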

3.4. Create a test PVC for dynamic provisioning

vi test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 1Gi

3.5. Create test-pod

vi test-pod.yaml

kind: Pod
apiVersion: v1
metadata:
  name: test-nfs-pod
spec:
  containers:
  - name: test-nfs-pod
    image: juestnow/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

3.6. Apply the YAML files

kubectl apply -f rbac.yaml
kubectl apply -f storageClass.yaml
kubectl apply -f nfs-deployment.yaml
### Verify creation
[root@jenkins nfs]# kubectl get storageclass
NAME                  PROVISIONER       AGE
nfs-storage           fuseim.pri/ifs    48d
### Create the test PVC
kubectl apply -f test-claim.yaml
[root@jenkins nfs]# kubectl get pvc -A
NAMESPACE     NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default       test-claim   Bound    pvc-108632ac-8d78-11e9-b48a-525400fe4293   1Gi        RWX            nfs-storage    7s

[root@jenkins nfs]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-108632ac-8d78-11e9-b48a-525400fe4293   1Gi        RWX            Retain           Bound    default/test-claim   nfs-storage             82s
### Create a pod to verify the mount works and writes a test file
kubectl apply -f test-pod.yaml
[root@jenkins nfs]# kubectl get pod -o wide | grep nfs
test-nfs-pod                              0/1     Completed   0          2m33s   10.65.2.254   nginx-1   <none>           <none>
Log in to the NFS server and check whether the mount succeeded:
ssh 192.168.2.220
cd /apps/data/
[root@ceph-2-220 data]# ll
total 20
drwxrwxrwx  2 nfsnobody nfsnobody   28 Jun 13 09:14 default-test-claim-pvc-108632ac-8d78-11e9-b48a-525400fe4293
[root@ceph-2-220 data]# cd default-test-claim-pvc-108632ac-8d78-11e9-b48a-525400fe4293
[root@ceph-2-220 default-test-claim-pvc-108632ac-8d78-11e9-b48a-525400fe4293]# ls
SUCCESS
The file was created successfully.
Because the reclaim policy here is Retain, deleting the pod and PVC does not delete the provisioned directory or its files; they must be removed manually on the NFS server.
### Delete the test resources
kubectl delete -f test-pod.yaml
kubectl delete -f test-claim.yaml
rm -rf default-test-claim-pvc-108632ac-8d78-11e9-b48a-525400fe4293

4. Deploy the Ceph RBD provisioner

4.1. Create the Role

vi role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: clusterstorage
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]

4.2. Create the ClusterRole

vi clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["policy"]
    resourceNames: ["rbd-provisioner"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

4.3. Create the RoleBinding

vi rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: clusterstorage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: clusterstorage

4.4. Create the ClusterRoleBinding

vi clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: clusterstorage
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io

4.5. Create the ServiceAccount

vi serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: clusterstorage

4.6. Secrets

On the Ceph admin node, create the RBD pool and credentials.
List existing pools:
ceph osd pool ls
Create the pool:
ceph osd pool create kube 50
Create the Ceph authorization:
ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube' -o kube.keyring
Test that kube.keyring works:
[root@ceph-adm ~]# ceph -n client.kube --keyring=kube.keyring health
HEALTH_OK
[root@ceph-adm ~]# ceph -n client.kube --keyring=kube.keyring osd pool ls
rbd
kube
Get the base64-encoded keys.
client.admin:
[root@ceph-adm ~]# ceph auth get-key client.admin | base64
QVFDcCtybGFsaU9XTGhBQWoyZTI1NUd1ZU9SSnl4NXpUeHFrWVE9PQ==
client.kube:
[root@ceph-adm ~]# ceph auth get-key client.kube | base64
QVFDMTY5ZGN1a2dETWhBQVpDY2hOQ09mY1lwWGZXMU5HRU4wSGc9PQ==
Create secrets.yaml:
vi secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.admin | base64
  key: QVFDcCtybGFsaU9XTGhBQWoyZTI1NUd1ZU9SSnl4NXpUeHFrWVE9PQ==
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
  # ceph auth get-key client.kube | base64
  key: QVFDMTY5ZGN1a2dETWhBQVpDY2hOQ09mY1lwWGZXMU5HRU4wSGc9PQ==
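Equivalently, both Secrets can be created directly from a host that has both ceph and kubectl access, without hand-encoding base64 (kubectl encodes the literal for you):
kubectl create secret generic ceph-admin-secret --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth get-key client.admin)" -n kube-system
kubectl create secret generic ceph-secret --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth get-key client.kube)" -n kube-system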

4.7. Create the Deployment

vi deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: clusterstorage
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: rbd-provisioner
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
        - name: PROVISIONER_SECRET_NAMESPACE
          value: clusterstorage
      serviceAccountName: rbd-provisioner

4.8. Create the StorageClass

vi class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 10.10.10.6:6789,10.10.10.7:6789,10.10.10.5:6789 # Ceph MON addresses
  pool: kube
  adminId: admin
  adminSecretNamespace: kube-system
  adminSecretName: ceph-admin-secret
  userId: kube
  userSecretNamespace: kube-system
  userSecretName: ceph-secret
  fsType: xfs
  imageFormat: "2"
  imageFeatures: layering
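Optionally, before relying on dynamic provisioning, a manual smoke test from any host with ceph-common confirms the client.kube credentials can create and map an image in the pool (the image name here is arbitrary; assumes kube.keyring from step 4.6 is present):
rbd create kube/smoke-test --size 128 --image-feature layering --id kube --keyring=kube.keyring
rbd map kube/smoke-test --id kube --keyring=kube.keyring
rbd unmap /dev/rbd0   # adjust if the image mapped to a different device
rbd rm kube/smoke-test --id kube --keyring=kube.keyring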

4.9. Create a test claim

vi claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rbd
  resources:
    requests:
      storage: 1Gi

4.10. Create a test pod

vi test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-rbd-pod
spec:
  containers:
  - name: test-rbd-pod
    image: juestnow/busybox:1.24
    command:
    - "/bin/sh"
    args:
    - "-c"
    - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
    - name: pvc
      mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
  - name: pvc
    persistentVolumeClaim:
      claimName: rbd-claim1

4.11. Apply the YAML files

kubectl apply -f .

4.12. Verify the RBD image was created and holds the test file

[root@jenkins rbd]# kubectl get pvc -A
NAMESPACE     NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default       rbd-claim1                           Bound    pvc-ac0f5c7b-8d7d-11e9-8f3d-525400ea845f   1Gi        RWO            rbd            5m8s
[root@jenkins rbd]# kubectl get pv 
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                           STORAGECLASS          REASON   AGE
pvc-ac0f5c7b-8d7d-11e9-8f3d-525400ea845f   1Gi        RWO            Delete           Bound      default/rbd-claim1                              rbd                            3m39s

### Log in to the Ceph admin server
[root@nginx-1 ~]# rbd ls kube
kubernetes-dynamic-pvc-e658552d-8d7d-11e9-8f54-9a51e628640c
### The RBD image name differs from the PV name, so look up which image the PV mounts
[root@jenkins rbd]# kubectl get pv | grep rbd-claim1
pvc-ac0f5c7b-8d7d-11e9-8f3d-525400ea845f   1Gi        RWO            Delete           Bound    default/rbd-claim1                              rbd                     13m
[root@jenkins rbd]# kubectl describe pv pvc-ac0f5c7b-8d7d-11e9-8f3d-525400ea845f
Name:            pvc-ac0f5c7b-8d7d-11e9-8f3d-525400ea845f
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: ceph.com/rbd
                 rbdProvisionerIdentity: ceph.com/rbd
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    rbd
Status:          Bound
Claim:           default/rbd-claim1
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:         
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [10.10.10.6 10.10.10.7 10.10.10.5:6789]
    RBDImage:      kubernetes-dynamic-pvc-e658552d-8d7d-11e9-8f54-9a51e628640c # the actual image name
    FSType:        xfs
    RBDPool:       kube
    RadosUser:     kube
    Keyring:       /etc/ceph/keyring
    SecretRef:     &SecretReference{Name:ceph-secret,Namespace:kube-system,}
    ReadOnly:      false
Events:            <none>
The RBD image can be mapped on another machine to inspect its contents:
modprobe rbd
rbd map kubernetes-dynamic-pvc-e658552d-8d7d-11e9-8f54-9a51e628640c --pool kube --id kube --keyring=kube.keyring
rbd showmapped
mkdir -p /mnt/ceph-rdb
mount /dev/rbd0 /mnt/ceph-rdb
df
/dev/rbd0      1014M   33M  982M   4% /mnt/ceph-rdb
[root@nginx-1 ~]# cd /mnt/ceph-rdb
[root@nginx-1 ceph-rdb]# ls
SUCCESS
The file was created successfully.
Unmount and unmap when done:
umount /mnt/ceph-rdb
rbd unmap /dev/rbd0
Delete the test resources:
kubectl delete -f test-pod.yaml
kubectl delete -f claim.yaml

5. Deploy the CephFS provisioner

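Note: cephfs-provisioner requires an existing CephFS filesystem (and therefore a running MDS). Verify this first, or create one if missing (the pool names and PG counts below are examples):
ceph mds stat
ceph fs ls
### only if no filesystem exists yet:
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data
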
5.1. Create the Role

vi role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: clusterstorage
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

5.2. Create the ClusterRole

vi clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["policy"]
    resourceNames: ["cephfs-provisioner"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]

5.3. Create the RoleBinding

vi rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: clusterstorage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
- kind: ServiceAccount
  name: cephfs-provisioner
  namespace: clusterstorage

5.4. Create the ClusterRoleBinding

vi clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: clusterstorage
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io

5.5. Create the ServiceAccount

vi serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: clusterstorage

5.6. Create the Secret

Get the base64-encoded Ceph admin key:
[root@ceph-adm ~]# ceph auth get-key client.admin | base64
QVFDcCtybGFsaU9XTGhBQWoyZTI1NUd1ZU9SSnl4NXpUeHFrWVE9PQ==
vi secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: kube-system
data:
  key: QVFDcCtybGFsaU9XTGhBQWoyZTI1NUd1ZU9SSnl4NXpUeHFrWVE9PQ==

5.7. Create the Deployment

vi deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: clusterstorage
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: cephfs-provisioner
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        image: "quay.io/external_storage/cephfs-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        - name: PROVISIONER_SECRET_NAMESPACE
          value: clusterstorage
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
        - "-disable-ceph-namespace-isolation=true"
      serviceAccountName: cephfs-provisioner
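If a claim later hangs in Pending, the provisioner log is the first place to look:
kubectl -n clusterstorage logs deploy/cephfs-provisioner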

5.8. Create the StorageClass

vi class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
reclaimPolicy: Retain # reclaim policy
parameters:
    monitors: 10.10.10.6:6789,10.10.10.7:6789,10.10.10.5:6789 # Ceph MON cluster addresses
    adminId: admin
    adminSecretName: ceph-secret-admin
    adminSecretNamespace: "kube-system"
    claimRoot: /pvc-volumes

5.9. Create a test claim

vi claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim1
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

5.10. Create a test pod

vi test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-cephfs-pod
spec:
  containers:
  - name: test-cephfs-pod
    image: juestnow/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: cephfs-claim1

5.11. Apply the YAML files

kubectl apply -f .

5.12. Verify CephFS works

[root@jenkins cephfs]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-claim1   Bound    pvc-44c96df3-8d84-11e9-b48a-525400fe4293   1Gi        RWX            cephfs         17s
[root@jenkins cephfs]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                           STORAGECLASS   REASON   AGE
pvc-44c96df3-8d84-11e9-b48a-525400fe4293   1Gi        RWX            Retain           Bound    default/cephfs-claim1                           cephfs                  24s

[root@jenkins cephfs]#  kubectl describe  pv pvc-44c96df3-8d84-11e9-b48a-525400fe4293
Name:            pvc-44c96df3-8d84-11e9-b48a-525400fe4293
Labels:          <none>
Annotations:     cephFSProvisionerIdentity: cephfs-provisioner-1
                 cephShare: kubernetes-dynamic-pvc-44d176d5-8d84-11e9-bc99-5e49027abac9
                 pv.kubernetes.io/provisioned-by: ceph.com/cephfs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    cephfs
Status:          Bound
Claim:           default/cephfs-claim1
Reclaim Policy:  Retain
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:         
Source:
    Type:        CephFS (a CephFS mount on the host that shares a pod's lifetime)
    Monitors:    [10.10.10.6 10.10.10.7 10.10.10.5:6789]
    Path:        /pvc-volumes/kubernetes/kubernetes-dynamic-pvc-44d176d5-8d84-11e9-bc99-5e49027abac9 ## mount path
    User:        kubernetes-dynamic-user-44d17714-8d84-11e9-bc99-5e49027abac9
    SecretFile:  
    SecretRef:   &SecretReference{Name:ceph-kubernetes-dynamic-user-44d17714-8d84-11e9-bc99-5e49027abac9-secret,Namespace:clusterstorage,}
    ReadOnly:    false
Events:          <none>
Mount CephFS to inspect the contents:
mkdir -p /mnt/cephfs
sudo ceph-fuse -m 10.10.10.6:6789,10.10.10.7:6789,10.10.10.5:6789 /mnt/cephfs
cd /mnt/cephfs/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-44d176d5-8d84-11e9-bc99-5e49027abac9
[root@nginx-1 kubernetes-dynamic-pvc-44d176d5-8d84-11e9-bc99-5e49027abac9]# ls
SUCCESS
The file was created successfully.
Unmount: umount /mnt/cephfs
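As an alternative to ceph-fuse, the kernel CephFS client can mount the same tree (a sketch; the secretfile path is an assumption and the file must contain the raw client.admin key):
mount -t ceph 10.10.10.6:6789,10.10.10.7:6789,10.10.10.5:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
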
[root@jenkins cephfs]# kubectl delete -f test-pod.yaml
pod "test-cephfs-pod" deleted
[root@jenkins cephfs]# kubectl delete -f claim.yaml
persistentvolumeclaim "cephfs-claim1" deleted
With the Retain policy, the directories provisioned for the PVC must be cleaned up manually.

Next: Kubernetes production installation and deployment, heapster and influxdb on Kubernetes v1.14.0

Reposted from: https://blog.51cto.com/juestnow/2408267
