Kubernetes Data Persistence with Ceph Storage: CephFS

A PersistentVolume (PV) is Kubernetes' abstraction over storage. A PV is typically network storage: it does not belong to any one Node, but it can be accessed from every Node. A PV supports three access modes (Access Mode):

  • ReadWriteOnce: the volume can be mounted read-write by a single Node
  • ReadOnlyMany: the volume can be mounted read-only by many Nodes
  • ReadWriteMany: the volume can be mounted read-write by many Nodes

We had been using Ceph RBD to back the PVs in our Kubernetes cluster: relying on Kubernetes' Dynamic Storage Provisioning feature, a StorageClass with provisioner: kubernetes.io/rbd creates PVs on demand, and the PVCs declared through volumeClaimTemplates bind to them automatically. That setup has supported our workloads very reliably. RBD volumes, however, are ReadWriteOnce only, and we recently ran into a ReadWriteMany requirement. The access modes of the various volume plugins compare as follows:

 

Volume Plugin           ReadWriteOnce   ReadOnlyMany   ReadWriteMany
AWSElasticBlockStore    ✓               -              -
AzureFile               ✓               ✓              ✓
AzureDisk               ✓               -              -
CephFS                  ✓               ✓              ✓
Cinder                  ✓               -              -
FC                      ✓               ✓              -
FlexVolume              ✓               ✓              -
Flocker                 ✓               -              -
GCEPersistentDisk       ✓               ✓              -
Glusterfs               ✓               ✓              ✓
HostPath                ✓               -              -
iSCSI                   ✓               ✓              -
PhotonPersistentDisk    ✓               -              -
Quobyte                 ✓               ✓              ✓
NFS                     ✓               ✓              ✓
RBD                     ✓               ✓              -
VsphereVolume           ✓               -              - (works when pods are collocated)
PortworxVolume          ✓               -              ✓
ScaleIO                 ✓               ✓              -
StorageOS               ✓               -              -

If ReadWriteMany is required, the comparison above makes CephFS the best choice!

CephFS supports all three Kubernetes PV access modes: ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.

Step 1: Create the CephFS pool on the Ceph side
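A minimal sketch of the Ceph-side setup: the pool names cephfs_data/cephfs_metadata and the PG count of 128 are assumptions, so adjust both for your cluster (note that ceph fs new takes the metadata pool before the data pool):

# ceph osd pool create cephfs_data 128
# ceph osd pool create cephfs_metadata 128
# ceph fs new cephfs cephfs_metadata cephfs_data
# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]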

 

Step 2: Deploy and configure the StorageClass
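Note that ceph.com/cephfs is not a built-in provisioner; it comes from the external cephfs-provisioner in the kubernetes-incubator/external-storage project, which must be running in the cluster before this StorageClass can provision anything. A rough sketch of deploying it into the default namespace (the repo's deploy layout may differ by version, so follow its README):

# git clone https://github.com/kubernetes-incubator/external-storage.git
# cd external-storage/ceph/cephfs/deploy
# NAMESPACE=default
# sed -r -i "s/namespace: [^ ]+/namespace: $NAMESPACE/g" ./rbac/*.yaml
# kubectl -n $NAMESPACE apply -f ./rbac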

Create the cluster-level Secret holding the Ceph admin key:

 

# ceph auth get-key client.admin|more
AQAYz3lekwIKKxAA+HUR4UIIK2GrljL5k7sYbg==
# kubectl create secret generic cephfs-secret --type="kubernetes.io/rbd" --from-literal=key=AQAYz3lekwIKKxAA+HUR4UIIK2GrljL5k7sYbg== --namespace=default
secret/cephfs-secret created

 

Inspect the Secret:

# kubectl get secret cephfs-secret -o yaml
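The key is stored base64-encoded; the output should look roughly like this (value elided):

apiVersion: v1
data:
  key: <base64-encoded admin key>
kind: Secret
metadata:
  name: cephfs-secret
  namespace: default
type: kubernetes.io/rbd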

 

 

Configure the StorageClass (adminSecretName must match the Secret created above):

cat >storageclass-cephfs.yaml<<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 100.100.100.100:6789,100.100.100.101:6789,100.100.100.102:6789
  adminId: admin
  adminSecretName: cephfs-secret
  adminSecretNamespace: "default"
  claimRoot: /volumes/kubernetes
EOF

 

Create it:

# kubectl apply -f storageclass-cephfs.yaml
storageclass.storage.k8s.io/dynamic-cephfs created

Verify:

# kubectl get sc|grep cephfs
dynamic-cephfs        ceph.com/cephfs   7s

 

Step 3: Test it out

Create a PVC to test:

# cat cephfs-pvc-test.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: dynamic-cephfs
  resources:
    requests:
      storage: 2Gi

# kubectl apply -f cephfs-pvc-test.yaml
persistentvolumeclaim/cephfs-claim created

 

Verify:

# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
cephfs-claim   Bound    pvc-e30e1c1f-fa5b-4bd3-8d9a-0f5f98f9fb95   2Gi        RWX            dynamic-cephfs   43s

# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS     REASON   AGE
pvc-e30e1c1f-fa5b-4bd3-8d9a-0f5f98f9fb95   2Gi        RWX            Delete           Bound    default/cephfs-claim   dynamic-cephfs            2m21s
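The provisioner carves each PV out of a subdirectory under the claimRoot set in the StorageClass (/volumes/kubernetes here). One way to verify is to mount the CephFS root with the kernel client from any host that can reach the monitors (mount point and options are illustrative):

# mount -t ceph 100.100.100.100:6789:/ /mnt -o name=admin,secret=AQAYz3lekwIKKxAA+HUR4UIIK2GrljL5k7sYbg==
# ls /mnt/volumes/kubernetes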

 

Create an nginx Deployment to test the mount:

# cat cephfs-deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: cephfs-nginx-pod
  name: cephfs-nginx-pod
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-nginx-pod
  template:
    metadata:
      labels:
        app: cephfs-nginx-pod
    spec:
      containers:
      - name: cephfs-nginx-pod
        image: nginx
        imagePullPolicy: Always
        resources:
          requests:
            memory: "1Gi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
        volumeMounts:
          - name: cephfs
            mountPath: "/usr/share/nginx/html"
        ports:
          - containerPort: 80
            protocol: TCP
      volumes:
        - name: cephfs
          persistentVolumeClaim:
            claimName: cephfs-claim

 

Create it and watch the pod come up:

# kubectl apply -f cephfs-deployment.yaml
deployment.apps/cephfs-nginx-pod created
# kubectl get deployment|grep cephfs
cephfs-nginx-pod         0/1     1            0           9s
# kubectl get pod|grep cephfs
cephfs-nginx-pod-7569d79f87-5jsc7         0/1     ContainerCreating   0          15s
# kubectl get pod|grep cephfs
cephfs-nginx-pod-7569d79f87-5jsc7         1/1     Running             0          43s
# kubectl get pod -o wide|grep cephfs
cephfs-nginx-pod-7569d79f87-5jsc7         1/1     Running             0          87s     10.244.1.48   node-17   <none>           <none>
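With the pod running, write a test page into the CephFS-backed volume and fetch it over the pod IP (pod name and IP taken from the output above; "hello cephfs" is the expected response). Because the claim is RWX, the deployment can also be scaled so several pods, even on different nodes, mount the same volume read-write; scale back down afterwards:

# kubectl exec cephfs-nginx-pod-7569d79f87-5jsc7 -- sh -c 'echo hello cephfs > /usr/share/nginx/html/index.html'
# curl 10.244.1.48
hello cephfs
# kubectl scale deployment cephfs-nginx-pod --replicas=3
deployment.apps/cephfs-nginx-pod scaled
# kubectl scale deployment cephfs-nginx-pod --replicas=1
deployment.apps/cephfs-nginx-pod scaled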


 

Create a Service:

# cat cephfs-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: cephfs-service
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: cephfs-nginx-pod

# kubectl apply -f cephfs-service.yaml
service/cephfs-service created
# kubectl get svc|grep cephfs
cephfs-service        NodePort    10.0.0.163   <none>        80:31214/TCP   24s
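Any node's IP should now serve the same test page on the NodePort (substitute a real node address):

# curl http://<NODE_IP>:31214
hello cephfs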

 

 

 

 

Create an Ingress:

# cat cephfs-ingress.yaml
apiVersion: v1
kind: List
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: cephfs-ingress
    namespace: default
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /
  spec:
    rules:
    - host: 04.test.cn
      http:
        paths:
        - path: /
          backend:
            serviceName: cephfs-service
            servicePort: 80

# kubectl apply -f cephfs-ingress.yaml
ingress.extensions/cephfs-ingress created
# kubectl get ingress
NAME             HOSTS        ADDRESS   PORTS   AGE
cephfs-ingress   04.test.cn             80      15m
# kubectl get ep|grep cephfs
cephfs-service        10.244.1.48:80                  23m

 

Access it via the domain name:

http://04.test.cn
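If 04.test.cn does not resolve yet, curl can pin the name to the ingress controller's address for a quick check (the address below is a placeholder):

# curl --resolve 04.test.cn:80:<INGRESS_NODE_IP> http://04.test.cn/
hello cephfs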

 

 

 

 

 

 

 
