k8s: Using StorageClass, NFS-backed

1. Prerequisites

An NFS server must already exist; this walkthrough uses 192.168.75.11. (The console captures in this article come from more than one lab environment, so host addresses and paths occasionally differ; substitute your own.)

Create the export directories and grant the clients read/write access in /etc/exports:

[root@nfs /etc]$vim exports
/data/nfs/rw 192.168.75.0/24(rw,sync,no_subtree_check,no_root_squash)
/data/nfs/ro 192.168.75.0/24(ro,sync,no_subtree_check,no_root_squash)
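
The export directories must exist before they can be shared. A minimal sketch, assuming the paths above (run exportfs once the NFS service below is up, to reload /etc/exports):

mkdir -p /data/nfs/rw /data/nfs/ro
exportfs -r    # re-export everything listed in /etc/exports
exportfs -v    # verify the active export list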

On the server:

Start nfs-server and enable it at boot:

systemctl start nfs-server.service
systemctl enable nfs-server.service

On the clients:

Install the nfs-utils package on every machine that will mount the share:

yum -y install nfs-utils
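
A client does not run nfs-server; it only needs the mount helpers from nfs-utils, because kubelet performs the actual NFS mounts. To sanity-check connectivity from a client, a quick test:

showmount -e 192.168.75.11                       # list the exports visible from this client
mount -t nfs 192.168.75.11:/data/nfs/rw /mnt     # temporary test mount
touch /mnt/test && rm /mnt/test                  # confirm read/write access
umount /mnt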

Kubernetes data persistence: dynamic provisioning with StorageClass (Part 2)

  1. One of the benefits of a storage class is support for dynamic PV provisioning; it can even be viewed as a template for creating PVs. Users bind to a matching PV by creating a PVC. When such requests are frequent, or when the PVs an administrator created by hand cannot satisfy every PVC, having the system dynamically create PVs tailored to each PVC's requirements brings great flexibility to storage management. Note that binding only occurs between PVCs and PVs of the same StorageClass; a PVC that specifies no StorageClass can only bind to a PV of the same (empty) class.
  2. The name of a StorageClass object is critical: it is the identifier users reference. Besides the name, three key fields must be defined when creating one: provisioner, parameters, and reclaimPolicy (a skeleton of these fields follows this list).
  3. Kubernetes therefore provides a mechanism for dynamic allocation that creates PVs automatically, built on the StorageClass API. For example, if 1 TiB on a storage node is handed to Kubernetes and a user requests a 5Gi PVC, a 5Gi PV is automatically carved out of that 1 TiB and bound to the claim.
  4. Enabling dynamic PV provisioning requires creating a storage class first. Each provisioner is created in its own way, and not all volume plugins have built-in Kubernetes support for dynamic PV provisioning.
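
As a minimal sketch of those three key fields (the names below are placeholders, not the class deployed later in this article):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc              # the identifier users reference from their PVCs
provisioner: example.com/nfs    # the plugin/provisioner that will create the PVs
parameters:                     # provisioner-specific options
  archiveOnDelete: "true"
reclaimPolicy: Delete           # what happens to a PV once its PVC is released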

2. Dynamic provisioning based on NFS

Kubernetes has no in-tree NFS provisioner, so an external one is used: nfs-subdir-external-provisioner, an automatic provisioner that backs dynamic provisioning with an NFS server.
An nfs-subdir-external-provisioner instance watches PersistentVolumeClaims that request its StorageClass and automatically creates NFS-backed PersistentVolumes for them.
GitHub: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner (dynamic sub-dir volume provisioner on a remote NFS server).

2.1 Prepare the shared directory on the NFS server

That is, decide which directory Kubernetes will use, and export it.

[root@kn-server-node02-15 ~]# ll /data/
total 0
[root@kn-server-node02-15 ~]# showmount -e 10.0.0.15
Export list for 10.0.0.15:
/data        10.0.0.0/24

2.2 Install the NFS provisioner

First, create the RBAC permissions.

[root@kn-server-master01-13 nfs-provisioner]# cat nfs-rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@kn-server-master01-13 nfs-provisioner]# kubectl create -f nfs-rbac.yaml 
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
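
As an optional sanity check that the RBAC objects exist:

kubectl get serviceaccount nfs-client-provisioner
kubectl get clusterrole nfs-client-provisioner-runner
kubectl get clusterrolebinding run-nfs-client-provisioner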

2.3 Deploy the NFS provisioner

[root@master /zpf/storageClass]$cat nfs-provisioner-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/scorpio/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs  # Provisioner name; the provisioner field in the StorageClass below must match this value
            - name: NFS_SERVER    # Address of the NFS server
              value: 192.168.75.11
            - name: NFS_PATH
              value: /data/nfs/rw
      imagePullSecrets:
        - name: aliyun-docker-images-registry
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.75.11
            path: /data/nfs/rw

### Create it
[root@kn-server-master01-13 nfs-provisioner]# kubectl create -f nfs-provisioner-deploy.yaml 
deployment.apps/nfs-client-provisioner created


The Pod is running normally.
[root@kn-server-master01-13 nfs-provisioner]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-57d6d9d5f6-dcxgq   1/1     Running   0          2m25s
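
If the Pod does not reach Running, or PVCs later hang in Pending, the provisioner's log is the first place to look; for example:

kubectl logs deploy/nfs-client-provisioner
# or by Pod name:
kubectl logs nfs-client-provisioner-57d6d9d5f6-dcxgq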

Use kubectl describe to see the Pod's details:
[root@master /zpf/storageClass]$kubectl describe po nfs-client-provisioner-686fbff764-44h55
Name:         nfs-client-provisioner-686fbff764-44h55
Namespace:    default
Priority:     0
Node:         node1/192.168.75.51
Start Time:   Thu, 11 Jan 2024 16:32:56 +0800
Labels:       app=nfs-client-provisioner
              pod-template-hash=686fbff764
Annotations:  cni.projectcalico.org/containerID: b15af839e6c4ffa513c5a2fd3ce3a6bc4ff14f010a2c90e134152fe0330c3536
              cni.projectcalico.org/podIP: 10.233.90.3/32
              cni.projectcalico.org/podIPs: 10.233.90.3/32
Status:       Running
IP:           10.233.90.3
IPs:
  IP:           10.233.90.3
Controlled By:  ReplicaSet/nfs-client-provisioner-686fbff764
Containers:
  nfs-client-provisioner:
    Container ID:   docker://e68a96840a4378a5f01f36dd6890c79b4025e264de170a7db14f538a9e341738
    Image:          registry.cn-beijing.aliyuncs.com/scorpio/nfs-subdir-external-provisioner:v4.0.0
    Image ID:       docker-pullable://registry.cn-beijing.aliyuncs.com/scorpio/nfs-subdir-external-provisioner@sha256:f93d46d5d58fb908701b09e7738f23bca806031cfe3e52b484e8a6c0147c8667
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 11 Jan 2024 16:32:57 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  k8s-sigs.io/nfs-subdir-external-provisioner
      NFS_SERVER:        192.168.75.11
      NFS_PATH:          /data/nfs/rw
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nx74d (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.75.11
    Path:      /data/nfs/rw
    ReadOnly:  false
  kube-api-access-nx74d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  74s   default-scheduler  Successfully assigned default/nfs-client-provisioner-686fbff764-44h55 to node1
  Normal  Pulled     73s   kubelet            Container image "registry.cn-beijing.aliyuncs.com/scorpio/nfs-subdir-external-provisioner:v4.0.0" already present on machine
  Normal  Created    73s   kubelet            Created container nfs-client-provisioner
  Normal  Started    73s   kubelet            Started container nfs-client-provisioner

2.4 Create the StorageClass

Create the StorageClass for the NFS dynamic provisioner.

[root@master /zpf/nfs-provisioner]$cat storageClass.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage  # StorageClass is cluster-scoped; it takes no namespace
provisioner: fuseim.pri/ifs # External provisioner name; must match PROVISIONER_NAME in the Deployment above, otherwise PVCs/PVs are never created and stay Pending
parameters:
  archiveOnDelete: "false" # Whether to archive on delete: false deletes the data under the old path; true archives it by renaming the directory
reclaimPolicy: Retain # Reclaim policy; the default Delete removes the PV together with its PVC, Retain keeps it. Choose according to your production needs
volumeBindingMode: Immediate # Immediate (the default) binds as soon as the PVC is created; WaitForFirstConsumer delays binding until a Pod consumes the PVC and is only supported by certain plugins
allowVolumeExpansion: true  # Whether PVs of this class may be expanded

# Create the StorageClass
[root@master /zpf/nfs-provisioner]$kubectl create -f storageClass.yml
storageclass.storage.k8s.io/managed-nfs-storage created
# Inspect the StorageClass
[root@master /zpf/nfs-provisioner]$kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Retain          Immediate           true                   8m39s
[root@master /zpf/nfs-provisioner]$kubectl describe sc managed-nfs-storage
Name:                  managed-nfs-storage
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           fuseim.pri/ifs
Parameters:            archiveOnDelete=false
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Retain
VolumeBindingMode:     Immediate
Events:                <none>
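
Optionally, the class can be marked as the cluster default, so PVCs that omit storageClassName are still provisioned from it. A sketch using the standard is-default-class annotation:

kubectl patch storageclass managed-nfs-storage \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'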

2.5 Create a PVC; a PV is provisioned and bound automatically

[root@master /zpf/service/prometheus]$vim pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim  # resource kind
metadata:
  name: prometheus-pvc  # PVC name
spec:
  storageClassName: "managed-nfs-storage"  # must be the name of the StorageClass created above, otherwise provisioning will fail
  accessModes:
  - ReadWriteMany   # read/write by many nodes
  resources:
    requests:
      storage: 5Gi  # requested size

# Create the PVC
[root@master /zpf/service/prometheus]$kubectl create -f pvc.yml
persistentvolumeclaim/prometheus-pvc created

The PV name is generated automatically. By default the provisioner stores each volume's data in a subdirectory named ${namespace}-${pvcName}-${pvName}; this layout can be customized with the optional pathPattern parameter.
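
As an illustration of pathPattern (an optional StorageClass parameter of nfs-subdir-external-provisioner; the template keys below follow the project README and are an assumption, not what this article deployed):

parameters:
  archiveOnDelete: "false"
  pathPattern: "${.PVC.namespace}/${.PVC.name}"  # one subdirectory per namespace/claim instead of the default naming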



[root@master /zpf/nfs-provisioner]$kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS          REASON   AGE
pvc-5665843a-f00d-458a-9faf-ae184de851fd   5Gi        RWX            Delete           Bound    default/prometheus-pvc   managed-nfs-storage      81s
[root@master /zpf/nfs-provisioner]$kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
prometheus-pvc   Bound    pvc-5665843a-f00d-458a-9faf-ae184de851fd   5Gi        RWX            managed-nfs-storage   83s



kubectl describe shows more detail:
[root@master /zpf/nfs-provisioner]$kubectl describe pv pvc-5665843a-f00d-458a-9faf-ae184de851fd
Name:            pvc-5665843a-f00d-458a-9faf-ae184de851fd
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: fuseim.pri/ifs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    managed-nfs-storage
Status:          Bound
Claim:           default/prometheus-pvc
Reclaim Policy:  Delete
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        5Gi
Node Affinity:   <none>
Message:
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.75.11
    Path:      /data/nfs/rw/default-prometheus-pvc-pvc-5665843a-f00d-458a-9faf-ae184de851fd
    ReadOnly:  false
Events:        <none>


2.6 Create a Pod and test whether the data persists

[root@master /zpf/nfs-provisioner]$vim test-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sc
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: nginx-page
      mountPath: /usr/share/nginx/html

  volumes:
  - name: nginx-page
    persistentVolumeClaim:
      claimName: prometheus-pvc



# The claimName above matches the PVC created earlier.
[root@master /zpf/nfs-provisioner]$kubectl get po
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7bb7ffcdcf-rjxmd   1/1     Running   0          11m
nginx-sc                                  1/1     Running   0          10m


# Try writing some data; the PVC name leads to the matching directory on the NFS server
[root@master /zpf/nfs-provisioner]$kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS          REASON   AGE
pvc-5665843a-f00d-458a-9faf-ae184de851fd   5Gi        RWX            Delete           Bound    default/prometheus-pvc   managed-nfs-storage      66m

# The directory suffix here is ae184de851fd
On the 75.11 machine, go under /data/nfs/rw/, find the directory whose name ends in ae184de851fd, and create a file in it:
[root@nfs /data/nfs/rw/default-prometheus-pvc-pvc-5665843a-f00d-458a-9faf-ae184de851fd]$cat index.html
Awesome!!


# Access test: the nginx Pod's IP is 10.233.90.16
[root@master /zpf/nfs-provisioner]$kubectl get po -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
nfs-client-provisioner-7bb7ffcdcf-rjxmd   1/1     Running   0          13m   10.233.90.15   node1   <none>           <none>
nginx-sc                                  1/1     Running   0          12m   10.233.90.16   node1   <none>           <none>
[root@master /zpf/nfs-provisioner]$curl 10.233.90.16
Awesome!!
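
To verify that the data really persists, delete and recreate the Pod; because the page lives on NFS, it survives the Pod's lifecycle (a quick sketch; the Pod IP may change):

kubectl delete pod nginx-sc
kubectl create -f test-pod.yml
kubectl get po nginx-sc -o wide   # note the new Pod IP
curl <new-pod-ip>                 # the same page is served again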
