NFS PV Dynamic Provisioning

1 Basic NFS Usage

1.1 Server side

  • Install nfs-utils and rpcbind. NFS must be started after rpcbind, because NFS registers its service information with rpcbind.
  • After changing the configuration, restart NFS or reload its configuration file; the configuration file is /etc/exports by default.
### In an offline environment, the nfs-utils and rpcbind RPM packages can be downloaded to a local directory with yum install --downloadonly --downloaddir=<path>
### and then installed with rpm -ivh --force --nodeps *.rpm
# yum install nfs-utils
# yum install rpcbind
# systemctl start rpcbind && systemctl enable rpcbind
# systemctl start nfs && systemctl enable nfs
  • Create the shared directory and add it to the NFS exports file.
  • Export options: rw (read-write), ro (read-only), sync (the server commits writes synchronously, keeping client and server data strongly consistent), no_root_squash (the client's root user is not squashed, so files created as root are owned by root:root on the server).
# mkdir /k8s
# vi /etc/exports
/k8s 192.168.217.0/24(rw,sync,no_root_squash)
# systemctl reload nfs
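To confirm the directory is actually being exported, the active export list can be printed as a quick check:
# exportfs -v
# showmount -e localhost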

At this point, the server-side NFS installation and configuration are complete; next, a client will use the shared directory (/k8s).

1.2 Client side

  • On the client, only nfs-utils and rpcbind need to be installed; the services do not need to be started.
# yum install -y nfs-utils rpcbind
  • Mount the server's /k8s directory locally (to test that NFS is working):
# mkdir /mnt/k8s
# mount -t nfs 192.168.217.200:/k8s /mnt/k8s
# echo "hello nfs" > /mnt/k8s/hello.txt
  • Check on the server whether hello.txt is there.
# ll /k8s/  # the root root ownership comes from the no_root_squash option
total 4
-rw-r--r--. 1 root root 2 May 10 12:47 hello.txt
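If the client should mount the share automatically after a reboot, an /etc/fstab entry can be used instead of a manual mount (illustrative; adjust the options as needed):
# vi /etc/fstab
192.168.217.200:/k8s  /mnt/k8s  nfs  defaults,_netdev  0 0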

With that, the NFS server is working correctly.

2 NFS Dynamic Provisioning

2.1 What is PV dynamic provisioning?

When a cluster needs tens of thousands of PVs, creating each one by hand becomes unmanageable, so Kubernetes supports creating PVs on demand: dynamic PV provisioning. For NFS, dynamic provisioning needs an nfs-client-provisioner Deployment bound to a shared NFS directory; a StorageClass then allocates NFS-backed storage automatically whenever a claim asks for it.
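With dynamic provisioning, a user only creates a PVC that names the StorageClass; the provisioner watches for such claims and creates the matching PV (and an NFS subdirectory) automatically. A minimal sketch of such a claim, assuming the grafana-nfs StorageClass defined later in this article (the claim name test-claim is hypothetical):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: grafana-nfs
  resources:
    requests:
      storage: 1Gi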

2.2 Installing NFS dynamic provisioning with Helm

  • Directory layout of the chart (a sketch of Chart.yaml and values.yaml follows the tree):
# tree .
.
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── rbac.yaml
│   └── storage.yaml
└── values.yaml
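Chart.yaml and values.yaml are not shown in the article; a minimal sketch that keeps the chart installable could look like the following (the chart name, version, and the nfs.* keys are assumptions, not taken from the original files):
# Chart.yaml
apiVersion: v2
name: nfs-provisioner
version: 0.1.0
description: nfs-client-provisioner for dynamic PV provisioning

# values.yaml
nfs:
  server: 192.168.217.200
  path: /k8s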
  • rbac.yaml (when the release is not installed in the default namespace, these permissions must be granted explicitly):
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: {{ .Release.Namespace }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get","create","list", "watch","update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
  • deployment.yaml (uses vbouchaud/nfs-client-provisioner:latest instead of the image from the Google registry; a templated variant of the NFS settings is sketched after the manifest):
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: {{ .Release.Namespace }}
  labels:
    name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: nfs-client-provisioner
  template:
    metadata:
      labels:
        name: nfs-client-provisioner
    spec:
      nodeSelector:
        kubernetes.io/hostname: k8s-master
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: vbouchaud/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.217.200
            - name: NFS_PATH
              value: /k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.217.200
            path: /k8s
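Since the chart has a values.yaml, the NFS address and path do not need to be hard-coded in the Deployment; the env and volume entries could reference values instead (a sketch, assuming the nfs.server and nfs.path keys suggested above):
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: {{ .Values.nfs.server | quote }}
            - name: NFS_PATH
              value: {{ .Values.nfs.path | quote }}
      volumes:
        - name: nfs-client-root
          nfs:
            server: {{ .Values.nfs.server }}
            path: {{ .Values.nfs.path }}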
  • storage.yaml defines the StorageClass; the provisioner field must match the PROVISIONER_NAME set in the Deployment above, which is what binds the class to the provisioner:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: grafana-nfs
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain

provisioner: specifies which provisioner serves this class; for other storage backends this value must be changed accordingly (here it must equal PROVISIONER_NAME from the Deployment).
reclaimPolicy: for a StorageClass the allowed values are Delete and Retain; the default is Delete.
When a user deletes a PVC and the PV is released, the PV's reclaim policy decides how it is handled. PV reclaim policies are Retain, Recycle (deprecated), and Delete. Retain is recommended here: the PV switches to the Released state and the data on the volume is preserved.

  • Install the chart:
# kubectl create ns nfs
# helm install nfs . -n nfs
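After installation, verify that the provisioner Pod is running and the StorageClass exists before moving on to the test:
# kubectl get pods -n nfs
# kubectl get storageclass grafana-nfs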

2.3 Testing dynamic PVs

Use an nginx StatefulSet to exercise dynamic PV provisioning.

  • kubectl apply -f nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "grafana-nfs"
      resources:
        requests:
          storage: 1Gi
  • Check that all three Pods are running:
# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          15m   10.244.169.144   k8s-node2    <none>           <none>
web-1   1/1     Running   0          15m   10.244.36.68     k8s-node1    <none>           <none>
web-2   1/1     Running   0          15m   10.244.235.203   k8s-master   <none>           <none>
  • Check the PVCs created through the grafana-nfs StorageClass; all of them are Bound (the matching PVs can be listed right after the claims):
# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound    pvc-e4255760-824d-437f-b6db-0417970207a0   1Gi        RWO            grafana-nfs    17m
www-web-1   Bound    pvc-8b7fa2e3-c0e6-4a3f-9516-9b28f554bbde   1Gi        RWO            grafana-nfs    17m
www-web-2   Bound    pvc-df1c4198-d955-4c3f-b101-d74c99281454   1Gi        RWO            grafana-nfs    17m
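The dynamically created PVs themselves, including the Retain reclaim policy configured in the StorageClass, can be listed as well:
# kubectl get pv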
  • Once the three Pods are running, the automatically provisioned volume directories can be seen on the NFS server:
# ls /k8s/
default-www-web-0-pvc-e4255760-824d-437f-b6db-0417970207a0  default-www-web-2-pvc-df1c4198-d955-4c3f-b101-d74c99281454
default-www-web-1-pvc-8b7fa2e3-c0e6-4a3f-9516-9b28f554bbde
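As a quick end-to-end check, data written inside a Pod should show up in the corresponding directory on the NFS server (the directory name must match one of the entries listed above):
# kubectl exec web-0 -- sh -c 'echo "hello pv" > /usr/share/nginx/html/index.html'
# cat /k8s/default-www-web-0-pvc-e4255760-824d-437f-b6db-0417970207a0/index.html   # should print "hello pv"
Because the StatefulSet reattaches the same PVC when web-0 is deleted and recreated, the file survives Pod restarts.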

This completes automatic PV provisioning backed by NFS.
