Deploying NFS Dynamic Provisioning + Dynamic PVC Mounting on K8S

1. Install and configure NFS

  • Install nfs-utils on every node in the k8s cluster
yum install nfs-utils -y
  • Start and enable the service
systemctl restart nfs && systemctl enable nfs
  • Choose one machine to host the shared root directory (for convenience, the master node [172.16.16.11] is used here)
mkdir -p /data/nfs
  • Edit the export configuration

Note: * means every host on the internal network can access and mount the directory. It is better to add a whitelist here so the export is open only to the nodes in the K8S cluster.

# Open the config file
vim /etc/exports

# No client restriction (any host may mount)
/data/nfs *(rw,no_root_squash)

# Restrict mounting to the K8S cluster subnet only
/data/nfs 172.16.16.0/24(rw,async,no_root_squash)


  • Apply the export configuration and verify it
# Re-export and list the exports (no service restart needed)
exportfs -rv

# Check the exports locally
showmount -e

# Check the exports from another node
showmount -e 172.16.16.11


  • Create a subdirectory for each service that needs persistence (it must be created manually in advance)

The rest of this post demonstrates dynamic provisioning, so a subdirectory named "dynamic" is added; create other subdirectories as needed.

mkdir -p /data/nfs/dynamic
  • Mount test
# Log in to another node
mkdir test
mount -t nfs 172.16.16.11:/data/nfs test

# Check the mount
mount | grep 172.16.16.11

# Unmount the directory
umount test
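
Optionally, run a quick write check to confirm that the rw and no_root_squash options behave as expected. A minimal sketch reusing the test directory from above:

# Write through the NFS mount, then confirm the file on the server
mount -t nfs 172.16.16.11:/data/nfs test
touch test/write-check

# On 172.16.16.11, the file should appear under the export root
ls /data/nfs

# Clean up
rm test/write-check && umount test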


2. Deploy the nfs-client-provisioner plugin

2.1 Configure authorization (RBAC)

vim rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f rbac.yaml
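
Optionally confirm the RBAC objects exist (names as defined in rbac.yaml above):

# All four objects should be listed
kubectl get sa nfs-client-provisioner
kubectl get clusterrole nfs-client-provisioner-runner
kubectl get clusterrolebinding run-nfs-client-provisioner
kubectl get role leader-locking-nfs-client-provisioner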

2.2 Deployment

The provisioner must be configured with the NFS mount source 172.16.16.11:/data/nfs/dynamic

vim nfs-client-provisioner.yaml
kind: Deployment
apiVersion: apps/v1 
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs.provisioner
            - name: NFS_SERVER
              value: 172.16.16.11
            - name: NFS_PATH
              value: /data/nfs/dynamic
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.16.11
            path: /data/nfs/dynamic
kubectl apply -f nfs-client-provisioner.yaml
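
Confirm the provisioner Pod is running before moving on:

# The Pod should reach Running; the logs show the provisioner starting up
kubectl get pods -l app=nfs-client-provisioner
kubectl logs deploy/nfs-client-provisioner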

2.3 Create a StorageClass

vim storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc
provisioner: nfs.provisioner
parameters:
  archiveOnDelete: "true"
allowVolumeExpansion: true
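
Apply the manifest and confirm the StorageClass is registered:

kubectl apply -f storageclass.yaml

# PROVISIONER should show nfs.provisioner, matching PROVISIONER_NAME above
kubectl get storageclass nfs-sc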


  • archiveOnDelete: "true": when a PVC is deleted, the directory that backed it is kept on the share and marked as "archived" instead of being removed
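
On the NFS server this shows up as a rename rather than a delete. A sketch of what to expect (the exact prefix and directory naming vary by provisioner version; the name below is illustrative):

# On 172.16.16.11, after deleting a PVC provisioned by this class
ls /data/nfs/dynamic
# The backing directory remains, renamed with an "archived-" prefix, e.g.:
# archived-default-nginx-deploy-rwo-pvc-...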


3. Usage examples

NFS dynamic provisioning is used mainly with two access modes, ReadWriteOnce and ReadWriteMany, analogous to how block storage and file storage are used:

  • ReadWriteOnce (RWO): the volume can be mounted read-write by a single node
  • ReadWriteMany (RWX): the volume can be mounted read-write by many nodes simultaneously

3.1 Deployment+RWO

One-to-one mounting (one Pod, one volume)

vim 1-deployment-rwo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-deploy-rwo
spec:
  storageClassName: "nfs-sc"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy-rwo
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:stable-alpine
        name: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nginx-deploy-rwo
kubectl apply -f 1-deployment-rwo.yaml
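
After applying, the PVC should bind immediately, with a matching PV created automatically (no manual PV needed):

# STATUS should be Bound
kubectl get pvc nginx-deploy-rwo
kubectl get pv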


  • File mount test
# Exec into the Pod
kubectl exec -it nginx-deploy-rwo-6c7cf9ccdf-mkbcx -- sh

# Create a file in the persisted directory
echo "hello,1-deployment-rwo" > /usr/share/nginx/html/1-deployment-rwo.html
  • Check the file in the export directory on the NFS server, as sketched below
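
A sketch of the server-side check; the provisioned subdirectory is typically named ${namespace}-${pvcName}-${pvName}, so the exact path will differ in your environment:

# On 172.16.16.11; the subdirectory name below is illustrative
ls /data/nfs/dynamic
cat /data/nfs/dynamic/default-nginx-deploy-rwo-pvc-*/1-deployment-rwo.html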


3.2 Statefulset+RWO

For one-volume-per-replica mounting, a multi-replica StatefulSet uses volumeClaimTemplates with a storageClassName; the PVCs and PVs are created automatically.

vim 2-nginx-sfs-rwo.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-sfs-rwo
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3 
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-sc"
      resources:
        requests:
          storage: 1Gi
kubectl apply -f 2-nginx-sfs-rwo.yaml
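
Each replica gets its own PVC, named <volumeClaimTemplate>-<statefulset>-<ordinal>:

# Expect data-nginx-sfs-rwo-0, data-nginx-sfs-rwo-1 and data-nginx-sfs-rwo-2, all Bound
kubectl get pvc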


  • File mount test
# Exec into nginx-sfs-rwo-0
kubectl exec -it nginx-sfs-rwo-0 -- sh
echo "hello,this is nginx-sfs-rwo-0" > /usr/share/nginx/html/index.html

# Exec into nginx-sfs-rwo-1
kubectl exec -it nginx-sfs-rwo-1 -- sh
echo "hello,this is nginx-sfs-rwo-1" > /usr/share/nginx/html/index.html
  • Then access nginx-sfs-rwo-0 and nginx-sfs-rwo-1; each returns its own page, as sketched below
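
A minimal sketch for checking each replica from the command line (nginx:stable-alpine ships BusyBox wget):

# Each Pod serves its own index.html from its dedicated volume
kubectl exec nginx-sfs-rwo-0 -- wget -qO- http://localhost
kubectl exec nginx-sfs-rwo-1 -- wget -qO- http://localhost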


3.3 Deployment/Statefulset+RWX

Multiple Pods mount the same volume; either a Deployment or a StatefulSet works for this example.

vim 3-nginx-rwx.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-rwx
spec:
  storageClassName: "nfs-sc"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-rwx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:stable-alpine
        name: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nginx-rwx
kubectl apply -f 3-nginx-rwx.yaml
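
A single PVC is shared by all three replicas; verify it is Bound and that all Pods are running:

# One Bound PVC backs every replica of the Deployment
kubectl get pvc nginx-rwx
kubectl get pods | grep nginx-rwx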
  • File mount test
# Exec into the first Pod
kubectl exec -it nginx-rwx-64598fd68d-49hxv -- sh
echo "hello,this is nginx-rwx-0" > /usr/share/nginx/html/index.html
  • Access the second Pod; it serves the same content through the shared volume, as sketched below
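
A sketch of the check; Pod names are generated, so substitute one from kubectl get pods (the name below is a placeholder):

# Any other replica sees the same index.html through the shared RWX volume
kubectl exec <another-nginx-rwx-pod> -- cat /usr/share/nginx/html/index.html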



If this post helped you, please like, follow, and bookmark to show your support. Thanks!
