k8s StorageClass: dynamically provisioning PVs

1. Create the RBAC permissions: auth.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nacos-ns
    # replace with namespace where provisioner is deployed
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
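
Assuming the manifest above is saved as auth.yaml, it can be applied and checked roughly like this (the nacos-ns namespace follows the ClusterRoleBinding subject above; substitute the namespace where you actually deploy the provisioner):

```shell
# Apply the RBAC objects in the provisioner's namespace
kubectl apply -f auth.yaml -n nacos-ns

# Verify the objects were created
kubectl get serviceaccount nfs-client-provisioner -n nacos-ns
kubectl get clusterrole nfs-client-provisioner-runner
kubectl get role leader-locking-nfs-client-provisioner -n nacos-ns
```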

2. Create the provisioner: deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      tolerations:                         # tolerate the master (control-plane) taint so the pod may run there
      - key: node-role.kubernetes.io/master
        operator: Exists                   # the kubeadm master taint has no value, so Exists is the correct match
        effect: NoSchedule
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/jiayu-kubernetes/nfs-subdir-external-provisioner:v4.0.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: etcdcluster-nfs-storage-provisioner
            - name: NFS_SERVER
              value: 192.168.7.2  # NFS SERVER_IP
            - name: NFS_PATH
              value: /www/data/etcdcluster   # path on the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.7.2  # NFS SERVER_IP
            path: /www/data/etcdcluster   # path on the NFS server
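
The deployment assumes the export already exists on the NFS server (192.168.7.2 in this example). A sketch of the server-side setup; the subnet and export options are illustrative, adjust them for your environment:

```shell
# On the NFS server (192.168.7.2): create the export directory
mkdir -p /www/data/etcdcluster

# Export it read-write to the cluster subnet (illustrative subnet and options)
echo '/www/data/etcdcluster 192.168.7.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# From a Kubernetes node: confirm the export is visible
# (every node also needs the NFS client utilities installed, e.g. nfs-utils / nfs-common)
showmount -e 192.168.7.2
```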

3. The StorageClass: storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: etcdcluster-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"  # whether this is the cluster's default StorageClass
provisioner: etcdcluster-nfs-storage-provisioner # must match the PROVISIONER_NAME env value in the deployment above
allowVolumeExpansion: true
parameters:
  archiveOnDelete: "true" # "false": data is removed when the PVC is deleted; "true": data is archived (kept)
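
For context on what these parameters do on disk: nfs-subdir-external-provisioner creates one subdirectory per PV under the NFS export, named from the namespace, PVC name, and PV name, and with archiveOnDelete: "true" the directory is renamed with an "archived-" prefix instead of being deleted. A minimal sketch of that naming scheme (the concrete PV name below is illustrative):

```python
# Sketch of the directory-naming scheme used by nfs-subdir-external-provisioner.

def provisioned_dir(namespace: str, pvc_name: str, pv_name: str) -> str:
    # One subdirectory per PV under the NFS export root: <namespace>-<pvcName>-<pvName>
    return f"{namespace}-{pvc_name}-{pv_name}"

def archived_dir(namespace: str, pvc_name: str, pv_name: str) -> str:
    # With archiveOnDelete: "true" the directory is renamed, not removed, on PVC deletion.
    return "archived-" + provisioned_dir(namespace, pvc_name, pv_name)

# Example: the first replica's PVC from the StatefulSet below (PV name is made up)
d = provisioned_dir("default", "etcd-persistent-storage-etcd-0", "pvc-1234")
print(d)                # default-etcd-persistent-storage-etcd-0-pvc-1234
print(archived_dir("default", "etcd-persistent-storage-etcd-0", "pvc-1234"))
```

This is why "archived" data lingers on the NFS server after PVC deletion: the directories are only renamed and must be cleaned up manually if no longer wanted.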

4. The etcd cluster: etcd.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: etcd-config-map
data:
  ALLOW_NONE_AUTHENTICATION: "yes"                       # allow clients to connect without authentication
  ETCD_LISTEN_PEER_URLS: "http://0.0.0.0:2380"           # URLs to listen on for peer traffic
  ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2379"         # URLs to listen on for client traffic
  ETCD_INITIAL_CLUSTER_TOKEN: "etcd-cluster"             # token identifying this cluster during bootstrap
  ETCD_INITIAL_CLUSTER_STATE: "new"                      # initial cluster state
  ETCD_INITIAL_CLUSTER: "etcd-0=http://etcd-0.etcd-hs:2380,etcd-1=http://etcd-1.etcd-hs:2380,etcd-2=http://etcd-2.etcd-hs:2380"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd
spec:
  selector:
    matchLabels:
      app: etcd
  serviceName: "etcd-hs"
  replicas: 3
  template:
    metadata:
      labels:
        app: etcd
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: etcd
          image: bitnami/etcd:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 2379
              name: client
            - containerPort: 2380
              name: peer
          envFrom:
          - configMapRef:
             name: etcd-config-map
          env:
            - name: ETCD_NAME
              valueFrom:
               fieldRef:
                fieldPath: metadata.name
            - name: ETCD_INITIAL_ADVERTISE_PEER_URLS
              value: "http://$(ETCD_NAME).etcd-hs:2380" 
            - name: ETCD_ADVERTISE_CLIENT_URLS
              value: "http://$(ETCD_NAME).etcd-hs:2379"   
          volumeMounts:
            - name: etcd-persistent-storage
              mountPath: /bitnami/etcd
  volumeClaimTemplates:
    - metadata:
        name: etcd-persistent-storage
      spec:
        storageClassName: etcdcluster-nfs-storage  # bind to the StorageClass created above (the old volume.beta.kubernetes.io/storage-class annotation is deprecated)
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
---
# headless Service (gives each StatefulSet pod a stable DNS name)
apiVersion: v1
kind: Service
metadata:
  name: etcd-hs
spec:
  ports:
    - port: 2380
      name: peer
      targetPort: 2380
    - port: 2379
      name: client
      targetPort: 2379
  clusterIP: None
  selector:
    app: etcd
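
Once everything is applied, the cluster can be checked roughly like this (assuming the bitnami/etcd image, which ships etcdctl):

```shell
# The pods and the dynamically provisioned claims/volumes
kubectl get pods -l app=etcd
kubectl get pvc,pv

# Cluster membership and health, queried from inside one member
kubectl exec etcd-0 -- etcdctl member list
kubectl exec etcd-0 -- etcdctl endpoint health
```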
 

Summary

This post used an etcd cluster as an example of dynamically provisioning PVs through a StorageClass. One pitfall: the example runs three etcd pods, and when I deleted one of them, Kubernetes automatically recreated it, but the recreated pod's PV then seemed to stop working and could no longer be read or written. At the time I had to delete the StatefulSet along with its PVCs and PVs and recreate the StatefulSet before it worked again. I don't yet know the root cause; my understanding of PVs and PVCs is still incomplete, and it will take more experimentation to figure out.
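
When a recreated pod's volume misbehaves like that, these commands may help narrow it down (a diagnostic sketch, not a fix):

```shell
# Is the PVC still Bound, and does its PV still point at a valid path on the NFS server?
kubectl get pvc -l app=etcd
kubectl describe pv <pv-name>   # check the NFS Server/Path in the Source section

# From inside the pod: can the mounted volume actually be written?
kubectl exec etcd-0 -- sh -c 'touch /bitnami/etcd/.rwtest && rm /bitnami/etcd/.rwtest'
```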
