k8s data persistence with StatefulSet: creating Pods from YAML and automatically provisioning PVs and PVCs


StatefulSet

StatefulSet is designed for stateful services, whereas Deployment and ReplicaSet are designed for stateless services. Its use cases include:

  1. Stable persistent storage: after a Pod is rescheduled it can still reach the same persisted data, implemented with PVCs.
  2. Stable network identity: after a Pod is rescheduled its PodName and HostName stay the same, implemented with a Headless Service (a Service without a Cluster IP).
  3. Ordered deployment and ordered scale-up: the Pods are ordered, and deployment or scale-up proceeds in the defined order (from 0 to N-1; before the next Pod starts, all earlier Pods must be Running and Ready), implemented with init containers.
  4. Ordered scale-down and ordered deletion (from N-1 to 0).

Because a StatefulSet requires its Pod names to be ordered, no Pod can be arbitrarily replaced: even after a Pod is rebuilt, its name stays the same. In effect, the controller gives each backend Pod its own fixed name.

From the use cases above, a StatefulSet consists of the following parts: 1. a Headless Service that defines the network identity (headless-svc: a "headless" service; it has no cluster IP, so it does no load balancing); 2. volumeClaimTemplates, which create the PersistentVolumeClaims (and, through a StorageClass, the PersistentVolumes); 3. the StatefulSet itself, which defines the actual application.

StatefulSet is a Pod controller; RC, RS, Deployment and DaemonSet are the controllers used for stateless services.

template: Pods created from the template are identical in state (apart from their name, IP and domain name). Put another way, any one of those Pods can be deleted and replaced by a freshly created one.

A stateful service needs to remember information from one or more previous interactions and use it to handle the next one, for example a database such as MySQL. (Here the Pod name must not change arbitrarily, and the data-persistence directory differs per Pod: every Pod has its own dedicated persistent storage directory.)

Each Pod maps to one PVC, and each PVC maps to one PV.
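As a concrete illustration of that mapping (using the names from the walkthrough below), the PVC created for each Pod is named <volumeClaimTemplate name>-<Pod name>, so the chain Pod -> PVC -> PV can be traced with kubectl; a hypothetical check, assuming the lbs-test namespace used later:

kubectl get pvc -n lbs-test                                  # one PVC per Pod, e.g. test-statefulset-test-0
kubectl get pvc test-statefulset-test-0 -n lbs-test \
  -o jsonpath='{.spec.volumeName}'                           # prints the PV bound to that Pod's claim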

Create a namespace named after yourself; all of the resources below run in that namespace.

Exercise: use a StatefulSet to run an httpd web service with 3 Pods, where each Pod serves a different home page and has its own dedicated persistent storage. Then delete one of the Pods and check whether the newly created Pod still holds the same data as before.

1. Set up the NFS service; the dynamically provisioned volumes will be backed by this export.

[root@master ~]# yum -y install nfs-utils rpcbind
[root@master ~]# mkdir /nfsdata
[root@master ~]# vim /etc/exports
/nfsdata  *(rw,sync,no_root_squash)
[root@master ~]# systemctl start nfs-server.service
[root@master ~]# systemctl start rpcbind
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
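Before moving on, it is worth confirming that the export is reachable from the worker nodes as well; a minimal sketch, assuming the master's NFS address is 192.168.1.1 as used later in this post and that nfs-utils is installed on the node:

showmount -e 192.168.1.1                                         # should list /nfsdata
mount -t nfs 192.168.1.1:/nfsdata /mnt && touch /mnt/test && umount /mnt   # quick read/write test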

2. Create the RBAC permissions

vim rbac-rolebind.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: lbs-test
---
apiVersion: v1
kind: ServiceAccount            # the service account the provisioner runs as; the roles below define its permissions
metadata:
  name: nfs-provisioner
  namespace: lbs-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: lbs-test         # if you did not create a namespace, this must be "default", otherwise the apply fails
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Apply the YAML file:

[root@master yaml]# kubectl apply -f rbac-rolebind.yaml   
namespace/lbs-test created  
serviceaccount/nfs-provisioner created  
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created  
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
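To double-check the binding before starting the provisioner, `kubectl auth can-i` can impersonate the service account; a quick sketch using the names defined above:

kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:lbs-test:nfs-provisioner          # expect: yes
kubectl auth can-i update persistentvolumeclaims \
  --as=system:serviceaccount:lbs-test:nfs-provisioner          # expect: yes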

3. Create the Deployment resource object (the nfs-client-provisioner)

[root@master yaml]# vim nfs-deployment.yaml 
apiVersion: extensions/v1beta1         # on clusters v1.16+, use apps/v1 and add a matching spec.selector
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: lbs-test
spec:
  replicas: 1                          # a single replica is enough
  strategy:
    type: Recreate                     # stop the old Pod before starting a new one
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner  # the RBAC service account created above
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner   # the provisioner image used here
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes      # mount point inside the container
          env:
            - name: PROVISIONER_NAME             # provisioner name announced by the container
              value: lbs-test                    # must match the provisioner field of the StorageClass below
            - name: NFS_SERVER
              value: 192.168.1.1
            - name: NFS_PATH                     # the NFS export to provision volumes from
              value: /nfsdata
      volumes:                                   # the NFS server and path mounted into the container
        - name: nfs-client-root
          nfs:
            server: 192.168.1.1
            path: /nfsdata

Apply the YAML file and check the Pod:

[root@master yaml]# kubectl apply -f nfs-deployment.yaml   
deployment.extensions/nfs-client-provisioner created   
[root@master yaml]# kubectl get pod -n lbs-test   
NAME                                      READY   STATUS    RESTARTS   AGE  
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          13s
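If the Pod does not come up, or PVCs later stay Pending, the provisioner's log and its NFS mount are the first things to look at; a hedged example (the Pod name here is the one from the output above and will differ on your cluster):

kubectl logs -n lbs-test nfs-client-provisioner-5d88975f6d-wdbnc                        # watch for provisioning messages
kubectl exec -n lbs-test nfs-client-provisioner-5d88975f6d-wdbnc -- ls /persistentvolumes   # should show the NFS export contents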

4. Create the StorageClass resource object (sc):

[root@master yaml]# vim sc.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-nfs                  # StorageClass is cluster-scoped, so it takes no namespace
provisioner: lbs-test           # must equal the PROVISIONER_NAME env value in the Deployment above
reclaimPolicy: Retain           # reclaim policy for the dynamically provisioned PVs

Apply the YAML file and check the StorageClass:

[root@master yaml]# kubectl apply -f sc.yaml   
storageclass.storage.k8s.io/sc-nfs created  
[root@master yaml]# kubectl get sc -n lbs-test   
NAME     PROVISIONER   AGE  
sc-nfs   lbs-test      8s
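Optionally, this class can be marked as the cluster default so that PVCs without an explicit class also use it; a sketch using the standard default-class annotation:

kubectl patch storageclass sc-nfs \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
kubectl get sc                                   # the class is then listed as "sc-nfs (default)"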

5. Create the StatefulSet resource object; the PVCs are created automatically:

vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  namespace: lbs-test
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
  namespace: lbs-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /usr/local/apache2/htdocs   # httpd's document root, so each Pod serves its own index.html
          name: test
  volumeClaimTemplates:                          # this field makes the controller create one PVC per Pod
  - metadata:
      name: test
      annotations:                               # selects the StorageClass; the name must match the one created above
        volume.beta.kubernetes.io/storage-class: sc-nfs
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
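The beta annotation above still works on many setups, but the supported way to pick the class on current clusters is the storageClassName field of the claim template; an equivalent sketch of that fragment:

  volumeClaimTemplates:
  - metadata:
      name: test
    spec:
      storageClassName: sc-nfs      # replaces the volume.beta.kubernetes.io/storage-class annotation
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi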

Apply the YAML file and check the Pods:

[root@master yaml]# kubectl apply -f statefulset.yaml   
service/headless-svc created
statefulset.apps/statefulset-test created  
[root@master yaml]# kubectl get pod -n lbs-test   
NAME                                      READY   STATUS    RESTARTS   AGE  
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          22m  
statefulset-test-0                        1/1     Running   0          8m59s  
statefulset-test-1                        1/1     Running   0          2m30s  
statefulset-test-2                        1/1     Running   0          109s
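The headless Service also gives each Pod a stable DNS name of the form <pod>.<service>.<namespace>.svc.<cluster-domain>; a quick lookup from a throwaway Pod, assuming the default cluster domain cluster.local:

kubectl run -n lbs-test dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup statefulset-test-0.headless-svc.lbs-test.svc.cluster.local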

Check whether the PV and PVC objects were created automatically.

PV:

[root@master yaml]# kubectl get pv -n lbs-test   
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS   REASON   AGE  
pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5   100Mi      RWO            Delete           Bound    lbs-test/test-statefulset-test-2   sc-nfs                  4m23s  
pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5   100Mi      RWO            Delete           Bound    lbs-test/test-statefulset-test-0   sc-nfs                  11m  
pvc-99137753-ccd0-4524-bf40-f3576fc97eba   100Mi      RWO            Delete           Bound    lbs-test/test-statefulset-test-1   sc-nfs                  5m4s

PVC:

[root@master yaml]# kubectl get pvc -n lbs-test   
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE  
test-statefulset-test-0   Bound    pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5   100Mi      RWO            sc-nfs         13m  
test-statefulset-test-1   Bound    pvc-99137753-ccd0-4524-bf40-f3576fc97eba   100Mi      RWO            sc-nfs         6m42s  
test-statefulset-test-2   Bound    pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5   100Mi      RWO            sc-nfs         6m1s

Check that the per-Pod directories were created on the NFS export:

[root@master yaml]# ls /nfsdata/  
lbs-test-test-statefulset-test-0-pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5  
lbs-test-test-statefulset-test-1-pvc-99137753-ccd0-4524-bf40-f3576fc97eba  
lbs-test-test-statefulset-test-2-pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5

6. Create test data in each Pod's persistent directory and verify access.

[root@master yaml]# cd /nfsdata/  
[root@master nfsdata]# echo 111 > lbs-test-test-statefulset-test-0-pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5/index.html  
[root@master nfsdata]# echo 222 > lbs-test-test-statefulset-test-1-pvc-99137753-ccd0-4524-bf40-f3576fc97eba/index.html  
[root@master nfsdata]# echo 333 > lbs-test-test-statefulset-test-2-pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5/index.html  
[root@master nfsdata]# kubectl get pod -o wide -n lbs-test   
NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES  
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          30m     10.244.2.2   node02   <none>           <none>  
statefulset-test-0                        1/1     Running   0          17m     10.244.1.2   node01   <none>           <none>  
statefulset-test-1                        1/1     Running   0          10m     10.244.2.3   node02   <none>           <none>  
statefulset-test-2                        1/1     Running   0          9m57s   10.244.1.3   node01   <none>           <none>  
[root@master nfsdata]# curl 10.244.1.2  
111  
[root@master nfsdata]# curl 10.244.2.3  
222  
[root@master nfsdata]# curl 10.244.1.3  
333

7. Delete one of the Pods and check whether it is recreated and whether its data is still there.

[root@master ~]# kubectl get pod -n lbs-test   
NAME                                      READY   STATUS    RESTARTS   AGE  
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          33m  
statefulset-test-0                        1/1     Running   0          20m  
statefulset-test-1                        1/1     Running   0          13m  
statefulset-test-2                        1/1     Running   0          13m  
[root@master ~]# kubectl delete pod -n lbs-test statefulset-test-0   
pod "statefulset-test-0" deleted

After the deletion, the Pod is recreated:

[root@master ~]# kubectl get pod -n lbs-test -o wide  
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES  
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          35m   10.244.2.2   node02   <none>           <none>  
statefulset-test-0                        1/1     Running   0          51s   10.244.1.4   node01   <none>           <none>  
statefulset-test-1                        1/1     Running   0          15m   10.244.2.3   node02   <none>           <none>  
statefulset-test-2                        1/1     Running   0          14m   10.244.1.3   node01   <none>           <none>

The data is still there:

[root@master ~]# curl 10.244.1.4  
111  
[root@master ~]# cat /nfsdata/lbs-test-test-statefulset-test-0-pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5/index.html   
111

This completes the data-persistence test of the StatefulSet resource for a stateful service. The test shows that even after a Pod is deleted and rescheduled, the new Pod can still reach the previously persisted data.
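A related property worth knowing: by default, scaling the StatefulSet down does not delete the PVCs created from volumeClaimTemplates, so the data also survives a scale-down/scale-up cycle; a sketch using the names from this walkthrough:

kubectl scale statefulset statefulset-test -n lbs-test --replicas=2   # statefulset-test-2 is removed
kubectl get pvc -n lbs-test                                           # test-statefulset-test-2 is still Bound
kubectl scale statefulset statefulset-test -n lbs-test --replicas=3   # the new statefulset-test-2 reattaches the same PVC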
