Data management:
To persist data, Kubernetes volumes can be used.
A volume's lifecycle is independent of any single container:
a container in a Pod may die unexpectedly, but the volume is preserved.
Volume types:
1. emptyDir
An emptyDir volume is an empty directory on the host. From a container's point of view it is persistent, but from the Pod's it is not: when the Pod is deleted, the emptyDir volume is deleted with it.
Example:
[root@k8smaster ~]# mkdir 7-22
[root@k8smaster ~]# cd 7-22/
[root@k8smaster 7-22]# mkdir emptydir
[root@k8smaster 7-22]# cd emptydir/
[root@k8smaster emptydir]# vim empty.yml
apiVersion: v1
kind: Pod
metadata:
  name: empty-pod
spec:
  containers:
  - image: busybox
    name: test
    volumeMounts:
    - mountPath: /aaa
      name: empty
    args:
    - /bin/sh
    - -c
    - echo "hahahahhahahhaha" >/aaa/aaa; sleep 300000000000
  - image: busybox
    name: test1
    volumeMounts:
    - mountPath: /hello
      name: empty
    args:
    - /bin/sh
    - -c
    - cat /hello/aaa; sleep 3000000
  volumes:
  - name: empty
    emptyDir: {}
This Pod defines two containers, and both mount the same volume. The mount paths inside the containers differ, but they point to the same volume.
Check the Pod:
[root@k8smaster emptydir]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
empty-pod 2/2 Running 0 2m8s 10.244.1.86 k8snode2 <none> <none>
Check the container logs:
[root@k8smaster emptydir]# kubectl logs empty-pod test
[root@k8smaster emptydir]# kubectl logs empty-pod test1
hahahahhahahhaha
The first container's log is empty because it only writes to the file;
the second container's log shows the message written by the first container.
Go to the node and inspect the mounted directory:
[root@k8snode2 ~]# docker inspect 3eda
Inspect the second container:
[root@k8snode2 ~]# docker inspect bf4d2
Both containers mount the same source directory on the host.
Look at that directory:
[root@k8snode2 default-token-vmm27]# cd /var/lib/kubelet/pods/43ba2f73-184a-49a1-a0ec-6b859c9f4cd8/volumes/kubernetes.io~empty-dir/empty
[root@k8snode2 empty]# ls
aaa
[root@k8snode2 empty]# cat aaa
hahahahhahahhaha
Delete the Pod, and this directory no longer exists:
[root@k8smaster emptydir]# kubectl delete pod empty-pod
pod "empty-pod" deleted
Check on the node:
[root@k8snode2 empty]# cd ..
cd: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
The directory is gone.
The advantage of emptyDir is that it conveniently provides shared storage for the containers in a Pod; the downside is that when the Pod goes away, so does the shared storage.
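Beyond the default disk-backed form used above, emptyDir also supports a memory-backed medium and a size limit. A minimal sketch (the Pod name here is made up for illustration; `medium` and `sizeLimit` are standard emptyDir fields):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: empty-mem-pod          # hypothetical name, for illustration only
spec:
  containers:
  - image: busybox
    name: test
    args: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /cache
      name: cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory           # back the volume with tmpfs instead of node disk
      sizeLimit: 64Mi          # pod is evicted if usage exceeds this limit
```

Note that a tmpfs-backed emptyDir counts against the container's memory usage, so it should be sized with the Pod's memory limits in mind.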
2. hostPath volume:
A hostPath volume shares a directory on the host into the Pod's containers.
We can see it in use by the apiserver:
[root@k8smaster emptydir]# kubectl edit pod --namespace=kube-system kube-apiserver-k8smaster
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
With this kind of volume, the data survives Pod deletion, but if the host dies, the data is lost with it.
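A user Pod uses hostPath with the same shape as the apiserver manifest above. A hedged sketch (the Pod name and paths are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod           # hypothetical name
spec:
  containers:
  - image: busybox
    name: test
    args: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /data         # path inside the container
      name: host-dir
  volumes:
  - name: host-dir
    hostPath:
      path: /tmp/hostdata      # directory on the node
      type: DirectoryOrCreate  # create the directory if it does not exist
```

Because the data lives on whichever node runs the Pod, a Pod rescheduled to another node will see that node's directory, not the original data.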
3. External storage providers
Such as Ceph or GlusterFS. These are independent of the Kubernetes cluster: even if the cluster goes down, the data is not lost.
PV (PersistentVolume) and PVC (PersistentVolumeClaim):
A PV is a piece of storage in an external storage system. It is persistent, and its lifecycle is independent of any Pod.
A PVC is a request for a PV. When a user creates a PVC specifying the required size, access mode, and other details, Kubernetes finds a PV that satisfies the request.
Kubernetes supports many PersistentVolume types: NFS, Ceph, EBS, and so on.
Before creating a PV and PVC, set up an NFS server.
NFS server: 192.168.2.200
Install on all nodes:
yum -y install rpcbind nfs-utils
On the NFS node:
[root@localhost ~]# vim /etc/exports
/volume *(rw,sync,no_root_squash)
Create the exported directory:
[root@localhost ~]# mkdir /volume
Start the service on all nodes:
systemctl start rpcbind && systemctl enable rpcbind
On the NFS node:
[root@localhost ~]# systemctl start nfs-server.service && systemctl enable nfs-server.service
Disable the firewall and SELinux:
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl stop firewalld
On any node other than the NFS node, test that the NFS export is reachable:
[root@k8smaster emptydir]# showmount -e 192.168.2.200
Export list for 192.168.2.200:
/volume *
Create the PV:
[root@k8smaster emptydir]# cd ..
[root@k8smaster 7-22]# mkdir pv-pvs
[root@k8smaster 7-22]# cd pv-pvs/
[root@k8smaster pv-pvs]# vim mypv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /volume/mypv
    server: 192.168.2.200
kind: PersistentVolume              the resource type is PV
capacity:
  storage: 1Gi                      specifies the PV's capacity
accessModes:
- ReadWriteOnce                     access mode: read-write, mountable by a single node
  (ReadWriteMany: read-write, mountable by multiple nodes;
   ReadOnlyMany: read-only, mountable by multiple nodes)
persistentVolumeReclaimPolicy: Recycle    the PV's reclaim policy
  (Recycle: scrub the data in the PV; Retain: data must be cleaned up manually;
   Delete: delete the corresponding storage resource on the storage provider)
storageClassName: nfs               the PV's class; a PVC can request a PV of a given class
path: /volume/mypv                  must be created manually on the NFS node beforehand, or mounting will fail
Apply the file:
[root@k8smaster pv-pvs]# kubectl apply -f mypv.yml
persistentvolume/mypv created
Check:
[root@k8smaster pv-pvs]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv 1Gi RWO Recycle Available nfs 14s
Create the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
This specifies the resource type, access mode, requested capacity, and class.
Apply the file:
[root@k8smaster pv-pvs]# kubectl apply -f mypvc.yml
persistentvolumeclaim/mypvc created
Check:
[root@k8smaster pv-pvs]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc Bound mypv 1Gi RWO nfs 14s
Check the PV again:
[root@k8smaster pv-pvs]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv 1Gi RWO Recycle Bound default/mypvc nfs 4m22s
The PVC is now bound to the PV; before the PVC was created, the PV's status was Available.
Create a Pod that uses this storage:
[root@k8smaster pv-pvs]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - image: busybox
    name: test
    args:
    - /bin/sh
    - -c
    - sleep 300000000000
    volumeMounts:
    - mountPath: /aaaa
      name: myvolu
  volumes:
  - name: myvolu
    persistentVolumeClaim:
      claimName: mypvc
Apply the file:
[root@k8smaster pv-pvs]# kubectl apply -f pod.yml
pod/mypod created
Check:
[root@k8smaster pv-pvs]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mypod 1/1 Running 0 38s
Test by creating a file:
[root@k8smaster pv-pvs]# kubectl exec mypod touch /aaaa/hello
Check the directory on the NFS node:
[root@localhost ~]# cd /volume/mypv/
[root@localhost mypv]# ls
hello
The file has been saved under /volume/mypv/ on the NFS node.
Reclaiming the PV (by deleting the PVC):
First delete the Pod:
[root@k8smaster pv-pvs]# kubectl delete pod mypod
pod "mypod" deleted
Delete the PVC:
[root@k8smaster pv-pvs]# kubectl delete pvc mypvc
persistentvolumeclaim "mypvc" deleted
Check the PV's status:
[root@k8smaster pv-pvs]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv 1Gi RWO Recycle Available nfs 16m
The status is back to Available.
Check on the NFS node whether the data is still there:
[root@localhost mypv]# ls
The listing is empty: the data was scrubbed because the PV's reclaim policy is Recycle.
To keep the data, change the policy to Retain:
[root@k8smaster pv-pvs]# vim mypv.yml
Apply the file again.
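Instead of editing the YAML and re-applying, the reclaim policy can also be changed in place with kubectl patch (a sketch; `mypv` is the PV from this example):

```shell
kubectl patch pv mypv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

Either way, `kubectl get pv` should then show Retain in the RECLAIM POLICY column.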
Check the updated PV:
[root@k8smaster pv-pvs]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv 1Gi RWO Retain Available nfs 20m
Create the PVC and the Pod again:
[root@k8smaster pv-pvs]# kubectl apply -f mypvc.yml
[root@k8smaster pv-pvs]# kubectl apply -f pod.yml
Check both and make sure they are running.
Create a file:
[root@k8smaster pv-pvs]# kubectl exec mypod touch /aaaa/hello
Delete the Pod and the PVC again:
[root@k8smaster pv-pvs]# kubectl delete pod mypod
pod "mypod" deleted
[root@k8smaster pv-pvs]# kubectl delete pvc mypvc
persistentvolumeclaim "mypvc" deleted
Check the PV's status:
[root@k8smaster pv-pvs]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mypv 1Gi RWO Retain Released default/mypvc nfs 25m
The status is now Released. The PV stays in this state and cannot be claimed by another PVC. It can be deleted,
and the data in the underlying storage is not removed.
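As an alternative to deleting the PV, a Released PV can be made Available again by removing its claimRef, the reference to the old PVC that keeps it in the Released state. A sketch, assuming the data left on the volume is safe to re-expose to a new claimant:

```shell
kubectl patch pv mypv --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
```

After this, `kubectl get pv` shows the PV as Available and a new PVC can bind to it, with the old data still in place.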
Delete the PV:
[root@k8smaster pv-pvs]# kubectl delete pv mypv
persistentvolume "mypv" deleted
Check on the NFS node that the data survived:
[root@localhost mypv]# ls
hello
Dynamic provisioning of PVs:
When no existing PV satisfies a PVC, a PV can be created automatically; this is implemented with StorageClasses.
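Dynamic provisioning is driven by a StorageClass that names a provisioner. A hedged sketch (the class name and the provisioner string here are examples only; the provisioner must match a CSI driver or external provisioner actually installed in the cluster, which this lab setup does not include):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client               # hypothetical class name
provisioner: example.com/nfs     # must match an installed provisioner
reclaimPolicy: Delete            # dynamically created PVs are deleted with their PVC
volumeBindingMode: Immediate
```

A PVC that sets storageClassName: nfs-client would then get a matching PV created for it automatically instead of waiting for an administrator to create one.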
Using PV and PVC with MySQL:
Create the PV and PVC:
[root@k8smaster 7-22]# mkdir mysql
[root@k8smaster 7-22]# cd mysql/
[root@k8smaster mysql]# vim mysql-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /volume/mysql-pv
    server: 192.168.2.200
path: /volume/mysql-pv    must be created on the NFS node in advance
The PVC:
[root@k8smaster mysql]# vim mysql-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
Apply the files and check:
[root@k8smaster mysql]# kubectl apply -f mysql-pv.yml
[root@k8smaster mysql]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mysql-pv 1Gi RWO Retain Available nfs 45s
[root@k8smaster mysql]# kubectl apply -f mysql-pvc.yml
[root@k8smaster mysql]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pvc Bound mysql-pv 1Gi RWO nfs 5s
Create the MySQL deployment:
[root@k8smaster mysql]# vim mysql.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-volume
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-volume
        persistentVolumeClaim:
          claimName: mysql-pvc
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  selector:
    app: mysql
Apply the file.
Connect to MySQL:
[root@k8smaster mysql]# kubectl run -it --rm --image=mysql:5.7 --restart=Never mysql-client -- mysql -h mysql -ppassword
Press Enter at the prompt.
Inside the database, create a database and a table and insert some rows:
mysql> create database aaa;
Query OK, 1 row affected (0.00 sec)
mysql> use aaa;
Database changed
mysql> create table test(id int,name varchar(20));
Query OK, 0 rows affected (0.03 sec)
mysql> insert into test values (1,"aa");
Query OK, 1 row affected (0.03 sec)
mysql> insert into test values (2,"bb");
Query OK, 1 row affected (0.00 sec)
mysql> insert into test values (3,"cc"),(4,"dd");
mysql> select * from test;
+------+------+
| id | name |
+------+------+
| 1 | aa |
| 2 | bb |
| 3 | cc |
| 4 | dd |
+------+------+
Exit.
Check the pod:
[root@k8smaster mysql]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-7774bd7c76-5sk2l 1/1 Running 0 6m33s 10.244.1.90 k8snode2 <none> <none>
The Pod is on node2. Shut down node2 to simulate a failure.
After a while, the mysql workload is rescheduled to node1:
[root@k8smaster mysql]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-7774bd7c76-5sk2l 1/1 Terminating 0 14m 10.244.1.90 k8snode2 <none> <none>
mysql-7774bd7c76-6w49h 1/1 Running 0 73s 10.244.2.83 k8snode1 <none> <none>
Log in to MySQL again and verify the data:
[root@k8smaster mysql]# kubectl run -it --rm --image=mysql:5.7 --restart=Never mysql-client -- mysql -h mysql -ppassword
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| aaa |
| mysql |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.01 sec)
mysql> use aaa;
mysql> show tables;
+---------------+
| Tables_in_aaa |
+---------------+
| test |
+---------------+
1 row in set (0.00 sec)
mysql> select * from test;
+------+------+
| id | name |
+------+------+
| 1 | aa |
| 2 | bb |
| 3 | cc |
| 4 | dd |
+------+------+
4 rows in set (0.00 sec)
The data was not lost.