一、Linux command-line shortcuts (GNU Readline)
ctrl-h: delete the previous character
ctrl-w: delete the previous word
ctrl-u: delete from the cursor to the start of the line
ctrl-k: delete from the cursor to the end of the line
alt-b: jump back one word
alt-f: jump forward one word
ctrl-a: jump to the start of the line
ctrl-e: jump to the end of the line
ctrl-r: search command history
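These shortcuts come from GNU Readline, which bash uses for line editing. They can be listed with `bind -p` or customized in `~/.inputrc`; the fragment below is a sketch that simply re-states a few of the stock defaults, for reference:

```
# ~/.inputrc — readline bindings (these are the stock defaults, shown for reference)
"\C-u": unix-line-discard   # delete to start of line
"\C-k": kill-line           # delete to end of line
"\C-w": unix-word-rubout    # delete previous word
"\eb": backward-word        # alt-b: jump back one word
"\ef": forward-word         # alt-f: jump forward one word
```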
二、Kubernetes volumes in practice
Practice 1: create a hostPath volume (a directory local to the node)
[root@k8s-master vol]# mkdir -p /k8s/vol
[root@k8s-master vol]# cd /k8s/vol
[root@k8s-master vol]# vi nginx-deploy-vol.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-vol-deploy
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx-vol-pod
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-vol-pod
    spec:
      containers:
      - name: nginx-containers
        image: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: html
        hostPath:
          path: /k8s/vol/nginx-vol
          type: DirectoryOrCreate
[root@k8s-node1 /]# mkdir -p /k8s/vol/nginx-vol && cd /k8s/vol/nginx-vol
[root@k8s-node1 /]# echo 161 > index.html
Note 1: if the mount path was not created on the node beforehand, `type: DirectoryOrCreate` makes kubelet create it;
Note 2: if no files are placed in this directory, `curl`ing the pod's URL makes nginx return 403, because the mounted volume masks the data that was in the image!
[root@k8s-node2 /]# mkdir -p /k8s/vol/nginx-vol && cd /k8s/vol/nginx-vol
[root@k8s-node2 /]# echo 171 > index.html
Now `curl`ing a pod's URL makes nginx return 161 or 171, depending on which node the pod is running on.
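The manifest above uses `type: DirectoryOrCreate`. hostPath supports several check types; a quick reference (values from the Kubernetes documentation, shown as YAML comments):

```yaml
volumes:
- name: html
  hostPath:
    path: /k8s/vol/nginx-vol
    type: DirectoryOrCreate   # create the directory (mode 0755) if it does not exist
    # other values:
    # ""          — no check at all (legacy default)
    # Directory   — the directory must already exist
    # FileOrCreate — create an empty file if absent
    # File        — the file must already exist
    # Socket / CharDevice / BlockDevice — the corresponding special file must exist
```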
Practice 2: create an nfs-type volume (backed by an NFS server). Typical use case: a file changed on the NFS server is immediately visible in every container that mounts it.
[root@store01 ~]# yum install -y nfs-utils
[root@store01 ~]# systemctl stop firewalld.service
[root@store01 ~]# systemctl disable firewalld.service
[root@store01 ~]# mkdir -p /k8s/volumes/
[root@store01 ~]# cd /k8s/volumes/
[root@store01 volumes]# vim /etc/exports
/k8s/volumes 192.168.0.0/16(rw,no_root_squash)
[root@store01 volumes]# systemctl restart nfs
Note 1: whenever the export paths change, the nfs service must be restarted (or re-exported with `exportfs -r`)
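The export line above grants read-write access to the whole 192.168.0.0/16 range; the options in parentheses come from exports(5). An annotated sketch:

```
# /etc/exports — one export per line: <path> <client-spec>(<options>)
/k8s/volumes 192.168.0.0/16(rw,no_root_squash)
# rw             — allow both reads and writes
# no_root_squash — keep client root as root (nginx in the pod writes as root)
# other common options: ro, root_squash, sync (reply after data hits disk), async
```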
[root@store01 volumes]# systemctl status nfs
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: active (exited) since 二 2019-04-23 06:04:07 CST; 8min ago
Process: 2842 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
Process: 2839 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
Process: 2838 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
Process: 2869 ExecStartPost=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl restart gssproxy ; fi (code=exited, status=0/SUCCESS)
Process: 2852 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 2851 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 2852 (code=exited, status=0/SUCCESS)
Tasks: 0
CGroup: /system.slice/nfs-server.service
4月 23 06:04:07 store01 systemd[1]: Starting NFS server and services...
4月 23 06:04:07 store01 systemd[1]: Started NFS server and services.
[root@store01 volumes]# cat /var/log/messages | grep mount
[root@k8s-node1 /]# yum install -y nfs-utils
[root@k8s-node2 /]# yum install -y nfs-utils
[root@k8s-master vol]# vi nginx-deploy-vol-nfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-vol-deploy-nfs
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx-vol-pod-nfs
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-vol-pod-nfs
    spec:
      containers:
      - name: nginx-containers-nfs
        image: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: html
        nfs:
          path: /k8s/volumes
          server: k8s-store1   # add k8s-store1 to the hosts file on every k8s node first
[root@k8s-master vol]# kubectl apply -f nginx-deploy-vol-nfs.yaml
[root@k8s-master vol]# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-vol-deploy-nfs-bf7458b65-4xsnc 1/1 Running 0 3m2s 10.244.1.35 k8s-node1
nginx-vol-deploy-nfs-bf7458b65-xjlml 1/1 Running 0 3m2s 10.244.2.29 k8s-node2
[root@k8s-master vol]# curl 10.244.1.35
<html>
<head><title>403 Forbidden</title></head> ==> the empty externally mounted directory masks the files inside the container, i.e. the image's index.html is effectively deleted
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.15.12</center>
</body>
</html>
[root@k8s-master vol]# kubectl exec -it nginx-vol-deploy-nfs-bf7458b65-4xsnc -- /bin/bash
root@nginx-vol-deploy-nfs-bf7458b65-4xsnc:/# cd /usr/share/nginx/html/
root@nginx-vol-deploy-nfs-bf7458b65-4xsnc:/usr/share/nginx/html# echo '<h1>success</h1>' > index.html
root@nginx-vol-deploy-nfs-bf7458b65-4xsnc:/usr/share/nginx/html# exit
exit
[root@k8s-master vol]# curl 10.244.2.29
<h1>success</h1>
[root@store01 volumes]# cat index.html
<h1>success</h1>
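If the containers should only consume files published on the NFS server and never modify them, the volume can be mounted read-only; a sketch of the relevant fields (both `readOnly` settings are optional):

```yaml
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
          readOnly: true    # writes from inside the container fail
      volumes:
      - name: html
        nfs:
          path: /k8s/volumes
          server: k8s-store1
          readOnly: true    # mount the export itself read-only
```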
Practice 3: create a pvc + pv + nfs volume (clean division of labor between ops and storage engineers; volumes are bound dynamically)
1. Create the nfs export directories:
[root@store01 k8s]# mkdir volume0{1,2,3}
[root@store01 k8s]# ls
volume01 volume02 volume03
[root@store01 k8s]# vi /etc/exports
/k8s/volume01 192.168.0.0/16(rw,no_root_squash)
/k8s/volume02 192.168.0.0/16(rw,no_root_squash)
/k8s/volume03 192.168.0.0/16(rw,no_root_squash)
[root@store01 k8s]# systemctl restart nfs
2. Create the PV resources on k8s
[root@k8s-master vol]# vi nginx-pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
  labels:
    name: pv01
spec:
  nfs:
    path: /k8s/volume01
    server: k8s-store1
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
  labels:
    name: pv02
spec:
  nfs:
    path: /k8s/volume02
    server: k8s-store1
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03
  labels:
    name: pv03
spec:
  nfs:
    path: /k8s/volume03
    server: k8s-store1
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
[root@k8s-master vol]# kubectl apply -f nginx-pv.yaml
[root@k8s-master vol]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv01 2Gi RWO,RWX Retain Available 9s
pv02 1Gi RWO,RWX Retain Available 9s
pv03 5Gi RWO Retain Available 7s
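All three PVs get the default `Retain` reclaim policy (visible in the RECLAIM POLICY column above). The policy can also be set explicitly on each PV; a sketch:

```yaml
spec:
  persistentVolumeReclaimPolicy: Retain   # keep the data; the PV becomes Released when its PVC is deleted
  # other values:
  # Delete  — remove the backing storage together with the PV (used by dynamic provisioners)
  # Recycle — scrub the volume with rm -rf (deprecated; NFS/hostPath only)
```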
3. Create the pod and the PVC (they must be in the same namespace)
[root@k8s-master vol]# vi nginx-deploy-pvc-vol.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-vol-deploy
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx-vol-pod
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-vol-pod
    spec:
      containers:
      - name: nginx-containers
        image: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: nginx-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi
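The PVC above does not name a specific PV; the controller binds it to any Available PV whose capacity and access modes satisfy the request (here pv01, the smallest PV offering ReadWriteMany and at least 2Gi; pv02 is too small and pv03 lacks RWX). To pin a PVC to a particular PV, a label selector can be added; a sketch using the `name: pv01` label defined in step 2:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  selector:
    matchLabels:
      name: pv01        # only PVs carrying this label are considered for binding
  resources:
    requests:
      storage: 2Gi
```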
4. Create the pods and test the pvc + pv + nfs volume
[root@k8s-master vol]# kubectl apply -f nginx-deploy-pvc-vol.yaml
deployment.apps/nginx-vol-deploy created
persistentvolumeclaim/nginx-pvc created
[root@k8s-master vol]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-pvc Bound pv01 2Gi RWO,RWX 3h8m
[root@k8s-master vol]# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-vol-deploy-5b7d779688-6vhcl 1/1 Running 0 3h7m 10.244.2.30 k8s-node2
nginx-vol-deploy-5b7d779688-jcm2d 1/1 Running 0 3h7m 10.244.1.36 k8s-node1
[root@k8s-master vol]# curl 10.244.2.30
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.15.12</center>
</body>
</html>
[root@store01 volume01]# echo '<h1>pod-pvc-pv-nfs success</h1>' > index.html
[root@k8s-master vol]# curl 10.244.2.30
<h1>pod-pvc-pv-nfs success</h1>
[root@k8s-master vol]# curl 10.244.1.36
<h1>pod-pvc-pv-nfs success</h1>
5. After a PVC is deleted once, a PV with the Retain policy stays in the Released state and cannot be reused. The fix:
[root@k8s-master vol]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv01 2Gi RWO,RWX Retain Released default/nginx-pvc 92m
pv02 1Gi RWO,RWX Retain Available 92m
pv03 5Gi RWO Retain Available 92m
[root@k8s-master vol]# kubectl edit pv pv01
Delete the whole claimRef block:
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-test
    namespace: default
    resourceVersion: "11751559"
    uid: 069c4486-d773-11e8-bd12-000c2931d938
persistentvolume/pv01 edited
[root@k8s-master vol]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv01 2Gi RWO,RWX Retain Available 93m
pv02 1Gi RWO,RWX Retain Available 93m
pv03 5Gi RWO Retain Available 93m
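Editing interactively works, but the same fix can be scripted with `kubectl patch pv pv01 --type=json -p "$(cat patch.json)"`, where patch.json (the file name is just an example) is the JSON Patch document below that removes the stale claimRef in one step:

```json
[
  { "op": "remove", "path": "/spec/claimRef" }
]
```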
Appendix 1: mounting an NFS export on a VM by hand
[root@k8s-node1 vol]# mount -t nfs k8s-store1:/k8s/volumes1 /k8s/vol
Note 1: `umount -l /k8s/vol` forces the unmount, but -l is lazy: the detach does not happen immediately, only once the directory is no longer in use. Alternatively, use `ps aux` (or `fuser -m`) to find the PIDs of the processes holding the device, `kill` them, and then umount with confidence.
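A mount made this way disappears on reboot. To make it persistent, an /etc/fstab entry can be added; a sketch reusing the server and paths from this appendix (adjust as needed):

```
# /etc/fstab — persistent NFS mount
# <device>                <mountpoint>  <fstype>  <options>         <dump> <fsck>
k8s-store1:/k8s/volumes1  /k8s/vol      nfs       defaults,_netdev  0      0
```

The `_netdev` option tells the init system to wait for the network before attempting the mount.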