This article continues our walkthrough of Kubernetes, the ops world's power tool, and gets you up to speed in a short time: basic Pod management commands you can start firing off immediately; a hands-on image-modification exercise that puts other people's images to work for you; label management, including deleting the matching Pods on every node with a single label; Pod data persistence, so data survives whether or not containers share a Pod or a node; and configMap configuration-file mounting, which makes editing config files much more convenient!
Basic Pod Management Commands and Examples
kubectl explain — view documentation for a resource
kubectl explain po
kubectl explain po.spec
kubectl explain po.spec.containers
kubectl explain po.spec.containers.ports
kubectl explain po.spec.containers.ports.name
kubectl get — list resources
kubectl get pods
kubectl get pods -o wide
kubectl get pods -o yaml
kubectl get pods -o json
kubectl delete — delete resources
kubectl delete pod <Pod_name>
kubectl delete pods <Pod_name>
kubectl delete -f xxx.yaml
kubectl delete pods --all
kubectl apply — create or update resources
kubectl apply -f xxx.yaml
kubectl exec — run commands in a Pod's containers
kubectl exec <Pod_name> -- COMMAND/ARGS
kubectl exec -it <Pod_name> -c <container_name> -- COMMAND/ARGS
kubectl describe — show detailed information about a resource
kubectl describe pods [<Pod_name>] # omit Pod_name to describe all Pods
kubectl describe nodes [<Node_name>]
Resource manifest fields covered so far
apiVersion: v1
kind: Pod
metadata:
  name:
spec:
  nodeName:
  hostNetwork:
  restartPolicy: Always|Never|OnFailure
  containers:
  - name:
    image:
    command:
    env:
    - name:
      value:
    imagePullPolicy: Always|Never|IfNotPresent
    resources:
      requests:
        cpu:
        memory:
      limits:
        cpu:
        memory:
    ports:
    - containerPort: 9200
      hostIP: 0.0.0.0
      hostPort: 9200
      name: es-webui
      protocol: TCP|UDP|SCTP
    - containerPort: 9300
      name: es-data
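Putting the fields above together, a minimal runnable manifest might look like the sketch below. The Pod name, image, and values are placeholders for illustration, not taken from the article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                 # placeholder name
spec:
  restartPolicy: Always
  containers:
  - name: web
    image: nginx:1.25-alpine     # placeholder image
    imagePullPolicy: IfNotPresent
    env:
    - name: TZ
      value: Asia/Shanghai
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
```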
Hands-On: Modifying an Image
I. Requirements
1. Push the "koten/2023-games:v0.4" image to the "koten-games" project in the Harbor registry as "harbor.koten.com/koten-games/games:v0.4";
2. Run the image from the previous step and load the "killbird" game into it, so that Windows clients can reach the game at "game18.koten.com";
3. Commit the result as a new image, "harbor.koten.com/koten-games/games:v0.5".
II. Walkthrough
1. Tag and push the image
[root@Worker232 ~]# docker tag koten2023/koten-games:v0.4 harbor.koten.com/koten-games/games:v0.4
[root@Worker232 ~]# docker push harbor.koten.com/koten-games/games:v0.4
2. Write the resource manifest
[root@Master231 pod]# cat 15-pods-games.yaml
apiVersion: v1
kind: Pod
metadata:
  name: koten-games-001
spec:
  nodeName: worker232
  hostNetwork: true
  containers:
  - name: games
    image: harbor.koten.com/koten-games/games:v0.4
[root@Master231 pod]# kubectl apply -f 15-pods-games.yaml
pod/koten-games-001 created
[root@Master231 pod]# kubectl get pods
NAME READY STATUS RESTARTS AGE
koten-games-001 0/1 CrashLoopBackOff 4 (74s ago) 3m6s
[root@Worker232 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b7b0aa376dbc 0a7345562077 "/entrypoint.sh sh -…" 1 second ago Up Less than a second k8s_games_koten-games-001_default_a7813d97-08b3-42b0-afcf-858f51bb454f_4
There is a pitfall here: the image declares a port 22 mapping, so the Pod cannot start with the host network. Change the manifest to map port 80 instead:
[root@Master231 pod]# cat 15-pods-games.yaml
apiVersion: v1
kind: Pod
metadata:
  name: koten-games-001
spec:
  nodeName: worker232
  containers:
  - name: games
    image: harbor.koten.com/koten-games/games:v0.4
    ports:
    - containerPort: 80
      hostPort: 80
[root@Master231 pod]# kubectl apply -f 15-pods-games.yaml
pod/koten-games-001 created
[root@Master231 pod]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
koten-games-001 1/1 Running 0 78s 10.100.1.46 worker232 <none> <none>
3. Copy the game code directory
[root@Master231 pod]# kubectl cp killbird koten-games-001:/usr/local/nginx/html
4. Modify the nginx configuration
[root@Master231 pod]# kubectl exec -it koten-games-001 -- sh
/ # cat /etc/nginx/conf.d/games.conf
......
server {
    listen 0.0.0.0:80;
    root /usr/local/nginx/html/killbird/;
    server_name game18.koten.com;
}
/ # nginx -s reload
5. Access from Windows
Add a hosts entry on Windows, then browse to game18.koten.com
6. Commit the container directly as an image
You can commit the modified container straight into an image with a single command, or replay the same steps with a Dockerfile. The former is convenient, but in some cases runtime caches inside the container can bloat the image; the latter is more work. I'll demonstrate both.
[root@Worker232 ~]# docker container commit adc4b111a0ea harbor.koten.com/koten-games/games:v0.5
sha256:82651cb3f996f415bca40e11b2ca036b8670d925fb4f0efdb191553e5fe78d9c
7. Build the image with a Dockerfile
Package the game code
[root@Master231 pod]# tar zcf game-killbird.tar.gz killbird
Check the startup script and copy it out
[root@Master231 pod]# kubectl exec -it koten-games-001 -- sh
/ # cat entrypoint.sh
......
server {
    listen ${IP:-0.0.0.0}:${PORT:-80};
    root ${DATA_DIR:-/usr/local/nginx/html}/killbird/;
    server_name game18.koten.com;
}
......
[root@Master231 pod]# kubectl cp koten-games-001:/entrypoint.sh ./entrypoint.sh
Copy the files to the worker node and write the Dockerfile
[root@Master231 pod]# scp entrypoint.sh game-killbird.tar.gz 10.0.0.232:/root
[root@Worker232 dockerfile]# cp /root/entrypoint.sh .
[root@Worker232 dockerfile]# cp /root/game-killbird.tar.gz .
[root@Worker232 dockerfile]# cat dockerfile
FROM koten2023/koten-games:v0.4
COPY entrypoint.sh /entrypoint.sh
ADD game-killbird.tar.gz /usr/local/nginx/html
Build the image
[root@Worker232 dockerfile]# docker build -t harbor.koten.com/koten-games/games:v0.5 .
Sending build context to Docker daemon 911.9kB
Step 1/3 : FROM koten2023/koten-games:v0.4
---> 0a7345562077
Step 2/3 : COPY entrypoint.sh /entrypoint.sh
---> 56f841bb72de
Step 3/3 : ADD game-killbird.tar.gz /usr/local/nginx/html
---> e3ae79f2462b
Successfully built e3ae79f2462b
Successfully tagged harbor.koten.com/koten-games/games:v0.5
8. Compare the sizes from the two methods and push the smaller image to the Harbor registry
[root@Worker232 dockerfile]# docker images | grep harbor.koten.com/koten-games/games
harbor.koten.com/koten-games/games v0.6 7d5505a9a116 17 seconds ago 291MB
harbor.koten.com/koten-games/games v0.5 e3ae79f2462b 2 minutes ago 291MB
As you can see, the difference is negligible; after all, the kubectl operations we ran inside the container generated hardly any extra cache.
[root@Worker232 dockerfile]# docker push harbor.koten.com/koten-games/games:v0.5
Label Management
Labels act like groups: with a single label selector you can, for example, delete the matching Pods on every node (e.g. kubectl delete pods -l name=koten). Labels can be managed imperatively or declaratively; each approach has trade-offs.
I. Imperative management
Pros: changes take effect immediately. Cons: not persisted; you must re-apply labels by hand every time the resource is recreated.
1. View Pod labels
[root@Master231 pod]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
koten-games-001 1/1 Running 0 66m <none>
2. Label a Pod resource
[root@Master231 pod]# kubectl label -f 15-pods-games.yaml name=koten
pod/koten-games-001 labeled
[root@Master231 pod]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
koten-games-001 1/1 Running 0 66m name=koten
[root@Master231 pod]# kubectl label pod koten-games-001 hobby=linux
pod/koten-games-001 labeled
[root@Master231 pod]# kubectl label pod/koten-games-001 k8s=easy
pod/koten-games-001 labeled
[root@Master231 pod]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
koten-games-001 1/1 Running 0 68m hobby=linux,k8s=easy,name=koten
3. Modify a label
[root@Master231 pod]# kubectl label pod koten-games-001 --overwrite hobby=k8s
pod/koten-games-001 labeled
[root@Master231 pod]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
koten-games-001 1/1 Running 0 69m hobby=k8s,k8s=easy,name=koten
4. Remove a label
[root@Master231 pod]# kubectl label pod koten-games-001 hobby-
pod/koten-games-001 unlabeled
[root@Master231 pod]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
koten-games-001 1/1 Running 0 70m k8s=easy,name=koten
II. Declarative management
Pros: the configuration is persisted in the manifest. Cons: you must re-apply the manifest by hand for changes to take effect.
1. Create labels
[root@Master231 pod]# cat 15-pods-games.yaml
apiVersion: v1
kind: Pod
metadata:
  name: koten-games-001
  labels:
    name: koten
    hobby: k8s
spec:
  nodeName: worker232
  containers:
  - name: games
    image: harbor.koten.com/koten-games/games:v0.4
    ports:
    - containerPort: 80
      hostPort: 80
[root@Master231 pod]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
koten-games-001 1/1 Running 0 20s hobby=k8s,name=koten
2. Update or remove labels
[root@Master231 pod]# cat 15-pods-games.yaml
apiVersion: v1
kind: Pod
metadata:
  name: koten-games-001
  labels:
    hobby: nginx
spec:
  nodeName: worker232
  containers:
  - name: games
    image: harbor.koten.com/koten-games/games:v0.4
    ports:
    - containerPort: 80
      hostPort: 80
[root@Master231 pod]# kubectl apply -f 15-pods-games.yaml
pod/koten-games-001 configured
[root@Master231 pod]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
koten-games-001 1/1 Running 0 89s hobby=nginx
Pod Data Persistence
Reference: Volumes | Kubernetes
How do we avoid losing data when a Pod is deleted? We can mount volumes such as emptyDir, hostPath, or NFS.
I. The emptyDir volume
1. Example
Write a manifest with two containers in the same Pod
[root@Master231 pod]# cat 11-pods-volumes-emptyDir.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-emptydir-001
spec:
  # Define the volumes
  volumes:
  # Name of the volume
  - name: data
    # Declare the volume type as emptyDir
    emptyDir: {}
  containers:
  - name: web
    image: harbor.koten.com/koten-web/nginx:1.25.1-alpine
    # Mount the volume
    volumeMounts:
    # Name of the volume to mount
    - name: data
      # Mount point inside the container
      mountPath: /usr/share/nginx/html
  - name: linux
    image: harbor.koten.com/koten-linux/alpine:latest
    # Allocate stdin
    stdin: true
    volumeMounts:
    - name: data
      mountPath: /data
Apply the manifest, then create an index.html in the linux container's /data directory; because the volume is shared, this is equivalent to creating index.html in the nginx container's web root.
[root@Master231 pod]# kubectl apply -f 11-pods-volumes-emptyDir.yaml
pod/volumes-emptydir-001 created
[root@Master231 pod]# kubectl get pods
NAME READY STATUS RESTARTS AGE
volumes-emptydir-001 2/2 Running 0 99s
[root@Master231 pod]# kubectl exec -it volumes-emptydir-001 -c linux -- sh
/ # cd /data/
/data # echo "<h1>koten</h1>" > index.html
/data # cat index.html
<h1>koten</h1>
/data #
curl the Pod's IP to verify it serves the file we just wrote
[root@Master231 pod]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
volumes-emptydir-001 2/2 Running 0 3m58s 10.100.1.40 worker232 <none> <none>
[root@Master231 pod]# curl 10.100.1.40
<h1>koten</h1>
Enter the linux container again and try downloading the file from localhost. wget succeeds, which shows the two containers share not only the data directory but also the network namespace (loosely speaking, they share a NIC).
[root@Master231 pod]# kubectl exec -it volumes-emptydir-001 -c linux -- sh
/ # wget 127.0.0.1:80/index.html
Connecting to 127.0.0.1:80 (127.0.0.1:80)
saving to 'index.html'
index.html 100% |*******************| 15 0:00:00 ETA
'index.html' saved
/ # cat index.html
<h1>koten</h1>
[root@Master231 pod]#
Check the NIC in each container; the two are identical
[root@Master231 pod]# kubectl exec -it volumes-emptydir-001 -c linux -- sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
link/ether de:e6:21:f3:5b:c4 brd ff:ff:ff:ff:ff:ff
inet 10.100.1.40/24 brd 10.100.1.255 scope global eth0
valid_lft forever preferred_lft forever
/ #
[root@Master231 pod]# kubectl exec -it volumes-emptydir-001 -c web -- sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
link/ether de:e6:21:f3:5b:c4 brd ff:ff:ff:ff:ff:ff
inet 10.100.1.40/24 brd 10.100.1.255 scope global eth0
valid_lft forever preferred_lft forever
/ #
[root@Master231 pod]#
2. How emptyDir works
emptyDir is an ephemeral volume bound to the Pod's lifecycle: when the Pod is deleted, its data is deleted too.
The volume can be found on the worker node and disappears once the Pod is deleted
[root@Worker232 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e15b0d8bbd7e harbor.koten.com/koten-linux/alpine "tail -f /etc/hosts" 12 minutes ago Up 12 minutes k8s_linux_volumes-emptydir-001_default_4ac58329-bcc9-4076-876b-8233f5c60b01_0
# Under the directory named after the Pod's generated UID
[root@Worker232 ~]# ls /var/lib/kubelet/pods/4ac58329-bcc9-4076-876b-8233f5c60b01/
containers/ etc-hosts plugins/ volumes/
[root@Worker232 ~]# ls /var/lib/kubelet/pods/4ac58329-bcc9-4076-876b-8233f5c60b01/volumes/kubernetes.io~empty-dir/
data
[root@Master231 pod]# kubectl delete pods --all
pod "volumes-emptydir-001" deleted
[root@Worker232 ~]# ls /var/lib/kubelet/pods/ # the Pod UID above is gone, and so is the data
3. What emptyDir provides
1. Data persistence within the Pod's lifetime;
2. Data sharing between containers in the same Pod; different Pods cannot exchange data this way;
3. The volume lives and dies with the Pod: deleting the Pod deletes the data.
4. emptyDir use cases
1. Temporary scratch space, e.g. for a disk-based merge sort;
2. Checkpoints for long-running computations, so a task can conveniently resume from its pre-crash state;
3. Holding web access logs and error logs;
4. Running two containers in one Pod, one writing logs and one (e.g. filebeat) collecting them.
5. emptyDir pros and cons
Pros
1. Containers in the same Pod can share data;
2. If a single container in the Pod is force-deleted, the data survives, because the Pod itself still exists.
Cons
1. When the Pod is deleted, the data is deleted with it;
2. Data cannot be shared between different Pods.
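The log-collector use case above (point 4) can be sketched roughly as follows. The filebeat image tag, paths, and names are placeholders for illustration, not taken from the article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: log-sidecar-demo           # placeholder name
spec:
  volumes:
  - name: logs
    emptyDir: {}
  containers:
  - name: web
    image: nginx:1.25-alpine       # placeholder image
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx    # nginx writes its logs here
  - name: collector
    image: elastic/filebeat:8.8.0  # placeholder image/tag
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx    # filebeat reads the same directory
      readOnly: true
```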
II. The hostPath volume
Reference: Volumes | Kubernetes
hostPath mounts a file or directory from the Pod's node into the container; the data on the host survives Pod deletion.
1. Example
Write a manifest with two different Pods on the same node
[root@Master231 pod]# cat 12-podes-volumes-hostPath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-hostpath-001
spec:
  # Define the volumes
  volumes:
  # Name of the volume
  - name: data
    # Declare the volume type as hostPath
    hostPath:
      # Path on the host
      path: /data
  nodeName: worker233
  containers:
  - name: web
    image: harbor.koten.com/koten-web/nginx:1.25.1-alpine
    # Mount the volume
    volumeMounts:
    # Name of the volume to mount
    - name: data
      # Mount point inside the container
      mountPath: /usr/share/nginx/html
---
apiVersion: v1
kind: Pod
metadata:
  name: volumes-hostpath-002
spec:
  volumes:
  - name: data
    hostPath:
      path: /data
  nodeName: worker233
  containers:
  - name: linux
    image: harbor.koten.com/koten-linux/alpine:latest
    stdin: true
    volumeMounts:
    - name: data
      mountPath: /data
Apply the manifest
[root@Master231 pod]# kubectl apply -f 12-podes-volumes-hostPath.yaml
pod/volumes-hostpath-001 created
pod/volumes-hostpath-002 created
[root@Master231 pod]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
volumes-hostpath-001 1/1 Running 0 9s 10.100.2.17 worker233 <none> <none>
volumes-hostpath-002 1/1 Running 0 9s 10.100.2.18 worker233 <none> <none>
Test whether data added through volumes-hostpath-002 is visible from volumes-hostpath-001
# Add data via Pod 002
[root@Master231 pod]# kubectl exec -it volumes-hostpath-002 -- sh
/ # cd /data/
/data # echo "koten" > index.html
/data #
[root@Master231 pod]#
# Read the data from Pod 001
[root@Master231 pod]# kubectl exec -it volumes-hostpath-001 -- sh
/ # cd /usr/share/nginx/html/
/usr/share/nginx/html # cat index.html
koten
# curl Pod 001's IP from the host
[root@Master231 pod]# curl 10.100.2.17
koten
# Check the mounted directory on the worker node
[root@Worker233 ~]# ls /data/
index.html
[root@Worker233 ~]# cat /data/index.html
koten
2. Use cases
1. Containers that need to access files on the host;
2. Different Pods on the same node that need to share a directory;
3. Editing a configuration file on the host, which is equivalent to editing it inside the container.
3. hostPath pros and cons
Pros
1. Containers in the same Pod can share data;
2. Different Pods on the same node can share data.
Cons
Cannot share data between Pods on different nodes
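As a side note, hostPath also accepts an optional type field that validates or creates the host path before mounting. A brief sketch (DirectoryOrCreate and the other values listed are from the upstream API; the volume name is a placeholder):

```yaml
volumes:
- name: data
  hostPath:
    path: /data
    # Create /data on the host if it does not exist yet;
    # other values include Directory, File, FileOrCreate, Socket
    type: DirectoryOrCreate
```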
III. The NFS volume
Reference: Volumes | Kubernetes
The manifest supports NFS mounts natively and automatically mounts the NFS export into the Pod, keeping data in sync across nodes.
https://kubernetes.io/docs/concepts/storage/volumes/#nfs
NFS (Network File System) is a presentation-layer protocol developed by Sun Microsystems that lets users access files elsewhere on the network as if they were on their own machine.
NFS is a mainstream file-sharing service, but it is a single point of failure, so back up the data; use a distributed file system if necessary.
1. Deploy NFS on every node
Here the master node acts as the NFS server and the other two nodes as clients
1. Install the NFS packages on all nodes
[root@Master231 pod]# yum -y install nfs-utils
2. Configure the shared directory on the master231 node
[root@Master231 pod]# mkdir -pv /koten/data/kubernetes
[root@Master231 pod]# cat > /etc/exports <<'EOF'
/koten/data/kubernetes *(rw,no_root_squash)
EOF
3. Enable the NFS service at boot
[root@Master231 pod]# systemctl enable --now nfs
4. Check the export list on the server
[root@Master231 pod]# exportfs
/koten/data/kubernetes
<world>
5. Mount manually on a client to test
[root@Worker232 ~]# mount -t nfs master231:/koten/data/kubernetes /mnt/
[root@Worker232 ~]# cp /etc/os-release /mnt/
[root@Worker232 ~]# ll /mnt/
total 4
-rw-r--r-- 1 root root 393 Jun 16 17:01 os-release
[root@Worker232 ~]# umount /mnt
[root@Worker232 ~]# ll /mnt/
total 0
[root@Master231 pod]# ll /koten/data/kubernetes
total 4
-rw-r--r-- 1 root root 393 Jun 16 17:01 os-release
2. Example
1. Write a manifest: one Pod runs on 232 and one on 233, with 231 serving NFS
[root@Master231 pod]# cat 13-podes-volumes-nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-nfs-001
spec:
  # Define the volumes
  volumes:
  # Name of the volume
  - name: data
    # Declare the volume type as nfs
    nfs:
      # Specify the NFS server and exported path
      server: master231
      path: /koten/data/kubernetes
  nodeName: worker232
  containers:
  - name: web
    image: harbor.koten.com/koten-web/nginx:1.25.1-alpine
    # Mount the volume
    volumeMounts:
    # Name of the volume to mount
    - name: data
      # Mount point inside the container
      mountPath: /usr/share/nginx/html
---
apiVersion: v1
kind: Pod
metadata:
  name: volumes-nfs-002
spec:
  volumes:
  - name: data
    nfs:
      # Specify the NFS server and exported path
      server: master231
      path: /koten/data/kubernetes
  nodeName: worker233
  containers:
  - name: linux
    image: harbor.koten.com/koten-linux/alpine:latest
    stdin: true
    volumeMounts:
    - name: data
      mountPath: /data
2. Apply the manifest, write some data, and observe the mounts
[root@Master231 pod]# kubectl apply -f 13-podes-volumes-nfs.yaml
pod/volumes-nfs-001 created
pod/volumes-nfs-002 created
[root@Master231 pod]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
volumes-nfs-001 1/1 Running 0 102s 10.100.1.42 worker232 <none> <none>
volumes-nfs-002 1/1 Running 0 102s 10.100.2.20 worker233 <none> <none>
# Write data in Pod 002; it is readable from Pod 001
[root@Master231 pod]# kubectl exec -it volumes-nfs-002 -- sh
/ # cd /data/
/data # echo koten > 1.txt
/data #
[root@Master231 pod]# kubectl exec -it volumes-nfs-001 -- sh
/ # cd /usr/share/nginx/html/
/usr/share/nginx/html # ls
1.txt
/usr/share/nginx/html # cat 1.txt
koten
/usr/share/nginx/html #
[root@Master231 pod]#
# Check the data on the NFS server
[root@Master231 pod]# ls /koten/data/kubernetes/
1.txt
[root@Master231 pod]# cat /koten/data/kubernetes/1.txt
koten
Mounting Configuration Files with configMap
A configMap lets you mount an application's configuration file separately. For example, when someone pulling an image cannot easily change the config file baked into it, they can edit the configMap YAML instead, apply it, and the file inside the container reflects the change. Personally I see this mainly as a more standardized way to manage configuration changes; the ordinary volume mounts above could achieve the same result.
Use kubectl explain cm to view the documentation
I. Basic usage
Write the configmap manifest; mind the indentation
[root@Master231 configMap]# cat 01-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: games-conf
# The cm's data
data:
  # A key-value entry
  author: koten
  # A file-like entry, multi-line mode
  author.info: |
    name: koten
    hobby: "k8s,nginx,docker"
[root@Master231 configMap]# kubectl apply -f 01-cm.yaml
[root@Master231 configMap]# kubectl describe configmap games-conf
Name: games-conf
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
author:
----
koten
author.info:
----
name: koten
hobby: "k8s,nginx,docker"
BinaryData
====
Events: <none>
List the configmaps and write a Pod resource that consumes one
[root@Master231 ~]# kubectl get cm
NAME DATA AGE
games-conf 2 7m16s
kube-root-ca.crt 1 2d23h
[root@Master231 pod]# cat 16-pods-games-cm.yaml
apiVersion: v1
kind: Pod
metadata:
  name: koten-games-001
spec:
  nodeName: worker232
  volumes:
  - name: data
    # Name of the configMap to use
    configMap:
      name: games-conf
  containers:
  - name: games
    image: harbor.koten.com/koten-games/games:v0.4
    ports:
    - containerPort: 80
      hostPort: 80
    volumeMounts:
    - name: data
      mountPath: /data
Apply the manifest and inspect the data inside
[root@Master231 pod]# kubectl apply -f 16-pods-games-cm.yaml
pod/koten-games-001 created
[root@Master231 pod]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
koten-games-001 1/1 Running 0 22s 10.100.1.48 worker232 <none> <none>
[root@Master231 pod]# kubectl exec -it koten-games-001 -- sh
/ # ls /data/
author author.info
/ # cat /data/author
koten/ #
/ # cat /data/author.info
name: koten
hobby: "k8s,nginx,docker"
/ #
[root@Master231 pod]#
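Besides volume mounts, configMap keys can also be injected as environment variables. A sketch reusing the games-conf cm from above (the Pod name and variable name are placeholders, not from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-env-demo                # placeholder name
spec:
  containers:
  - name: linux
    image: harbor.koten.com/koten-linux/alpine:latest
    stdin: true
    env:
    # Expose the "author" key of games-conf as $AUTHOR in the container
    - name: AUTHOR
      valueFrom:
        configMapKeyRef:
          name: games-conf
          key: author
```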
II. Selecting keys and file names
Write a manifest that references a specific key and renames the mounted file
[root@Master231 pod]# cat 17-pods-games-cm2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: koten-games-002
spec:
  nodeName: worker232
  volumes:
  - name: data
    # Name of the configMap to use
    configMap:
      name: games-conf
      # Reference specific keys of the configMap; if omitted, all keys are mounted
      items:
      # The key in the configMap
      - key: author.info
        # Roughly, the file name at the container's mount point
        path: koten.info
  containers:
  - name: games
    image: harbor.koten.com/koten-games/games:v0.4
    ports:
    - containerPort: 80
      hostPort: 80
    volumeMounts:
    - name: data
      mountPath: /data
Apply the manifest and check the result of consuming the configMap
[root@Master231 pod]# kubectl apply -f 17-pods-games-cm2.yaml
pod/koten-games-002 created
[root@Master231 pod]# kubectl exec -it koten-games-002 -- sh
/ # ls -l /data/
total 0
lrwxrwxrwx 1 root root 17 Jun 17 08:23 koten.info -> ..data/koten.info
/ # cat /data/koten.info
name: koten
hobby: "k8s,nginx,docker"
/ #
[root@Master231 pod]#
To reference several keys, just append more key/path pairs
[root@Master231 pod]# cat 17-pods-games-cm2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: koten-games-002
spec:
  nodeName: worker232
  volumes:
  - name: data
    # Name of the configMap to use
    configMap:
      name: games-conf
      # Reference specific keys of the configMap; if omitted, all keys are mounted
      items:
      # The key in the configMap
      - key: author.info
        # Roughly, the file name at the container's mount point
        path: koten.info
      - key: author
        path: koten
  containers:
  - name: games
    image: harbor.koten.com/koten-games/games:v0.4
    ports:
    - containerPort: 80
      hostPort: 80
    volumeMounts:
    - name: data
      mountPath: /data
III. Hands-on: modifying a configuration file
Change the games' nginx configuration from port 80 to port 88
Write the configMap manifest
[root@Master231 pod]# cat ../configMap/02-cm-game.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: games-conf
# The cm's data
data:
  games-conf: |
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/bird/;
        server_name game01.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/pinshu/;
        server_name game02.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/tanke/;
        server_name game03.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/chengbao/;
        server_name game04.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/motuo/;
        server_name game05.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/liferestart/;
        server_name game06.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/huangjinkuanggong/;
        server_name game07.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/feijidazhan/;
        server_name game08.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/zhiwudazhanjiangshi/;
        server_name game09.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/xiaobawang/;
        server_name game10.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/pingtai/;
        server_name game11.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/dayu/;
        server_name game12.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/maliao/;
        server_name game13.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/menghuanmonizhan/;
        server_name game14.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/qieshuiguo/;
        server_name game15.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/wangzhezhicheng/;
        server_name game16.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/zhiwuVSjiangshi/;
        server_name game17.koten.com;
    }
    server {
        listen 0.0.0.0:88;
        root /usr/local/nginx/html/killbird/;
        server_name game18.koten.com;
    }
Apply the configMap (recreating it first), then write a Pod manifest that consumes it
[root@Master231 pod]# kubectl get cm
NAME DATA AGE
games-conf 2 44m
kube-root-ca.crt 1 3d
[root@Master231 pod]# kubectl delete cm games-conf
configmap "games-conf" deleted
[root@Master231 pod]# kubectl get cm
NAME DATA AGE
kube-root-ca.crt 1 3d
[root@Master231 pod]# kubectl apply -f ../configMap/02-cm-game.yaml
configmap/games-conf created
[root@Master231 pod]# cat 18-pods-games-cm3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: koten-games-003
spec:
  nodeName: worker232
  volumes:
  - name: data
    configMap:
      name: games-conf
      items:
      - key: games-conf
        path: games.conf
  containers:
  - name: games
    image: harbor.koten.com/koten-games/games:v0.4
    volumeMounts:
    - name: data
      mountPath: /etc/nginx/conf.d
Apply the manifest, then check whether the port changed and the games are reachable
[root@Master231 pod]# kubectl apply -f 18-pods-games-cm3.yaml
pod/koten-games-003 created
[root@Master231 pod]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
koten-games-003 1/1 Running 0 7s 10.100.1.57 worker232 <none> <none>
[root@Master231 pod]# kubectl exec -it koten-games-003 -- sh
/ # cat /etc/nginx/conf.d/games.conf
server {
    listen 0.0.0.0:88;
    root /usr/local/nginx/html/bird/;
    server_name game01.koten.com;
}
......
[root@Master231 pod]# curl -sH 'host:game01.koten.com' 10.100.1.57:88
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta id="viewport" name="viewport" content="width=device-width,user-scalable=no" />
<script type="text/javascript" src="21.js"></script>
<title>小鸟飞飞飞-文章库小游戏</title>
<style type="text/css">
body {
margin:0px;
......
My container startup script still writes port 80 into the config, yet the effective port is 88, so the mounted configMap file is what nginx actually reads; from now on, changing the configuration only means changing the mounted cm. Let's try changing port 88 to 99.
Because we mounted a whole directory, we just update the cm and reload nginx after the change.
If you mount a single file instead, update the cm and then restart the Pod: that both refreshes the mount and restarts the service. Since some services don't support hot reloading, restarting the Pod is the safer choice.
One more caveat: if you mount a configMap without specifying any key, all keys are mounted and the data at the mount point follows cm updates automatically; if you mount specific keys, updating the configMap does not refresh the mount point immediately, and you need to re-apply the manifest or recreate the container.
[root@Master231 configMap]# cat 02-cm-game.yaml|head
apiVersion: v1
kind: ConfigMap
metadata:
  name: games-conf
# The cm's data
data:
  games-conf: |
    server {
        listen 0.0.0.0:99;
        root /usr/local/nginx/html/bird/;
......
[root@Master231 configMap]# kubectl apply -f 02-cm-game.yaml
configmap/games-conf configured
[root@Master231 configMap]# kubectl exec -it koten-games-003 -- nginx
[root@Master231 configMap]# curl -sH 'host:game01.koten.com' 10.100.1.57:99|head
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta id="viewport" name="viewport" content="width=device-width,user-scalable=no" />
<script type="text/javascript" src="21.js"></script>
<title>小鸟飞飞飞-文章库小游戏</title>
<style type="text/css">
body {
margin:0px;
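For the single-file case discussed above, a common pattern is to mount just one file with subPath so the rest of the directory is not shadowed. A sketch of the relevant spec fragment (reusing the games-conf cm; note that, per the upstream documentation, subPath mounts never pick up later configMap updates, so a Pod restart is required after changing the cm):

```yaml
spec:
  volumes:
  - name: data
    configMap:
      name: games-conf
      items:
      - key: games-conf
        path: games.conf
  containers:
  - name: games
    image: harbor.koten.com/koten-games/games:v0.4
    volumeMounts:
    - name: data
      # Mount only games.conf instead of replacing the whole conf.d directory
      mountPath: /etc/nginx/conf.d/games.conf
      subPath: games.conf
```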
I'm koten, with 10 years of ops experience, continuously sharing practical ops content. Thanks for reading and following!