Advanced Pod management
Pod resource control
In Docker we can constrain a container's resources; Kubernetes likewise lets us constrain the resources of a Pod, configured in its YAML manifest.
Each container in a Pod can specify one or more of the following:
// resources — the resource-constraints field
// requests — the baseline resources guaranteed to the container
// limits — the resource ceiling, i.e. the most the container may use
spec.containers[].resources.limits.cpu          CPU limit
spec.containers[].resources.limits.memory       memory limit
spec.containers[].resources.requests.cpu        baseline CPU allocated at creation
spec.containers[].resources.requests.memory     baseline memory allocated at creation
Although requests and limits can only be specified on individual containers, it is convenient to speak of Pod-level resource requests and limits: for a given resource type, the Pod's request/limit is the sum of that type's requests/limits across all containers in the Pod.
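As a sketch of that summation, using the per-container values from the frontend example below (two containers, each requesting 250m CPU and 64Mi memory), the pod-level totals can be checked with plain shell arithmetic:

```shell
# Pod-level request = sum of per-container requests.
# Values mirror the frontend example: db and wp each request 250m CPU / 64Mi.
cpu_requests_m=(250 250)
mem_requests_mi=(64 64)

total_cpu=0; total_mem=0
for c in "${cpu_requests_m[@]}"; do total_cpu=$((total_cpu + c)); done
for m in "${mem_requests_mi[@]}"; do total_mem=$((total_mem + m)); done

echo "Pod CPU request: ${total_cpu}m"      # 500m
echo "Pod memory request: ${total_mem}Mi"  # 128Mi
```
These are the same totals the scheduler uses when deciding whether a node has room for the Pod.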
Edit the YAML file
[root@localhost bate2]# vim pod-bate1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db              ## first container
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"    ## request 64Mi of memory
        cpu: "250m"       ## request 250 millicores (0.25 CPU)
      limits:
        memory: "128Mi"   ## memory capped at 128Mi
        cpu: "500m"       ## CPU capped at 500 millicores (0.5 CPU)
  - name: wp              ## second container
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Create the Pod from this file
[root@localhost bate2]# kubectl create -f pod-bate1.yaml
pod/frontend created
View the events with kubectl describe pod frontend:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m16s default-scheduler Successfully assigned default/frontend to 20.0.0.4
Normal Pulling 5m15s kubelet, 20.0.0.4 pulling image "mysql"
Normal Pulled 2m47s kubelet, 20.0.0.4 Successfully pulled image "mysql"
Normal Created 2m47s kubelet, 20.0.0.4 Created container
Normal Started 2m46s kubelet, 20.0.0.4 Started container
Normal Pulling 2m46s kubelet, 20.0.0.4 pulling image "wordpress"
On the node, check the containers
[root@localhost ~]# docker ps -a |grep mysql
ea37e261a49b mysql "docker-entrypoint.s…" 6 minutes ago Up 6 minutes k8s_db_frontend_default_e476422c-0d42-11eb-ad59-000c29aff78f_0
[root@localhost ~]# docker ps -a |grep word
19afa0f4c125 wordpress "docker-entrypoint.s…" 15 seconds ago Up 14 seconds k8s_wp_frontend_default_e476422c-0d42-11eb-ad59-000c29aff78f_0
On the master, view the node's resource allocation (kubectl describe node <node-name>):
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default frontend 500m (12%) 1 (25%) 128Mi (7%) 256Mi (14%)
default mytomcat-59bc9fdc84-7db4p 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 500m (12%) 1 (25%)
memory 128Mi (7%) 256Mi (14%)
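The percentages in that output reveal the node's allocatable capacity: a 500m CPU request showing as 12% implies roughly 4 CPUs of allocatable capacity (500/4000 = 12.5%, truncated). A quick sanity check, assuming a 4-CPU node:

```shell
# Assumed node capacity: 4 CPUs = 4000 millicores. Integer division truncates,
# matching the 12% figure printed by kubectl describe node.
node_cpu_m=4000
req_m=500
pct=$((req_m * 100 / node_cpu_m))
echo "CPU requests: ${pct}%"
```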
View the namespaces
[root@localhost bate2]# kubectl get ns
NAME STATUS AGE
default Active 13d
kube-public Active 13d
kube-system Active 13d
Restart policy
Restart policy: the action taken to restart a Pod's containers after a failure
1. Always: always restart the container after it terminates; this is the default policy
2. OnFailure: restart the container only when it exits abnormally (nonzero exit code)
3. Never: never restart the container after it terminates
(Note: Kubernetes cannot restart a Pod resource in place; a Pod can only be deleted and recreated.)
[root@localhost bate2]# kubectl edit deploy
restartPolicy: Always
Edit the configuration file
[root@localhost bate2]# vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30; exit 3
[root@localhost bate2]# kubectl create -f pod2.yaml
pod/foo created
Check the restart count
[root@localhost bate2]# kubectl get pods
NAME READY STATUS RESTARTS AGE
foo 1/1 Running 0 87s
[root@localhost bate2]# kubectl get pods
NAME READY STATUS RESTARTS AGE
foo 1/1 Running 2 4m21s
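The restarts happen because the container's command ends with `exit 3`: any nonzero exit status marks the container as failed, and the default Always policy restarts it (with an increasing back-off, hence the growing RESTARTS count). The status check itself is plain shell semantics:

```shell
# kubelet looks at the container process's exit status; nonzero means failure.
sh -c 'exit 3'
status=$?
echo "container exit status: $status"   # 3 -> failed, so Always/OnFailure restart it
```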
Another example
[root@localhost bate2]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: fish
spec:
  containers:
  - name: cent
    image: centos:7
    imagePullPolicy: IfNotPresent
    args:
    - /bin/sh
    - -c
    - sleep 10
  restartPolicy: Never
[root@localhost bate2]# kubectl create -f pod3.yaml
pod/fish created
[root@localhost bate2]# kubectl get pod
NAME READY STATUS RESTARTS AGE
fish 1/1 Running 0 2s
[root@localhost bate2]# kubectl get pod
NAME READY STATUS RESTARTS AGE
fish 0/1 Completed 0 22s
Probes
A Pod's health checks are performed by probes; both probe types can be defined on the same container.
There are two types of probes:
1. Liveness probe (livenessProbe)
Determines whether the container is alive (running). If the check fails, kubelet kills the container and then acts according to the Pod's restartPolicy.
If a container does not define a liveness probe, kubelet treats the probe as always returning Success.
2. Readiness probe (readinessProbe)
Determines whether the container's service is ready. If the check fails, Kubernetes removes the Pod from the Service's endpoints; once the Pod is back in the Ready state, it is re-added to the backend endpoint list. This guarantees that requests to the Service are never forwarded to a Pod whose service is unavailable.
An Endpoint is the Service's load-balancing backend list, holding the addresses of the Pods.
Probes support three check mechanisms; both liveness and readiness probes can use any of them:
1. exec (most common): run a command inside the container; an exit status of 0 counts as success
2. httpGet: send an HTTP GET request; a status code of at least 200 and below 400 counts as success
3. tcpSocket: open a TCP socket to the container; establishing the connection counts as success
Check with exec
[root@localhost bate2]# vim pod4.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 30
    livenessProbe:
      exec:                   ## exec-style liveness probe
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5  ## wait 5 seconds after container start before the first probe
      periodSeconds: 5        ## probe every 5 seconds
[root@localhost bate2]# kubectl create -f pod4.yaml
pod/liveness-exec created
If the container restarts, the probe is working:
[root@localhost bate2]# kubectl get pods liveness-exec -w
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 0 59s
liveness-exec 1/1 Running 1 90s
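What the probe evaluates is just the exit status of `cat /tmp/healthy`: zero while the file exists, nonzero once it has been removed. A local sketch of that timeline (using a hypothetical path so as not to touch the real probe file):

```shell
# Mimic the exec probe: success while the file exists, failure after removal.
probe() { cat /tmp/healthy_demo >/dev/null 2>&1 && echo alive || echo dead; }

touch /tmp/healthy_demo
probe          # alive (probe command exits 0)
rm -f /tmp/healthy_demo
probe          # dead  (probe command exits nonzero -> kubelet restarts the container)
```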
Check with httpGet
[root@localhost bate2]# vim pod5.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: nginx
    image: nginx
#    args:
#    - /server
    livenessProbe:
      httpGet:
        path: /healthz        ## probe this path
        port: 80
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3  ## wait 3 seconds before the first probe
      periodSeconds: 3        ## probe every 3 seconds
[root@master test]# kubectl create -f pod5-test.yaml
pod/liveness-http created
[root@master test]# kubectl get pod
NAME READY STATUS RESTARTS AGE
liveness-http 1/1 Running 1 39s
[root@master test]# kubectl describe pod liveness-http
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 61s default-scheduler Successfully assigned default/liveness-http to 192.168.233.133
Normal Pulling 18s (x3 over 60s) kubelet, 192.168.233.133 pulling image "nginx"
Normal Killing 18s (x2 over 36s) kubelet, 192.168.233.133 Killing container with id docker://nginx:Container failed liveness probe.. Container will be killed and recreated.
Normal Pulled 5s (x3 over 48s) kubelet, 192.168.233.133 Successfully pulled image "nginx"
Normal Created 5s (x3 over 47s) kubelet, 192.168.233.133 Created container
Normal Started 5s (x3 over 47s) kubelet, 192.168.233.133 Started container
Warning Unhealthy 0s (x7 over 42s) kubelet, 192.168.233.133 Liveness probe failed: HTTP probe failed with statuscode: 404 ## the page returns 404, a failure state, so the container is restarted
Check with tcpSocket
[root@master test]# vim pod6-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp
  labels:
    app: liveness-tcp
spec:
  containers:
  - name: liveness-tcp
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20
## With this configuration, kubelet tries to open a TCP socket to the container on the specified port. If the connection can be established, the container is considered healthy; otherwise it is considered faulty.
Create the Pod and watch its restart status; the behavior is similar to the httpGet check.
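A rough shell analogue of the tcpSocket check, assuming bash (for its `/dev/tcp` virtual files) and that nothing is listening on localhost port 1; probing a closed port demonstrates the failure branch:

```shell
# tcpSocket probe analogue: try to open a TCP connection; success = healthy.
# Port 1 on localhost is assumed closed here, so the check reports failure.
if timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/1' 2>/dev/null; then
  echo "probe success: container considered healthy"
else
  echo "probe failure: kubelet would kill and restart the container"
fi
```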