Advanced Pod Management (Resource Control, Restart Policies, and Probes)


1. Resource Limits

In Docker we can control the resources a container consumes; Kubernetes, of course, also lets us control the resources of a Pod.

Resource limits are covered in the official documentation:

https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

[root@master ~]# kubectl get node
NAME            STATUS   ROLES    AGE    VERSION
192.168.100.5   Ready    <none>   3d9h   v1.12.3
192.168.100.6   Ready    <none>   3d8h   v1.12.3

Resource limits

Resource requests and limits for Pods and containers:

Each container in a Pod can specify one or more of the following:

'//resources is the field that declares resource constraints'
'//requests sets the baseline resources guaranteed to the container at scheduling time'
'//limits sets the ceiling, i.e. the most resources the container may consume'


spec.containers[].resources.limits.cpu        //CPU ceiling (CPU is measured in cores; 500m = 0.5 core)
spec.containers[].resources.limits.memory     //memory ceiling
spec.containers[].resources.requests.cpu      //baseline CPU allocated at creation
spec.containers[].resources.requests.memory   //baseline memory allocated at creation

Although requests and limits can only be specified on individual containers, it is convenient to speak of Pod-level resource requests and limits: a Pod's request/limit for a given resource type is the sum of that type's requests/limits across every container in the Pod.

We can set these limits in the YAML, as follows:

[root@master demo]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db                                  '//container 1'
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"                         '//基础内存为64M'
        cpu: "250m"                                 '//基础cpu使用为25%'
      limits:
        memory: "128Mi"                      '//这个容器内存上限为128M'
        cpu: "500m"                             '//这个容器cpu上限为50%'
  - name: wp                                        '//container 2'
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

Create the resource

[root@master demo]# kubectl create -f pod1.yaml
pod/frontend created 
[root@master demo]# kubectl get pods

Checking the Pods, we find that the mysql container fails to stay up.


Modify the mysql container's resource limits; see the sketch below.

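The exact edit survives only as a screenshot, but judging from the node allocation shown later (564Mi of memory requests and 1152Mi of memory limits for this Pod), the db container's memory values were most likely raised to roughly the following; a sketch under that assumption:

    resources:
      requests:
        memory: "500Mi"                      '//mysql needs far more than 64Mi to initialize'
        cpu: "250m"
      limits:
        memory: "1Gi"                        '//a higher ceiling so mysqld is not OOM-killed'
        cpu: "500m"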

Recreate the Pod; this time it runs normally.


View the detailed events

[root@localhost demo]# kubectl describe pod frontend          '//view the Pod details'

[root@master demo]# kubectl describe nodes 192.168.100.5            '//view the node's resource allocation'
Name:               192.168.100.5
Non-terminated Pods:         (5 in total)
  Namespace                  Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                               ------------  ----------  ---------------  -------------
  default                    frontend                           500m (12%)    1 (25%)     564Mi (30%)      1152Mi (61%)
  default                    my-tomcat-8884884f6-gk7rw          0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    nginx-7697996758-j6tfj             0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    nginx-7697996758-ldfvx             0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    nginx-deployment-d55b94fd-6f6hm    0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests     Limits
  --------  --------     ------
  cpu       500m (12%)   1 (25%)
  memory    564Mi (30%)  1152Mi (61%)
Events:
  Type    Reason                   Age                From                    Message
  ----    ------                   ----               ----                    -------
  Normal  NodeHasSufficientDisk    16m (x3 over 26h)  kubelet, 192.168.100.5  Node 192.168.100.5 status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  16m (x3 over 26h)  kubelet, 192.168.100.5  Node 192.168.100.5 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    16m (x3 over 26h)  kubelet, 192.168.100.5  Node 192.168.100.5 status is now: NodeHasNoDiskPressure
  Normal  NodeReady                16m (x2 over 21h)  kubelet, 192.168.100.5  Node 192.168.100.5 status is now: NodeReady

2. Restart Policies

A restart policy defines what happens when a container in a Pod terminates or fails.

1. Always: whenever the container exits, always restart it. This is the default policy.

2. OnFailure: restart the container only when it exits abnormally (non-zero exit code).

3. Never: never restart the container after it exits.

(Note: Kubernetes cannot restart a Pod resource in place; a Pod can only be deleted and recreated. The policy governs the containers inside the Pod.)
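
restartPolicy is a Pod-level field, a sibling of containers under spec; a minimal sketch (the Pod name and args are hypothetical, for illustration only):

apiVersion: v1
kind: Pod
metadata:
  name: restart-demo
spec:
  restartPolicy: OnFailure              '//applies to every container in this Pod'
  containers:
  - name: app
    image: busybox
    args: ["/bin/sh", "-c", "exit 1"]   '//non-zero exit code, so OnFailure restarts it'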

[root@localhost demo]# kubectl edit deploy          '//the Pod template of an existing Deployment shows the default policy'
 restartPolicy: Always
//Example
[root@localhost demo]# vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30; exit 3
[root@localhost demo]# kubectl apply -f pod2.yaml 
pod/foo created

Watch the restart count increment.

Because exit 3 returns a non-zero code, the exit is treated as a failure; under the default Always policy the kubelet restarts the container on any exit, so the count keeps growing.

[root@master demo]# kubectl get pods


Now modify the manifest so the Pod is not restarted automatically; see the sketch below.

[root@master demo]# vim pod2.yaml

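The edited file was shown only as a screenshot; since the Pod afterwards sits in Error with zero restarts, the change is evidently restartPolicy: Never. A sketch of the modified pod2.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  restartPolicy: Never          '//do not restart, even on a non-zero exit'
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30; exit 3
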
[root@master demo]# kubectl delete -f pod2.yaml
pod "foo" deleted
[root@master demo]# kubectl apply -f pod2.yaml
pod/foo created
[root@master demo]# kubectl get pods

After about 30 seconds foo ends up in Error status and RESTARTS stays at 0 (the same state is visible in the listings further below).

3. Probes: Creation and Check Methods

Health checks are also known as probes. Liveness and readiness rules can be defined on the same container.

There are two probe types:

1. Liveness probe (livenessProbe)

Determines whether the container is still alive (Running). If the check fails, the kubelet kills the container and then acts according to the Pod's restartPolicy.
If a container does not define a liveness probe, the kubelet treats the probe result as permanently Success.

2. Readiness probe (readinessProbe)

Determines whether the container's service is ready (Ready). If the check fails, Kubernetes removes the Pod from the Service's endpoints; once the Pod returns to Ready, it is added back to the backend Endpoint list. This guarantees that a client accessing the Service is never forwarded to a Pod instance whose service is unavailable.
(An Endpoint is the Service's load-balancing backend list, holding the addresses of its Pods.)

A probe supports three check methods:

httpGet: send an HTTP GET request; a status code of at least 200 and below 400 counts as success.

exec: run a command inside the container; an exit code of 0 counts as success.

tcpSocket: attempt a TCP connection to the container; establishing the connection counts as success.

The official probe documentation:

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
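
Besides the handler, every probe accepts the same timing fields; a sketch of the commonly used ones (the values here are just illustrative):

    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5      '//wait 5s after the container starts before the first check'
      periodSeconds: 5            '//run a check every 5s'
      timeoutSeconds: 1           '//each check must answer within 1s (default 1)'
      failureThreshold: 3         '//this many consecutive failures mark the probe failed (default 3)'
      successThreshold: 1         '//one success marks it healthy again (must be 1 for liveness)'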

Example 1: exec (suited to basic services, e.g. checking that a PID file exists)

Write the YAML file

[root@master demo]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 30
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

Create the resource

[root@master demo]# kubectl create -f pod3.yaml
pod/liveness-exec created
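
The probe simply runs cat /tmp/healthy inside the container and inspects the exit code; within the first 30 seconds of each run you could reproduce the check by hand:

[root@master demo]# kubectl exec liveness-exec -- cat /tmp/healthy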

Watch the status: for the first 30 seconds /tmp/healthy exists and the checks succeed; after the file is removed they fail, and once three consecutive checks fail (the default failureThreshold) the kubelet kills and restarts the container, so RESTARTS keeps climbing.

[root@master demo]# kubectl get pods -w
NAME                              READY   STATUS              RESTARTS   AGE
foo                               0/1     Error               0          11m
frontend                          2/2     Running             0          32m
liveness-exec                     0/1     ContainerCreating   0          8s
liveness-exec   1/1   Running   0     8s
liveness-exec   1/1   Running   1     71s
liveness-exec   1/1   Running   2     3m1s
liveness-exec   1/1   Running   3     4m28s
liveness-exec   1/1   Running   4     5m40s

Modify the parameters and keep testing; see the sketch below.

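The modified file survives only as a screenshot; judging from the output below (the container now runs about 30 seconds, exits cleanly into Completed, is restarted, and eventually lands in CrashLoopBackOff), the trailing sleep was presumably dropped so the container itself terminates. A sketch under that assumption:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy   '//the container exits after ~30s'
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5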

[root@master demo]# kubectl delete -f pod3.yaml
pod "liveness-exec" deleted
[root@master demo]# kubectl apply -f pod3.yaml
pod/liveness-exec created
[root@master demo]# kubectl get pods -w
NAME                              READY   STATUS              RESTARTS   AGE
foo                               0/1     Error               0          19m
liveness-exec   1/1   Running   0     13s
liveness-exec   0/1   Completed   0     43s
liveness-exec   1/1   Running   1     55s
liveness-exec   0/1   Completed   1     85s
liveness-exec   0/1   CrashLoopBackOff   1     87s

Example 2: httpGet

Write the YAML file

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3

Check the status: the liveness check keeps failing, so the Pod keeps restarting.

[root@master demo]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
liveness-http                     1/1     Running   2          81s

View the detailed information

[root@master demo]#  kubectl describe pod liveness-http

Events:
  Type     Reason     Age                From                    Message
  ----     ------     ----               ----                    -------
  Normal   Scheduled  37s                default-scheduler       Successfully assigned default/liveness-http to 192.168.100.5
  Normal   Pulled     21s                kubelet, 192.168.100.5  Successfully pulled image "nginx"
  Normal   Created    21s                kubelet, 192.168.100.5  Created container
  Normal   Started    21s                kubelet, 192.168.100.5  Started container
  Normal   Pulling    12s (x2 over 36s)  kubelet, 192.168.100.5  pulling image "nginx"
  Warning  Unhealthy  12s (x3 over 18s)  kubelet, 192.168.100.5  Liveness probe failed: HTTP probe failed with statuscode: 404    '//the page returns 404, a failure status, so the container is restarted'
  Normal   Killing    12s                kubelet, 192.168.100.5  Killing container with id docker://nginx:Container failed liveness probe.. Container will be killed and recreated.

The probe fails because the /healthz path does not exist on this server. Next, create a Deployment whose probe points at a path that really exists:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat2
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tomcat2
    spec:
      containers:
      - name: tomcat2
        image: docker.io/tomcat:8.0.52
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /index.jsp     //path of the site's index page
            port: 8080
            httpHeaders:
            - name: Custom-Header
              value: Awesome
          initialDelaySeconds: 3
          periodSeconds: 3
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat2
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31111
  selector:
    app: tomcat2
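
Because the Service exposes the same port as NodePort 31111, the probe target can also be spot-checked by hand from outside the cluster; an illustrative check against one of the node IPs listed earlier (expect an HTTP 200):

[root@master demo]# curl -I http://192.168.100.5:31111/index.jsp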

Check the status: everything runs, and the probes pass.

[root@master demo]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
liveness-tcp                      1/1     Running   0          9h
nginx-7697996758-gzqms            1/1     Running   1          2d22h
nginx-7697996758-j6tfj            1/1     Running   1          2d22h
nginx-7697996758-ldfvx            1/1     Running   1          2d22h
nginx-deployment-d55b94fd-5zhjt   1/1     Running   1          2d21h
nginx-deployment-d55b94fd-6f6hm   1/1     Running   1          2d21h
nginx-deployment-d55b94fd-kr7c6   1/1     Running   1          2d21h
tomcat2-565685d5bd-2hsvs          1/1     Running   0          62s
tomcat2-565685d5bd-drdbx          1/1     Running   0          62s
[root@master demo]# kubectl describe pod tomcat2

View the detailed event information

Events:
  Type    Reason     Age   From                    Message
  ----    ------     ----  ----                    -------
  Normal  Scheduled  95s   default-scheduler       Successfully assigned default/tomcat2-565685d5bd-drdbx to 192.168.100.6
  Normal  Pulled     94s   kubelet, 192.168.100.6  Container image "docker.io/tomcat:8.0.52" already present on machine
  Normal  Created    94s   kubelet, 192.168.100.6  Created container
  Normal  Started    94s   kubelet, 192.168.100.6  Started container

Example 3: the tcpSocket check method

Write the YAML file

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp
  labels:
    app: liveness-tcp
spec:
  containers:
  - name: liveness-tcp
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20
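
Here the readiness probe (first check at 5 s, then every 10 s) gates Service traffic, while the liveness probe (first check at 15 s, then every 20 s) decides restarts. To watch the readiness result feed into endpoints you could expose the Pod; an illustrative check (this Service is created purely for demonstration):

[root@master demo]# kubectl expose pod liveness-tcp --port=80
[root@master demo]# kubectl get endpoints liveness-tcp          '//the Pod IP is listed only while the Pod is Ready'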

Check the status

[root@master demo]# kubectl get pods -w
NAME                              READY   STATUS             RESTARTS   AGE
liveness-tcp                      1/1     Running            0          63s

View the detailed events:

[root@master demo]# kubectl describe pod liveness-tcp
Events:
  Type    Reason     Age    From                    Message
  ----    ------     ----   ----                    -------
  Normal  Scheduled  2m28s  default-scheduler       Successfully assigned default/liveness-tcp to 192.168.100.5
  Normal  Pulling    2m27s  kubelet, 192.168.100.5  pulling image "nginx"
  Normal  Pulled     2m12s  kubelet, 192.168.100.5  Successfully pulled image "nginx"
  Normal  Created    2m12s  kubelet, 192.168.100.5  Created container
  Normal  Started    2m11s  kubelet, 192.168.100.5  Started container

4. The tcpSocket Method in Practice (easy to understand; well suited to web services)

Instance 1

For example, start an nginx container; the nginx service listens on port 80.
Configure a tcpSocket probe that connects to port 80 at a fixed interval. If the connection succeeds, the container is reported healthy or ready; if it fails, the container is reported unhealthy or not ready and the kubelet restarts it.

A reverse-thinking example:
The idea is simple: point the tcpSocket probe at the non-existent port 8080. Every connection attempt is bound to fail, so the Pod restarts endlessly.

Create the resource

[root@master demo]# vim tcp-socket1.yaml
[root@master demo]# kubectl create -f tcp-socket1.yaml


apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    app: httpd
spec:
  containers:
  - name: httpd
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 45
      periodSeconds: 20
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 45
      periodSeconds: 20

Check the status: the Pod keeps restarting, because the probe port is wrong.

[root@master demo]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
httpd                             0/1     Running   3          6m14s

We started an nginx Pod serving on port 80 but pointed the probes at port 8080; the first check runs 45 s after the container starts and repeats every 20 s.
Test result: the Pod container restarts over and over.

describe shows the errors

Events:
  Type     Reason     Age                   From                    Message
  ----     ------     ----                  ----                    -------
  Normal   Scheduled  13m                   default-scheduler       Successfully assigned default/httpd to 192.168.100.5
  Warning  Unhealthy  10m (x5 over 12m)     kubelet, 192.168.100.5  Readiness probe failed: dial tcp 172.17.22.6:8080: connect: connection refused
  Normal   Pulling    10m (x3 over 13m)     kubelet, 192.168.100.5  pulling image "nginx"
  Normal   Killing    10m (x2 over 12m)     kubelet, 192.168.100.5  Killing container with id docker://httpd:Container failed liveness probe.. Container will be killed and recreated.
  Normal   Pulled     10m (x3 over 13m)     kubelet, 192.168.100.5  Successfully pulled image "nginx"
  Normal   Created    10m (x3 over 13m)     kubelet, 192.168.100.5  Created container
  Normal   Started    10m (x3 over 13m)     kubelet, 192.168.100.5  Started container
  Warning  Unhealthy  3m46s (x17 over 12m)  kubelet, 192.168.100.5  Liveness probe failed: dial tcp 172.17.22.6:8080: connect: connection refused

The probe keeps dialing the container IP on port 8080 and always fails, so the container restarts forever.

Instance 2

A normal configuration example
A normal configuration connects to the port that actually serves traffic, port 80.
The rationale: in theory, a long-running application eventually drifts into a broken state from which it cannot recover without a restart. Kubernetes provides liveness probes to detect and remedy exactly this situation; that is the fundamental reason to configure probes, just in case.

Write the YAML file

apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    app: httpd
spec:
  containers:
  - name: httpd
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 45
      periodSeconds: 20
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 45
      periodSeconds: 20

Create the resource

[root@master demo]# kubectl create -f tcp-sockt2.yaml
pod/httpd created

Check the status: the Pod keeps running and the checks stay healthy.

[root@master demo]# kubectl get pods -w
NAME                              READY   STATUS    RESTARTS   AGE
httpd                             1/1     Running   0          3m30s

Instance 3

A simulated failure test under a normal configuration

The idea: start an nginx container, then kill the nginx process after a delay. The probes check TCP port 80; once nginx is dead the connection fails, the probes report the container unhealthy and not ready, and the kubelet restarts it.

[root@master demo]# vim tcp-socket3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    app: httpd
spec:
  containers:
  - name: httpd
    image: nginx
    args:
    - /bin/sh
    - -c
    - nginx; sleep 60; nginx -s quit; sleep 600   '//start nginx, kill it after 60s, then keep the container alive so the failing probe (not a container exit) triggers the restart'
    ports:
    - containerPort: 80
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 20
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 20
      periodSeconds: 10

Configuration notes:

After start-up the container runs nginx; 60 s later, nginx -s quit kills the nginx process.
Readiness and liveness checks begin 20 s after the container starts, so at first they succeed.
Once nginx dies at around the 60 s mark, the probes can no longer connect to port 80 and raise the warnings below:

View the detailed resource information

[root@master demo]# kubectl describe pod   httpd
Events:
  Type     Reason     Age                  From                    Message
  ----     ------     ----                 ----                    -------
  Normal   Scheduled  2m22s                default-scheduler       Successfully assigned default/httpd to 192.168.100.5
  Normal   Pulling    59s (x2 over 2m22s)  kubelet, 192.168.100.5  pulling image "nginx"
  Normal   Killing    59s                  kubelet, 192.168.100.5  Killing container with id docker://httpd:Container failed liveness probe.. Container will be killed and recreated.
  Normal   Pulled     43s (x2 over 119s)   kubelet, 192.168.100.5  Successfully pulled image "nginx"
  Normal   Created    43s (x2 over 119s)   kubelet, 192.168.100.5  Created container
  Normal   Started    43s (x2 over 119s)   kubelet, 192.168.100.5  Started container
  Warning  Unhealthy  10s (x6 over 90s)    kubelet, 192.168.100.5  Readiness probe failed: dial tcp 172.17.22.2:80: connect: connection refused
  Warning  Unhealthy  1s (x6 over 91s)     kubelet, 192.168.100.5  Liveness probe failed: dial tcp 172.17.22.2:80: connect: connection refused

As you can see, once the nginx process is killed, the Pod's container is automatically restarted.

[root@master demo]# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
httpd   0/1     Running   2          3m48s