Health checks
There are two kinds of checks, and three probe methods.
Kinds:
- Liveness check: if the check fails, the container is restarted, but the pod's name does not change.
- Readiness check: once a pod becomes unready (its service is unavailable even though the container is still running), it is removed from the load balancer, so requests no longer reach the problem pod.
Methods:
- exec: run a command; exit code 0 means healthy, non-zero means failure.
- httpGet: check the status code of an HTTP request; 2xx/3xx is healthy, 4xx/5xx is failure.
- tcpSocket: test whether a port accepts connections.
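Only exec and httpGet are demonstrated below. For completeness, a minimal sketch of what a tcpSocket probe could look like — the pod name here is hypothetical, and the image follows the same local-registry convention used in the examples that follow:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tcpsocket            # hypothetical name, for illustration only
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      livenessProbe:
        tcpSocket:
          port: 80           # probe passes if this port accepts a TCP connection
        initialDelaySeconds: 5
        periodSeconds: 5
```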
Method 1: exec
[root@k8smaster check]# cat nginx_pod_exec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: exec
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
        timeoutSeconds: 5
        successThreshold: 1
        failureThreshold: 1
[root@k8smaster check]# kubectl describe pod exec    # check the events to see whether the liveness check behaves as expected
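All the exec probe evaluates is the command's exit code, which is why the pod above gets restarted about 30 seconds in, once `/tmp/healthy` is deleted. A quick local simulation of that logic (plain shell, no cluster needed):

```shell
# Simulate the exec liveness probe: `cat` the file and inspect the exit code.
touch /tmp/healthy
cat /tmp/healthy > /dev/null; echo "exit=$?"    # exit=0 -> probe passes

rm -rf /tmp/healthy
cat /tmp/healthy 2> /dev/null; echo "exit=$?"   # exit=1 -> probe fails; the kubelet restarts the container
```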
Method 2: httpGet
[root@k8smaster check]# cat nginx_pod_httpGet.yaml
apiVersion: v1
kind: Pod
metadata:
  name: httpget
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /index.html
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 3
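The httpGet probe is essentially an HTTP GET whose status code decides health. A local illustration using python3's built-in server and curl (port 8099 is arbitrary; in the cluster the request is issued by the kubelet against the pod's IP and port):

```shell
# Serve the current directory, then check status codes the way the probe does.
python3 -m http.server 8099 > /dev/null 2>&1 &
SRV=$!
sleep 1
code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8099/)
echo "status=$code"       # status=200 -> 2xx/3xx, healthy
code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8099/missing)
echo "status=$code"       # status=404 -> 4xx/5xx, probe fails
kill $SRV
```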
626 kubectl create -f nginx_pod_exec.yaml
627 kubectl get pod
628 kubectl describe pod exec
629 vi nginx_pod_httpGet.yaml
630 cat nginx_pod_httpGet.yaml
631 kubectl create -f nginx_pod_httpGet.yaml
632 kubectl delete -f nginx_pod_httpGet.yaml
633 kubectl create -f nginx_pod_httpGet.yaml
634 kubectl get pod
635 kubectl exec -it httpget /bin/bash
636 kubectl describe pod httpget
Readiness check
638 vi nginx-rc-httpGet.yaml
639 kubectl create -f nginx-rc-httpGet.yaml
640 kubectl get all
641 kubectl expose rc readiness --type=NodePort --port=80 --target-port=80
642 kubectl describe svc readiness
643 kubectl exec -it po/readiness-3h0k3 /bin/bash
644 kubectl exec -it readiness-3h0k3 /bin/bash
645 kubectl describe svc readiness
646 history
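The contents of nginx-rc-httpGet.yaml are not captured above. A plausible reconstruction, inferred only from the history (an RC named readiness, exposed on port 80, with pods like readiness-3h0k3) — the probe path and replica count are assumptions:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: readiness
spec:
  replicas: 2                    # assumed
  selector:
    app: readiness
  template:
    metadata:
      labels:
        app: readiness
    spec:
      containers:
        - name: nginx
          image: 10.0.0.11:5000/nginx:1.13
          ports:
            - containerPort: 80
          readinessProbe:        # unready pods are removed from the service endpoints
            httpGet:
              path: /index.html  # assumed path, mirroring the liveness example above
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 3
```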
dashboard
Web UI management (point-and-click in a console)
[root@k8smaster dashboard]# grep -vE '^#|^$' dashboard.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      nodeName: 10.0.0.13
      containers:
        - name: kubernetes-dashboard
          image: 10.0.0.11:5000/kubernetes-dashboard-amd64:v1.4.1
          resources:
            # keep request = limit to keep this container in guaranteed class
            limits:
              cpu: 100m
              memory: 50Mi
            requests:
              cpu: 100m
              memory: 50Mi
          ports:
            - containerPort: 9090
          args:
            - --apiserver-host=http://10.0.0.11:8080
          livenessProbe:
            httpGet:
              path: /
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30
[root@k8smaster dashboard]# grep -vE '^#|^$' dashboard-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
    - port: 80
      targetPort: 9090
Push the image, create the resources from the files, then visit the web UI at 10.0.0.13:8080/ui
heapster
Autoscaling requires monitoring.
178 docker load -i docker_heapster.tar.gz
179 docker load -i docker_heapster_grafana.tar.gz
180 docker load -i docker_heapster_influxdb.tar.gz
181 docker images
182 docker tag docker.io/kubernetes/heapster_influxdb:v0.5 10.0.0.11:5000/heapster_influxdb:v0.5
183 docker tag docker.io/kubernetes/heapster_grafana:v2.6.0 10.0.0.11:5000/heapster_grafana:v2.6.0
184 docker tag docker.io/kubernetes/heapster:canary 10.0.0.11:5000/heapster:canary
718 mkdir heap
719 cd heap
720 ls
721 vim heapster-controller.yaml
722 vim heapster-service.yaml
723 vim influxdb-grafana-controller.yaml
724 vimdiff grafana-service.yaml influxdb-service.yaml
725 grep image *.ymal
726 grep image *.yaml
727 ls
728 vim influxdb-service.yaml
729 vim influxdb-grafana-controller.yaml
730 kubectl create -f .
731 systemctl restart kube-apiserver.service
732 history
The UI is still at 10.0.0.11:8080/ui