I. Static Pod
Static Pods are managed by the kubelet and exist only on a specific Node. They cannot be managed through the API Server, cannot be associated with a ReplicationController, Deployment, or DaemonSet, and the kubelet does not perform health checks on them. Static Pods are always created by the kubelet and always run on the Node where that kubelet resides.
There are two ways to create a static Pod: from a configuration file or over HTTP.
On a cluster installed with kubeadm, the kubelet is already configured with the path for static Pod manifests:
# cat /var/lib/kubelet/config.yaml |grep staticPodPath
staticPodPath: /etc/kubernetes/manifests
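The rest of this section uses the file-based method. For the HTTP method, the kubelet can instead fetch manifests from a URL; a minimal sketch of the relevant KubeletConfiguration fields is shown below (the URL is a placeholder, and the staticPodURL field is assumed to be available in your kubelet version):
# /var/lib/kubelet/config.yaml (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests          # file-based static Pods
staticPodURL: http://example.com/static-web.yaml  # HTTP-based static Pods (placeholder URL)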
1. Write a static Pod YAML file and place it in that path:
# cat static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    name: static-web
spec:
  containers:
  - name: static-web
    image: nginx
    ports:
    - name: web
      containerPort: 80
2. Check the Docker processes on the node:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b682d21563dd nginx "nginx -g 'daemon of…" 6 minutes ago Up 6 minutes k8s_static-web_static-web-k8s-2_default_a850d62a685464dd2c0bdb31222085c9_0
3. Check the Pod on the API Server (the kubelet registers a mirror Pod there, named after the Pod plus the node name):
# kubectl get pod
NAME READY STATUS RESTARTS AGE
static-web-k8s-2 1/1 Running 0 7m14s
4. Try to delete the static Pod (the kubelet recreates it immediately, because deleting only removes the mirror Pod while the manifest file still exists):
[root@K8S-1 chapter1]# kubectl delete pod static-web-k8s-2
pod "static-web-k8s-2" deleted
[root@K8S-1 chapter1]# kubectl get pod
NAME READY STATUS RESTARTS AGE
static-web-k8s-2 0/1 Pending 0 1s
[root@K8S-1 chapter1]# kubectl get pod
NAME READY STATUS RESTARTS AGE
static-web-k8s-2 0/1 Pending 0 4s
[root@K8S-1 chapter1]# kubectl get pod
NAME READY STATUS RESTARTS AGE
static-web-k8s-2 1/1 Running 0 6s
5. Delete the YAML file under /etc/kubernetes/manifests; the Pod is then removed for good:
# kubectl get pod
No resources found.
II. Sharing a Volume Between Containers in a Pod
Multiple containers in the same Pod can share Pod-level storage volumes. A Volume can be defined with any of the supported types; each container mounts it independently, and the containers share data through it.
Configure the YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: volume-pod
spec:
  containers:
  - name: tomcat
    image: tomcat
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: app-logs
      mountPath: /usr/local/tomcat/logs
  - name: busybox
    image: busybox
    command: ["sh", "-c", "tail -f /logs/catalina*.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /logs
  volumes:
  - name: app-logs
    emptyDir: {}
Create the Pod:
# kubectl apply -f pod-volume-logs.yaml
pod/volume-pod created
# kubectl get pod
NAME READY STATUS RESTARTS AGE
volume-pod 2/2 Running 0 4m42s
The Pod contains two containers: tomcat, which writes log files, and busybox, which reads them:
# kubectl logs volume-pod -c busybox
31-May-2019 16:54:39.573 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/docs] has finished in [26] ms
31-May-2019 16:54:39.573 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/examples]
31-May-2019 16:54:39.954 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/examples] has finished in [380] ms
31-May-2019 16:54:39.954 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/host-manager]
31-May-2019 16:54:39.992 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/host-manager] has finished in [38] ms
31-May-2019 16:54:39.992 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/manager]
31-May-2019 16:54:40.025 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/manager] has finished in [32] ms
31-May-2019 16:54:40.031 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
31-May-2019 16:54:40.045 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
31-May-2019 16:54:40.093 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 992 ms
# kubectl exec -it volume-pod -c tomcat -- ls /usr/local/tomcat/logs
catalina.2019-05-31.log localhost_access_log.2019-05-31.txt
host-manager.2019-05-31.log manager.2019-05-31.log
localhost.2019-05-31.log
# kubectl exec -it volume-pod -c tomcat -- tail /usr/local/tomcat/logs/catalina.2019-05-31.log
31-May-2019 16:54:39.573 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/docs] has finished in [26] ms
31-May-2019 16:54:39.573 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/examples]
31-May-2019 16:54:39.954 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/examples] has finished in [380] ms
31-May-2019 16:54:39.954 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/host-manager]
31-May-2019 16:54:39.992 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/host-manager] has finished in [38] ms
31-May-2019 16:54:39.992 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/manager]
31-May-2019 16:54:40.025 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/manager] has finished in [32] ms
31-May-2019 16:54:40.031 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
31-May-2019 16:54:40.045 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
31-May-2019 16:54:40.093 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 992 ms
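To confirm that both containers see the same files, you can also list the shared directory from the busybox side (this command is added here for illustration; its output is not reproduced):
# kubectl exec -it volume-pod -c busybox -- ls /logs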
III. Managing Application Configuration with ConfigMap
ConfigMap provides a mechanism for injecting configuration data into containers while keeping the containers themselves independent of Kubernetes. A ConfigMap can be used in the following ways:
1. To generate environment variables inside a container
2. To set command-line arguments for a container
3. Mounted as a file or directory inside a container via a Volume
A ConfigMap can be created from a YAML manifest or directly with the kubectl create configmap command.
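The YAML route is simply a manifest with a data map applied via kubectl; a minimal sketch (the name and keys here are illustrative, not from the original post):
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config      # hypothetical name
data:
  loglevel: info
  datadir: /var/data
# kubectl apply -f example-config.yaml
The command-line variants are shown below.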
1. Creating from a directory
When --from-file points to a directory, each file name in that directory becomes a key in the ConfigMap, and the file's content becomes the key's value:
# ls
my.cnf web.xml
# cat my.cnf
general_log=on
slow_query_log=on
long_query_time = 4
log_bin=on
log-bin=/usr/local/mysql/data/bin.log
# cat web.xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
version="3.0">
<distributable/>
......
......
<welcome-file-list>
<welcome-file>index.html</welcome-file>
<welcome-file>index.htm</welcome-file>
<welcome-file>index.jsp</welcome-file>
</welcome-file-list>
</web-app>
# kubectl create configmap test1 --from-file configfiles
configmap/test1 created
# kubectl describe configmap test1
Name: test1
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
my.cnf:
----
general_log=on
slow_query_log=on
long_query_time = 4
log_bin=on
log-bin=/usr/local/mysql/data/bin.log
web.xml:
----
<?xml version="1.0" encoding="ISO-8859-1"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
version="3.0">
<distributable/>
......
......
<welcome-file-list>
<welcome-file>index.html</welcome-file>
<welcome-file>index.htm</welcome-file>
<welcome-file>index.jsp</welcome-file>
</welcome-file-list>
</web-app>
Events: <none>
2. Creating from files
When --from-file points to individual files, you can create a ConfigMap containing several keys in a single command, and optionally specify the key name for each file:
# kubectl create configmap test2 --from-file=my.cnf --from-file=web.xml
configmap/test2 created
# kubectl get configmap test2 -o yaml
apiVersion: v1
data:
  my.cnf: |
    general_log=on
    slow_query_log=on
    long_query_time = 4
    log_bin=on
    log-bin=/usr/local/mysql/data/bin.log
  web.xml: |
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <web-app xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
    http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
    version="3.0">
    <distributable/>
    ......
    ......
    <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
    </web-app>
kind: ConfigMap
metadata:
  creationTimestamp: "2019-06-01T10:49:09Z"
  name: test2
  namespace: default
  resourceVersion: "329312"
  selfLink: /api/v1/namespaces/default/configmaps/test2
  uid: e260c9aa-845a-11e9-a2f2-00505694834d
You can also avoid using the file name as the key and assign a new key to each file with the key=file syntax:
# kubectl create configmap test3 --from-file=the.cnf=my.cnf
configmap/test3 created
# kubectl get configmap test3 -o yaml
apiVersion: v1
data:
  the.cnf: |
    general_log=on
    slow_query_log=on
    long_query_time = 4
    log_bin=on
    log-bin=/usr/local/mysql/data/bin.log
kind: ConfigMap
metadata:
  creationTimestamp: "2019-06-01T10:53:41Z"
  name: test3
  namespace: default
  resourceVersion: "329706"
  selfLink: /api/v1/namespaces/default/configmaps/test3
  uid: 849a9eab-845b-11e9-a2f2-00505694834d
3. Using --from-literal to specify key/value pairs directly on the command line
# kubectl create configmap test4 --from-literal=type=null --from-literal=dir=/var/log
configmap/test4 created
# kubectl describe configmap test4
Name: test4
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
dir:
----
/var/log
type:
----
null
Events: <none>
4. Consuming a ConfigMap through environment variables
# cat cm-app.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-app
data:
  apploglevel: info
  appdatadir: /var/data
# cat cm-env.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-env
data:
  APPTYPE: char
# kubectl create -f cm-app.yaml -f cm-env.yaml
configmap/cm-app created
configmap/cm-env created
# cat cm-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-test
spec:
  containers:
  - name: test-container
    image: busybox
    command: [ "/bin/sh", "-c", "env | grep APP" ]
    env:
    - name: APPLOGLEVEL
      valueFrom:
        configMapKeyRef:
          name: cm-app
          key: apploglevel
    - name: APPDATADIR
      valueFrom:
        configMapKeyRef:
          name: cm-app
          key: appdatadir
    envFrom:
    - configMapRef:
        name: cm-env
  restartPolicy: Never
# kubectl create -f cm-test.yaml
pod/cm-test created
# kubectl logs cm-test
APPDATADIR=/var/data
APPTYPE=char
APPLOGLEVEL=info
Two ways of defining environment variables are used above: env and envFrom. With envFrom, every key=value pair defined in the referenced ConfigMap is automatically turned into an environment variable in the container.
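As an aside (a sketch not in the original post, assuming a reasonably recent API version), envFrom also accepts a prefix for the generated variable names, and the reference can be marked optional so the Pod still starts if the ConfigMap is missing:
    envFrom:
    - prefix: CM_              # APPTYPE would become CM_APPTYPE
      configMapRef:
        name: cm-env
        optional: true         # do not fail if cm-env does not exist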
5. Consuming a ConfigMap through volumeMounts
# cat cm-app.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-app
data:
  apploglevel: info
  appdatadir: /var/data
# cat cm-volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume
spec:
  containers:
  - name: cm-volume
    image: busybox
    command: [ "/bin/sh", "-c", "cat /etc/config/path/key.app" ]
    volumeMounts:
    - name: volume-test          # name of the volume being referenced
      mountPath: /etc/config     # mount point inside the container
  volumes:
  - name: volume-test            # volume definition
    configMap:
      name: cm-app               # the ConfigMap to use
      items:
      - key: apploglevel
        path: path/key.app       # the value is written to the file key.app
  restartPolicy: Never
After the ConfigMap and the Pod are created, the Pod prints:
# kubectl logs cm-volume
info
If items is not specified when referencing the ConfigMap, mounting it as a volume creates one file per key in the mount directory, each file named after its key and containing that key's value (see the sketch below).
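For example, dropping items from the volume definition above (a sketch, not from the original post) would produce one file per key, i.e. /etc/config/apploglevel and /etc/config/appdatadir:
  volumes:
  - name: volume-test
    configMap:
      name: cm-app               # no items: every key becomes a file under the mount path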
Restrictions on using ConfigMap:
- A ConfigMap must be created before the Pods that reference it
- A ConfigMap is namespaced; only Pods in the same Namespace can reference it
- Static Pods cannot reference a ConfigMap
IV. Obtaining Pod Information Inside a Container: the Downward API
The Downward API lets a container obtain information about its own Pod and Node, either through environment variables or through a downwardAPI volume. The env section below injects several commonly used fields as environment variables:
env:
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: MY_POD_SERVICE_ACCOUNT
  valueFrom:
    fieldRef:
      fieldPath: spec.serviceAccountName
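A minimal, complete Pod wrapping fields like these might look as follows (the Pod name, image, and command are illustrative assumptions, not from the original post):
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env          # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox
    command: [ "/bin/sh", "-c", "env | grep MY_" ]   # print the injected variables
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
  restartPolicy: Never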
V. Pod Status and Health Checks
1. Pod phases
- Pending: the API Server has created the Pod, but one or more of its container images have not been created yet, including images still being downloaded
- Running: all containers in the Pod have been created, and at least one container is running, starting, or restarting
- Succeeded: all containers in the Pod have terminated successfully and will not be restarted
- Failed: all containers in the Pod have terminated, and at least one of them failed, i.e. exited with a non-zero status or was terminated by the system
- Unknown: the Pod's state could not be obtained for some reason, usually because of a communication failure with the Pod's host
2. Pod restart policy
The Pod restart policy (restartPolicy) can be Always, OnFailure, or Never; the default is Always.
- Always: the kubelet automatically restarts the container whenever it fails
- OnFailure: the kubelet restarts the container only when it terminates with a non-zero exit code
- Never: the container is never restarted, regardless of its state
3. Pod health checks
LivenessProbe: liveness probing, used to decide whether a container must be restarted
ReadinessProbe: readiness probing, used to decide whether a container is ready to receive traffic
Either probe can be configured with one of the following three handlers:
- ExecAction: run a command inside the container; an exit code of 0 means the container is healthy
- TCPSocketAction: perform a TCP check against the container's IP address on a specified port; the container is healthy if a connection can be established
- HTTPGetAction: perform an HTTP GET request against the container's IP address on a specified port and path; a response status code greater than or equal to 200 and less than 400 means the container is healthy
Configuring an exec probe
# cat pod-exec.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/health
      initialDelaySeconds: 15
      timeoutSeconds: 1
Check the Pod events:
# kubectl describe pod liveness-exec
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m10s default-scheduler Successfully assigned default/liveness-exec to k8s-2
Normal Pulled 77s (x3 over 3m59s) kubelet, k8s-2 Successfully pulled image "busybox"
Normal Created 76s (x3 over 3m59s) kubelet, k8s-2 Created container liveness
Normal Started 76s (x3 over 3m59s) kubelet, k8s-2 Started container liveness
Warning Unhealthy 35s (x9 over 3m35s) kubelet, k8s-2 Liveness probe failed: cat: can't open '/tmp/health': No such file or directory
Normal Killing 35s (x3 over 3m15s) kubelet, k8s-2 Container liveness failed liveness probe, will be restarted
Normal Pulling 5s (x4 over 4m10s) kubelet, k8s-2 Pulling image "busybox"
# kubectl get pod
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 3 4m50s
# The RESTARTS count is now 3
Configuring a TCP probe
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 1
Configuring an HTTP probe
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /_status/healthz
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 1
The nginx image does not serve /_status/healthz, so the probe fails with a 404 and the kubelet restarts the container, as the Pod events show:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m37s default-scheduler Successfully assigned default/liveness-http to k8s-2
Warning Unhealthy 14s (x6 over 104s) kubelet, k8s-2 Liveness probe failed: HTTP probe failed with statuscode: 404
Normal Killing 14s (x2 over 84s) kubelet, k8s-2 Container nginx failed liveness probe, will be restarted
Normal Pulling 13s (x3 over 2m36s) kubelet, k8s-2 Pulling image "nginx"
Normal Pulled 5s (x3 over 2m19s) kubelet, k8s-2 Successfully pulled image "nginx"
Normal Created 5s (x3 over 2m18s) kubelet, k8s-2 Created container nginx
Normal Started 5s (x3 over 2m18s) kubelet, k8s-2 Started container nginx
# kubectl get pod
NAME READY STATUS RESTARTS AGE
liveness-http 1/1 Running 2 3m31s
For every probe, the initialDelaySeconds and timeoutSeconds parameters should be set; they specify the delay before the first check after container startup and the probe timeout, respectively.
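ReadinessProbe, mentioned above, uses the same handlers and parameters; a minimal sketch (added here for illustration, not from the original post) that marks an nginx container ready only once it answers on port 80:
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo            # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      timeoutSeconds: 1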
Reposted from: https://blog.51cto.com/lullaby/2403331