1. Resources
1.1 Introduction to Resource Management
In Kubernetes, everything is abstracted as a resource, and users manage Kubernetes by operating on these resources.
Kubernetes is essentially a cluster system in which users can deploy all kinds of services.
Deploying a service really means running containers in the Kubernetes cluster and running the specified program inside those containers.
The smallest unit of management in Kubernetes is the Pod, not the container: containers can only be placed inside Pods, and Kubernetes normally does not manage Pods directly either, but manages them through Pod controllers.
Access to the services running in a Pod is provided by the Service resource, and persistence for the data of the programs in a Pod is provided by the various storage systems that Kubernetes supports.
1.2 Resource Management Approaches
- Imperative command management: operate on Kubernetes resources directly with commands
kubectl run nginx-pod --image=nginx:latest --port=80
- Imperative object configuration: operate on Kubernetes resources with commands plus configuration files
kubectl create/patch -f nginx-pod.yaml
- Declarative object configuration: operate on Kubernetes resources with the apply command plus configuration files (a minimal sketch of nginx-pod.yaml follows this list)
kubectl apply -f nginx-pod.yaml
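The commands above reference an nginx-pod.yaml file that is not shown in the original text. As a minimal hedged sketch (the pod name and labels are assumptions), such a manifest could look like this:
# nginx-pod.yaml - assumed minimal manifest for the commands above
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    run: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80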
1.3 Resource Types
Common resource types
Common kubectl operations
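The original tables of resource types and kubectl operations are not reproduced here. As a rough reference sketch, the operations used most often in this section are:
kubectl api-resources            # list resource types, their short names and API groups
kubectl get <resource>           # list resources
kubectl describe <resource>      # show detailed information about a resource
kubectl logs <pod>               # view the logs of a pod's container
kubectl exec -it <pod> -- sh     # run a command inside a running container
kubectl delete <resource> <name> # delete a resource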
1.4 Basic Command Usage
1. Creating a controller
# Create a deployment named webcluster that manages 2 pods
[root@k8s-master ~]# kubectl create deployment webcluster --image nginx --replicas 2
deployment.apps/webcluster created
# List the deployments
[root@k8s-master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 6d3h
webcluster 2/2 2 0 9s
# Show help for the deployment resource
[root@k8s-master ~]# kubectl explain deployment
GROUP: apps
KIND: Deployment
VERSION: v1
DESCRIPTION:
Deployment enables declarative updates for Pods and ReplicaSets.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <ObjectMeta>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <DeploymentSpec>
Specification of the desired behavior of the Deployment.
status <DeploymentStatus>
Most recently observed status of the Deployment.
# Show help for a field of the deployment resource
[root@k8s-master ~]# kubectl explain deployment.spec
GROUP: apps
KIND: Deployment
VERSION: v1
FIELD: spec <DeploymentSpec>
DESCRIPTION:
Specification of the desired behavior of the Deployment.
DeploymentSpec is the specification of the desired behavior of the
Deployment.
FIELDS:
minReadySeconds <integer>
Minimum number of seconds for which a newly created pod should be ready
without any of its container crashing, for it to be considered available.
Defaults to 0 (pod will be considered available as soon as it is ready)
paused <boolean>
Indicates that the deployment is paused.
progressDeadlineSeconds <integer>
The maximum time in seconds for a deployment to make progress before it is
considered to be failed. The deployment controller will continue to process
failed deployments and a condition with a ProgressDeadlineExceeded reason
will be surfaced in the deployment status. Note that progress will not be
estimated during the time a deployment is paused. Defaults to 600s.
replicas <integer>
Number of desired pods. This is a pointer to distinguish between explicit
zero and not specified. Defaults to 1.
revisionHistoryLimit <integer>
The number of old ReplicaSets to retain to allow rollback. This is a pointer
to distinguish between explicit zero and not specified. Defaults to 10.
selector <LabelSelector> -required-
Label selector for pods. Existing ReplicaSets whose pods are selected by
this will be the ones affected by this deployment. It must match the pod
template's labels.
strategy <DeploymentStrategy>
The deployment strategy to use to replace existing pods with new ones.
template <PodTemplateSpec> -required-
Template describes the pods that will be created. The only allowed
template.spec.restartPolicy value is "Always".
2. Editing a controller
# Edit the deployment interactively (the replica count is changed to 3 here)
[root@k8s-master ~]# kubectl edit deployments.apps webcluster
deployment.apps/webcluster edited
[root@k8s-master ~]# kubectl get deployments.apps webcluster
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster 3/3 3 2 7m29s
[root@k8s-master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster 3/3 3 2 7m36s
3. Updating and deleting
# Patch the deployment (set replicas to 4)
[root@k8s-master ~]# kubectl patch deployments.apps webcluster -p '{"spec":{"replicas":4}}'
deployment.apps/webcluster patched
[root@k8s-master ~]# kubectl get deployments.apps webcluster
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster 2/4 4 2 10m
# Delete the deployment
[root@k8s-master ~]# kubectl delete deployments.apps webcluster
deployment.apps "webcluster" deleted
[root@k8s-master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/1 1 0 6d3h
1.5 Running and Debugging
# Run a pod
[root@k8s-master ~]# kubectl run testpod --image nginx
pod/testpod created
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
testpod 0/1 ContainerCreating 0 9s
Exposing a port
# Expose the pod through a Service
[root@k8s-master ~]# kubectl expose pod testpod --port 80 --target-port 80
service/testpod exposed
[root@k8s-master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10d
testpod ClusterIP 10.106.254.172 <none> 80/TCP 8s
[root@k8s-master ~]# curl 10.106.254.172
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
Viewing detailed resource information
[root@k8s-master ~]# kubectl describe pods testpod
Name: testpod
Namespace: default
Priority: 0
Service Account: default
Node: k8s-node2/172.25.254.20
Start Time: Sun, 15 Sep 2024 16:36:18 +0800
Labels: run=testpod
Annotations: cni.projectcalico.org/containerID: e6c49459729469c5ef128231d3190f756139f16935196b50fa6c05033362bb38
cni.projectcalico.org/podIP: 10.244.169.148/32
cni.projectcalico.org/podIPs: 10.244.169.148/32
Status: Running
IP: 10.244.169.148
IPs:
IP: 10.244.169.148
Containers:
testpod:
Container ID: docker://ca9d7698d83b7d0b2a3839d0d7a5a6c8ef710ac79ff8ebd74f3f1eba8c87e79c
Image: nginx
Image ID: docker-pullable://nginx@sha256:127262f8c4c716652d0e7863bba3b8c45bc9214a57d13786c854272102f7c945
Port: <none>
Host Port: <none>
State: Running
Started: Sun, 15 Sep 2024 16:36:32 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tpfhn (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-tpfhn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m11s default-scheduler Successfully assigned default/testpod to k8s-node2
Normal Pulling 3m8s kubelet Pulling image "nginx"
Normal Pulled 2m58s kubelet Successfully pulled image "nginx" in 10.487s (10.488s including waiting). Image size: 187694648 bytes.
Normal Created 2m58s kubelet Created container testpod
Normal Started 2m57s kubelet Started container testpod
Viewing resource logs
[root@k8s-master ~]# kubectl logs pods/testpod
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2024/09/15 08:36:33 [notice] 1#1: using the "epoll" event method
2024/09/15 08:36:33 [notice] 1#1: nginx/1.27.1
2024/09/15 08:36:33 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2024/09/15 08:36:33 [notice] 1#1: OS: Linux 5.14.0-70.13.1.el9_0.x86_64
2024/09/15 08:36:33 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1073741816:1073741816
2024/09/15 08:36:33 [notice] 1#1: start worker processes
2024/09/15 08:36:33 [notice] 1#1: start worker process 29
2024/09/15 08:36:33 [notice] 1#1: start worker process 30
10.244.235.192 - - [15/Sep/2024:08:38:23 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.76.1"
Running an interactive pod
# Clean up the previous pod
[root@k8s-master ~]# kubectl delete pod testpod --force
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "testpod" force deleted
# Run an interactive pod
[root@k8s-master ~]# kubectl run -it testpod --image busybox:latest
If you don't see a command prompt, try pressing enter.
/ #
/ # # Press Ctrl+P then Ctrl+Q to detach without terminating the pod
/ # Session ended, resume using 'kubectl attach testpod -c testpod -i -t' command when the pod is running
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 0 5m42s
# Attach to a running container; the container must provide an interactive environment
[root@k8s-master ~]# kubectl attach pods/testpod -it
If you don't see a command prompt, try pressing enter.
/ #
/ # exit
Session ended, resume using 'kubectl attach testpod -c testpod -i -t' command when the pod is running
# Clean up; --force deletes the pod immediately
[root@k8s-master ~]# kubectl delete pod testpod --force
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "testpod" force deleted
Running a non-interactive pod
# Run a non-interactive pod
[root@k8s-master ~]# kubectl run test --image nginx
pod/test created
[root@k8s-master ~]# kubectl exec -it pods/test /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@test:/#
root@test:/#
root@test:/# exit
exit
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
test 1/1 Running 0 20m
# Copy a local file into the pod
[root@k8s-master ~]# kubectl cp anaconda-ks.cfg test:/
[root@k8s-master ~]# kubectl exec -it pods/test /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@test:/#
root@test:/# ls
anaconda-ks.cfg boot docker-entrypoint.d etc lib media opt root sbin sys usr
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
root@test:/# exit
exit
# Copy a file from the pod to the local machine
[root@k8s-master ~]# kubectl cp test:/anaconda-ks.cfg anaconda-ks.cfg
# Clean up
[root@k8s-master ~]# kubectl delete pod test --force
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test" force deleted
1.6 Advanced Usage
1. Creating a YAML file
# Generate a YAML template file
[root@k8s-master ~]# kubectl create deployment --image nginx web --dry-run=client -o yaml > web.yml
[root@k8s-master ~]# vim web.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
# Apply the YAML file to create the resource
[root@k8s-master ~]# kubectl apply -f web.yml
deployment.apps/web created
[root@k8s-master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
web 1/1 1 1 7s
[root@k8s-master ~]# kubectl delete -f web.yml
deployment.apps "web" deleted
[root@k8s-master ~]# kubectl get deployments.apps
No resources found in default namespace.
2. Resource labels
# Run a pod
[root@k8s-master ~]# kubectl run nginx --image nginx
pod/nginx created
# Show pod labels
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 13s run=nginx
# Add a label
[root@k8s-master ~]# kubectl label pods nginx app=lm
pod/nginx labeled
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 65s app=lm,run=nginx
# Change a label (requires --overwrite)
[root@k8s-master ~]# kubectl label pods nginx app=web --overwrite
pod/nginx labeled
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 118s app=web,run=nginx
# Remove a label
[root@k8s-master ~]# kubectl label pods nginx app-
pod/nginx unlabeled
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 2m31s run=nginx
3. Controller labels
# Create a deployment
[root@k8s-master ~]# kubectl create deployment web --image nginx --replicas 2
deployment.apps/web created
[root@k8s-master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
web 2/2 2 2 9s
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-7c56dcdb9b-gr9bg 1/1 Running 0 16s
web-7c56dcdb9b-v5n6m 1/1 Running 0 16s
# Remove the app label from one of the deployment's pods
[root@k8s-master ~]# kubectl label pods web-7c56dcdb9b-gr9bg app-
pod/web-7c56dcdb9b-gr9bg unlabeled
# The controller starts a new pod, because the relabeled one no longer matches its selector
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
web-7c56dcdb9b-gr9bg 1/1 Running 0 90s pod-template-hash=7c56dcdb9b
web-7c56dcdb9b-rmzmm 1/1 Running 0 2s app=web,pod-template-hash=7c56dcdb9b
web-7c56dcdb9b-v5n6m 1/1 Running 0 90s app=web,pod-template-hash=7c56dcdb9b
# Clean up
[root@k8s-master ~]# kubectl delete deployments.apps web
deployment.apps "web" deleted
[root@k8s-master ~]# kubectl delete pod web-7c56dcdb9b-gr9bg
pod "web-7c56dcdb9b-gr9bg" deleted
2. Pods
A Pod is the smallest deployable unit that Kubernetes creates and manages.
A Pod represents a running process in the cluster, and every Pod has its own unique IP address.
A Pod is like a pea pod: it holds one or more containers (typically Docker containers).
The containers in a Pod share namespaces such as IPC and the network namespace.
2.1 Creating a standalone Pod (not recommended in production)
[root@k8s-master ~]# kubectl run test --image nginx
pod/test created
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
test 1/1 Running 0 34s
[root@k8s-master ~]# kubectl get pod test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 92s 10.244.1.3 k8s-node1 <none> <none>
[root@k8s-master ~]# kubectl delete pod test
pod "test" deleted
2.2 Managing Pods with controllers (recommended)
Advantages:
High availability and reliability:
- Automatic failure recovery: if a Pod fails or is deleted, the controller automatically creates a new Pod to maintain the desired number of replicas, keeping the application available and reducing outages caused by a single Pod failure.
- Health checks and self-healing: controllers can be combined with health checks on Pods (such as liveness and readiness probes). If a Pod is unhealthy, the appropriate action is taken, such as restarting it or deleting and recreating it, to keep the application running.
Scalability:
- Easy scaling: the number of Pods can be increased or decreased with a simple command or configuration change to match the workload, e.g., scaling out quickly during peak traffic and scaling in during quiet periods to save resources.
- Horizontal Pod Autoscaling (HPA): the number of Pods can be adjusted automatically based on metrics (such as CPU utilization, memory usage, or application-specific metrics) for dynamic resource allocation and cost optimization (see the hedged HPA sketch after this list).
Version management and updates:
- Rolling updates: controllers such as Deployment can perform rolling updates that gradually replace old Pods with new ones, keeping the application available during the update; the rate and strategy of the update can be controlled to minimize user impact.
- Rollback: if an update goes wrong, you can easily roll back to the previous stable version, preserving stability and reliability.
Declarative configuration:
- Concise configuration: deployment requirements are defined in declarative YAML or JSON files, which are easy to understand, maintain, version control, and collaborate on.
- Desired-state management: you only define the desired state (replica count, container image, and so on), and the controller reconciles the actual state to match it, removing the need to create and delete Pods manually and improving efficiency.
Service discovery and load balancing:
- Automatic registration and discovery: a Kubernetes Service automatically discovers the Pods managed by a controller and routes traffic to them, so no manual load-balancer configuration is needed.
- Traffic distribution: requests can be distributed across Pods according to different policies (round-robin, random, and so on), improving performance and availability.
Consistency across environments:
- Consistent deployments: the same controllers and configuration can be used across development, test, and production environments, reducing deployment drift and errors and improving development and operations efficiency.
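As a hedged illustration of the HPA point above (not part of the original walkthrough), an autoscaler for a Deployment could be declared as follows; the target name "web" and the 50% CPU threshold are assumptions:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # assumed target deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale out when average CPU utilization exceeds 50%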
# Create a deployment that runs pods automatically
[root@k8s-master ~]# kubectl create deployment test --image nginx
deployment.apps/test created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test-7895cc554-chvzw 1/1 Running 0 6s
# Scale the deployment out
[root@k8s-master ~]# kubectl scale deployment test --replicas 3
deployment.apps/test scaled
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test-7895cc554-8lmfg 1/1 Running 0 8s
test-7895cc554-chvzw 1/1 Running 0 42s
test-7895cc554-gsr96 1/1 Running 0 8s
# Scale the deployment in
[root@k8s-master ~]# kubectl scale deployment test --replicas 2
deployment.apps/test scaled
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test-7895cc554-chvzw 1/1 Running 0 63s
test-7895cc554-gsr96 1/1 Running 0 29s
[root@k8s-master ~]# kubectl delete deployments.apps test
deployment.apps "test" deleted
2.3 Version Updates
# Create a deployment
[root@k8s-master ~]# kubectl create deployment test --image myapp:v1 --replicas 2
deployment.apps/test created
# Expose the port
[root@k8s-master ~]# kubectl expose deployment test --port 80 --target-port 80
service/test exposed
# List services
[root@k8s-master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 76m
test ClusterIP 10.100.168.242 <none> 80/TCP 9s
# Access the service
[root@k8s-master ~]# curl 10.100.168.242
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
# View the rollout history
[root@k8s-master ~]# kubectl rollout history deployment test
deployment.apps/test
REVISION CHANGE-CAUSE
1 <none>
# Update the image version
[root@k8s-master ~]# kubectl set image deployments/test myapp=myapp:v2
deployment.apps/test image updated
[root@k8s-master ~]# kubectl rollout history deployment test
deployment.apps/test
REVISION CHANGE-CAUSE
1 <none>
2 <none>
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 79m
test ClusterIP 10.100.168.242 <none> 80/TCP 2m24s
[root@k8s-master ~]# curl 10.100.168.242
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
# Roll back to revision 1
[root@k8s-master ~]# kubectl rollout undo deployment test --to-revision 1
deployment.apps/test rolled back
[root@k8s-master ~]# curl 10.100.168.242
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# kubectl delete deployments.apps test
deployment.apps "test" deleted
2.4 Deploying with YAML Files
2.4.1 Advantages
Declarative configuration:
- Clearly expresses the desired state: the deployment requirements of an application (replica count, container configuration, network settings, and so on) are described declaratively, which makes the configuration easy to understand and maintain and makes the intended state easy to inspect.
- Repeatability and version control: configuration files can be version controlled, ensuring consistent deployments across environments; you can easily roll back to an earlier version or reuse the same configuration elsewhere.
- Team collaboration: configuration files are easy to share, review, and modify within a team, improving deployment reliability and stability.
Flexibility and extensibility:
- Rich configuration options: every kind of Kubernetes resource (Deployment, Service, ConfigMap, Secret, and so on) can be configured in detail through YAML, allowing deep customization for an application's specific needs.
- Composition and extension: the configuration of multiple resources can be combined in one or more YAML files to build complex deployment architectures, and new resources can be added or existing ones modified as requirements change.
Tool integration:
- CI/CD integration: YAML configuration files integrate with continuous-integration and continuous-deployment pipelines, enabling automated deployments, e.g., triggering a deployment to different environments after a code commit.
- Command-line support: kubectl has good support for YAML configuration files, making it easy to apply, update, and delete configurations; other tools can also be used to validate and analyze YAML files to ensure correctness and security.
2.4.2 Resource Parameters
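The original parameter table is not reproduced here. As a hedged summary, the pod spec fields used throughout this chapter are annotated below:
apiVersion: v1            # API version of the resource
kind: Pod                 # resource type
metadata:                 # metadata such as name, namespace and labels
  name: example
  labels:
    app: example
spec:                     # desired state of the Pod
  restartPolicy: Always   # Always / OnFailure / Never
  nodeSelector: {}        # constrain which node the Pod may run on
  hostNetwork: false      # whether to share the host's network namespace
  containers:             # list of containers in the Pod
  - name: app             # container name
    image: myapp:v1       # container image
    command: []           # entrypoint override
    env: []               # environment variables
    ports: []             # ports exposed by the container
    resources: {}         # CPU/memory requests and limits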
2.4.3 Viewing resource help
kubectl explain pod.spec.containers
2.4.4 Examples
1. A single container
# Generate a YAML file
[root@k8s-master ~]# kubectl run testpod --image myapp:v1 --dry-run=client -o yaml > pod.yml
# Creating the file alone has no effect; no pod is generated until it is applied
[root@k8s-master ~]# kubectl get pod
No resources found in default namespace.
# Apply the YAML file
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/testpod created
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 0 5s
Note: if you add or remove containers in the YAML file of an existing Pod, you must delete the Pod and apply the file again, otherwise the apply will fail. It is best to delete resources as soon as they are no longer needed to avoid this kind of error.
[root@k8s-master ~]# kubectl apply -f pod.yml
The Pod "testpod" is invalid: spec.containers: Forbidden: pod updates may not add or remove containers
# Clean up and re-apply
[root@k8s-master ~]# kubectl delete pod testpod --force
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "testpod" force deleted
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/testpod created
2. Running multiple containers
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: nginx:latest
    name: web1
  - image: busybox:latest
    name: busybox
    command: ["/bin/sh","-c","sleep 100000"]
[root@k8s-master ~]# kubectl apply -f pod.yml
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
testpod 2/2 Running 0 2m25s
3. Network sharing between containers in a Pod
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: testpod
spec:
  containers:
  - image: myapp:v1
    name: myapp1
  - image: busyboxplus:latest
    name: busyboxplus
    command: ["/bin/sh","-c","sleep 1000000"]
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/testpod created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
testpod 2/2 Running 0 81s
[root@k8s-master ~]# kubectl exec testpod -c busyboxplus -- curl -s localhost
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# kubectl delete pod testpod --force
4. Port mapping
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: testpod
spec:
  containers:
  - image: myapp:v1
    name: myapp1
    ports:
    - name: http
      containerPort: 80
      hostPort: 80
      protocol: TCP
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/testpod created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
testpod 1/1 Running 0 7s 10.244.2.21 k8s-node2 <none> <none>
[root@k8s-master ~]# curl k8s-node2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# kubectl delete pod testpod
5. Setting environment variables
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: testpod
spec:
  containers:
  - image: busybox:latest
    name: busybox
    command: ["/bin/sh","-c","echo $NAME;sleep 10000000"]
    env:
    - name: NAME
      value: lm
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/testpod created
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 0 4s
[root@k8s-master ~]# kubectl logs pods/testpod busybox
lm
[root@k8s-master ~]# kubectl delete pod testpod --force
6. Resource limits
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: testpod
spec:
  containers:
  - image: myapp:v1
    name: myapp
    resources:
      limits:
        cpu: 500m
        memory: 100M
      requests:
        cpu: 500m
        memory: 100M
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/testpod created
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 0 5s
[root@k8s-master ~]# kubectl describe pod testpod
Limits:
cpu: 500m
memory: 100M
Requests:
cpu: 500m
memory: 100M
[root@k8s-master ~]# kubectl delete pod testpod
7. Container restart policy
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: testpod
spec:
  containers:
  - image: myapp:v1
    name: myapp
  restartPolicy: Always
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/testpod created
[root@k8s-master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
testpod 1/1 Running 0 7s 10.244.2.23 k8s-node2 <none> <none>
[root@k8s-master ~]# kubectl delete pod testpod
8. Selecting the node to run on
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: testpod
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-node1
  restartPolicy: Always
  containers:
  - image: myapp:v1
    name: myapp
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/testpod created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
testpod 1/1 Running 0 7s 10.244.1.13 k8s-node1 <none> <none>
[root@k8s-master ~]# kubectl delete pod testpod
9. Sharing the host network
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: testpod
spec:
  hostNetwork: true
  restartPolicy: Always
  containers:
  - image: busybox:latest
    name: busybox
    command: ["/bin/sh","-c","sleep 100000"]
[root@k8s-master ~]# kubectl exec -it pods/test -c busybox -- /bin/sh
Error from server (NotFound): pods "test" not found
[root@k8s-master ~]# kubectl exec -it pods/testpod -c busybox -- /bin/sh
/ # ifconfig
cni0 Link encap:Ethernet HWaddr D2:CE:A8:3F:4A:B6
inet addr:10.244.2.1 Bcast:10.244.2.255 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:230 errors:0 dropped:0 overruns:0 frame:0
TX packets:88 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:14873 (14.5 KiB) TX bytes:10473 (10.2 KiB)
docker0 Link encap:Ethernet HWaddr 02:42:D9:9F:05:2D
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth0 Link encap:Ethernet HWaddr 00:0C:29:C2:09:FB
inet addr:172.25.254.20 Bcast:172.25.254.255 Mask:255.255.255.0
inet6 addr: fe80::ead3:c0ee:d97c:b0ce/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:11279 errors:0 dropped:0 overruns:0 frame:0
TX packets:6132 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9888528 (9.4 MiB) TX bytes:750764 (733.1 KiB)
flannel.1 Link encap:Ethernet HWaddr 86:39:FB:B5:B0:B9
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:45 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:522 errors:0 dropped:0 overruns:0 frame:0
TX packets:522 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:44241 (43.2 KiB) TX bytes:44241 (43.2 KiB)
/ # exit
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 0 2m44s
[root@k8s-master ~]# kubectl delete pod testpod
3. Pod Lifecycle
A Pod can contain multiple containers in which the application runs, and it can also have one or more init containers that start before the application containers.
Init containers are very much like regular containers, with two exceptions:
- They always run to completion.
- Init containers do not support readiness probes, because they must finish before the Pod can become ready, and each init container must complete successfully before the next one starts.
If a Pod's init container fails, Kubernetes restarts the Pod repeatedly until the init container succeeds; however, if the Pod's restartPolicy is Never, it is not restarted.
3.1 What init containers are used for
- Init containers can contain utilities or custom setup code that is not present in the application image.
- They can run such tools safely, avoiding the reduced security that bundling them in the application image would cause.
- The creators and deployers of the application image can work independently, without having to build a single combined image.
- Init containers can run with a different filesystem view from the application containers in the Pod; for example, they can be given access to Secrets that the application containers cannot access.
- Because init containers must run to completion before the application containers start, they provide a mechanism to block or delay application startup until a set of preconditions is met. Once the preconditions are satisfied, all application containers in the Pod start in parallel.
3.2 Init container example
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: initpod
  name: initpod
spec:
  containers:
  - image: myapp:v1
    name: myapp
  initContainers:
  - image: busybox
    name: init-myservice
    command: ["sh","-c","until test -e /testfile;do echo wating for myservice; sleep 2;done"]
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/initpod created
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
initpod 0/1 Init:0/1 0 15s
[root@k8s-master ~]# kubectl logs pods/initpod init-myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
[root@k8s-master ~]# kubectl exec pods/initpod -c init-myservice -- /bin/sh -c "touch /testfile"
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
initpod 1/1 Running 0 94s
[root@k8s-master ~]# kubectl delete pod initpod
4. Liveness Probes
A probe is a periodic diagnostic performed by the kubelet on a container:
- ExecAction: executes a specified command inside the container; the diagnostic succeeds if the command exits with status code 0.
- TCPSocketAction: performs a TCP check against the container's IP address on a specified port; the diagnostic succeeds if the port is open.
- HTTPGetAction: performs an HTTP GET request against the container's IP address on a specified port and path; the diagnostic succeeds if the response status code is greater than or equal to 200 and less than 400.
Each probe returns one of three results:
- Success: the container passed the diagnostic.
- Failure: the container did not pass the diagnostic.
- Unknown: the diagnostic itself failed, so no action is taken.
The kubelet can optionally run and react to three kinds of probes on a container:
- livenessProbe: indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subject to its restart policy. If no liveness probe is configured, the default state is Success. In other words, with a liveness probe the container is actively checked and can fail; without one it is always treated as successful.
- readinessProbe: indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. Before the initial delay, the readiness state defaults to Failure. If no readiness probe is configured, the default state is Success.
- startupProbe: indicates whether the application inside the container has started. If a startup probe is configured, all other probes are disabled until it succeeds. If the startup probe fails, the kubelet kills the container and the container is restarted according to its restart policy. If no startup probe is configured, the default state is Success.
ReadinessProbe vs. LivenessProbe:
- When a readiness probe fails, the Pod's IP:Port is removed from the corresponding Endpoints list.
- When a liveness probe fails, the container is killed and handled according to the Pod's restart policy.
StartupProbe vs. ReadinessProbe and LivenessProbe:
- If all three probes are configured, the startup probe runs first and the other two are temporarily disabled; they only start once the startup probe's conditions are satisfied, and if the conditions are not satisfied the container is restarted according to its restart policy.
- After the container starts, the other two probes keep probing according to their configuration until the container terminates, whereas the startup probe only needs to succeed once after container startup and then performs no further probing.
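The examples below use tcpSocket and httpGet probes. As a hedged sketch that is not from the original text, an exec-based liveness probe (the ExecAction described above) could look like this; the probed file path is chosen arbitrarily:
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sh","-c","touch /tmp/healthy; sleep 1000000"]
    livenessProbe:
      exec:
        command: ["test","-e","/tmp/healthy"]   # diagnostic succeeds if the file exists
      initialDelaySeconds: 3
      periodSeconds: 5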
Examples
liveness
[root@k8s-master ~]# kubectl run linveness --image myapp:v1 --dry-run=client -o yaml > liveness.yml
[root@k8s-master ~]# vim liveness.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: linveness
  name: linveness
spec:
  containers:
  - image: myapp:v1
    name: linveness
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 1
      timeoutSeconds: 1
[root@k8s-master ~]# kubectl apply -f liveness.yml
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
linveness 1/1 Running 6 (44m ago) 3m
[root@k8s-master ~]# kubectl describe pod
Warning Unhealthy 43m (x9 over 43m) kubelet Liveness probe failed: dial tcp 10.244.1.16:8080: connect: connection refused
[root@k8s-master ~]# kubectl delete pod linveness
Note: to view help for these probe parameters:
[root@k8s-master ~]# kubectl explain pod.spec.containers.livenessProbe
FIELDS:
exec <ExecAction>
Exec specifies the action to take.
failureThreshold <integer>
Minimum consecutive failures for the probe to be considered failed after
having succeeded. Defaults to 3. Minimum value is 1.
grpc <GRPCAction>
GRPC specifies an action involving a GRPC port.
httpGet <HTTPGetAction>
HTTPGet specifies the http request to perform.
initialDelaySeconds <integer>
Number of seconds after the container has started before liveness probes are
initiated. More info:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
readiness
[root@k8s-master ~]# vim liveness.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: readiness
  name: readiness
spec:
  containers:
  - image: myapp:v1
    name: myapp
    readinessProbe:
      httpGet:
        path: /test.html
        port: 80
      initialDelaySeconds: 1
      periodSeconds: 3
      timeoutSeconds: 1
[root@k8s-master ~]# kubectl apply -f liveness.yml
pod/readiness created
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
readiness 0/1 Running 0 9s
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
readiness 0/1 Running 0 22s
[root@k8s-master ~]# kubectl expose pod readiness --port 80 --target-port 80
service/readiness exposed
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
readiness 0/1 Running 0 63s
[root@k8s-master ~]# kubectl describe pods readiness
Warning Unhealthy 45m (x22 over 46m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 404
[root@k8s-master ~]# kubectl describe service readiness
Name: readiness
Namespace: default
Labels: run=readiness
Annotations: <none>
Selector: run=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.99.115.159
IPs: 10.99.115.159
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints:                                # empty: the readiness probe has not succeeded, so the Pod is not published as an endpoint
Session Affinity: None
Events: <none>
[root@k8s-master ~]# kubectl exec pods/readiness -c myapp -- /bin/sh -c "echo test > /usr/share/nginx/html/test.html"
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
readiness 1/1 Running 0 2m36s
[root@k8s-master ~]# kubectl describe svc readiness
Name: readiness
Namespace: default
Labels: name=readiness
Annotations: <none>
Selector: name=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.10.241
IPs: 10.96.10.241
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.17:80                 # the probe condition is now met, so the endpoint is published
Session Affinity: None
Events: <none>
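startup
The original section shows liveness and readiness examples only. As a hedged sketch, a startupProbe, which disables the other probes until it first succeeds, could be configured as follows; the thresholds are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: startup
spec:
  containers:
  - name: myapp
    image: myapp:v1
    startupProbe:
      httpGet:
        path: /
        port: 80
      failureThreshold: 30   # allow up to 30 * 2s = 60s for the application to start
      periodSeconds: 2
    livenessProbe:           # takes over only after the startup probe succeeds
      httpGet:
        path: /
        port: 80
      periodSeconds: 3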