I. StatefulSet and DaemonSet
StatefulSet | Kubernetes
Stateful service: the service instances have distinct roles, e.g. primary and replica, master and slave.
Each Pod gets a unique and stable name, numbered 0, 1, 2, 3 in order, so no matter how the Pods change, the Pod with ordinal 0 can always be used as the primary.
Service | Kubernetes — headless service: a Service without a ClusterIP; DNS queries for it resolve directly to the Pod IPs.
Headless Services
Sometimes you don't need load balancing or a single Service IP. In that case you can create a "headless" Service by explicitly setting the cluster IP (spec.clusterIP) to "None".
You can use a headless Service to interface with other service discovery mechanisms, without being tied to Kubernetes' implementation.
For headless Services, no cluster IP is allocated, kube-proxy does not handle them, and the platform does no load balancing or proxying for them. How DNS is automatically configured depends on whether the Service has selectors defined.
With selectors
For headless Services that define selectors, the Kubernetes control plane creates EndpointSlice objects in the Kubernetes API and modifies the DNS configuration to return A or AAAA records (IPv4 or IPv6 addresses) that point directly at the Pods backing the Service.
Without selectors
For headless Services that do not define selectors, the control plane does not create EndpointSlice objects. However, the DNS system looks for and configures one of the following:
- CNAME records for type: ExternalName Services
- DNS A / AAAA records for all IP addresses of the Service's ready endpoints, for all other Service types
  - for IPv4 endpoints, the DNS system creates A records
  - for IPv6 endpoints, the DNS system creates AAAA records
Deploying a StatefulSet
root@k8s-deploy:/yaml/k8s-Resource-N70/case12-Statefulset# vim 1-Statefulset.yaml
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: myserver-myapp
namespace: myserver
spec:
replicas: 3
serviceName: "myserver-myapp-service"
selector:
matchLabels:
app: myserver-myapp-frontend
template:
metadata:
labels:
app: myserver-myapp-frontend
spec:
containers:
- name: myserver-myapp-frontend
image: nginx:1.20.2-alpine
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: myserver-myapp-service
namespace: myserver
spec:
clusterIP: None
ports:
- name: http
port: 80
selector:
app: myserver-myapp-frontend
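A quick way to see the effect after applying the manifest (a sketch only; pod names follow the StatefulSet naming convention, and nslookup is assumed to be available in the alpine-based image):
# Pods get stable, ordered names: myserver-myapp-0 / -1 / -2
kubectl get pods -n myserver -l app=myserver-myapp-frontend -o wide
# The headless Service has no ClusterIP; DNS returns the Pod IPs directly,
# and each Pod also gets a stable name of the form
# <pod-name>.<service-name>.<namespace>.svc.<cluster-domain>
kubectl exec -n myserver myserver-myapp-0 -- nslookup myserver-myapp-service
kubectl exec -n myserver myserver-myapp-0 -- nslookup myserver-myapp-0.myserver-myapp-service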
DaemonSet | Kubernetes
A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them; as nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
Some typical uses of a DaemonSet:
- running a cluster daemon on every node
- running a log collection daemon on every node
- running a monitoring daemon on every node
In a simple case, one DaemonSet covering all nodes would be used for each type of daemon. A more complex setup might use multiple DaemonSets for a single type of daemon, each with different flags and with different CPU and memory requirements for different hardware types.
Deploying a web service
root@k8s-deploy:/yaml/k8s-Resource-N70/case13-DaemonSet# vim 1-DaemonSet-webserver.yaml
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: myserver-myapp
namespace: myserver
spec:
selector:
matchLabels:
app: myserver-myapp-frontend
template:
metadata:
labels:
app: myserver-myapp-frontend
spec:
tolerations:
# this toleration is to have the daemonset runnable on master nodes
# remove it if your masters can't run pods
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
hostNetwork: true
hostPID: true
containers:
- name: myserver-myapp-frontend
image: nginx:1.20.2-alpine
ports:
- containerPort: 80 #直接监听宿主机的80端口
---
apiVersion: v1
kind: Service
metadata:
name: myserver-myapp-frontend
namespace: myserver
spec:
ports:
- name: http
port: 80
targetPort: 80
nodePort: 30018
protocol: TCP
type: NodePort
selector:
app: myserver-myapp-frontend
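To confirm the DaemonSet behaviour, one Pod should be running on every (schedulable) node; a hedged sketch of the checks (the node IP is a placeholder):
# One Pod per node, scheduled automatically as nodes join the cluster
kubectl get daemonset -n myserver
kubectl get pods -n myserver -l app=myserver-myapp-frontend -o wide
# Because hostNetwork: true is set, nginx listens directly on port 80 of each node
curl -I http://<node-ip>:80/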
Log collection: fluentd
root@k8s-deploy:/yaml/k8s-Resource-N70/case13-DaemonSet# vim 2-DaemonSet-fluentd.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd-elasticsearch
namespace: kube-system
labels:
k8s-app: fluentd-logging
spec:
selector:
matchLabels:
name: fluentd-elasticsearch
template:
metadata:
labels:
name: fluentd-elasticsearch
spec:
tolerations:
# this toleration is to have the daemonset runnable on master nodes
# remove it if your masters can't run pods
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: fluentd-elasticsearch
image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
Monitoring: Prometheus node-exporter
root@k8s-deploy:/yaml/k8s-Resource-N70/case13-DaemonSet# vim 3-DaemonSet-prometheus.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: node-exporter
namespace: monitoring
labels:
k8s-app: node-exporter
spec:
selector:
matchLabels:
k8s-app: node-exporter
template:
metadata:
labels:
k8s-app: node-exporter
spec:
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
containers:
- image: prom/node-exporter:v1.3.1
imagePullPolicy: IfNotPresent
name: prometheus-node-exporter
ports:
- containerPort: 9100
hostPort: 9100
protocol: TCP
name: metrics
volumeMounts:
- mountPath: /host/proc
name: proc
- mountPath: /host/sys
name: sys
- mountPath: /host
name: rootfs
args:
- --path.procfs=/host/proc
- --path.sysfs=/host/sys
- --path.rootfs=/host
volumes:
- name: proc
hostPath:
path: /proc
- name: sys
hostPath:
path: /sys
- name: rootfs
hostPath:
path: /
hostNetwork: true
hostPID: true
---
apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
k8s-app: node-exporter
name: node-exporter
namespace: monitoring
spec:
type: NodePort
ports:
- name: http
port: 9100
nodePort: 39100
protocol: TCP
selector:
k8s-app: node-exporter
Access port 9100 on the host, e.g. 192.168.0.110:9100
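For example, you can scrape a node directly (assuming 192.168.0.110 is one of the nodes, as above):
# node-exporter exposes Prometheus metrics over HTTP on host port 9100
curl -s http://192.168.0.110:9100/metrics | head
# or via the NodePort Service defined above
curl -s http://192.168.0.110:39100/metrics | head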
II. Common Pod states and failure causes
Pod scheduling flow
Common Pod states
Common abnormal Pod states and how to handle them
| Pod state | Meaning | Resolution |
|---|---|---|
| Pending | The Pod has not been scheduled onto a node. | See "Pod is Pending" |
| Init:N/M | The Pod has M init containers, of which N have completed. | See "Pod is Init:N/M (Init:Error and Init:CrashLoopBackOff)" |
| Init:Error | An init container failed to start. | See "Pod is Init:N/M (Init:Error and Init:CrashLoopBackOff)" |
| Init:CrashLoopBackOff | An init container fails to start and keeps restarting. | See "Pod is Init:N/M (Init:Error and Init:CrashLoopBackOff)" |
| Completed | The Pod's startup command has finished running. | See "Pod is Completed" |
| CrashLoopBackOff | The Pod fails to start and keeps restarting. | See "Pod is CrashLoopBackOff" |
| ImagePullBackOff | The Pod's image could not be pulled. | See "Pod is ImagePullBackOff" |
| Running | The Pod is running normally, or is Running but not working as expected. | No action needed, or see "Pod is Running but not working" |
| Terminating | The Pod is shutting down. | See "Pod is Terminating" |
| Evicted | The Pod has been evicted. | See "Pod is Evicted" |
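Whatever the state, the first diagnostic steps are usually the same; a minimal sketch (pod name and namespace are placeholders):
# Events usually explain Pending / ImagePullBackOff / Evicted
kubectl describe pod <pod-name> -n <namespace>
# Container output explains CrashLoopBackOff / Init:Error
kubectl logs <pod-name> -n <namespace> --previous
# Cluster events in the namespace, most recent last
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp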
III. Kubernetes Pod lifecycle and probes
Probe check mechanisms
There are four different ways to check containers using probes. Each probe must define exactly one of these four mechanisms:
- exec: executes a specified command inside the container. The diagnostic is considered successful if the command exits with a status code of 0.
- grpc: performs a remote procedure call using gRPC. The target should implement gRPC health checks. The diagnostic is considered successful if the status of the response is "SERVING". gRPC probes are an alpha feature and are only available if you enable the "GRPCContainerProbe" feature gate.
- httpGet: performs an HTTP GET request against the Pod's IP address on a specified port and path. The diagnostic is considered successful if the response has a status code greater than or equal to 200 and less than 400.
- tcpSocket: performs a TCP check against the Pod's IP address on a specified port. The diagnostic is considered successful if the port is open. If the remote system (the container) closes the connection immediately after it is opened, this still counts as healthy.
Pod restart policy
A Pod's restartPolicy (Always, OnFailure or Never) applies to all containers in the Pod and determines what the kubelet does when a container exits or is killed because of a failed probe.
Probe types
For containers that are running, the kubelet can optionally perform the following three kinds of probe and react to the result:
- livenessProbe: indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a liveness probe, the default state is Success.
- readinessProbe: indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The default readiness state before the initial delay is Failure. If a container does not provide a readiness probe, the default state is Success.
- startupProbe: indicates whether the application inside the container has started. If a startup probe is provided, all other probes are disabled until it succeeds. If the startup probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a startup probe, the default state is Success.
For further details on how to set up liveness, readiness and startup probes, see Configure Liveness, Readiness and Startup Probes.
Common probe configuration parameters
Probes have a number of fields that you can use to control the behaviour of startup, liveness and readiness checks more precisely:
- initialDelaySeconds: number of seconds after the container has started before startup, liveness or readiness probes are initiated. Defaults to 0 seconds; minimum value is 0.
- periodSeconds: how often (in seconds) to perform the probe. Default is 10 seconds; minimum value is 1.
- timeoutSeconds: number of seconds after which the probe times out. Defaults to 1 second; minimum value is 1.
- successThreshold: minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1; must be 1 for liveness and startup probes; minimum value is 1. (successThreshold is the probe's success threshold: once this number of successes is reached, the probe is considered successful. The default of 1 means a single success is enough.)
- failureThreshold: after the probe has failed failureThreshold times in a row, Kubernetes considers the overall check to have failed: the container is not ready, not healthy, not live. For a startup or liveness probe, if at least failureThreshold probes have failed, Kubernetes treats the container as unhealthy and triggers a restart of that specific container; the kubelet honours the container's terminationGracePeriodSeconds setting. For a failed readiness probe, the kubelet keeps running the container that failed the check and keeps running more probes; because the check failed, the kubelet sets the Pod's Ready condition to false.
- terminationGracePeriodSeconds: configures how long the kubelet waits between triggering a shutdown of the failed container and forcing the container runtime to stop it. The default is to inherit the Pod-level terminationGracePeriodSeconds value (30 seconds if not specified); the minimum value is 1. See probe-level terminationGracePeriodSeconds for more details.
HTTP probe configuration parameters
HTTP probes allow additional fields to be set on httpGet:
- host: hostname to connect to; defaults to the Pod IP. You can also set the "Host" HTTP header instead.
- scheme: scheme used for connecting to the host (HTTP or HTTPS). Defaults to "HTTP".
- path: path to access on the HTTP server. Defaults to "/".
- httpHeaders: custom headers to set in the request; HTTP allows repeated headers.
- port: name or number of the port to access on the container. A number must be in the range 1 to 65535.
For an HTTP probe, the kubelet sends an HTTP request to the specified path and port to perform the check. The kubelet sends the probe to the Pod's IP address, unless the host field in httpGet is set. If the scheme field is set to HTTPS, the kubelet sends an HTTPS request and skips certificate verification. In most scenarios you do not need to set the host field. Here is one scenario where you would: suppose the container listens on 127.0.0.1 and the Pod's hostNetwork field is set to true; then host under httpGet should be set to 127.0.0.1. The probably more common case is that the Pod relies on virtual hosts; then you should not set host, but rather set the Host header in httpHeaders.
For an HTTP probe, the kubelet sends two request headers in addition to the mandatory Host header: User-Agent and Accept. Their defaults are kube-probe/1.26 (where 1.26 is the kubelet's version) and */* respectively.
Probe examples
Readiness probe (readinessProbe): controls whether the Pod is added to the Service
The readiness probe controls whether a Pod is added to a Service; a failed readiness probe does not restart the Pod.
Using the httpGet check mechanism
Performs an HTTP GET request against the Pod's IP address on a specified port and path. The diagnostic is considered successful if the response has a status code greater than or equal to 200 and less than 400.
- Probe succeeds
This example uses a readiness probe with the httpGet mechanism to check whether the nginx home page index.html exists.
The initialDelaySeconds field tells the kubelet to wait 5 seconds before performing the first probe; the periodSeconds field specifies that the kubelet should perform the probe every 3 seconds; the timeoutSeconds field tells the kubelet to wait 5 seconds before the probe times out; the successThreshold field specifies that after a failure the probe keeps checking and is considered successful after 1 consecutive success (the default of 1 means a single success is enough); the failureThreshold field specifies that after 3 consecutive failures Kubernetes considers the overall check to have failed: the container is not ready, not healthy, not live.
To perform a probe, the kubelet sends an HTTP GET request to the service running in the container (listening on port 80). If the handler for /index.html returns a success code, the container is considered healthy; if it returns a failure code, the check fails (with a liveness probe that would restart the container; with this readiness probe the Pod is simply removed from the Service endpoints).
root@k8s-deploy:/yaml/20220806/case3-Probe# vim 1-http-Probe.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myserver-myapp-frontend-deployment
namespace: myserver
spec:
replicas: 1
selector:
matchLabels: #rs or deployment
app: myserver-myapp-frontend-label
#matchExpressions:
# - {key: app, operator: In, values: [myserver-myapp-frontend,ng-rs-81]}
template:
metadata:
labels:
app: myserver-myapp-frontend-label
spec:
containers:
- name: myserver-myapp-frontend-label
image: nginx:1.20.2
ports:
- containerPort: 80
readinessProbe:
#livenessProbe:
httpGet:
#path: /monitor/monitor.html
path: /index.html
port: 80
initialDelaySeconds: 5 #告诉 kubelet 在执行第一次探测前应该等待 5 秒
periodSeconds: 3 #指定了 kubelet 应该每 3 秒执行一次存活探测
timeoutSeconds: 5 #告诉 kubelet 探测的超时后等待5秒
successThreshold: 1 #指定探针在失败后,持续检测并且连续成功 1 次后视为检测成功
failureThreshold: 3 #指定探针连续失败了 3 次之后, Kubernetes 认为总体上检查已失败
---
apiVersion: v1
kind: Service
metadata:
name: myserver-myapp-frontend-service
namespace: myserver
spec:
ports:
- name: http
port: 81
targetPort: 80
nodePort: 40012
protocol: TCP
type: NodePort
selector:
app: myserver-myapp-frontend-label
After the probe succeeds, the Pod 10.200.107.222:80 is added to the Service myserver-myapp-frontend-service.
- Simulating a failure
Now simulate a readiness probe failure: comment out /index.html and point the probe at the non-existent page /monitor/monitor.html so that it fails.
The readiness probe fails, so the Pod is not added to the Service.
kubectl describe shows the error message: the readiness probe check failed.
We then enter the Pod and create the page /monitor/monitor.html; after that the readiness probe succeeds and the Pod is added to the Service.
kubectl logs shows the status code changing from 404 to 200.
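A hedged sketch of the steps just described (assuming a kubectl version that accepts deploy/<name> with exec; the docroot of the stock nginx image is /usr/share/nginx/html):
# Readiness probe failing: the Pod is missing from the Service endpoints
kubectl get endpoints myserver-myapp-frontend-service -n myserver
# Create the missing page inside the running container
kubectl exec -n myserver deploy/myserver-myapp-frontend-deployment -- \
  sh -c 'mkdir -p /usr/share/nginx/html/monitor && echo ok > /usr/share/nginx/html/monitor/monitor.html'
# After the next probe period the Pod is added back and the probe log shows 200
kubectl get endpoints myserver-myapp-frontend-service -n myserver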
Liveness probe (livenessProbe): controls whether the Pod is restarted
The liveness probe controls whether a Pod is restarted: a failed liveness probe restarts the Pod.
Using the tcpSocket check mechanism
Performs a TCP check against the Pod's IP address on a specified port. The diagnostic is considered successful if the port is open; if the remote system (the container) closes the connection immediately after it is opened, this still counts as healthy.
- Probe succeeds
This example uses a liveness probe with the tcpSocket mechanism: tcpSocket performs a TCP check against the specified port on the container's IP address. If the port is open, the diagnostic is considered successful.
root@k8s-deploy:/yaml/20220806/case3-Probe# vim 2-tcp-Probe.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myserver-myapp-frontend-deployment
namespace: myserver
spec:
replicas: 1
selector:
matchLabels: #rs or deployment
app: myserver-myapp-frontend-label
#matchExpressions:
# - {key: app, operator: In, values: [myserver-myapp-frontend,ng-rs-81]}
template:
metadata:
labels:
app: myserver-myapp-frontend-label
spec:
containers:
- name: myserver-myapp-frontend-label
image: nginx:1.20.2
ports:
- containerPort: 80
livenessProbe:
#readinessProbe:
tcpSocket:
port: 80
#port: 8080
initialDelaySeconds: 5
periodSeconds: 3
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
name: myserver-myapp-frontend-service
namespace: myserver
spec:
ports:
- name: http
port: 81
targetPort: 80
nodePort: 40012
protocol: TCP
type: NodePort
selector:
app: myserver-myapp-frontend-label
No readiness probe is configured in this example, so why is the Pod still added to the Service? Because if a container does not provide a readiness probe, the default state is Success, so the Pod is added to the Service.
- Simulating a failure
Enter the Pod and edit the nginx configuration, changing the listening port from 80 to 81 so that the liveness probe fails; a failed liveness probe restarts the Pod. Run nginx -s reload to apply the change.
Checking the Pod status afterwards shows one restart.
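A sketch of that failure simulation (the pod name is a placeholder; the stock nginx image keeps its server block in /etc/nginx/conf.d/default.conf):
# Change the nginx listen port from 80 to 81 inside the container and reload
kubectl exec -n myserver <pod-name> -- sh -c \
  "sed -i 's/listen\s*80;/listen 81;/' /etc/nginx/conf.d/default.conf && nginx -s reload"
# The tcpSocket liveness probe on port 80 now fails; after roughly
# failureThreshold * periodSeconds the kubelet restarts the container (RESTARTS increments)
kubectl get pod -n myserver -w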
Using the exec check mechanism
Executes a specified command inside the container. The diagnostic is considered successful if the command exits with a status code of 0.
This example uses a liveness probe with the exec mechanism: it runs /usr/local/bin/redis-cli inside the Pod and then quits.
root@k8s-deploy:/yaml/20220806/case3-Probe# vim 3-exec-Probe.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myserver-myapp-redis-deployment
namespace: myserver
spec:
replicas: 1
selector:
matchLabels: #rs or deployment
app: myserver-myapp-redis-label
#matchExpressions:
# - {key: app, operator: In, values: [myserver-myapp-redis,ng-rs-81]}
template:
metadata:
labels:
app: myserver-myapp-redis-label
spec:
containers:
- name: myserver-myapp-redis-container
image: redis
ports:
- containerPort: 6379
livenessProbe:
#readinessProbe:
exec:
command:
#- /apps/redis/bin/redis-cli
- /usr/local/bin/redis-cli
#- ['/bin/bash','-c',"/usr/local/bin/redis-cli -p 6380"]
- quit
initialDelaySeconds: 5
periodSeconds: 3
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
name: myserver-myapp-redis-service
namespace: myserver
spec:
ports:
- name: http
port: 6379
targetPort: 6379
nodePort: 40016
protocol: TCP
type: NodePort
selector:
app: myserver-myapp-redis-label
Startup probe (startupProbe): determines whether the application inside the container has finished starting
Indicates whether the application inside the container has started. If a startup probe is provided, all other probes are disabled until it succeeds. If the startup probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a startup probe, the default state is Success.
Using the httpGet check mechanism
This example uses a startup probe with the httpGet mechanism to check whether the nginx home page index.html exists. If the startup probe fails, the container is killed and restarted until the startup probe succeeds.
root@k8s-deploy:/yaml/20220806/case3-Probe# vim 4-startupProbe.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myserver-myapp-frontend-deployment
namespace: myserver
spec:
replicas: 1
selector:
matchLabels: #rs or deployment
app: myserver-myapp-frontend-label
#matchExpressions:
# - {key: app, operator: In, values: [myserver-myapp-frontend,ng-rs-81]}
template:
metadata:
labels:
app: myserver-myapp-frontend-label
spec:
containers:
- name: myserver-myapp-frontend-label
image: nginx:1.20.2
ports:
- containerPort: 80
startupProbe:
httpGet:
path: /index.html
port: 80
initialDelaySeconds: 5 #首次检测延迟5s
failureThreshold: 3 #从成功转为失败的次数
periodSeconds: 3 #探测间隔周期
---
apiVersion: v1
kind: Service
metadata:
name: myserver-myapp-frontend-service
namespace: myserver
spec:
ports:
- name: http
port: 81
targetPort: 80
nodePort: 40012
protocol: TCP
type: NodePort
selector:
app: myserver-myapp-frontend-label
Using the three probes together
apiVersion: apps/v1
kind: Deployment
metadata:
name: myserver-myapp-frontend-deployment
namespace: myserver
spec:
replicas: 1
selector:
matchLabels: #rs or deployment
app: myserver-myapp-frontend-label
#matchExpressions:
# - {key: app, operator: In, values: [myserver-myapp-frontend,ng-rs-81]}
template:
metadata:
labels:
app: myserver-myapp-frontend-label
spec:
terminationGracePeriodSeconds: 60
containers:
- name: myserver-myapp-frontend-label
image: nginx:1.20.2
ports:
- containerPort: 80
startupProbe:
httpGet:
path: /index.html
port: 80
initialDelaySeconds: 5 #首次检测延迟5s
failureThreshold: 3 #从成功转为失败的次数
periodSeconds: 3 #探测间隔周期
readinessProbe:
httpGet:
#path: /monitor/monitor.html
path: /index.html
port: 80
initialDelaySeconds: 5
periodSeconds: 3
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
livenessProbe:
httpGet:
#path: /monitor/monitor.html
path: /index.html
port: 80
initialDelaySeconds: 5
periodSeconds: 3
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
name: myserver-myapp-frontend-service
namespace: myserver
spec:
ports:
- name: http
port: 81
targetPort: 80
nodePort: 40012
protocol: TCP
type: NodePort
selector:
app: myserver-myapp-frontend-label
IV. Building container images with nerdctl + buildkitd + containerd
Simple topology for this lab
GitHub - moby/buildkit: concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit
BuildKit components
- buildkitd (server): currently supports runc and containerd as the image build backend; the default is runc and it can be switched to containerd.
- buildctl (client): responsible for parsing the Dockerfile and sending build requests to the server.
1. Deploy buildkitd
root@k8s-master1:/usr/local/src# wget https://github.com/moby/buildkit/releases/download/v0.11.2/buildkit-v0.11.2.linux-amd64.tar.gz
root@k8s-master1:/usr/local/src# tar xvf buildkit-v0.11.2.linux-amd64.tar.gz -C /usr/local/bin/
root@k8s-master1:/usr/local/src# mv /usr/local/bin/bin/buildkitd /usr/local/bin/bin/buildctl /usr/local/bin/
# Create the socket unit file
root@k8s-master1:/usr/local/src# vim /lib/systemd/system/buildkit.socket
[Unit]
Description=BuildKit
Documentation=https://github.com/moby/buildkit
[Socket]
ListenStream=%t/buildkit/buildkitd.sock
[Install]
WantedBy=sockets.target
# Create the service unit file
root@k8s-master1:/usr/local/src# vim /lib/systemd/system/buildkitd.service
[Unit]
Description=BuildKit
Requires=buildkit.socket
After=buildkit.socket
Documentation=https://github.com/moby/buildkit
[Service]
ExecStart=/usr/local/bin/buildkitd --oci-worker=false --containerd-worker=true
[Install]
WantedBy=multi-user.target
root@k8s-master1:/usr/local/src# systemctl daemon-reload
root@k8s-master1:/usr/local/src# systemctl start buildkitd.service
root@k8s-master1:/usr/local/src# systemctl status buildkitd.service
root@k8s-master1:/usr/local/src# systemctl enable buildkitd.service
2. Test image builds
1. The nerdctl command
Set up auto-completion for the nerdctl command
# Edit /etc/profile and append: source <(nerdctl completion bash)
root@k8s-master1:/usr/local/src# vim /etc/profile
source <(nerdctl completion bash)
# Apply the change
root@k8s-master1:/usr/local/src# source /etc/profile
If the certificate has not been distributed yet, the --insecure-registry flag is required when using nerdctl to log in to the registry and to push or pull images.
root@k8s-master1:/usr/local/src# nerdctl login --insecure-registry y73.harbor.com
# Pull an official image and push it to the private registry
root@k8s-master1:/usr/local/src# nerdctl pull centos:7.9.2009
root@k8s-master1:/usr/local/src# nerdctl tag centos:7.9.2009 y73.harbor.com/baseimages/centos:7.9.2009
root@k8s-master1:/usr/local/src# nerdctl --insecure-registry push y73.harbor.com/baseimages/centos:7.9.2009
2. Distributing the Harbor certificate
Next we use certificates, so that logging in to the registry and pushing/pulling images no longer needs the --insecure-registry flag. For Harbor self-signed certificates, see the official docs: Harbor docs | Configure HTTPS Access to Harbor (goharbor.io)
# On the image build server, create the certificate directory
root@k8s-master1:~# mkdir -p /etc/containerd/certs.d/y73.harbor.com
# Harbor certificate distribution
# Log in to the Harbor server and go to its certificate directory
root@k8s-harbor:~# cd /apps/harbor/certs/
root@k8s-harbor:/apps/harbor/certs# ll
total 28
drwxr-xr-x 2 root root 140 Nov 25 23:34 ./
drwxr-xr-x 4 root root 170 Nov 25 23:38 ../
-rw-r--r-- 1 root root 2053 Nov 25 23:33 ca.crt
-rw------- 1 root root 3243 Nov 25 23:32 ca.key
-rw-r--r-- 1 root root 41 Nov 25 23:34 ca.srl
-rw-r--r-- 1 root root 268 Nov 25 23:33 v3.ext
-rw-r--r-- 1 root root 2122 Nov 25 23:34 y73.harbor.com.crt
-rw-r--r-- 1 root root 1708 Nov 25 23:33 y73.harbor.com.csr
-rw------- 1 root root 3243 Nov 25 23:33 y73.harbor.com.key
# Format conversion: per the official explanation, containerd treats .crt as a CA certificate, so convert it into a client certificate with the .cert extension
root@k8s-harbor:/apps/harbor/certs# openssl x509 -inform PEM -in y73.harbor.com.crt -out y73.harbor.com.cert
root@k8s-harbor:/apps/harbor/certs# ll
total 32
drwxr-xr-x 2 root root 167 Feb 1 20:58 ./
drwxr-xr-x 4 root root 170 Nov 25 23:38 ../
-rw-r--r-- 1 root root 2053 Nov 25 23:33 ca.crt
-rw------- 1 root root 3243 Nov 25 23:32 ca.key
-rw-r--r-- 1 root root 41 Nov 25 23:34 ca.srl
-rw-r--r-- 1 root root 268 Nov 25 23:33 v3.ext
-rw-r--r-- 1 root root 2122 Feb 1 20:58 y73.harbor.com.cert
-rw-r--r-- 1 root root 2122 Nov 25 23:34 y73.harbor.com.crt
-rw-r--r-- 1 root root 1708 Nov 25 23:33 y73.harbor.com.csr
-rw------- 1 root root 3243 Nov 25 23:33 y73.harbor.com.key
# Copy the certificates to the image build server
root@k8s-harbor:/apps/harbor/certs# scp ca.crt y73.harbor.com.cert y73.harbor.com.key 192.168.0.110:/etc/containerd/certs.d/y73.harbor.com/
# Verify the certificates on the image build server
root@k8s-master1:~# cd /etc/containerd/certs.d/y73.harbor.com
root@k8s-master1:/etc/containerd/certs.d/y73.harbor.com# ll
total 12
drwxr-xr-x 2 root root 73 Feb 1 21:01 ./
drwxr-xr-x 3 root root 28 Feb 1 20:45 ../
-rw-r--r-- 1 root root 2053 Feb 1 21:01 ca.crt
-rw-r--r-- 1 root root 2122 Feb 1 21:01 y73.harbor.com.cert
-rw------- 1 root root 3243 Feb 1 21:01 y73.harbor.com.key
With the certificates distributed, let's try logging in. But we logged in with --insecure-registry earlier, so first delete the saved credentials.
root@k8s-master1:/etc/containerd/certs.d/y73.harbor.com# rm -rf /root/.docker/config.json
# Log in to the registry without the --insecure-registry flag
root@k8s-master1:/etc/containerd/certs.d/y73.harbor.com# nerdctl login y73.harbor.com
Enter Username: admin
Enter Password:
WARNING: Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
3. Build an image
Kubernetes uses the k8s.io containerd namespace by default, while nerdctl defaults to the default namespace, so you must switch to the k8s.io namespace to see the images and tags used by k8s. On the image build server run:
mkdir /etc/nerdctl
vim /etc/nerdctl/nerdctl.toml
namespace = "k8s.io"
Build
# Upload ubuntu-nginx-dockerfile.tar.gz and extract it
root@k8s-master1:/opt# pwd
/opt
root@k8s-master1:/opt# ll
total 1128
drwxr-xr-x 3 root root 62 Feb 1 21:16 ./
drwxr-xr-x 20 root root 304 Jan 3 14:20 ../
drwx--x--x 4 root root 28 Dec 16 22:38 containerd/
-rw-r--r-- 1 root root 1152652 Feb 1 21:16 ubuntu-nginx-dockerfile.tar.gz
root@k8s-master1:/opt# tar xvf ubuntu-nginx-dockerfile.tar.gz
ubuntu/
ubuntu/frontend.tar.gz
ubuntu/html/
ubuntu/html/images/
ubuntu/html/images/1.jpg
ubuntu/html/index.html
ubuntu/nginx-1.22.0.tar.gz
ubuntu/nginx.conf
ubuntu/sources.list
ubuntu/Dockerfile
ubuntu/build-command.sh
root@k8s-master1:/opt#
root@k8s-master1:/opt# cd ubuntu/
# Build the image
root@k8s-master1:/opt/ubuntu# nerdctl build -t y73.harbor.com/y73/nginx-base:1.22.0 .
Note: when building an image with nerdctl, the base image's metadata is downloaded first (docker does not need to do this). Because this Dockerfile is based on the official ubuntu image, the build server must be able to reach docker.io/library/ubuntu:22.04; if it cannot, the image cannot be built. One option is therefore to pull the official ubuntu image and push it into the company's internal registry, as sketched below.
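A hedged sketch of that workaround, reusing the registry layout from earlier (the baseimages project name is an assumption):
# Mirror the official base image into the internal registry once...
nerdctl pull ubuntu:22.04
nerdctl tag ubuntu:22.04 y73.harbor.com/baseimages/ubuntu:22.04
nerdctl push y73.harbor.com/baseimages/ubuntu:22.04
# ...then point the Dockerfile at it, e.g.
# FROM y73.harbor.com/baseimages/ubuntu:22.04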
Push the image
nerdctl push y73.harbor.com/y73/nginx-base:1.22.0
Or build the image and push it to Harbor with a script
root@k8s-master1:/opt/ubuntu# vim build-command.sh
#!/bin/bash
#docker build -t y73.harbor.com/baseimages/nginx:v1 .
#docker push y73.harbor.com/baseimages/nginx:v1
/usr/local/bin/nerdctl build -t y73.harbor.com/y73/nginx-base:1.22.0 .
/usr/local/bin/nerdctl push y73.harbor.com/y73/nginx-base:1.22.0
Run a container
Run a container from the image we just built. It fails because the CNI environment has not been configured.
nerdctl run -p 81:80 y73.harbor.com/y73/nginx-base:1.22.0
Set up the CNI runtime environment for containers. CNI has to be configured separately:
# Download the CNI plugin binaries
root@k8s-master1:/opt# wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
root@k8s-master1:/opt# ll
total 36616
drwxr-xr-x 5 root root 129 Feb 1 22:00 ./
drwxr-xr-x 20 root root 304 Jan 3 14:20 ../
drwxr-xr-x 3 root root 17 Feb 1 22:00 cni/
-rw-r--r-- 1 root root 36336160 Feb 1 21:58 cni-plugins-linux-amd64-v1.1.1.tgz
drwx--x--x 4 root root 28 Dec 16 22:38 containerd/
drwxr-xr-x 3 root root 177 Feb 1 21:29 ubuntu/
-rw-r--r-- 1 root root 1152652 Feb 1 21:16 ubuntu-nginx-dockerfile.tar.gz
# Create the directory for the CNI plugins
root@k8s-master1:/opt# mkdir /opt/cni/bin -p
root@k8s-master1:/opt# tar xvf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/
./
./macvlan
./static
./vlan
./portmap
./host-local
./vrf
./bridge
./tuning
./firewall
./host-device
./sbr
./loopback
./dhcp
./ptp
./ipvlan
./bandwidth
With the CNI environment installed, start a container again; this time it starts successfully. Verify in a browser, or with curl as shown below.
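For example, from another terminal on the same host (assuming the container was started with -p 81:80 as above):
# The container's port 80 is published on host port 81
curl -I http://127.0.0.1:81/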
3. Using nginx in front of Harbor to provide HTTPS
1. The error
Now we build a JDK image from the base image in our private registry. Looking at the Dockerfile, the FROM y73.harbor.com/baseimages/centos:7.9.2009 line uses the centos image from the Harbor registry.
root@k8s-master1:/opt/k8s-data/dockerfile/web/pub-images/jdk-1.8.212# vim Dockerfile
#JDK Base Image
FROM y73.harbor.com/baseimages/centos:7.9.2009
#FROM centos:7.9.2009
MAINTAINER zhangshijie "zhangshijie@magedu.net"
ADD jdk-8u212-linux-x64.tar.gz /usr/local/src/
RUN ln -sv /usr/local/src/jdk1.8.0_212 /usr/local/jdk
ADD profile /etc/profile
ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib/
ENV PATH $PATH:$JAVA_HOME/bin
**Building this image with nerdctl fails with a "cannot download metadata" error, telling us that the self-signed certificate is not trusted.** But we could pull and push images just fine before, so why? Because pulling and pushing images is done by containerd, and the certificates we distributed earlier are for containerd, stored under the /etc/containerd/certs.d/y73.harbor.com/ directory; building an image, however, is done by nerdctl, which downloads the image metadata itself and does not load the self-signed certificate, so to nerdctl the certificate is effectively invalid.
So we end up in this situation: nerdctl wants to build the image, for which it needs the image metadata, and to get the metadata it has to trust the certificate.
How do we solve this? **Either use a certificate issued by a public CA such as AlphaSSL or Sectigo, or put an nginx proxy in front of Harbor.** The second approach is described below:
We simply put an nginx in front of Harbor, listening on ports 80 and 443, and have nginx reverse-proxy requests to Harbor. containerd then pulls and pushes images over port 443, while nerdctl fetches metadata over port 80. The rough topology is as follows:
2. Switch Harbor to HTTP
First go into the Harbor directory and stop Harbor.
root@k8s-harbor:/apps/harbor# docker-compose stop
Then edit the Harbor configuration file and comment out the https section.
root@k8s-harbor:/apps/harbor# vim harbor.yml
Clear the old configuration and generate a new one.
root@k8s-harbor:/apps/harbor# ./prepare
Restart Harbor and open it in a browser again; it is now back on HTTP.
root@k8s-harbor:/apps/harbor# docker-compose up -d
Because everything in the k8s cluster previously went over HTTPS, the k8s nodes can now no longer pull images.
This can be solved either by 1) configuring containerd to use HTTP, or 2) putting an nginx in front of Harbor (a rough sketch of option 1 follows; this lab uses option 2).
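Option 1 is not used here, but as a hedged sketch it would mean telling containerd to treat the registry as plain HTTP, e.g. with a registry mirror entry (exact section names depend on the containerd version and on whether config_path / hosts.toml is in use):
# Sketch of option 1 (not applied in this lab) - run on each node
cat >> /etc/containerd/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."y73.harbor.com"]
  endpoint = ["http://y73.harbor.com"]
EOF
systemctl restart containerd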
3. Reverse proxying with nginx
1. Install and configure nginx
Install nginx on any spare server.
root@k8s-haproxy1:~# cd /usr/local/src/
root@k8s-haproxy1:/usr/local/src# wget http://nginx.org/download/nginx-1.22.1.tar.gz
root@k8s-haproxy1:/usr/local/src# tar xvf nginx-1.22.1.tar.gz
root@k8s-haproxy1:/usr/local/src/nginx-1.22.1# ./configure --prefix=/apps/nginx \
--with-http_ssl_module \
--with-http_v2_module \
--with-http_realip_module \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--with-pcre \
--with-stream \
--with-stream_ssl_module \
--with-stream_realip_module
root@k8s-haproxy1:/usr/local/src/nginx-1.22.1# make && make install
root@k8s-haproxy1:/usr/local/src/nginx-1.22.1# /apps/nginx/sbin/nginx -v
nginx version: nginx/1.22.1
Create a certificate directory and copy the Harbor certificates over
root@k8s-haproxy1:/usr/local/src/nginx-1.22.1# mkdir /apps/nginx/certs -p
root@k8s-harbor:/apps/harbor/certs# scp y73.harbor.com.crt y73.harbor.com.key 192.168.0.119:/apps/nginx/certs/
Edit the nginx configuration file
root@k8s-haproxy1:/apps/nginx# vim /apps/nginx/conf/nginx.conf
#gzip on;
client_max_body_size 2000m; # 客户端能够传送的最大文件大小,默认是1m
server {
listen 80;
listen 443 ssl;
ssl_certificate /apps/nginx/certs/y73.harbor.com.crt;
ssl_certificate_key /apps/nginx/certs/y73.harbor.com.key;
ssl_session_cache shared:sslcache:20m;
ssl_session_timeout 10m;
server_name y73.harbor.com; # 改成 habor 域名
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
#root html;
#index index.html index.htm;
if ( $scheme = http ){ # 不加条件判断会导致死循环
rewrite / https://y73.harbor.com permanent;
proxy_pass http://192.168.0.121; # 将请求转发到 harbor 服务器
}
}
#error_page 404 /404.html;
Start nginx, change name resolution on the nodes so that y73.harbor.com resolves to the nginx server, and verify in a browser.
root@k8s-haproxy1:/apps/nginx# /apps/nginx/sbin/nginx -t
nginx: the configuration file /apps/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /apps/nginx/conf/nginx.conf test is successful
root@k8s-haproxy1:/apps/nginx# /apps/nginx/sbin/nginx
# Update name resolution on the nodes
root@k8s-node1:~# sed -i "s/192.168.0.121/192.168.0.119/g" /etc/hosts
2. buildkitd configuration file
Allow plain HTTP:
root@k8s-master1:~# mkdir /etc/buildkit/
root@k8s-master1:~# vim /etc/buildkit/buildkitd.toml
[registry."y73.harbor.com"]
http = true
insecure = true
3. nerdctl configuration file
root@k8s-master1:~# mkdir /etc/nerdctl
root@k8s-master1:~# vim /etc/nerdctl/nerdctl.toml
namespace = "k8s.io"
debug = false # enable debug to see detailed output
debug_full = false
insecure-registry = true
Restart buildkitd and run the image build again; this time it succeeds.
root@k8s-master1:~# systemctl restart buildkitd.service
root@k8s-master1:~# cd /opt/k8s-data/dockerfile/web/pub-images/jdk-1.8.212/
root@k8s-master1:/opt/k8s-data/dockerfile/web/pub-images/jdk-1.8.212# nerdctl build -t y73.harbor.com/y73/jdk-base:v8.212 .
Finally, push the image to Harbor.
root@k8s-master1:/opt/k8s-data/dockerfile/web/pub-images/jdk-1.8.212# nerdctl push y73.harbor.com/y73/jdk-base:v8.212
Because debug is enabled, we can see some useful information: 1) HTTPS certificate verification for "y73.harbor.com" is skipped; 2) certificates are looked up in the two directories /etc/containerd/certs.d and /etc/docker/certs.d.
V. Nginx + Tomcat + NFS for static/dynamic content separation
External clients reach port 30092 of the k8s cluster through the load balancer; the nginx Service forwards the request to an nginx Pod, which reads static data from the storage server; dynamic requests go through the tomcat Service to a tomcat Pod, which writes the data, and the changes are then synced back to the storage server.
Images used: y73.harbor.com/baseimages/centos:7.9.2009, y73.harbor.com/y73/jdk-base:v8.212, y73.harbor.com/y73/tomcat-base:v8.5.43
1. Build the base images
1. centos image
Dockerfile
#Custom CentOS base image
FROM centos:7.9.2009
MAINTAINER Jack.Zhang 2973707860@qq.com
ADD filebeat-7.12.1-x86_64.rpm /tmp
RUN yum install -y /tmp/filebeat-7.12.1-x86_64.rpm vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop && rm -rf /etc/localtime /tmp/filebeat-7.12.1-x86_64.rpm && ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && useradd nginx -u 2088
build-command.sh
#!/bin/bash
#docker build -t harbor.magedu.net/baseimages/magedu-centos-base:7.9.2009 .
#docker push harbor.magedu.net/baseimages/magedu-centos-base:7.9.2009
/usr/local/bin/nerdctl build -t y73.harbor.com/baseimages/centos-base:7.9.2009 .
/usr/local/bin/nerdctl push y73.harbor.com/baseimages/centos-base:7.9.2009
Run the build script
root@k8s-master1:/opt/k8s-data/dockerfile/system/centos# bash build-command.sh
2. JDK image
Dockerfile
#JDK Base Image
FROM y73.harbor.com/baseimages/centos-base:7.9.2009
#FROM centos:7.9.2009
MAINTAINER zhangshijie "zhangshijie@magedu.net"
ADD jdk-8u212-linux-x64.tar.gz /usr/local/src/
RUN ln -sv /usr/local/src/jdk1.8.0_212 /usr/local/jdk
ADD profile /etc/profile
ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib/
ENV PATH $PATH:$JAVA_HOME/bin
build-command.sh
#!/bin/bash
#docker build -t harbor.magedu.net/pub-images/jdk-base:v8.212 .
#sleep 1
#docker push harbor.magedu.net/pub-images/jdk-base:v8.212
nerdctl build -t y73.harbor.com/y73/jdk-base:v8.212 .
nerdctl push y73.harbor.com/y73/jdk-base:v8.212
The profile configuration file
# /etc/profile
# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc
# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.
pathmunge () {
case ":${PATH}:" in
*:"$1":*)
;;
*)
if [ "$2" = "after" ] ; then
PATH=$PATH:$1
else
PATH=$1:$PATH
fi
esac
}
if [ -x /usr/bin/id ]; then
if [ -z "$EUID" ]; then
# ksh workaround
EUID=`/usr/bin/id -u`
UID=`/usr/bin/id -ru`
fi
USER="`/usr/bin/id -un`"
LOGNAME=$USER
MAIL="/var/spool/mail/$USER"
fi
# Path manipulation
if [ "$EUID" = "0" ]; then
pathmunge /usr/sbin
pathmunge /usr/local/sbin
else
pathmunge /usr/local/sbin after
pathmunge /usr/sbin after
fi
HOSTNAME=`/usr/bin/hostname 2>/dev/null`
HISTSIZE=1000
if [ "$HISTCONTROL" = "ignorespace" ] ; then
export HISTCONTROL=ignoreboth
else
export HISTCONTROL=ignoredups
fi
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL
# By default, we want umask to get set. This sets it for login shell
# Current threshold for system reserved uid/gids is 200
# You could check uidgid reservation validity in
# /usr/share/doc/setup-*/uidgid file
if [ $UID -gt 199 ] && [ "`/usr/bin/id -gn`" = "`/usr/bin/id -un`" ]; then
umask 002
else
umask 022
fi
for i in /etc/profile.d/*.sh /etc/profile.d/sh.local ; do
if [ -r "$i" ]; then
if [ "${-#*i}" != "$-" ]; then
. "$i"
else
. "$i" >/dev/null
fi
fi
done
unset i
unset -f pathmunge
export LANG=en_US.UTF-8
export HISTTIMEFORMAT="%F %T `whoami` "
export JAVA_HOME=/usr/local/jdk
export TOMCAT_HOME=/apps/tomcat
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$TOMCAT_HOME/bin:$PATH
export CLASSPATH=.$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar
Run the build script
root@k8s-master1:/opt/k8s-data/dockerfile/web/pub-images/jdk-1.8.212# bash build-command.sh
To verify that the image was built correctly, run a container and check the Java version inside it.
root@k8s-master1:/opt/k8s-data/dockerfile/web/pub-images/jdk-1.8.212# nerdctl run -it --rm y73.harbor.com/y73/jdk-base:v8.212 bash
[root@d7e7c9090bf2 /]#
[root@d7e7c9090bf2 /]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)
[root@d7e7c9090bf2 /]#
3. tomcat image
Dockerfile
#Tomcat 8.5.43 base image
FROM y73.harbor.com/y73/jdk-base:v8.212
MAINTAINER zhangshijie "zhangshijie@magedu.net"
RUN mkdir /apps /data/tomcat/webapps /data/tomcat/logs -pv
ADD apache-tomcat-8.5.43.tar.gz /apps
RUN useradd tomcat -u 2050 && ln -sv /apps/apache-tomcat-8.5.43 /apps/tomcat && chown -R tomcat.tomcat /apps /data -R
build-command.sh
#!/bin/bash
#docker build -t harbor.magedu.net/pub-images/tomcat-base:v8.5.43 .
#sleep 3
#docker push harbor.magedu.net/pub-images/tomcat-base:v8.5.43
nerdctl build -t y73.harbor.com/y73/tomcat-base:v8.5.43 .
nerdctl push y73.harbor.com/y73/tomcat-base:v8.5.43
Run the build script
root@k8s-master1:/opt/k8s-data/dockerfile/web/pub-images/tomcat-base-8.5.43# bash build-command.sh
4. nginx image
Dockerfile
#Nginx Base Image
FROM y73.harbor.com/baseimages/centos-base:7.9.2009
MAINTAINER zhangshijie@magedu.net
RUN yum install -y vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop
ADD nginx-1.22.0.tar.gz /usr/local/src/
RUN cd /usr/local/src/nginx-1.22.0 && ./configure && make && make install && ln -sv /usr/local/nginx/sbin/nginx /usr/sbin/nginx &&rm -rf /usr/local/src/nginx-1.22.0.tar.gz
build-command.sh
#!/bin/bash
#docker build -t y73.harbor.com/y73/nginx-base:v1.22.0 .
#sleep 1
#docker push y73.harbor.com/y73/nginx-base:v1.22.0
nerdctl build -t y73.harbor.com/y73/nginx-base:v1.22.0 .
nerdctl push y73.harbor.com/y73/nginx-base:v1.22.0
Run the build script
root@k8s-master1:/opt/k8s-data/dockerfile/web/pub-images/nginx-base# bash build-command.sh
2. Build the application images
1. tomcat application image
**Note: the script files must all be given execute permission.** The tomcat startup script catalina.sh and the main configuration file server.xml can be obtained as templates by starting a container and then customised. filebeat.yml starts a process inside the container for log collection.
Dockerfile
#tomcat web1
FROM y73.harbor.com/y73/tomcat-base:v8.5.43
ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
#ADD myapp/* /data/tomcat/webapps/myapp/
ADD app1.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
#ADD filebeat.yml /etc/filebeat/filebeat.yml
RUN chown -R nginx.nginx /data/ /apps/
#ADD filebeat-7.5.1-x86_64.rpm /tmp/
#RUN cd /tmp && yum localinstall -y filebeat-7.5.1-amd64.deb
EXPOSE 8080 8443
CMD ["/apps/tomcat/bin/run_tomcat.sh"]
run_tomcat.sh, the tomcat startup script: it runs a program in the foreground to keep the container from exiting.
#!/bin/bash
#echo "nameserver 223.6.6.6" > /etc/resolv.conf
#echo "192.168.7.248 k8s-vip.example.com" >> /etc/hosts
#/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
su - nginx -c "/apps/tomcat/bin/catalina.sh start"
tail -f /etc/hosts
build-command.sh: when running this script, pass a tag for the image as $1, e.g. bash build-command.sh v1
#!/bin/bash
TAG=$1
#docker build -t y73.harbor.com/y73/tomcat-app1:${TAG} .
#sleep 3
#docker push y73.harbor.com/y73/tomcat-app1:${TAG}
nerdctl build -t y73.harbor.com/y73/tomcat-app1:${TAG} .
nerdctl push y73.harbor.com/y73/tomcat-app1:${TAG}
Run the build script
root@k8s-master1:/opt/k8s-data/dockerfile/web/magedu/tomcat-app1# bash build-command.sh v1
The build fails here complaining that the nginx user does not exist. But the centos base image earlier already added the nginx user; checking again shows that the wrong base image tag was used, so rebuild the image to fix the problem. Alternatively, add RUN useradd nginx -u 2088 to this Dockerfile.
Verify the image
1. Start a container
Start a container to check whether the image works.
root@k8s-master1:~# nerdctl run -it -p 8080:8080 y73.harbor.com/y73/tomcat-app1:v1
-bash: /apps/tomcat/bin/catalina.sh: Permission denied
# <nerdctl>
127.0.0.1 localhost localhost.localdomain
::1 localhost localhost.localdomain
10.4.0.2 9060f694b25e nginx-base-9060f
10.4.0.11 1194e7dc15cc tomcat-app1-1194e
10.4.0.12 3bf7cc69f0fa tomcat-app1-3bf7c
# </nerdctl>
2. Troubleshooting
The page cannot be reached from a browser.
Start a new container, enter it and check whether the port is listening; it is not, which means tomcat did not start. Note: start the container first, then open a new terminal and run nerdctl exec -it tomcat-app1-09f71 bash to enter the container and check the port.
tomcat's startup depends on the catalina.sh startup script; going back, we see it was not given execute permission before the image was built.
root@k8s-master1:/opt/k8s-data/dockerfile/web/magedu/tomcat-app1# ll catalina.sh
-rw-r--r-- 1 root root 23611 Feb 1 22:14 catalina.sh
Give the startup script catalina.sh execute permission and build the image again
root@k8s-master1:/opt/k8s-data/dockerfile/web/magedu/tomcat-app1# chmod a+x catalina.sh
After that, the container starts and can be accessed normally.
At this point the application image has been built successfully.
2. nginx application image
Dockerfile
#Nginx 1.22.0
FROM y73.harbor.com/y73/nginx-base:v1.22.0
ADD nginx.conf /usr/local/nginx/conf/nginx.conf
ADD app1.tar.gz /usr/local/nginx/html/webapp/
ADD index.html /usr/local/nginx/html/index.html
#mount paths for the static resources
RUN mkdir -p /usr/local/nginx/html/webapp/static /usr/local/nginx/html/webapp/images
EXPOSE 80 443
CMD ["nginx"]
build-command.sh
#!/bin/bash
TAG=$1
#docker y73.harbor.com/y73/nginx-web1:${TAG} .
#echo "镜像构建完成,即将上传到harbor"
#sleep 1
#docker push y73.harbor.com/y73/nginx-web1:${TAG}
#echo "镜像上传到harbor完成"
nerdctl build -t y73.harbor.com/y73/nginx-web1:${TAG} .
nerdctl push y73.harbor.com/y73/nginx-web1:${TAG}
The main configuration file nginx.conf; add daemon off; so that nginx runs in the foreground.
user nginx nginx;
worker_processes auto;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
daemon off; # 关闭后台运行,使其在前台运行,这样运行容器时就能保证 nginx 一直在运行状态
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
upstream tomcat_webserver {
server magedu-tomcat-app1-service.magedu.svc.y73.local:80; # 可以从 tomcat 的 yaml 文件和命令 kubectl get svc -A 找到
}
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root html;
index index.html index.htm;
}
location /webapp {
root html;
index index.html index.htm;
}
location /myapp { # 将访问 /myapp 的请求转发到tomcat
proxy_pass http://tomcat_webserver;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
# HTTPS server
#
#server {
# listen 443 ssl;
# server_name localhost;
# ssl_certificate cert.pem;
# ssl_certificate_key cert.key;
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 5m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
# location / {
# root html;
# index index.html index.htm;
# }
#}
}
Run the build script
root@k8s-master1:/opt/k8s-data/dockerfile/web/magedu/nginx# bash build-command.sh v1
Verify the image
Here we do not verify the image by running a container with nerdctl, because nerdctl cannot resolve the Service name and would report an error. Instead we run the Pod directly in k8s to verify it; jump ahead to "3. Start nginx in k8s".
3. Run the application in k8s
1. Create the NFS service
# Install the NFS server
root@haproxy:~# apt install nfs-server
# Create the shared directories
root@k8s-haproxy1:~# mkdir -p /data/k8sdata/magedu/images
root@k8s-haproxy1:~# mkdir -p /data/k8sdata/magedu/static
# Append at the end of the file
root@haproxy:~# vim /etc/exports
/data/k8sdata *(rw,no_root_squash) # read-write, no root squashing
# Reload the exports without restarting NFS
root@k8s-haproxy1:~# exportfs -arv
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exporting *:/data/volumes
exporting *:/data/k8sdata
root@haproxy:~# systemctl enable nfs-server
Verify access from every node
root@k8s-master1:~# showmount -e 192.168.0.119
Command 'showmount' not found, but can be installed with:
apt install nfs-common
# without nfs-common, NFS filesystems cannot be used
root@k8s-master1:~# apt install nfs-common
root@k8s-master1:~# showmount -e 192.168.0.119
Export list for 192.168.0.119:
/data/volumes *
/data/k8sdata *
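A quick functional test can also be done with a temporary mount (a sketch; run on any node with nfs-common installed):
# Mount the export, write a test file, and clean up
mount -t nfs 192.168.0.119:/data/k8sdata /mnt
touch /mnt/nfs-write-test && ls -l /mnt/nfs-write-test
umount /mnt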
2. Start the tomcat Pod
The tomcat service is a dependency of the nginx service set up next, so start tomcat first.
tomcat-app1.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: magedu-tomcat-app1-deployment-label
name: magedu-tomcat-app1-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-tomcat-app1-selector
template:
metadata:
labels:
app: magedu-tomcat-app1-selector
spec:
containers:
- name: magedu-tomcat-app1-container
image: y73.harbor.com/y73/tomcat-app1:v1
#command: ["/apps/tomcat/bin/run_tomcat.sh"]
imagePullPolicy: IfNotPresent
#imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
#resources:
# limits:
# cpu: 1
# memory: "512Mi"
# requests:
# cpu: 500m
# memory: "512Mi"
volumeMounts:
- name: magedu-images
mountPath: /usr/local/nginx/html/webapp/images
readOnly: false
- name: magedu-static
mountPath: /usr/local/nginx/html/webapp/static
readOnly: false
volumes:
- name: magedu-images
nfs:
server: 192.168.0.119
path: /data/k8sdata/magedu/images
- name: magedu-static
nfs:
server: 192.168.0.119
path: /data/k8sdata/magedu/static
# nodeSelector:
# project: magedu
# app: tomcat
---
kind: Service
apiVersion: v1
metadata:
labels:
app: magedu-tomcat-app1-service-label
name: magedu-tomcat-app1-service
namespace: magedu
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
nodePort: 30092
selector:
app: magedu-tomcat-app1-selector
Apply and verify
root@k8s-master1:/opt/k8s-data/yaml/magedu/tomcat-app1# kubectl apply -f tomcat-app1.yaml
root@k8s-master1:/opt/k8s-data/yaml/magedu/tomcat-app1# kubectl get pod -n magedu -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
magedu-tomcat-app1-deployment-f47468cc5-bs6jw 1/1 Running 0 28s 10.200.36.100 192.168.0.113 <none> <none>
3. Start nginx in k8s
nginx.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
app: magedu-nginx-deployment-label
name: magedu-nginx-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-nginx-selector
template:
metadata:
labels:
app: magedu-nginx-selector
spec:
containers:
- name: magedu-nginx-container
image: y73.harbor.com/y73/nginx-web1:v1
#command: ["/apps/tomcat/bin/run_tomcat.sh"]
#imagePullPolicy: IfNotPresent
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
name: http
- containerPort: 443
protocol: TCP
name: https
env:
- name: "password"
value: "123456"
- name: "age"
value: "20"
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 500m
memory: 256Mi
volumeMounts:
- name: magedu-images
mountPath: /usr/local/nginx/html/webapp/images
readOnly: false
- name: magedu-static
mountPath: /usr/local/nginx/html/webapp/static
readOnly: false
volumes:
- name: magedu-images
nfs:
server: 192.168.0.119
path: /data/k8sdata/magedu/images
- name: magedu-static
nfs:
server: 192.168.0.119
path: /data/k8sdata/magedu/static
#nodeSelector:
# group: magedu
---
kind: Service
apiVersion: v1
metadata:
labels:
app: magedu-nginx-service-label
name: magedu-nginx-service
namespace: magedu
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
nodePort: 30090
- name: https
port: 443
protocol: TCP
targetPort: 443
nodePort: 30091
selector:
app: magedu-nginx-selector
Apply and verify
root@k8s-master1:/opt/k8s-data/yaml/magedu/nginx# kubectl apply -f nginx.yaml
4. Test the NFS storage
In this lab tomcat writes files to the NFS storage server, and nginx then serves them. Enter the tomcat Pod and download an image into the corresponding mounted directory.
Once the image is downloaded, it can be seen on the NFS storage server.
Then access it from a browser.
Testing the static resource directory follows the same steps.
VI. ZooKeeper cluster
1. Build the image
Dockerfile
#FROM harbor-linux38.local.com/linux38/slim_java:8
FROM y73.harbor.com/y73/slim_java:8
ENV ZK_VERSION 3.4.14
ADD repositories /etc/apk/repositories
# Download Zookeeper
COPY zookeeper-3.4.14.tar.gz /tmp/zk.tgz
COPY zookeeper-3.4.14.tar.gz.asc /tmp/zk.tgz.asc
COPY KEYS /tmp/KEYS
RUN apk add --no-cache --virtual .build-deps \
ca-certificates \
gnupg \
tar \
wget && \
#
# Install dependencies
apk add --no-cache \
bash && \
#
#
# Verify the signature
export GNUPGHOME="$(mktemp -d)" && \
gpg -q --batch --import /tmp/KEYS && \
gpg -q --batch --no-auto-key-retrieve --verify /tmp/zk.tgz.asc /tmp/zk.tgz && \
#
# Set up directories
#
mkdir -p /zookeeper/data /zookeeper/wal /zookeeper/log && \
#
# Install
tar -x -C /zookeeper --strip-components=1 --no-same-owner -f /tmp/zk.tgz && \
#
# Slim down
cd /zookeeper && \
cp dist-maven/zookeeper-${ZK_VERSION}.jar . && \
rm -rf \
*.txt \
*.xml \
bin/README.txt \
bin/*.cmd \
conf/* \
contrib \
dist-maven \
docs \
lib/*.txt \
lib/cobertura \
lib/jdiff \
recipes \
src \
zookeeper-*.asc \
zookeeper-*.md5 \
zookeeper-*.sha1 && \
#
# Clean up
apk del .build-deps && \
rm -rf /tmp/* "$GNUPGHOME"
COPY conf /zookeeper/conf/
COPY bin/zkReady.sh /zookeeper/bin/
COPY entrypoint.sh /
ENV PATH=/zookeeper/bin:${PATH} \
ZOO_LOG_DIR=/zookeeper/log \
ZOO_LOG4J_PROP="INFO, CONSOLE, ROLLINGFILE" \
JMXPORT=9010
ENTRYPOINT [ "/entrypoint.sh" ]
CMD [ "zkServer.sh", "start-foreground" ]
EXPOSE 2181 2888 3888 9010
build-command.sh
#!/bin/bash
TAG=$1
#docker build -t y73.harbor.com/y73/zookeeper:${TAG} .
#sleep 1
#docker push y73.harbor.com/y73/zookeeper:${TAG}
nerdctl build -t y73.harbor.com/y73/zookeeper:${TAG} .
nerdctl push y73.harbor.com/y73/zookeeper:${TAG}
Run the build script
root@k8s-master1:/opt/k8s-data/dockerfile/web/magedu/zookeeper# bash build-command.sh v1
2. Create the Pods
1. Create the PVs and PVCs
First create the directories on the NFS server
zookeeper-persistentvolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: zookeeper-datadir-pv-1
spec:
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.0.119
path: /data/k8sdata/magedu/zookeeper-datadir-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: zookeeper-datadir-pv-2
spec:
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.0.119
path: /data/k8sdata/magedu/zookeeper-datadir-2
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: zookeeper-datadir-pv-3
spec:
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.0.119
path: /data/k8sdata/magedu/zookeeper-datadir-3
Create the PVs
root@k8s-master1:/opt/k8s-data/yaml/magedu/zookeeper/pv# kubectl apply -f zookeeper-persistentvolume.yaml
persistentvolume/zookeeper-datadir-pv-1 created
persistentvolume/zookeeper-datadir-pv-2 created
persistentvolume/zookeeper-datadir-pv-3 created
zookeeper-persistentvolumeclaim.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zookeeper-datadir-pvc-1
namespace: magedu
spec:
accessModes:
- ReadWriteOnce
volumeName: zookeeper-datadir-pv-1
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zookeeper-datadir-pvc-2
namespace: magedu
spec:
accessModes:
- ReadWriteOnce
volumeName: zookeeper-datadir-pv-2
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zookeeper-datadir-pvc-3
namespace: magedu
spec:
accessModes:
- ReadWriteOnce
volumeName: zookeeper-datadir-pv-3
resources:
requests:
storage: 10Gi
Create the PVCs
root@k8s-master1:/opt/k8s-data/yaml/magedu/zookeeper/pv# kubectl apply -f zookeeper-persistentvolumeclaim.yaml
persistentvolumeclaim/zookeeper-datadir-pvc-1 created
persistentvolumeclaim/zookeeper-datadir-pvc-2 created
persistentvolumeclaim/zookeeper-datadir-pvc-3 created
2. Create the Pods
The yaml file
apiVersion: v1
kind: Service
metadata:
name: zookeeper
namespace: magedu
spec:
ports:
- name: client
port: 2181
selector:
app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
name: zookeeper1
namespace: magedu
spec:
type: NodePort
ports:
- name: client
port: 2181
nodePort: 32181
- name: followers
port: 2888
- name: election
port: 3888
selector:
app: zookeeper
server-id: "1"
---
apiVersion: v1
kind: Service
metadata:
name: zookeeper2
namespace: magedu
spec:
type: NodePort
ports:
- name: client
port: 2181
nodePort: 32182
- name: followers
port: 2888
- name: election
port: 3888
selector:
app: zookeeper
server-id: "2"
---
apiVersion: v1
kind: Service
metadata:
name: zookeeper3
namespace: magedu
spec:
type: NodePort
ports:
- name: client
port: 2181
nodePort: 32183
- name: followers
port: 2888
- name: election
port: 3888
selector:
app: zookeeper
server-id: "3"
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
name: zookeeper1
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
server-id: "1"
spec:
volumes:
- name: data
emptyDir: {}
- name: wal
emptyDir:
medium: Memory
containers:
- name: server
image: y73.harbor.com/y73/zookeeper:v3.4.14
imagePullPolicy: Always
env:
- name: MYID
value: "1"
- name: SERVERS
value: "zookeeper1,zookeeper2,zookeeper3"
- name: JVMFLAGS
value: "-Xmx2G"
ports:
- containerPort: 2181
- containerPort: 2888
- containerPort: 3888
volumeMounts:
- mountPath: "/zookeeper/data"
name: zookeeper-datadir-pvc-1
volumes:
- name: zookeeper-datadir-pvc-1
persistentVolumeClaim:
claimName: zookeeper-datadir-pvc-1
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
name: zookeeper2
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
server-id: "2"
spec:
volumes:
- name: data
emptyDir: {}
- name: wal
emptyDir:
medium: Memory
containers:
- name: server
image: y73.harbor.com/y73/zookeeper:v3.4.14
imagePullPolicy: Always
env:
- name: MYID
value: "2"
- name: SERVERS
value: "zookeeper1,zookeeper2,zookeeper3"
- name: JVMFLAGS
value: "-Xmx2G"
ports:
- containerPort: 2181
- containerPort: 2888
- containerPort: 3888
volumeMounts:
- mountPath: "/zookeeper/data"
name: zookeeper-datadir-pvc-2
volumes:
- name: zookeeper-datadir-pvc-2
persistentVolumeClaim:
claimName: zookeeper-datadir-pvc-2
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
name: zookeeper3
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
server-id: "3"
spec:
volumes:
- name: data
emptyDir: {}
- name: wal
emptyDir:
medium: Memory
containers:
- name: server
image: y73.harbor.com/y73/zookeeper:v3.4.14
imagePullPolicy: Always
env:
- name: MYID
value: "3"
- name: SERVERS
value: "zookeeper1,zookeeper2,zookeeper3"
- name: JVMFLAGS
value: "-Xmx2G"
ports:
- containerPort: 2181
- containerPort: 2888
- containerPort: 3888
volumeMounts:
- mountPath: "/zookeeper/data"
name: zookeeper-datadir-pvc-3
volumes:
- name: zookeeper-datadir-pvc-3
persistentVolumeClaim:
claimName: zookeeper-datadir-pvc-3
3. Verify
First use kubectl logs and kubectl describe to check for anything abnormal, then enter the Pods and confirm that they have actually formed a cluster, as sketched below.
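A sketch of the cluster check (pod names as listed further below; zkServer.sh is on PATH in this image):
# Each Pod should report Mode: follower or Mode: leader
kubectl exec -n magedu zookeeper1-77556d7d84-5njfr -- zkServer.sh status
kubectl exec -n magedu zookeeper2-7888f5f57f-vc59b -- zkServer.sh status
kubectl exec -n magedu zookeeper3-7fd6f7d994-bzq66 -- zkServer.sh status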
Earlier we set the image pull policy in the yaml to imagePullPolicy: Always, so every time a Pod is rebuilt the image is pulled from Harbor again; and since the previous lab put an nginx load balancer in front, images are pulled from Harbor through nginx.
Here we stop that nginx and then delete the leader of the ZooKeeper cluster: the new Pod cannot pull the image, so the kubelet cannot rebuild it. The remaining Pods in the ZooKeeper cluster then hold a new election and elect a new leader.
Stop the nginx load balancer
root@k8s-haproxy1:~# /apps/nginx/sbin/nginx -s stop
Then delete the leader of the ZooKeeper cluster
root@k8s-master1:/opt/k8s-data/yaml/magedu/zookeeper# kubectl get pod -n magedu
NAME READY STATUS RESTARTS AGE
magedu-nginx-deployment-544b9686f5-kwtgq 1/1 Running 0 23h
magedu-tomcat-app1-deployment-f47468cc5-bs6jw 1/1 Running 0 31h
net-test4 1/1 Running 0 28h
zookeeper1-77556d7d84-5njfr 1/1 Running 0 9m14s
zookeeper2-7888f5f57f-vc59b 1/1 Running 0 9m14s
zookeeper3-7fd6f7d994-bzq66 1/1 Running 0 9m14s
root@k8s-master1:/opt/k8s-data/yaml/magedu/zookeeper#
root@k8s-master1:/opt/k8s-data/yaml/magedu/zookeeper# kubectl delete pod zookeeper3-7fd6f7d994-bzq66 -n magedu
pod "zookeeper3-7fd6f7d994-bzq66" deleted
Then enter the remaining Pods to see which one was elected as the new leader.
Finally, start nginx again so that the original leader (Pod 3) can be rebuilt; the rebuilt Pod comes back as a follower.