Deleting a k8s cluster node and rejoining it + ERROR

The reason for deleting the node: a pod kept failing to be created on node2. After I added a taint to node2 so the pod would be scheduled onto node1, it was created successfully, but the root cause remained unknown. Searching for a fix, I found the suggestion to remove node2 from the cluster and rejoin it, so I gave it a try — and it worked. Below is the procedure for removing a node from the cluster and adding it back:
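For reference, the taint mentioned above can be added and removed with kubectl; the key/value `app=blocked` below is only an illustrative placeholder, not necessarily the one actually used:

```shell
# Add a NoSchedule taint so new pods avoid node2 (key/value are placeholders)
kubectl taint nodes node2 app=blocked:NoSchedule

# Remove the taint again later (note the trailing "-")
kubectl taint nodes node2 app=blocked:NoSchedule-
```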

Remove the node from the cluster:

[root@master ~]# kubectl delete node node2
node "node2" deleted
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   10d   v1.21.0
node1    Ready    <none>                 10d   v1.21.0
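As a side note, when a node still runs healthy workloads it is usually safer to drain it before deleting it, so pods are evicted gracefully. A typical sequence (optional here, since node2 was failing to run the pod anyway) would be:

```shell
# Cordon the node and evict its pods, skipping DaemonSet-managed ones
kubectl drain node2 --ignore-daemonsets --delete-emptydir-data

# Then remove the node object from the cluster
kubectl delete node node2
```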

Generate the join command on the master:

[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.204.130:6443 --token 9upog9.x1huogm7non75g7n --discovery-token-ca-cert-hash sha256:b85c1afaa3ba92935ae67caf515b893ce92af375568b8a7ecdc559f81a3d3257
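Note that a token created this way is valid for 24 hours by default; if the join will not happen right away, a token with an explicit TTL can be created instead, e.g.:

```shell
# Create a token valid for 48 hours and print the full join command
kubeadm token create --ttl 48h --print-join-command

# List existing tokens and their expiry times
kubeadm token list
```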

Run that command on node2 to join the cluster:

[root@node2 ~]# kubeadm join 192.168.204.130:6443 --token 9upog9.x1huogm7non75g7n --discovery-token-ca-cert-hash sha256:b85c1afaa3ba92935ae67caf515b893ce92af375568b8a7ecdc559f81a3d3257 
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 20.10
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Because kubeadm had already been run on this node, some configuration files and data are still present and conflict with rejoining the cluster, which produces the errors above. This can be fixed by resetting kubeadm.

[root@node2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0107 22:55:49.383503   72775 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
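The reset output above lists what it does not clean up. On this node, those leftovers can be removed manually along these lines (run with care — these commands flush firewall state):

```shell
# Remove leftover CNI configuration
rm -rf /etc/cni/net.d

# Flush iptables rules left behind by kube-proxy / the CNI plugin
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# Clear IPVS tables if kube-proxy was running in IPVS mode
ipvsadm --clear

# Remove the stale kubeconfig, if present
rm -f $HOME/.kube/config
```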

Run the join command again:

[root@node2 ~]# kubeadm join 192.168.204.130:6443 --token 9upog9.x1huogm7non75g7n --discovery-token-ca-cert-hash sha256:b85c1afaa3ba92935ae67caf515b893ce92af375568b8a7ecdc559f81a3d3257
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 20.10
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Another small hiccup: after the kubeadm reset, the kernel parameter was back to 0, so bridge-nf-call-iptables has to be set manually again.

[root@node2 ~]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
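Writing to /proc only lasts until the setting is reset again (e.g. by a reboot or the br_netfilter module being reloaded). To make it persistent, the usual approach is a sysctl drop-in file, for example:

```shell
# Ensure the br_netfilter module is loaded
modprobe br_netfilter

# Persist the bridge netfilter settings across reboots
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
```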

Try joining the cluster again:

[root@node2 ~]# kubeadm join 192.168.204.130:6443 --token vo9o87.p07f0pv6fscubzyz --discovery-token-ca-cert-hash sha256:b85c1afaa3ba92935ae67caf515b893ce92af375568b8a7ecdc559f81a3d3257 
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Joined successfully!
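Back on the master, the rejoined node can be verified as suggested by the join output (it may show NotReady briefly until the CNI pods come up), and you can confirm that no leftover taints survived the rejoin:

```shell
kubectl get nodes

# The freshly joined node should carry no leftover taints
kubectl describe node node2 | grep -i taint
```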

