How to Run the Kubernetes Control Plane (Master) Node and a Worker Node on the Same Machine (Kubernetes Cluster)

Summary

In a Kubernetes cluster, the same machine can run both a Kubernetes Control Plane (master) node and a worker node, so that one machine takes on both roles. This article describes how to configure a Kubernetes Control Plane master node so that it also functions as a worker node.

Problem

See the earlier CSDN post: Implementing high availability (HA) for the Kubernetes Control Plane with keepalived and haproxy.
After deploying a highly available (HA) Kubernetes Control Plane cluster as described there, an attempt was made to add a worker node on the same machine:

kubeadm join 192.168.238.100:4300 --token si5oek.mbrw418p8mr357qt --discovery-token-ca-cert-hash sha256:0e23eb637e09afc4c6dbb1f891409b314d5731e46fe33d84793ba2d58da006d6

The command returned an error similar to the following:
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists

The kubectl and kubeadm versions are as follows:

[root@Master ~]# kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.27.2
Kustomize Version: v5.0.1
Server Version: v1.27.7
[root@Master ~]# 
[root@Master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3", GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean", BuildDate:"2023-06-14T09:52:26Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/amd64"}
[root@Master ~]# 

Solution

By default, a Kubernetes Control Plane (master) node cannot run ordinary pods, because control plane nodes are given the following NoSchedule taint by default. In fact, no kubeadm join is needed at all: as the output below shows, the control plane machines are already registered as nodes in the cluster; they only need to be untainted:

[root@Master ~]# kubectl get nodes --selector='node-role.kubernetes.io/control-plane'
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   20h   v1.27.3
node1    Ready    control-plane   19h   v1.27.3
node2    Ready    control-plane   19h   v1.27.3
[root@Master ~]# 

[root@Master ~]# kubectl describe node master | grep Taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule

The meaning of the NoSchedule taint effect (and the other two effects) is as follows:

  • NoSchedule: No new pod will be scheduled onto the node unless it has a matching toleration. Existing pods are not evicted from the node.
  • PreferNoSchedule: Kubernetes tries to avoid scheduling pods that do not tolerate this taint onto the node, but this is a preference, not a guarantee.
  • NoExecute: Pods already running on the node are evicted unless they tolerate the taint, and new pods that do not tolerate it will not be scheduled onto the node.
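As an alternative to removing the taint cluster-wide, a single workload can declare a matching toleration so that only it is allowed onto control plane nodes. A minimal sketch (the pod name and image are placeholders, not from the original setup):

```yaml
# Hypothetical pod that tolerates the control-plane NoSchedule taint,
# so it may be scheduled onto a control plane node without untainting it.
apiVersion: v1
kind: Pod
metadata:
  name: demo-on-control-plane   # placeholder name
spec:
  containers:
  - name: app
    image: nginx                # placeholder image
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
```

With operator: Exists and no value, the toleration matches the node-role.kubernetes.io/control-plane taint regardless of the taint's value.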

Removing the NoSchedule taint solves the problem. Proceed as follows (using the master node as the example; repeat for the other control plane nodes):

[root@Master ~]# kubectl taint node master node-role.kubernetes.io/control-plane:NoSchedule-
node/master untainted

Note the trailing `-` (minus sign) in the command above: it tells kubectl to remove the taint rather than add it.
Check that the taint is gone (node2 below was untainted in the same way):

[root@Master ~]# kubectl describe node node2 | grep Taint
Taints:             <none>
[root@Master ~]# 

The taint can be removed from all three control plane nodes at once with the following loop:

for node in $(kubectl get nodes --selector='node-role.kubernetes.io/control-plane' | awk 'NR>1 {print $1}'); do
  kubectl taint node "$node" node-role.kubernetes.io/control-plane-
done
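The `awk 'NR>1 {print $1}'` filter simply skips the header row of `kubectl get nodes` and keeps the node-name column. Its behavior can be checked without a cluster, using the article's own sample output:

```shell
# Simulated output of:
#   kubectl get nodes --selector='node-role.kubernetes.io/control-plane'
sample_output='NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   20h   v1.27.3
node1    Ready    control-plane   19h   v1.27.3
node2    Ready    control-plane   19h   v1.27.3'

# NR>1 skips the header line; $1 keeps only the NAME column.
names=$(printf '%s\n' "$sample_output" | awk 'NR>1 {print $1}')
echo "$names"   # prints master, node1, node2, one per line
```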

Note: the Kubernetes Control Plane master nodes were given the worker node role above only for testing; doing so is generally not recommended. The control plane is a critical component that manages the entire cluster, including scheduling workloads and monitoring the state of nodes and containers. Letting a control plane node also act as a worker node has downsides: it consumes the control plane's resources, increases latency, and can make the system unstable. Finally, it is also a security risk.

References

Kubernetes GitHub issue #2219: k8s-ha how to join worker node to master node, when master and worker node are in one machine
Stack Overflow: "Node had taints that the pod didn't tolerate" error when deploying to Kubernetes cluster
Stack Overflow: Should I run "join" or "taint" after "kubeadm init"?
Stack Overflow: Master tainted - no pods can be deployed
51CTO: Detailed steps for kubectl taint nodes --all node-role.kubernetes.io/master-
Huawei Cloud: Managing Node Taints
Scheduling workloads on control plane nodes in Kubernetes - a bad idea?
CSDN: Implementing high availability (HA) for the Kubernetes Control Plane with keepalived and haproxy
