Reference: follow steps 1 through 8 of the linked guide; the operations are the same, but note that in step 1 a static IP must be set for the new node (see the sketch after the table below).
Node   | Hostname | IP              | OS
Master | k8s-1    | 192.168.203.222 | CentOS 7
Node1  | k8s-2    | 192.168.203.223 | CentOS 7
Node2  | k8s-3    | 192.168.203.224 | CentOS 7
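On CentOS 7 the static IP for the new node (Node2 above) is usually set in the interface's ifcfg file. A minimal sketch, assuming the interface is named ens33 and using placeholder gateway/DNS values for this subnet:
// Edit the interface config on the new node (interface name ens33 is an assumption)
vi /etc/sysconfig/network-scripts/ifcfg-ens33
// Set these values; the gateway and DNS below are placeholders for this environment
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.203.224
NETMASK=255.255.255.0
GATEWAY=192.168.203.2
DNS1=192.168.203.2
// Restart networking to apply
systemctl restart network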
Step 3: Set the hostname
// Run on Node2:
hostnamectl --static set-hostname k8s-3
// Reboot
reboot
Step 4
// Edit the hosts file on Master, Node1, and Node2
vi /etc/hosts
// Add the lines below, then save with :wq.
192.168.203.222 k8s-1
192.168.203.223 k8s-2
192.168.203.224 k8s-3
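A quick check that the hosts entries took effect (run on any of the three machines):
// Should resolve to 192.168.203.224
ping -c 1 k8s-3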
Step 9
After initialization, configure the kubectl environment (master & node)
Copy the contents of /etc/kubernetes on the master to the same path on each node.
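One way to do the copy is scp from the master. A minimal sketch, assuming root SSH access to the nodes; at the least admin.conf is needed, since the kubectl configuration below reads it:
// Run on the master (create /etc/kubernetes on the node first if it does not exist)
scp /etc/kubernetes/admin.conf root@k8s-2:/etc/kubernetes/
scp /etc/kubernetes/admin.conf root@k8s-3:/etc/kubernetes/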
// For a non-root user
su <non-root-user>
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
// For the root user
export KUBECONFIG=/etc/kubernetes/admin.conf
// Or add it to ~/.bash_profile so it persists
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
// Reload the environment variables
source ~/.bash_profile
// Check the version
kubectl version
Then, on the new node, run the kubeadm join command that kubeadm init generated on the master:
kubeadm join 192.168.203.222:6443 --token ou6zi9.0t44pyljg5w927yt \
    --discovery-token-ca-cert-hash sha256:1402546af91b3e811e95c30fafc67470f5f538c0a33a314961d93efa76f89770
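If the token from kubeadm init has expired (tokens are only valid for 24 hours by default), a fresh join command can be printed on the master:
// Run on the master, then copy the printed kubeadm join command to the node
kubeadm token create --print-join-command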
If the join fails with errors like the following:
kubeadm join 192.168.203.222:6443 --token ou6zi9.0t44pyljg5w927yt --discovery-token-ca-cert-hash sha256:1402546af91b3e811e95c30fafc67470f5f538c0a33a314961d93efa76f89770
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
First run kubeadm reset:
kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0223 19:59:39.367960 6607 reset.go:98] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get node registration: failed to get node name from kubelet config: open /var/lib/kubelet/pki/kubelet-client-current.pem: no such file or directory
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
E0223 19:59:43.217277 6607 cleanupnode.go:100] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
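Note that kubeadm reset does not clear the bridge-nf-call-iptables preflight error shown above; if that one appears, set the kernel parameter separately. A minimal sketch (the file name k8s.conf is arbitrary):
// Persist the setting and reload sysctl
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system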
Then run kubeadm join again and it should succeed.
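Once the join succeeds, the new node should appear on the master (it may stay NotReady until the network plugin pod is running on it):
// Run on the master
kubectl get nodes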