Exploring Cloud-Native Tech, Container Orchestration Engines: Installing Kubernetes 1.21.10 with kubeadm (new edition, for newer kernels)

Installing Kubernetes 1.21.10 with kubeadm

  • Background: we previously installed Kubernetes 1.17.4; this walkthrough upgrades that cluster to 1.21.10

Resetting the Kubernetes 1.17.4 cluster (run on every node)

Uninstall the old Docker version
  • 1: Check the installed Docker version
[root@k8s-master ~]# yum list installed |grep docker
docker-ce.x86_64                     18.06.3.ce-3.el7               @docker-ce-stable
  • 2: Remove Docker
yum -y remove docker-ce.x86_64
Uninstall the old Kubernetes 1.17.4
  • 1: Reset the Kubernetes cluster:
kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
yum clean all
yum remove kube*
  • 2: Remove the Kubernetes yum packages:
yum -y remove kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64

Initializing the virtual machine environment

Check the operating system version on every node
  • Check the OS version (CentOS 7.5 or later is required):
[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
Upgrade the kernel on every node
  • 1: Check the current kernel version on every node (this stock 3.10 kernel is quite old and needs upgrading):
[root@k8s-master ~]# uname -sr
Linux 3.10.0-1160.el7.x86_64
  • 2: Enable the ELRepo repository on CentOS 7:
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
  • 3: List the kernel packages available from the repository:
[root@k8s-master ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
Available Packages
kernel-lt.x86_64                                         5.4.234-1.el7.elrepo                        elrepo-kernel
kernel-lt-devel.x86_64                                   5.4.234-1.el7.elrepo                        elrepo-kernel
kernel-lt-doc.noarch                                     5.4.234-1.el7.elrepo                        elrepo-kernel
kernel-lt-headers.x86_64                                 5.4.234-1.el7.elrepo                        elrepo-kernel
kernel-lt-tools.x86_64                                   5.4.234-1.el7.elrepo                        elrepo-kernel
kernel-lt-tools-libs.x86_64                              5.4.234-1.el7.elrepo                        elrepo-kernel
kernel-lt-tools-libs-devel.x86_64                        5.4.234-1.el7.elrepo                        elrepo-kernel
kernel-ml-devel.x86_64                                   6.2.3-1.el7.elrepo                          elrepo-kernel
kernel-ml-doc.noarch                                     6.2.3-1.el7.elrepo                          elrepo-kernel
kernel-ml-headers.x86_64                                 6.2.3-1.el7.elrepo                          elrepo-kernel
kernel-ml-tools.x86_64                                   6.2.3-1.el7.elrepo                          elrepo-kernel
kernel-ml-tools-libs.x86_64                              6.2.3-1.el7.elrepo                          elrepo-kernel
kernel-ml-tools-libs-devel.x86_64                        6.2.3-1.el7.elrepo                          elrepo-kernel
perf.x86_64                                              5.4.234-1.el7.elrepo                        elrepo-kernel
  • 4: Install the chosen kernel:
    • Replace "<kernel-version>" below with one of the versions from the previous step (e.g. 5.4.234-1.el7.elrepo). The kernel-lt-* packages are the long-term-support line; to install the mainline 6.2.3-1.el7.elrepo kernel, use the kernel-ml-* packages instead.
yum --enablerepo=elrepo-kernel install kernel-lt-devel-<kernel-version> kernel-lt-<kernel-version> -y
  • 5: List all kernels installed on the system
    • Note that Linux (5.4.234-1.el7.elrepo.x86_64) sits at index "1". (Remember this index.)
[root@k8s-master ~]# sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
0 : CentOS Linux (6.2.3-1.el7.elrepo.x86_64) 7 (Core)
1 : CentOS Linux (5.4.234-1.el7.elrepo.x86_64) 7 (Core)
2 : CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)
3 : CentOS Linux (0-rescue-5c9164754f1542e781ab283f70d07da5) 7 (Core)
  • 6: Set the default kernel:
rm -f /etc/default/grub
vim /etc/default/grub

The file contents:

  • Note: set GRUB_DEFAULT= to the index of the kernel you want to boot. From step 5 above, Linux (5.4.234-1.el7.elrepo.x86_64) is at index "1", so set GRUB_DEFAULT=1.
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=<index-of-target-kernel>
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
  • 7: Regenerate the GRUB configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg
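As an aside, the same effect can be had without editing /etc/default/grub by hand. A minimal sketch using grub2-set-default, assuming the file still contains GRUB_DEFAULT=saved (the CentOS 7 default, which the manual edit above replaces with a numeric index):

# Record menu index 1 (the 5.4 kernel from step 5) as the saved default entry
grub2-set-default 1
# Confirm which entry is now saved
grub2-editenv list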
  • 8: Reboot the system:
reboot
  • 9: Check the running kernel again:
[root@k8s-master ~]# uname -r
5.4.234-1.el7.elrepo.x86_64
Disable the firewall on every node
  • Stop and disable the firewalld service
  • Stop and disable the iptables service
systemctl stop firewalld
systemctl disable firewalld

systemctl stop iptables
systemctl disable iptables
Set the hostname on each node
Set the hostname on the k8s master (192.168.184.100)
hostnamectl set-hostname k8s-master
Set the hostname on k8s worker 1 (192.168.184.101)
hostnamectl set-hostname k8s-slave01
Set the hostname on k8s worker 2 (192.168.184.102)
hostnamectl set-hostname k8s-slave02
Configure hostname resolution on each node
  • So that the cluster nodes can call each other by name, configure hostname resolution; in production an internal DNS server is recommended
#1: Open the hosts file
vim /etc/hosts

#2: Append the following entries
192.168.184.100 k8s-master
192.168.184.101 k8s-slave01
192.168.184.102 k8s-slave02
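With the entries in place, a quick loop can confirm that every hostname resolves and responds; the hostnames below are the ones defined above:

for h in k8s-master k8s-slave01 k8s-slave02; do
  ping -c 1 -W 1 "$h" >/dev/null && echo "$h OK" || echo "$h FAILED"
done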
Synchronize time on every node
  • Kubernetes requires the node clocks to be closely in sync, so configure time synchronization on every node
[root@master ~]# yum install ntpdate -y
Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
 * base: mirrors.dgut.edu.cn
 * extras: mirrors.dgut.edu.cn
 * updates: mirrors.aliyun.com
base                                                                                       | 3.6 kB  00:00:00     
extras                                                                                     | 2.9 kB  00:00:00     
updates                                                                                    | 2.9 kB  00:00:00     
(1/4): base/7/x86_64/group_gz                                                              | 153 kB  00:00:00     
(2/4): extras/7/x86_64/primary_db                                                          | 246 kB  00:00:00     
(3/4): base/7/x86_64/primary_db                                                            | 6.1 MB  00:00:05     
(4/4): updates/7/x86_64/primary_db                                                         |  15 MB  00:00:09     
Package ntpdate-4.2.6p5-29.el7.centos.2.x86_64 already installed and latest version
Nothing to do
[root@master ~]#  ntpdate time.windows.com
30 Apr 00:06:26 ntpdate[1896]: adjust time server 20.189.79.72 offset 0.030924 sec
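A single ntpdate run only corrects the clock once. One option for keeping the clocks aligned afterwards is a cron entry that re-syncs periodically; a sketch, where the 30-minute interval is an arbitrary choice rather than part of the original setup:

# Append a crontab entry that re-syncs every 30 minutes
(crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate time.windows.com') | crontab -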
Disable SELinux on every node
Check whether SELinux is enabled
[root@master ~]# getenforce
Enforcing
Permanently disable SELinux on every node (requires a reboot)
sed -i 's/enforcing/disabled/' /etc/selinux/config
Permanently disable the swap partition on every node
sed -ri 's/.*swap.*/#&/' /etc/fstab
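The sed command only comments out the fstab entry, which takes effect on the next reboot. To turn swap off immediately in the current session and confirm it:

swapoff -a    # disable swap right away (the fstab edit covers future boots)
free -m       # the Swap: line should now show 0 total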
Pass bridged IPv4 traffic to the iptables chains on every node
  • Apply the following kernel parameters on every node so that bridged traffic is visible to iptables
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# Load the br_netfilter module
modprobe br_netfilter
# Verify it is loaded
[root@master ~]# lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
# Apply the settings
[root@master ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
* Applying /etc/sysctl.conf ...
Enable ipvs on every node
  • Kubernetes Services support two proxy modes, one based on iptables and one based on ipvs. ipvs performs better than iptables, but its kernel modules must be loaded manually.
  • Install ipset and ipvsadm on all nodes:
yum install ipset ipvsadm -y
  • Run the following script on all nodes:
cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
  • Make the script executable and run it:
chmod +x /etc/sysconfig/modules/ipvs.modules
/bin/bash /etc/sysconfig/modules/ipvs.modules
  • Verify the modules are loaded
[root@k8s-master ~]# lsmod | grep -e ip_vs -e nf_conntrack
nf_conntrack_netlink    45056  0 
nfnetlink              16384  2 nf_conntrack_netlink
nf_conntrack          147456  5 xt_conntrack,nf_nat,nf_conntrack_netlink,xt_MASQUERADE,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs
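The /etc/sysconfig/modules/*.modules hook above is a CentOS convention executed at boot. On systemd-based systems an equivalent option is a modules-load.d drop-in; a minimal sketch:

# systemd-modules-load reads this file at boot, one module name per line
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF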
Reboot all three machines
reboot
Install the required tools, Docker, kubeadm, kubelet and kubectl on every node
  • Install the prerequisite packages with yum:
yum -y install gcc gcc-c++ yum-utils
  • Refresh the yum package index:
yum makecache fast
Install Docker on every node
  • 1: Switch to the Aliyun mirror for the Docker repository
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
  • 2: List the Docker versions available from the mirror
yum list docker-ce --showduplicates | sort -r
  • 3: Install a specific docker-ce version
yum -y install docker-ce-3:20.10.8-3.el7.x86_64 docker-ce-cli-3:20.10.8-3.el7.x86_64 containerd.io
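Because a routine yum update could later replace this pinned Docker build, one optional safeguard (not part of the original walkthrough) is the versionlock plugin:

yum -y install yum-plugin-versionlock
# Freeze the currently installed versions of the Docker packages
yum versionlock add docker-ce docker-ce-cli containerd.io
yum versionlock list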
Configure the Aliyun image accelerator for Docker on every node

How to configure the registry mirror.

  • Preparation:
  • 1: Open the Aliyun Container Registry console: https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors
  • 2: Click "Image Accelerator" under the "Image Tools" menu
  • 3: Copy your personal accelerator address and substitute it for the registry-mirrors value in Step 2 below.

For Docker client versions above 1.10.0, the accelerator is enabled by editing the daemon configuration file /etc/docker/daemon.json

  • Step 1:
mkdir -p /etc/docker
  • Step 2:
cat <<EOF> /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],	
  "registry-mirrors": [
    "https://u01jo9qv.mirror.aliyuncs.com",
    "https://hub-mirror.c.163.com",
    "https://mirror.baidubce.com"
  ],
  "live-restore": true,
  "log-driver":"json-file",
  "log-opts": {"max-size":"500m", "max-file":"3"},
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 5,
  "storage-driver": "overlay2"
}
EOF
  • Step 3:
sudo systemctl daemon-reload
  • Step 4:
sudo systemctl restart docker

The Aliyun image accelerator is now configured.
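To confirm the daemon picked up the new settings, docker info should report the systemd cgroup driver and list the mirrors configured above:

docker info 2>/dev/null | grep 'Cgroup Driver'         # expect: Cgroup Driver: systemd
docker info 2>/dev/null | grep -A 4 'Registry Mirrors' # expect the three mirror URLs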

Enable Docker to start on boot on every node
sudo systemctl enable docker
Add the Aliyun YUM repository for Kubernetes on every node (essential; without it k8s cannot be installed)
  • The upstream Kubernetes package repositories are hosted abroad, so switch to the Aliyun mirror; otherwise the k8s packages will fail to download
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
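A quick way to confirm the mirror actually serves the target release before installing:

yum list kubeadm --showduplicates | grep 1.21.10
# expect a line like: kubeadm.x86_64  1.21.10-0  kubernetes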
Install kubeadm, kubelet and kubectl on every node
  • 1: Install the pinned versions
yum install -y kubelet-1.21.10 kubeadm-1.21.10 kubectl-1.21.10
  • 2: Configure the kubelet cgroup driver: edit /etc/sysconfig/kubelet and add the configuration below
vim /etc/sysconfig/kubelet

The file contents:

KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
  • 3: Enable kubelet at boot. It will not start successfully yet because no config file has been generated; it starts automatically once the cluster is initialized
systemctl enable kubelet
Prepare the cluster images on every node
List the images kubeadm needs
[root@k8s-master ~]# kubeadm config images list
I0430 16:57:23.161932    1653 version.go:252] remote version is much newer: v1.23.6; falling back to: stable-1.18
W0430 16:57:26.117608    1653 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
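The output above fell back to stable-1.18, apparently because it was captured with an older kubeadm binary. To list the images for the exact target release regardless of the installed kubeadm, the version can be pinned explicitly:

kubeadm config images list --kubernetes-version=v1.21.10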
Pull the images on every node (master and worker nodes alike)

Pull the images

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0

Retag the coredns image (kubeadm 1.21 expects it under the coredns/ sub-path):

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
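The seven pulls plus the retag can be collapsed into one small script; a sketch assuming the same Aliyun mirror and image versions listed above:

REPO=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.21.10 kube-controller-manager:v1.21.10 \
           kube-scheduler:v1.21.10 kube-proxy:v1.21.10 \
           pause:3.4.1 etcd:3.4.13-0 coredns:v1.8.0; do
  docker pull "$REPO/$img"
done
# kubeadm 1.21 looks for coredns under a coredns/ sub-path, hence the extra tag
docker tag "$REPO/coredns:v1.8.0" "$REPO/coredns/coredns:v1.8.0"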
Deploy the k8s master node
  • The master node's IP is 192.168.184.100
  • Run the commands below only on the master node
  • Whichever node runs the init command becomes the master
# The default registry k8s.gcr.io is unreachable from mainland China, so point kubeadm at the Aliyun mirror
kubeadm init \
  --apiserver-advertise-address=192.168.184.100 \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version=v1.21.10 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
  • Parameter notes:
    • --apiserver-advertise-address: must be set to the master node's IP address
    • --kubernetes-version: the Kubernetes version to deploy; v1.21.10 here
Problem 1: kubectl get nodes fails (a common trap)

Cause

  • localhost:8080 is the kube-apiserver insecure port. On the master the command succeeds because kubectl actually reads the kubeconfig file; on a node without one, kubectl falls back to the local insecure port

kube-apiserver exposes two ports:

  • localhost:8080, the insecure port (no authentication). kubectl tries 8080 first; if a kubeconfig (~/.kube/config) is present, kubectl uses it to connect to the secure port instead (on the master, port 8080 is not open, so traffic goes through the kubeconfig)

  • <master-ip>:6443, the secure port, which enforces authentication, much like a website that requires a username and password to log in

  • kubeadm-installed clusters disable port 8080 by default

  • So running kubectl get nodes produces the error below:

[root@k8s-master ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
  • Solution: run the commands below

  • Method 1 (strongly recommended: permanent, set once and done):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Method 2 (not recommended: the variable is lost when you reconnect to the server and must be re-exported)
export KUBECONFIG=/etc/kubernetes/admin.conf
  • Test it
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   65s   v1.21.10
Problem 2: the node status is NotReady.
# Check the kubelet logs on this node for the error
journalctl -f -u kubelet
-- Logs begin at Wed 2022-05-04 11:23:44 CST. --
May 04 11:48:14 k8s-master kubelet[2658]: W0504 11:48:14.630055    2658 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 04 11:48:15 k8s-master kubelet[2658]: E0504 11:48:15.119308    2658 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 04 11:48:19 k8s-master kubelet[2658]: W0504 11:48:19.631196    2658 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
  • The cause: kubelet was started with --network-plugin=cni, but no CNI plugin is installed yet

  • Follow the CNI installation steps below; once the plugin is installed:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   18h   v1.17.4
Problem 3: kubeadm init fails
[root@k8s-master ~]# kubeadm init \
  --apiserver-advertise-address=192.168.184.100 \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version=v1.21.10 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
W0501 20:02:43.025579   14399 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.14. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-10259]: Port 10259 is in use
	[ERROR Port-10257]: Port 10257 is in use
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
  • The warning shows that the Docker and Kubernetes versions must be compatible (Docker 20.10.14 is newer than the latest validated 19.03), and the fatal errors show leftovers from a previous init: ports in use, existing manifests, a non-empty /var/lib/etcd. The fix below reinstalls Docker and resets Kubernetes.
Fix: uninstall the Docker packages
  • 1: Check the installed docker packages
[root@aubin ~]#  yum list installed |grep docker
Output:
containerd.io.x86_64                        1.4.4-3.1.el7              @docker-ce-stable
docker-ce-cli.x86_64                        1:20.10.6-3.el7            @docker-ce-stable
docker-scan-plugin.x86_64                   0.7.0-3.el7                @docker-ce-stable
  • 2: Remove docker
yum -y remove containerd.io.x86_64
yum -y remove docker-ce-cli.x86_64 
yum -y remove docker-scan-plugin.x86_64
  • 3: Reinstall Docker as described earlier (do not skip this step)
Uninstall Kubernetes
kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
yum clean all
yum remove kube*
yum -y remove kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64
  • Reinstall k8s (or start over with a fresh virtual machine)

  • On success, kubeadm init prints output ending like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

############# The most important command: use it to join worker nodes ##########
kubeadm join 192.168.184.100:6443 --token 1r8ysf.p7myixjcjvmn39u3 \
    --discovery-token-ca-cert-hash sha256:6f5a2271dd9e764fc33e4ec22e5d3a68d79d57d80bec7a67f99b2190667c7631 
  • Note the join command near the end of the output; running it on a worker machine adds that machine to the cluster.
kubeadm join 192.168.184.100:6443 --token 1r8ysf.p7myixjcjvmn39u3 \
    --discovery-token-ca-cert-hash sha256:6f5a2271dd9e764fc33e4ec22e5d3a68d79d57d80bec7a67f99b2190667c7631 
Use the kubectl tool on the master node
mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • The master node is now deployed.
Deploy the k8s worker (slave) nodes
  • Run the following join command on both workers (192.168.184.101 and 192.168.184.102):
kubeadm join 192.168.184.100:6443 --token 1r8ysf.p7myixjcjvmn39u3 \
    --discovery-token-ca-cert-hash sha256:6f5a2271dd9e764fc33e4ec22e5d3a68d79d57d80bec7a67f99b2190667c7631
  • This join command is generated automatically when the master is initialized; see the master deployment section above.

By default the bootstrap token is valid for 24 hours; once it expires it can no longer be used, and a new one must be created with the commands below.

Two ways to create a token
  • 1: Create a permanent token:
# Generate a token that never expires
[root@k8s-master ~]# kubeadm token create --ttl 0 --print-join-command
kubeadm join 192.168.184.100:6443 --token 5vd98v.w8sl3oes4bta0bap --discovery-token-ca-cert-hash sha256:9c2f7674d7532439b5020fa88897747e5e3473d6bc3cdb4448768ed9efc29142
######## This join command never expires
kubeadm join 192.168.184.100:6443 --token 5vd98v.w8sl3oes4bta0bap --discovery-token-ca-cert-hash sha256:9c2f7674d7532439b5020fa88897747e5e3473d6bc3cdb4448768ed9efc29142
  • 2: Create a normal, expiring token:
kubeadm token create --print-join-command
  • 3: Run the generated join command on each slave node (see the token-inspection sketch after step 4).
[root@k8s-slave01 ~]# kubeadm join 192.168.184.100:6443 --token 2wmd2p.39xczgxkoxbjmqpx     --discovery-token-ca-cert-hash sha256:b6a3200a4bf327ce10d229921f21c2d890f0bf48da6a3e37de5de36c48ed9210 

W0504 13:38:09.198532    1849 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "k8s-slave01" could not be reached
	[WARNING Hostname]: hostname "k8s-slave01": lookup k8s-slave01 on 223.5.5.5:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
  • 4: Check the status of all nodes from the master
[root@k8s-master ~]# kubectl get pods,svc,nodes
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   5m32s

NAME               STATUS     ROLES                  AGE     VERSION
node/k8s-master    NotReady   control-plane,master   5m33s   v1.21.10
node/k8s-slave01   NotReady   <none>                 22s     v1.21.10
node/k8s-slave02   NotReady   <none>                 20s     v1.21.10
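If the join command has been lost, both halves can be recovered on the master: kubeadm token list shows the current tokens, and the discovery hash can be recomputed from the cluster CA certificate (this openssl pipeline is the standard documented recipe):

kubeadm token list
# Recompute the --discovery-token-ca-cert-hash value
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'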
Detaching a slave node from a cluster (before re-joining)
  • If a slave node has already joined one master and should join another, it must leave first; running join directly fails:
[root@k8s-slave02 ~]# kubeadm join 192.168.184.100:6443 --token f99cdo.1a4h5qv90t6ktq0l     --discovery-token-ca-cert-hash sha256:9570221e545e3b7c592ad9460d9c3d393e6123101b7c26e7b1437bcd5c20f5be 
W0515 14:44:59.685275   13110 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks

error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[ERROR Port-10250]: Port 10250 is in use
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
  • Reset the cluster state (run on the slave node). This reset is mandatory; without it the join command below will not succeed
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl restart kubelet
systemctl restart docker
  • Run the join command again; this time it succeeds:
[root@k8s-slave01 ~]# kubeadm join 192.168.184.100:6443 --token 8vd0qj.doqff1kdurlzgzzd     --discovery-token-ca-cert-hash sha256:9570221e545e3b7c592ad9460d9c3d393e6123101b7c26e7b1437bcd5c20f5be 
W0515 15:00:39.368526   16957 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Deploy the calico CNI network plugin (master node only)
Check the node status with kubectl on the master (every node is still NotReady)
[root@k8s-master ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master    NotReady   control-plane,master   6m41s   v1.21.10
k8s-slave01   NotReady   <none>                 90s     v1.21.10
k8s-slave02   NotReady   <none>                 88s     v1.21.10
Install the calico network plugin (run only on the master node)
  • Kubernetes supports several network plugins (flannel, calico, canal, etc.); any one will do. This guide uses calico

Check the calico/Kubernetes version compatibility matrix before choosing a calico release

  • 1: Apply the remote manifest:
kubectl apply -f https://projectcalico.docs.tigera.io/v3.19/manifests/calico.yaml
  • 2: Watch the CNI plugin rollout (wait until every pod is Running):
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7cc8dd57d9-fqpv7   1/1     Running   0          3m52s
calico-node-dvgbx                          1/1     Running   0          3m52s
calico-node-k2qs7                          1/1     Running   0          3m52s
calico-node-t7jbx                          1/1     Running   0          3m52s
coredns-6f6b8cc4f6-68jmg                   1/1     Running   0          13m
coredns-6f6b8cc4f6-dq5hd                   1/1     Running   0          13m
etcd-k8s-master                            1/1     Running   0          13m
kube-apiserver-k8s-master                  1/1     Running   0          13m
kube-controller-manager-k8s-master         1/1     Running   0          13m
kube-proxy-26h7x                           1/1     Running   0          8m47s
kube-proxy-h28jz                           1/1     Running   0          8m45s
kube-proxy-w9hqv                           1/1     Running   0          13m
kube-scheduler-k8s-master                  1/1     Running   0          13m
  • 3: Verify that every node is now Ready: (OK)
[root@k8s-master ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
k8s-master    Ready    control-plane,master   15m   v1.21.10
k8s-slave01   Ready    <none>                 10m   v1.21.10
k8s-slave02   Ready    <none>                 10m   v1.21.10
Set kube-proxy to ipvs mode (master node only)
  • On the master node, edit the kube-proxy ConfigMap:
kubectl edit cm kube-proxy -n kube-system
  • Find mode: "":
# before
mode: "" #  ✖
---------------------------------
# after
mode: "ipvs" # set the mode value to ipvs   ✔
  • Save and quit with :wq.

  • Delete the kube-proxy pods so the cluster automatically recreates them with the new mode:

kubectl delete pod -l k8s-app=kube-proxy -n kube-system
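Once the replacement kube-proxy pods are Running, it is worth confirming that ipvs actually took effect; either of the following checks works:

# The kube-proxy logs should mention the ipvs proxier
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs
# Or inspect the kernel virtual-server table directly (uses the ipvsadm package installed earlier)
ipvsadm -Ln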
kubectl shell auto-completion
  • 1: Install the bash-completion package
yum -y install bash-completion
  • 2: Register kubectl completion for the current user and system-wide:
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
source /usr/share/bash-completion/bash_completion
Reset the cluster (run the commands below if something above went wrong)
  • Note: only run this when the steps above have failed (and remember: run it on the slave nodes, not on the master)
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl restart kubelet
systemctl restart docker

❤️💛🧡 That's it for this chapter, see you in the next one! ❤️💛🧡
