Installing Kubernetes 1.28.2 with kubeadm on CentOS 7.9
master 192.168.2.191
worker1 192.168.2.10
I. Configure the master hostname and perform initial server setup
Set the VM's static IP address and gateway:
vi /etc/sysconfig/network-scripts/ifcfg-ens33
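A minimal static-IP sketch for ifcfg-ens33 matching the master address above; the GATEWAY and DNS1 values are assumptions for this example and must be adjusted to your own network:

```ini
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.2.191
PREFIX=24
GATEWAY=192.168.2.1
DNS1=223.5.5.5
```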
service network restart
service NetworkManager status
service NetworkManager stop   # NetworkManager can conflict with the legacy network service
journalctl -xe                # inspect the logs if the restart fails
service network restart       # the network service fails to start if the VM's IP address is already in use
1. Set the master hostname and /etc/hosts
Set the hostname on all nodes (the other node names can be changed as needed):
hostnamectl set-hostname k8sadmin
cat /etc/hostname
Configure /etc/hosts on all nodes as follows:
cat /etc/hosts
192.168.2.191 k8sadmin
192.168.2.10 k8sadminnode1
Configure the Docker, Kubernetes, and base yum repositories on all nodes:
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install some common tools on all nodes:
yum install wget jq psmisc vim net-tools telnet git -y
Disable the firewall, SELinux, and dnsmasq on all nodes:
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
Disable the swap partition on all nodes:
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
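The sed above comments out every non-commented fstab line that mentions swap. A self-contained sketch of the same pattern (the function name and the sample device paths are mine, for illustration only):

```shell
# Comment out active swap entries in an fstab-style file, leaving other lines intact.
disable_fstab_swap() {
    # $1: path to the fstab file to edit in place
    sed -ri '/^[^#]*swap/s@^@#@' "$1"
}

# On a real node: disable_fstab_swap /etc/fstab
```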
Install ntpdate on all nodes (if your servers already sync time automatically, the time-related steps can be skipped):
yum update
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate
Sync the time on all nodes:
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
date
ntpdate time2.aliyun.com
crontab -e
Add the sync job to crontab:
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
service crond restart
service crond status
Configure resource limits on all nodes:
ulimit -SHn 65535
vim /etc/security/limits.conf
# append the following at the end of the file
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
ssh-keygen -t rsa # generate the key pair on the master
for i in k8sadminnode1;do ssh-copy-id -i /root/.ssh/id_rsa.pub $i;done
2. Kernel upgrade. Download the two 5.4.263 kernel-lt RPM files from the Aliyun mirror, then install the new kernel with the following commands:
yum localinstall -y kernel-lt*
grubby --default-kernel
Change the default kernel boot order on all nodes:
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enabled=1" --update-kernel="$(grubby --default-kernel)"
Check on all nodes that the default kernel is 5.4.263:
grubby --default-kernel
Reboot all nodes, then verify the running kernel:
cat /etc/os-release
uname -a
Enable the kernel parameters required by a Kubernetes cluster; apply on all nodes:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
After configuring the kernel parameters on all nodes, reboot the servers and verify that the settings are still loaded after the reboot:
reboot
II. Installing the Kubernetes components and the container runtime
1. Installing the runtime
Use containerd as the runtime; installing docker-ce 20.10 also pulls in containerd:
yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y
First, configure the kernel modules containerd needs (all nodes):
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Load the modules on all nodes:
modprobe -- overlay
modprobe -- br_netfilter
Configure the kernel parameters containerd needs on all nodes:
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
Generate the containerd configuration file on all nodes:
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
vim /etc/containerd/config.toml
Find containerd.runtimes.runc.options and add SystemdCgroup = true, as shown in Figure 1.1.
On all nodes, change sandbox_image to a Pause image address matching your own environment: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
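The two edits above can also be scripted. A hedged sketch (the function name is mine; the sed patterns assume the stock file produced by `containerd config default`, which contains a `SystemdCgroup = false` line and a quoted `sandbox_image` value):

```shell
# Patch a containerd config.toml: enable the systemd cgroup driver and
# point sandbox_image at the Aliyun pause-image mirror.
patch_containerd_config() {
    # $1: path to the config.toml to edit in place
    sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$1"
    sed -i 's#sandbox_image = "[^"]*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' "$1"
}

# On a real node (as root): patch_containerd_config /etc/containerd/config.toml
```

Inspect the file afterwards with vim to confirm both lines before restarting containerd.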
Start containerd on all nodes and enable it at boot:
systemctl daemon-reload
systemctl enable --now containerd
Configure the runtime endpoint for the crictl client on all nodes:
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
2. Installing the Kubernetes components
yum list kubeadm.x86_64 --showduplicates | sort -r   # list available versions (run after adding the repo below)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install kubeadm-1.28* kubelet-1.28* kubectl-1.28* -y
systemctl daemon-reload
systemctl enable --now kubelet
systemctl status kubelet
3. Cluster initialization
When installing a cluster with kubeadm, one master node initializes the cluster and the other nodes then join it. The cluster can be initialized directly with kubeadm command-line flags or with a configuration file; since the command-line form would require many flags, this example uses a configuration file. When creating the kubeadm configuration file, note the following: first, the host network segment, the podSubnet, and the serviceSubnet must not overlap; second, set kubernetesVersion to match the kubeadm version in your environment (check with kubeadm version); finally, if this is not a highly available cluster, change controlPlaneEndpoint to the master node's IP and port 6443, and change certSANs to the master node's IP as well. Also change criSocket to your own runtime's socket.
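A minimal sketch of the fields described above, using the kubeadm v1beta3 API, the master IP 192.168.2.191, and the containerd socket configured earlier; the subnets are example values and must not overlap your host network:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.191
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.28.2
controlPlaneEndpoint: 192.168.2.191:6443
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.2.191
networking:
  podSubnet: 172.16.0.0/16
  serviceSubnet: 10.96.0.0/16
```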
cd /root/
kubeadm config print init-defaults > kubeadm-config.yaml
vim kubeadm-config.yaml
cp kubeadm-config.yaml kubeadm-config.yaml_0116bak
vim kubeadm-config.yaml
kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
kubeadm config images pull --config /root/new.yaml
kubeadm init --config /root/new.yaml --upload-certs
A successful initialization prints a token that other nodes use to join the cluster, so record it.
If initialization fails, check each configuration item, clean up, and initialize again; the cleanup commands are:
kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube
After successful initialization, configure the KUBECONFIG environment variable on the Master01 node so kubectl can access the Kubernetes cluster:
export KUBECONFIG=/etc/kubernetes/admin.conf
vim /root/.bashrc
source /root/.bashrc
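The .bashrc edit above can be done non-interactively as well. A small sketch (the function name is mine; the exported path comes from the step above):

```shell
# Append the KUBECONFIG export to a shell rc file, only if it is not already there.
persist_kubeconfig() {
    # $1: rc file to modify, e.g. /root/.bashrc
    line='export KUBECONFIG=/etc/kubernetes/admin.conf'
    grep -qxF "$line" "$1" 2>/dev/null || echo "$line" >> "$1"
}

# On the master: persist_kubeconfig /root/.bashrc && source /root/.bashrc
```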
Optional checks on the master: inspect the kubelet configuration and the running containers:
cat /etc/yum.repos.d/kubernetes.repo
cat /etc/sysconfig/kubelet
rpm -qa | grep kube
systemctl status kubelet
crictl ps
crictl ps -a
cat /var/lib/kubelet/kubeadm-flags.env
cat /var/lib/kubelet/config.yaml
Join the worker node to the cluster:
kubeadm join 192.168.2.191:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:61f5a5c39a5cbe2e480d8147a4269eeaf610c7a70f98ad39f65b71fbb43b3fe8
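The token shown above is kubeadm's documentation placeholder; use the values printed by your own kubeadm init. If the token has expired (the default TTL is 24 hours), a fresh join command can be printed on the master:

```shell
kubeadm token create --print-join-command
```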
kubectl get node
III. Install the Calico CNI network plugin
Install the Pod network:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
watch kubectl get pods -n calico-system