containerd and Kubernetes v1.24.0 installation method
Version 1.24 includes two important changes:
1. Dockershim has been removed
2. Beta APIs are now disabled by default
CentOS 7.6
containerd (containerd.io) 1.5.11
yum install -y yum-utils device-mapper-persistent-data lvm2
1. System Initialization
Set the system hostname and mutual resolution in the hosts file
hostnamectl set-hostname k8s-master01
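For the mutual hosts-file resolution, add an entry for every node on each machine, for example (the IPs and node names below are the ones that appear later in this guide; adjust to your environment):
cat >> /etc/hosts <<EOF
192.168.43.105 k8s-master01
192.168.43.106 k8s-node01
EOF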
Install dependency packages
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
Set the firewall to iptables
systemctl stop firewalld && systemctl disable firewalld && yum -y remove firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
Disable swap and SELinux
# Disable swap
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Tune kernel parameters for Kubernetes
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
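Note: the net.bridge.* keys above only exist once the br_netfilter module is loaded; without it, sysctl (and later the kubeadm preflight check, as the error further below shows) reports that /proc/sys/net/bridge/bridge-nf-call-iptables does not exist. Load the module now and persist it across reboots:
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/kubernetes.conf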
Adjust the system time zone
Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
Disable services the system does not need
systemctl stop postfix && systemctl disable postfix
Configure rsyslogd and systemd-journald
mkdir /var/log/journal # directory for persistent log storage
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Maximum disk usage 10G
SystemMaxUse=10G
# Maximum size of a single log file 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
CentOS 7 Kernel Upgrade
# Import the public key of the ELRepo repository
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# Install the ELRepo yum repository
rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
# List the available kernel packages
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
# Install the latest kernel; kernel-lt is the long-term-support branch, kernel-ml is the mainline branch
yum --enablerepo=elrepo-kernel install kernel-lt
# List all available kernels on the system
sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
0 : CentOS Linux (4.4.207-1.el7.elrepo.x86_64) 7 (Core)
1 : CentOS Linux (5.4.8-1.el7.elrepo.x86_64) 7 (Core)
2 : CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)
3 : CentOS Linux (0-rescue-cf877f153e884428892d2a1454f652a6) 7 (Core)
If the command above reports that the file does not exist, run the following to generate the grub configuration file:
grub2-mkconfig -o /boot/grub2/grub.cfg
# Set the new kernel as the grub2 default (0 is the index of the kernel found above)
grub2-set-default 0
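Optionally, confirm that the default entry was recorded:
grub2-editenv list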
Reboot the server
reboot
Remove old kernels
List the installed kernels
rpm -qa | grep kernel
Remove the old kernel RPM packages (adjust the version strings to match your system):
yum remove kernel-3.10.0-1062.9.1.el7.x86_64 kernel-tools-3.10.0-1062.9.1.el7.x86_64 kernel-tools-libs-3.10.0-1062.9.1.el7.x86_64 kernel-3.10.0-957.el7.x86_64
2. Kubeadm Deployment
Prerequisites for enabling IPVS in kube-proxy
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
Installing containerd on CentOS 7
Install the required packages: yum-utils provides yum-config-manager, and the other two are dependencies of the devicemapper storage driver
yum install -y yum-utils device-mapper-persistent-data lvm2
Configure the yum repository (the official Docker CE repo, or the Aliyun mirror if the former is slow or unreachable)
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install containerd.io -y
$ containerd config default > /etc/containerd/config.toml
$ systemctl restart containerd
$ systemctl status containerd
Replace containerd's default sandbox image by editing /etc/containerd/config.toml:
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"
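The edit can also be made non-interactively; this sed sketch assumes the config still contains the stock sandbox_image line generated above:
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"#' /etc/containerd/config.toml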
Restart containerd
$ systemctl daemon-reload
$ systemctl restart containerd
Install the CRI client crictl
Pick a release from https://github.com/kubernetes-sigs/cri-tools/releases/
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.20.0/crictl-v1.20.0-linux-amd64.tar.gz
sudo tar zxvf crictl-v1.20.0-linux-amd64.tar.gz -C /usr/local/bin
vi /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
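Equivalently, write the file non-interactively:
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF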
Verify that it works
crictl pull nginx:alpine
crictl rmi nginx:alpine
crictl images
Download the images
registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.24.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.0
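These can be pre-pulled with crictl, for example with the loop below (a sketch; kubeadm also pulls them itself during init, or beforehand via 'kubeadm config images pull'):
for img in kube-apiserver kube-controller-manager kube-proxy kube-scheduler; do
  crictl pull registry.aliyuncs.com/google_containers/${img}:v1.24.0
done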
Install kubeadm
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Remove old versions, if installed
yum remove kubeadm kubectl kubelet kubernetes-cni cri-tools socat
List all installable versions
yum --showduplicates list kubeadm
By default the latest stable version is installed, currently 1.24.0
yum install kubeadm
To install a specific version, use the following command
yum -y install kubeadm-1.24.0 kubectl-1.24.0 kubelet-1.24.0
Enable kubelet at boot
systemctl enable kubelet.service
Modify the kubelet configuration
$ vi /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Add the following line
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
Check the required ports and set iptables rules
Master node
Protocol  Direction  Port Range   Purpose                  Used By
TCP       Inbound    6443*        Kubernetes API server    All components
TCP       Inbound    2379-2380    etcd server client API   kube-apiserver, etcd
TCP       Inbound    10250        Kubelet API              kubelet itself, control-plane components
TCP       Inbound    10251        kube-scheduler           kube-scheduler itself
TCP       Inbound    10252        kube-controller-manager  kube-controller-manager itself
vi /etc/sysconfig/iptables # add the following lines
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6443 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2379:2380 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10250:10253 -j ACCEPT
Or add the firewall rules from the command line
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 6443 -j ACCEPT
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 2379:2380 -j ACCEPT
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 10250:10253 -j ACCEPT
Worker nodes
Protocol  Direction  Port Range   Purpose              Used By
TCP       Inbound    10250        Kubelet API          kubelet itself, control-plane components
TCP       Inbound    30000-32767  NodePort services**  All components
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10250 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 30000:32767 -j ACCEPT
** Default port range for NodePort services.
Any port number marked with * can be overridden, so you must make sure any custom port you choose is open.
Although the control-plane node already includes the etcd ports, you can also use an external etcd cluster or specify custom ports.
The pod network plugin you use (see below) may also require certain ports to be open. Since each plugin differs, consult its own documentation for port requirements.
Server Role  Ports
etcd         2379, 2380
Master       6443, 8472
Node         8472
LB           8443
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 6443 -j ACCEPT
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 8472 -j ACCEPT
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 2379 -j ACCEPT
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 2380 -j ACCEPT
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 8443 -j ACCEPT
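Rules added with iptables -I live only in memory; persist them the same way as earlier so they survive a reboot:
service iptables save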
systemctl daemon-reload && systemctl restart containerd && systemctl restart kubelet
Initialize the master node
kubeadm config print init-defaults > kubeadm-config.yaml
# Modify the following parts of kubeadm-config.yaml
localAPIEndpoint:
advertiseAddress: 192.168.43.105 # change this to the host's IP
# The default image registry k8s.gcr.io is unreachable from mainland China; point to the Aliyun mirror instead
imageRepository: registry.aliyuncs.com/google_containers
# Kubernetes version
kubernetesVersion: v1.24.0
nodeRegistration:
criSocket: /run/containerd/containerd.sock # change this line; prefixing unix:// avoids the deprecation warning shown below
# Under networking, add the line podSubnet: "10.244.0.0/16", the subnet flannel uses by default
networking:
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
# Change kube-proxy's default proxy mode to ipvs by appending a separate YAML document
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
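Assembled, the edited kubeadm-config.yaml looks roughly like this (a sketch based on the fragments above; kubeadm 1.24 generates the kubeadm.k8s.io/v1beta3 API, and token and timeout defaults are omitted for brevity):
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.43.105
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.24.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs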
Initialize the cluster: --upload-certs (formerly --experimental-upload-certs) uploads certificates so other control-plane nodes can be issued them automatically, and tee kubeadm-init.log writes all output to a file
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
[root@k8s-master ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
W0506 11:28:45.495726 1637 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
Fixing the error:
[root@k8s-master ~]# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
-bash: /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
[root@k8s-master ~]# sudo modprobe br_netfilter
[root@k8s-master ~]# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
Run the initialization again
[root@k8s-master ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
W0506 11:34:15.365646 1898 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.43.105]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.43.105 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.43.105 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.002580 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
3d56d52f60e0907b70fe0a81147e8ada51b90cdbf5cdccb912103ef4ad623e04
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.43.105:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:c41e46c32cef717ac69a09b935473087eb1e9f814ea405ea64cca8040fbd8a42
Join additional master nodes and the remaining worker nodes
Run the join commands from the installation log. First, set up kubectl access:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Deploy the pod network
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Verify the installation
kubectl get pods -o wide -n kube-system
If coredns shows READY 0/1, the problem may be the DNS server configuration:
vi /etc/resolv.conf
nameserver 114.114.114.114
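After changing /etc/resolv.conf, delete the coredns pods so they are recreated with the new setting (they are selected by the k8s-app=kube-dns label):
kubectl -n kube-system delete pod -l k8s-app=kube-dns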
Then check again
[root@k8s-master ~]# kubectl get pods -o wide -n kube-system
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-77484fbbb5-j9dhf 1/1 Running 7 (10m ago) 7h46m 10.244.85.192 k8s-node01 <none> <none>
kube-system calico-node-757p7 1/1 Running 0 7h46m 192.168.43.105 k8s-master <none> <none>
kube-system calico-node-nswh2 1/1 Running 0 7h46m 192.168.43.106 k8s-node01 <none> <none>
kube-system coredns-74586cf9b6-7psfp 1/1 Running 0 8h 10.244.235.194 k8s-master <none> <none>
kube-system coredns-74586cf9b6-q24nc 1/1 Running 0 8h 10.244.235.193 k8s-master <none> <none>
kube-system etcd-k8s-master 1/1 Running 1 (47m ago) 8h 192.168.43.105 k8s-master <none> <none>
kube-system kube-apiserver-k8s-master 1/1 Running 1 (47m ago) 8h 192.168.43.105 k8s-master <none> <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 1 (47m ago) 8h 192.168.43.105 k8s-master <none> <none>
kube-system kube-proxy-k8lbv 1/1 Running 1 (46m ago) 7h47m 192.168.43.106 k8s-node01 <none> <none>
kube-system kube-proxy-twf98 1/1 Running 1 (47m ago) 8h 192.168.43.105 k8s-master <none> <none>
kube-system kube-scheduler-k8s-master 1/1 Running 1 (47m ago) 8h 192.168.43.105 k8s-master <none> <none>
Join the worker node
kubeadm join 192.168.28.101:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:c33e1d061b0a029df6bfa7182345b7c01b62b9e769a44db62d33ce690f40ac1e
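Back on the master, verify that the node registered and eventually becomes Ready:
kubectl get nodes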
Images in use
[root@k8s-master ~]# crictl images
IMAGE TAG IMAGE ID SIZE
docker.io/calico/cni v3.22.1 2a8ef6985a3e5 80.5MB
docker.io/calico/kube-controllers v3.22.1 c0c6672a66a59 54.9MB
docker.io/calico/node v3.22.1 7a71aca7b60fc 69.6MB
docker.io/calico/pod2daemon-flexvol v3.22.1 17300d20daf93 8.46MB
docker.io/library/nginx alpine d7c7c5df4c3a3 10.2MB
registry.aliyuncs.com/google_containers/coredns v1.8.6 a4ca41631cc7a 13.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns v1.8.6 a4ca41631cc7a 13.6MB
registry.aliyuncs.com/google_containers/etcd 3.5.1-0 25f8c7f3da61c 98.9MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.5.1-0 25f8c7f3da61c 98.9MB
registry.aliyuncs.com/google_containers/etcd 3.5.3-0 aebe758cef4cd 102MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.23.5 3fc1d62d65872 32.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.23.5 3fc1d62d65872 32.6MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.24.0 529072250ccc6 33.8MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.23.5 b0c9e5e4dbb14 30.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.23.5 b0c9e5e4dbb14 30.2MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.24.0 88784fb4ac2f6 31MB
registry.aliyuncs.com/google_containers/kube-proxy v1.23.5 3c53fa8541f95 39.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.23.5 3c53fa8541f95 39.3MB
registry.aliyuncs.com/google_containers/kube-proxy v1.24.0 77b49675beae1 39.5MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.23.5 884d49d6d8c9f 15.1MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.23.5 884d49d6d8c9f 15.1MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.24.0 e3ed7dee73e93 15.5MB
registry.aliyuncs.com/google_containers/pause 3.5 ed210e3e4a5ba 301kB
registry.aliyuncs.com/google_containers/pause 3.6 6270bb605e12e 302kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.6 6270bb605e12e 302kB
registry.aliyuncs.com/google_containers/pause 3.7 221177c6082a8 311kB