Preparation

Resources

- Virtual machine cluster: 3 masters, 2 worker nodes, and 1 load-balancer VIP
- Compute resources per machine: x86-64 processor, 2 CPUs, 2 GB RAM, 20 GB of free disk space
- Operating system: CentOS 7.x or later
- Kubernetes version: 1.18.14
- Container runtime: Docker

Configure the Operating System
3 master nodes and 2 worker nodes, 5 instances in total:

hostname           ip               role
k8s-master-01      192.168.121.128  master
k8s-master-02      192.168.121.129  master
k8s-master-03      192.168.121.130  master
k8s-node-01        192.168.121.131  worker
k8s-node-02        192.168.121.132  worker
load-balancer VIP  192.168.121.100  -
Configure the /etc/hosts file
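A minimal sketch of the corresponding entries, using the hostnames and IPs from the table above (the same content goes on every node):

```shell
# Append the cluster's name-to-IP mappings to /etc/hosts on every node.
cat >> /etc/hosts <<'EOF'
192.168.121.128 k8s-master-01
192.168.121.129 k8s-master-02
192.168.121.130 k8s-master-03
192.168.121.131 k8s-node-01
192.168.121.132 k8s-node-02
EOF
```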
Enable passwordless SSH login between all nodes.
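One common way to do this, as a sketch (run as root; the hostnames assume the /etc/hosts entries above, and ssh-copy-id prompts for each node's password once):

```shell
# Generate a key pair once (no passphrase), then push the public key to every node.
test -f "$HOME/.ssh/id_rsa" || ssh-keygen -t rsa -N '' -f "$HOME/.ssh/id_rsa" -q
for h in k8s-master-01 k8s-master-02 k8s-master-03 k8s-node-01 k8s-node-02; do
  ssh-copy-id "root@$h"   # asks for the node's root password once
done
```

Repeat from each node that needs outbound SSH to the others.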
Disable the swap partition so that the kubelet works properly.
$ swapoff -a
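Note that swapoff -a only disables swap until the next reboot; to make the change permanent, the swap entry in /etc/fstab should also be commented out. A sketch, demonstrated on a temporary copy so it can be tried safely (on a real node, point FSTAB at /etc/fstab; the device names are illustrative):

```shell
# Comment out every swap entry so swap stays off across reboots.
FSTAB=$(mktemp)   # stand-in for /etc/fstab in this demo
printf '%s\n' \
  '/dev/mapper/centos-root /    xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > "$FSTAB"
sed -i '/\bswap\b/s/^[^#]/#&/' "$FSTAB"
grep swap "$FSTAB"   # the swap line should now start with '#'
```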
Set SELinux to permissive mode so that containers can access the host filesystem.

$ setenforce 0
$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Set net.bridge.bridge-nf-call-iptables to 1 so that traffic handled by kube-proxy is routed through iptables. This requires the br_netfilter kernel module to be loaded, and the new settings must be applied with sysctl --system.

$ modprobe br_netfilter
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
Install dependency packages:
$ yum install ebtables ethtool ipvsadm -y
Disable the firewall on every node (stop alone does not survive a reboot):

$ systemctl stop firewalld
$ systemctl disable firewalld
Configure the Load Balancer

Install Keepalived
Install keepalived on all master hosts:
$ yum install keepalived -y
Edit the configuration files

Edit the configuration file on k8s-master-01:
#vi /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL_01    # must be unique per host
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"   # health-check script
    interval 8
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER              # primary node
    interface ens33
    mcast_src_ip 192.168.121.128   # this host's IP
    virtual_router_id 51      # must match on all masters
    priority 150              # higher on the primary than on the backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.121.100/24    # virtual IP
    }
    track_script {
        chk_apiserver
    }
}
Edit the configuration file on k8s-master-02 (k8s-master-03 is similar):

#vi /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL_02
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 8
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP              # backup node
    interface ens33
    mcast_src_ip 192.168.121.129
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.121.100/24
    }
    track_script {
        chk_apiserver
    }
}
Create the health-check script on each master host:
#vi /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 5)
do
    check_code=$(pgrep kube-apiserver)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Make the script executable so keepalived can run it:

$ chmod +x /etc/keepalived/check_apiserver.sh
Install HAProxy

Install haproxy on all master hosts:
$ yum install haproxy -y
Edit the configuration file (/etc/haproxy/haproxy.cfg by default); it is identical on all masters:
global
    maxconn 2000
    ulimit-n 16384
    log 127.0.0.1 local0 err
    stats timeout 30s

defaults
    log global
    mode http
    option httplog
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    timeout http-request 15s
    timeout http-keep-alive 15s

frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor

listen stats
    bind *:8006
    mode http
    stats enable
    stats hide-version
    stats uri /stats
    stats refresh 30s
    stats realm Haproxy\ Statistics
    stats auth admin:admin

frontend k8s-master
    bind *:16443              # the port exposed on the virtual IP
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-master-01 192.168.121.128:6443 check
    server k8s-master-02 192.168.121.129:6443 check
    server k8s-master-03 192.168.121.130:6443 check
Start the Load Balancer

Start and enable the services:
$ systemctl start haproxy
$ systemctl enable haproxy
$ systemctl status haproxy
$ systemctl start keepalived
$ systemctl enable keepalived
$ systemctl status keepalived
Verify the configuration

On the host whose state is MASTER, check that the virtual IP (192.168.121.100) appears on the interface:
$ ip a
Install the Container Runtime

Install dependency packages:
$ yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker repository:
$ yum-config-manager --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
Install Docker CE:
$ yum update -y && yum install -y \
    containerd.io-1.2.13 \
    docker-ce-19.03.11 \
    docker-ce-cli-19.03.11
Create the /etc/docker directory:
$ mkdir -p /etc/docker
Configure the Docker daemon:
$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
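daemon.json must be strict JSON; a stray comma will stop Docker from starting. A quick sanity check before restarting, sketched here on a temporary copy (on a real node, point CONF at /etc/docker/daemon.json instead):

```shell
# Validate daemon.json syntax with Python's json.tool before restarting Docker.
CONF=$(mktemp)   # stand-in for /etc/docker/daemon.json in this demo
cat > "$CONF" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool "$CONF" > /dev/null && echo "daemon.json: valid JSON"
```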
Create /etc/systemd/system/docker.service.d:
$ mkdir -p /etc/systemd/system/docker.service.d
Restart Docker:
$ systemctl daemon-reload
$ systemctl restart docker
$ systemctl enable docker
$ systemctl status docker
Install kubeadm, kubelet, and kubectl

Add the Kubernetes yum repository (Aliyun mirror):
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install:
$ yum install -y kubelet-1.18.14 kubeadm-1.18.14 kubectl-1.18.14 --disableexcludes=kubernetes
Confirm the versions:
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.14", GitCommit:"89182bdd065fbcaffefec691908a739d161efc03", GitTreeState:"clean", BuildDate:"2020-12-18T12:08:45Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.14", GitCommit:"89182bdd065fbcaffefec691908a739d161efc03", GitTreeState:"clean", BuildDate:"2020-12-18T12:11:25Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
$ kubelet --version
Kubernetes v1.18.14
Configure the container runtime and the kubelet to use systemd as the cgroup driver.

Edit the KUBELET_EXTRA_ARGS parameter in /etc/sysconfig/kubelet:

#vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
Start the kubelet. (It will restart every few seconds in a crash loop until kubeadm init or kubeadm join gives it work to do; that is expected at this stage.)
$ systemctl daemon-reload
$ systemctl restart kubelet
$ systemctl enable --now kubelet
$ systemctl status kubelet
Deploy the Master Control Plane

Initialize the first master node (k8s-master-01):
$ kubeadm init \
    --control-plane-endpoint "192.168.121.100:16443" \
    --kubernetes-version "1.18.14" \
    --pod-network-cidr "10.0.0.0/8" \
    --service-cidr "172.16.0.0/16" \
    --token "abcdef.0123456789abcdef" \
    --token-ttl "0" \
    --image-repository registry.aliyuncs.com/google_containers \
    --upload-certs
Parameter notes:

--control-plane-endpoint: a fixed virtual IP address (and port) for the control plane. The value must match the load-balancer VIP; if the load balancer runs on the same hosts as the masters, specify a port other than 6443 (16443 here).
--kubernetes-version: the Kubernetes version to install.
--pod-network-cidr: the IP address range for the pod network.
--service-cidr: the IP address range for service VIPs.
--token: the token used to establish bidirectional trust between the control plane and joining nodes.
--token-ttl: the token's lifetime; "0" means it never expires.
--image-repository: the registry from which control-plane images are pulled.
--upload-certs: upload the control-plane certificates to the kubeadm-certs Secret.
Initialization output:

When initialization succeeds, the tail of the kubeadm init output contains three command snippets:

1. commands for configuring kubectl;
2. a kubeadm join command with --control-plane for adding further control-plane nodes;
3. a kubeadm join command for adding worker nodes.
Configure kubectl:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Add redundant control-plane nodes and configure kubectl

On k8s-master-02 and k8s-master-03, run the control-plane join command (command 2) from the kubeadm init output:
$ kubeadm join 192.168.121.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8c371e3470ed423a139c7a5879ee4820878c3766232a699da8925392b416390d \
    --control-plane --certificate-key 91fb332bdece28872755bfd4a7b5a37d4e159f8d228e050b53b0b15d1bad9fc4
Configure kubectl:
$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
Check the master nodes:
$ kubectl get nodes
NAME            STATUS     ROLES    AGE     VERSION
k8s-master-01   NotReady   master   10m     v1.18.14
k8s-master-02   NotReady   master   4m47s   v1.18.14
k8s-master-03   NotReady   master   4m47s   v1.18.14
Add Worker Nodes

On k8s-node-01 and k8s-node-02, run the worker join command (command 3) from the kubeadm init output:
$ kubeadm join 192.168.121.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8c371e3470ed423a139c7a5879ee4820878c3766232a699da8925392b416390d
If the output reports "This node has joined the cluster", the node was added successfully.

After all nodes are added, run kubectl get nodes on a master to check them:
$ kubectl get nodes
NAME            STATUS     ROLES    AGE     VERSION
k8s-master-01   NotReady   master   19m     v1.18.14
k8s-master-02   NotReady   master   14m     v1.18.14
k8s-master-03   NotReady   master   14m     v1.18.14
k8s-node-01     NotReady   <none>   6m30s   v1.18.14
k8s-node-02     NotReady   <none>   6m14s   v1.18.14
Because no network plugin is installed yet, every node's STATUS should be NotReady.
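The join commands above embed a --discovery-token-ca-cert-hash value printed by kubeadm init. If that output is lost, the hash can be recomputed from the cluster CA certificate (/etc/kubernetes/pki/ca.crt on a master): it is the SHA-256 digest of the certificate's DER-encoded public key. A sketch, demonstrated on a throwaway self-signed certificate so it can be tried anywhere:

```shell
# Recompute the discovery-token CA cert hash from a CA certificate.
CERT=$(mktemp)   # on a real master, use /etc/kubernetes/pki/ca.crt instead
KEY=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$KEY" -out "$CERT" \
    -subj "/CN=kubernetes" -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -in "$CERT" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```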
Install a CNI Network Plugin

Install the Calico CNI plugin:
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
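Note: the Calico manifest ships with its IP pool (the CALICO_IPV4POOL_CIDR environment variable) commented out, defaulting to 192.168.0.0/16, which does not match the --pod-network-cidr of 10.0.0.0/8 used at kubeadm init. To align them, download calico.yaml first, edit the pool, and apply the edited file. A sketch of the edit, demonstrated on a minimal excerpt of the manifest so it can be tried safely:

```shell
# Uncomment CALICO_IPV4POOL_CIDR and point it at the pod network CIDR.
YAML=$(mktemp)   # stand-in for the downloaded calico.yaml
cat > "$YAML" <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
sed -i \
    -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
    -e 's|#   value: "192.168.0.0/16"|  value: "10.0.0.0/8"|' "$YAML"
cat "$YAML"
```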
Watch the Calico pods until they are all Running:
$ watch kubectl get pod --all-namespaces
Check the cluster nodes again; every STATUS should now be Ready.
$ kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
k8s-master-01   Ready    master   19m     v1.18.14
k8s-master-02   Ready    master   14m     v1.18.14
k8s-master-03   Ready    master   14m     v1.18.14
k8s-node-01     Ready    <none>   6m30s   v1.18.14
k8s-node-02     Ready    <none>   6m14s   v1.18.14
About Re-initialization

To re-initialize a node, run kubeadm reset. Output like the following indicates success:
$ kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0107 15:46:22.859748   24159 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
Then remove the remaining configuration as the output suggests. For example, to reset iptables or IPVS:
$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
$ ipvsadm -C