master1: 192.168.0.122
master2: 192.168.0.86
master3: 192.168.0.144
node1: 192.168.0.204
node2: 192.168.0.184
1. Set the hostname (repeat on each node with its own name) and add the host mappings
hostnamectl set-hostname k8s-master1
echo "192.168.0.122 k8s-master1
192.168.0.86 k8s-master2
192.168.0.144 k8s-master3
192.168.0.204 k8s-node1
192.168.0.184 k8s-node2" >> /etc/hosts
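If you prefer to keep the node list in one place, the same entries can be generated from a map. This is a hypothetical helper, not part of the original steps; the names and IPs are the ones used throughout this guide:

```shell
#!/bin/bash
# Hypothetical helper: keep the node list in one associative array and
# print the /etc/hosts lines from it (IPs are the ones used in this guide).
declare -A NODES=(
  [k8s-master1]=192.168.0.122
  [k8s-master2]=192.168.0.86
  [k8s-master3]=192.168.0.144
  [k8s-node1]=192.168.0.204
  [k8s-node2]=192.168.0.184
)
hosts_block() {
  local name
  for name in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2; do
    printf '%s %s\n' "${NODES[$name]}" "$name"
  done
}
hosts_block                     # inspect the output first
# hosts_block >> /etc/hosts     # then append on every node (as root)
```

This keeps the list identical on all five machines and avoids retyping the block per node.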
2. Load the IPVS kernel modules
Run the following on every Kubernetes node (on kernels 4.19 and later, replace nf_conntrack_ipv4 with nf_conntrack):
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
Make the script executable, run it, and confirm the modules are loaded:
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
yum install ipset ipvsadm -y
3. Adjust kernel parameters
The net.bridge.* keys below only exist once the br_netfilter module is loaded, so load it first:
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system
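Note that vm.swappiness=0 only discourages swapping; kubeadm's preflight check still fails while swap is active, and the guide's steps never turn it off. A sketch of disabling it on every node (the disable_swap helper name is an assumption, not from the original):

```shell
#!/bin/bash
# vm.swappiness=0 does not turn swap off; kubeadm refuses to initialize a
# node with active swap. Hypothetical helper: disable swap now and comment
# out the swap entries in the given fstab so it stays off after reboot.
disable_swap() {
  swapoff -a 2>/dev/null || true   # ignore errors when nothing is swapped
  sed -ri 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' "$1"
}
# disable_swap /etc/fstab
```

The sed pattern only comments lines whose filesystem type field is swap, leaving the other mounts untouched.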
4. Use keepalived plus nginx as a highly available load balancer (run on all master nodes)
yum install -y nginx keepalived nginx-all-modules.noarch
Edit the nginx configuration file nginx.conf (identical on the master and backup nodes). Append the stream block at the end of the file, outside the http block:
cat >> /etc/nginx/nginx.conf <<'EOF'
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s.log main;
    upstream k8s-apiserver {
        server 192.168.0.122:6443;
        server 192.168.0.86:6443;
        server 192.168.0.144:6443;
    }
    server {
        listen 16443;
        proxy_pass k8s-apiserver;
    }
}
EOF
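One caveat: appending with >> is not idempotent, so running the snippet a second time leaves two stream blocks and nginx will fail its config test. A hypothetical guard that counts the block before appending again:

```shell
#!/bin/bash
# Appending with >> is not idempotent: a second run duplicates the whole
# stream{} block and nginx's config test will fail. Hypothetical guard:
# only append when the upstream is not yet defined.
stream_blocks() { grep -c 'upstream k8s-apiserver' "$1"; }
# if [ "$(stream_blocks /etc/nginx/nginx.conf)" -eq 0 ]; then
#     ... append the stream block ...
# fi
```

After appending, nginx -t validates the merged configuration before you restart the service.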
Edit the keepalived configuration file keepalived.conf.
Delete everything from vrrp_instance VI_1 onward (including the vrrp_instance VI_1 block itself), then append:
cat >> /etc/keepalived/keepalived.conf <<'EOF'
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    unicast_src_ip 192.168.0.122
    unicast_peer {
        192.168.0.86
        192.168.0.144
    }
    virtual_ipaddress {
        192.168.0.100
    }
    track_script {
        chk_nginx
    }
}
EOF
Copy the master node's keepalived.conf to the backup nodes, changing a few fields on each:
state: change to BACKUP
priority: any value lower than 100
unicast_src_ip / unicast_peer: swap in that node's own IP and its two peers' IPs
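Those field changes can also be scripted. A hypothetical sketch that derives k8s-master2's config from master1's (priority 90 is an arbitrary value below 100; the helper name is an assumption):

```shell
#!/bin/bash
# Hypothetical helper: turn master1's keepalived.conf into master2's by
# swapping state, priority, and the unicast addresses. The peer-list swap
# runs first so the freshly written unicast_src_ip line is not rewritten.
make_backup_conf() {   # usage: make_backup_conf master1.conf > master2.conf
  sed -e 's/^\([[:space:]]*\)192\.168\.0\.86$/\1192.168.0.122/' \
      -e 's/state MASTER/state BACKUP/' \
      -e 's/priority 100/priority 90/' \
      -e 's/unicast_src_ip 192\.168\.0\.122/unicast_src_ip 192.168.0.86/' \
      "$1"
}
```

Deriving the backup configs mechanically avoids the classic mistake of forgetting one of the per-node fields.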
Create the health-check script nginx_check.sh under /etc/keepalived:
cat > /etc/keepalived/nginx_check.sh <<'EOF'
#!/bin/bash
A=$(netstat -ntpl | awk '{print $4}' | grep 16443)
if [ -z "$A" ]; then
    systemctl restart nginx        # try to restart nginx
    sleep 2                        # wait 2 seconds
    if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
        systemctl stop keepalived  # restart failed: stop keepalived so the VIP fails over to a backup node
    fi
fi
EOF
chmod +x /etc/keepalived/nginx_check.sh
Enable and restart nginx and keepalived:
systemctl enable nginx keepalived
systemctl restart nginx keepalived
5. Install docker-ce (run on all master and worker nodes)
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
Pick the version you need; this guide installs 20.10.12:
yum install docker-ce-20.10.12 -y
systemctl enable docker
systemctl restart docker
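One common follow-up not in the original steps: kubeadm's preflight warns when Docker runs with the cgroupfs cgroup driver instead of systemd. A minimal /etc/docker/daemon.json that switches the driver (restart docker after writing it); treat this as a typical example rather than a required configuration:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
```

Keeping the kubelet and Docker on the same cgroup driver avoids resource-accounting mismatches under load.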
6. Install the Kubernetes packages (run on all nodes; worker nodes may skip kubectl)
Configure the yum repository. The upstream repo is slow to reach from China, so this uses the Aliyun mirror:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all
yum makecache
yum install -y kubelet-1.19.16 kubeadm-1.19.16 kubectl-1.19.16
systemctl enable kubelet
systemctl restart kubelet
7. Initialize the first master node (on k8s-master1)
kubeadm config print init-defaults > kubeadm-config.yaml
Then edit kubeadm-config.yaml; the configuration used here is:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.122   # this node's own IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.0.100:16443"   # keepalived VIP : nginx LB port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.19.16
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true   # GA since 1.11; mode: ipvs alone is sufficient on 1.19
mode: ipvs
Initialize the master node with kubeadm init.
Use the command below: it uploads the control-plane certificates automatically (--upload-certs), raises the log verbosity (--v=6), and saves the output to kubeadm-init.log:
kubeadm init --config=kubeadm-config.yaml --upload-certs --v=6 | tee kubeadm-init.log
A few important settings explained:
controlPlaneEndpoint: the entry point of the keepalived + nginx load balancer built above
imageRepository: the official registry is unreachable from mainland China, so this points at Aliyun's mirror
kubernetesVersion: pins the k8s version; without it the latest release is pulled
On success, the init output prints the commands for adding control-plane and worker nodes:
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 192.168.0.100:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3d5d6a6afc3a67592b7076371d94da2e45d8d4073a57624ff37219cfbf68464b \
--control-plane --certificate-key 523f5f6c312b0a63a91bb93c96516536044cda9d57f7f7343dd7c1f674a83618
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.100:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3d5d6a6afc3a67592b7076371d94da2e45d8d4073a57624ff37219cfbf68464b
Configure kubectl access:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Deploy the network add-on (flannel here). Note the raw URL: the github.com/.../blob/... page returns HTML, not YAML:
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
Once flannel is up, kubectl get nodes shows the node as Ready.
8. Join the remaining nodes to the cluster
Join additional masters:
kubeadm join 192.168.0.100:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3d5d6a6afc3a67592b7076371d94da2e45d8d4073a57624ff37219cfbf68464b \
--control-plane --certificate-key 523f5f6c312b0a63a91bb93c96516536044cda9d57f7f7343dd7c1f674a83618
Join worker nodes:
kubeadm join 192.168.0.100:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3d5d6a6afc3a67592b7076371d94da2e45d8d4073a57624ff37219cfbf68464b