Deploying a Highly Available Kubernetes Cluster with kubeadm
Server information and lab steps
Deploy master01
Join node01 to the cluster
Scale out by adding master02 to the cluster
Take master01 offline
Scale out by adding master03 to the cluster
master01: 192.168.73.138
master02: 192.168.73.139
master03: 192.168.73.141
node01: 192.168.73.140
Virtual IP (VIP): 192.168.73.100 [configured on keepalived]
Environment initialization [all nodes]
## Disable the firewall and SELinux
* systemctl stop firewalld && systemctl disable firewalld
* sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config && setenforce 0
## Disable the swap partition
* swapoff -a [disables swap temporarily, until the next reboot]
* sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
## Set hostnames and add entries to /etc/hosts
* hostnamectl set-hostname master01 [run on master01; likewise set master02, master03, and node01 on their own machines]
* cat >> /etc/hosts << EOF
192.168.73.138 master01
192.168.73.139 master02
192.168.73.141 master03
192.168.73.140 node01
EOF
## Kernel tuning: pass bridged IPv4 traffic to the iptables chains
* cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
EOF
* sysctl --system [applies the settings; sysctl -p /etc/sysctl.d/k8s.conf also works]
## Enable the IPVS kernel modules
* modprobe -- ip_vs
* modprobe -- ip_vs_rr
* modprobe -- ip_vs_wrr
* modprobe -- ip_vs_sh
* modprobe -- nf_conntrack_ipv4
* lsmod | grep -e ip_vs -e nf_conntrack_ipv4 [verify]
- nf_conntrack_ipv4 15053 4
- nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
- ip_vs_sh 12688 0
- ip_vs_wrr 12697 0
- ip_vs_rr 12600 4
- ip_vs 141432 10 ip_vs_rr,ip_vs_sh,ip_vs_wrr
- nf_conntrack 133053 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
- libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
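The modprobe commands above do not survive a reboot. A common way to persist them on CentOS 7 is a boot-time module script; this is a sketch assuming the classic /etc/sysconfig/modules mechanism, with the same module list as above:

```shell
# Persist the IPVS modules across reboots (CentOS 7 convention).
# Writes a boot-time script that reloads every module used above.
mkdir -p /etc/sysconfig/modules
cat > /etc/sysconfig/modules/ipvs.modules << 'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
```

On kernels 4.19 and later, nf_conntrack_ipv4 is replaced by nf_conntrack.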
## Set the system time zone and sync with a time server
* timedatectl set-timezone Asia/Shanghai [time zone assumed; adjust as needed]
* yum install -y ntpdate
* ntpdate time.windows.com
## Reboot the machines
* reboot
Begin installation
* wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
* yum -y install docker-ce-18.06.1.ce-3.el7
* systemctl enable docker && systemctl start docker
* Configure a Docker registry mirror and set the cgroup driver to the one Kubernetes expects
* cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
* systemctl daemon-reload && systemctl restart docker [restart Docker so daemon.json takes effect]
* Check that the Docker cgroup driver matches the cluster's (Kubernetes expects systemd)
* docker info | grep Cgroup
* Configure the Kubernetes yum repository
* [kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
* yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
* yum -y install ipvsadm ipset sysstat conntrack libseccomp [IPVS userspace tools]
* systemctl enable kubelet [enable only; do not start it — kubeadm starts it during init/join]
- Install haproxy and keepalived on the master nodes [I only used keepalived]
* yum -y install haproxy keepalived
* systemctl enable haproxy && systemctl enable keepalived
* systemctl start haproxy && systemctl start keepalived
* keepalived config template: ./keepalived.conf [on each node, mainly change the role, priority, and interface name...]
* haproxy config template: ./haproxy.cfg [identical on all master nodes]
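The referenced ./keepalived.conf template is external to this document. As a minimal sketch of its MASTER-role instance — assuming interface name ens33 and using the VIP from the server list (state, priority, and interface must be adjusted per node):

```conf
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
}

vrrp_instance VI_1 {
    state MASTER              # BACKUP on the other masters
    interface ens33           # assumed interface name; adjust per node
    virtual_router_id 51
    priority 100              # lower on the BACKUP nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-vip
    }
    virtual_ipaddress {
        192.168.73.100
    }
    track_script {
        check_haproxy
    }
}
```

The track_script block ties the VIP to the haproxy health check below, so the VIP fails over when haproxy cannot be kept running.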
* cat /etc/keepalived/check_haproxy.sh
#!/bin/sh
# If haproxy is down, try to restart it; if the restart fails,
# kill haproxy and alert, so keepalived fails the VIP over.
A=$(ps -C haproxy --no-header | wc -l)
if [ "$A" -eq 0 ]
then
    systemctl start haproxy
    sleep 2
    if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]
    then
        killall -9 haproxy
        echo "HAPROXY down" | mail -s "haproxy" root   # recipient "root" assumed; the original omitted it
        sleep 3600
    fi
fi
* chmod +x check_haproxy.sh
- Deploy master01 [adjust the apiserver settings as needed]
* Fetch the default configuration file
- kubeadm config print init-defaults > kubeadm-config.yaml
* Edit the init configuration
- init config template: ./kubeadm-config.yaml
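The ./kubeadm-config.yaml template is likewise external. For an HA setup, the key change over the printed defaults is pointing controlPlaneEndpoint at the VIP. A sketch, assuming haproxy listens on 16443, the Aliyun image mirror, and flannel's default pod CIDR:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
controlPlaneEndpoint: "192.168.73.100:16443"   # VIP + haproxy port (assumed)
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: "10.244.0.0/16"                   # flannel's default network
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs                                     # uses the IPVS modules loaded earlier
```

Compare against the output of `kubeadm config print init-defaults` before use.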
* Initialization
- kubeadm config images pull --config kubeadm-config.yaml [pre-pull the required images]
- kubeadm init --config kubeadm-config.yaml [initialize the cluster]
* The output should include both the node-join and the master-join commands [if either is missing, initialization went wrong]
- If initialization fails, reset and retry
* kubeadm reset
* Then follow the hints in the init output [adjust paths as needed]:
* mkdir -p $HOME/.kube
* sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
* sudo chown $(id -u):$(id -g) $HOME/.kube/config
- wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml [use this flannel yaml]
- wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml [fallback]
- flannel template: ./kube-flannel.yaml
- kubectl apply -f kube-flannel.yml
Adding nodes
Add a node
- Install docker-ce [same version as on the masters]
* yum -y install docker-ce-18.06.1.ce-3.el7
- Add the Kubernetes yum repository [same as above]
* yum -y install kubeadm-1.15.0 kubectl-1.15.0 kubelet-1.15.0
* systemctl enable kubelet
- Generate a new token on the master
* kubeadm token create
* kubeadm token list
* Get the sha256 value: openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
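The hash pipeline can be sanity-checked without touching the cluster CA by running it against a throwaway certificate (the /tmp paths and CN are illustrative only):

```shell
# Generate a throwaway CA certificate, then run the same pipeline
# used above to derive the discovery-token-ca-cert-hash.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
HASH=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${HASH}"   # 64 hex characters, as expected by kubeadm join
```

On a real master, point the pipeline at /etc/kubernetes/pki/ca.crt as shown above.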
- Join the node to the cluster [join via the VIP and the haproxy port]
* kubeadm join 192.168.73.100:16443 --token <token> [tokens are valid for 24h] \
    --discovery-token-ca-cert-hash sha256:<sha256 value> [obtainable on the command line, see above]
- Wait for the new pods on the master [the node is added successfully once all pods are Running]
* Query: kubectl get pod -n kube-system
* kube-proxy-*
* kube-flannel-ds-amd64-*
- Note: a possible error [deleting a node and re-adding it triggers this...]
- Error: some configuration files "already exists"
- Fix: run kubeadm reset on the node, then join again
Add master03 [192.168.73.141] [run these steps on master03]
- Environment initialization: disable the firewall, etc. [required on every node]
- Install docker-ce, kubeadm, kubectl, and kubelet [same versions as on the existing masters and node]
- Install keepalived and haproxy [remember the health-check script on master nodes, with +x permission]
- Create the directories
* mkdir -p /etc/kubernetes/pki/etcd
- Copy the certificates from an existing master to this machine [either master01 or master02 works]
* cat add_master.sh
scp master01:/etc/kubernetes/pki/ca.* /etc/kubernetes/pki/
scp master01:/etc/kubernetes/pki/sa.* /etc/kubernetes/pki/
scp master01:/etc/kubernetes/pki/front-proxy-ca.* /etc/kubernetes/pki/
scp master01:/etc/kubernetes/pki/etcd/ca.* /etc/kubernetes/pki/etcd/
scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
- Copy admin.conf from a master to this machine
* scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
- kubeadm join 192.168.73.100:16443 --token <token> [valid 24h; token and sha256 are obtained the same way as for nodes] \
    --discovery-token-ca-cert-hash sha256:<sha256 value> [obtainable on the command line] --control-plane
- Set the environment variable
* echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
* source ~/.bash_profile
- Copy etcdctl to this machine
* scp master01:/usr/local/bin/etcdctl /usr/local/bin/etcdctl
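With etcdctl in place, the stacked etcd cluster can be health-checked from any master. A sketch, assuming the default kubeadm certificate paths and the master IPs from the server list (this must run on a cluster node):

```shell
# Check the health of every etcd member (stacked etcd on the masters).
# Certificate paths are the kubeadm defaults.
ETCDCTL_API=3 etcdctl \
    --endpoints=https://192.168.73.138:2379,https://192.168.73.139:2379,https://192.168.73.141:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/peer.crt \
    --key=/etc/kubernetes/pki/etcd/peer.key \
    endpoint health
```

Each healthy member should report "is healthy" with its round-trip time.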