A brief overview of what HAProxy + keepalived do
keepalived provides a virtual IP (VIP): the master nodes share one VIP, and when the node holding it goes down, keepalived elects one of the two remaining nodes and the VIP floats over to it.
kubeadm init is pointed at this VIP, so as long as the VIP stays reachable the control plane is highly available. HAProxy's job is to load-balance the apiservers (with configurable weights); nginx can be used instead.
1. Master node layout
Role | IP | Components |
---|---|---|
master1 | 192.168.100.99 | HAProxy + keepalived + kubeadm |
master2 | 192.168.100.100 | HAProxy + keepalived + kubeadm |
master3 | 192.168.100.101 | HAProxy + keepalived + kubeadm |
2. Steps on all three machines
2.1 Disable the firewall, SELinux and swap; sync time; update yum
# Disable the CentOS firewall
systemctl disable firewalld
systemctl stop firewalld
# Kernel network parameters: IP forwarding, bridged traffic through iptables, larger ARP cache
cat >> /etc/sysctl.conf<<EOF
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.neigh.default.gc_thresh1=4096
net.ipv4.neigh.default.gc_thresh2=6144
net.ipv4.neigh.default.gc_thresh3=8192
EOF
sysctl -p
# Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent (after reboot)
setenforce 0 # temporary
# Disable swap:
swapoff -a # temporary
# Disable permanently
echo vm.swappiness=0 >> /etc/sysctl.conf
vi /etc/fstab # delete or comment out the swap line, e.g. "/mnt/swap swap swap defaults 0 0"
swapoff -a && swapon -a
sysctl -p # apply without rebooting
# Verify swap is off
free -m
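The `free -m` check can also be scripted; a small sketch that reads /proc/swaps directly (no extra tools assumed):

```shell
# Count active swap devices; /proc/swaps has one header line, then one line per device.
ACTIVE=$(awk 'NR>1' /proc/swaps | wc -l)
if [ "$ACTIVE" -eq 0 ]; then
  echo "swap is off"
else
  echo "swap still active: $ACTIVE device(s) - re-check /etc/fstab and run swapoff -a"
fi
```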
## Update yum
yum -y update
# Pass bridged IPv4 traffic to iptables chains (prevents traffic loss):
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply
sysctl --system
# Time sync:
yum install ntpdate -y
ntpdate time.windows.com
# Add hosts entries on the masters; skip this if the node and master hostnames already resolve to these IPs (to change a hostname: hostnamectl set-hostname k8s-master):
cat >> /etc/hosts << EOF
192.168.100.99 k8s-master1
192.168.100.100 k8s-master2
192.168.100.101 k8s-master3
EOF
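A quick way to confirm the entries took effect on each machine (a sketch using `getent`, which consults /etc/hosts):

```shell
# Check that each master hostname from the table resolves locally.
RESOLVED=0
for h in k8s-master1 k8s-master2 k8s-master3; do
  if getent hosts "$h" >/dev/null 2>&1; then
    echo "$h resolves"
    RESOLVED=$((RESOLVED+1))
  else
    echo "$h does NOT resolve - check /etc/hosts"
  fi
done
echo "$RESOLVED of 3 names resolve"
```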
2.2 Install Docker
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install -y docker-ce-19.03.9-3.el7
systemctl enable docker && systemctl start docker
docker --version
# Docker's default cgroup driver does not meet kubernetes' requirements, so switch it to systemd:
cat > /etc/docker/daemon.json << EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
systemctl status docker
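To confirm the override before relying on it, you can grep daemon.json (a sketch; with Docker running, `docker info | grep -i cgroup` gives the authoritative answer):

```shell
# -s silences the error if daemon.json does not exist yet.
if grep -qs 'native.cgroupdriver=systemd' /etc/docker/daemon.json; then
  MSG="cgroup driver is set to systemd"
else
  MSG="cgroup driver NOT set to systemd - kubelet will complain later"
fi
echo "$MSG"
```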
2.3 Add the Aliyun YUM repository
# Point yum at the Aliyun mirror of the Kubernetes package repository.
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.4 Install kubeadm, kubelet and kubectl
yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
# Enable kubelet at boot
systemctl enable kubelet
2.6 Install keepalived and haproxy (two machines are enough; they load-balance across all three apiservers)
yum -y install epel-release
yum -y install keepalived.x86_64
yum -y install haproxy.x86_64
2.7 Configure haproxy
cat > /etc/haproxy/haproxy.cfg <<-'EOF'
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /var/run/haproxy-admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
nbproc 1
defaults
log global
timeout connect 5000
timeout client 10m
timeout server 10m
listen admin_stats
bind 0.0.0.0:10080
mode http
log 127.0.0.1 local0 err
stats refresh 30s
stats uri /status
stats realm welcome login\ Haproxy
stats auth admin:123456 # stats UI at http://<host IP>:10080/status, credentials admin:123456
stats hide-version
stats admin if TRUE
listen kube-master
bind 0.0.0.0:8443
mode tcp
option tcplog
balance source
server api1 192.168.100.99:6443 check port 6443 check inter 2000 fall 2 rise 2 weight 1
server api2 192.168.100.100:6443 check port 6443 check inter 2000 fall 2 rise 2 weight 1
server api3 192.168.100.101:6443 check port 6443 check inter 2000 fall 2 rise 2 weight 1
EOF
Start it: systemctl start haproxy.service && systemctl enable haproxy.service
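Before (re)starting, haproxy can validate the file itself with its `-c` check mode; a guarded sketch:

```shell
# haproxy -c parses the config and exits non-zero on errors, without starting the daemon.
if command -v haproxy >/dev/null 2>&1; then
  if haproxy -c -f /etc/haproxy/haproxy.cfg; then
    RESULT="haproxy.cfg is valid"
  else
    RESULT="haproxy.cfg has errors - fix them before starting"
  fi
else
  RESULT="haproxy is not installed on this machine"
fi
echo "$RESULT"
```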
2.8 Configure the keepalived virtual IP
2.8.1 master1 as the cluster MASTER
cat > /etc/keepalived/keepalived.conf <<-'EOF'
! Configuration File for keepalived
global_defs {
router_id master01 # change this id on each node
}
vrrp_script check_haproxy {
script "killall -0 haproxy"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI-kube-master {
# remember to adjust the IPs below for each node
unicast_src_ip 192.168.100.99
unicast_peer {
192.168.100.101
192.168.100.100
}
state MASTER # this node is the MASTER
virtual_router_id 51
# Priority: give each server a different value, e.g. 100, 90, 80; the highest-priority node holds the VIP
priority 100
advert_int 1
interface ens33 # the real NIC the virtual IP is attached to
authentication {
auth_type PASS
auth_pass 111
}
virtual_ipaddress {
192.168.100.16 # the externally visible virtual IP; it must be in the same subnet as the node IPs
}
track_script {
check_haproxy
}
}
EOF
Start it: systemctl enable keepalived.service && systemctl restart keepalived.service
2.8.2 master2 and master3 are BACKUPs; the config is the same apart from the fields flagged below (only master2 is shown)
cat > /etc/keepalived/keepalived.conf <<-'EOF'
! Configuration File for keepalived
global_defs {
router_id master02 # change this id on each node
}
vrrp_script check_haproxy {
script "killall -0 haproxy"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI-kube-master {
# remember to adjust the IPs below for each node
unicast_src_ip 192.168.100.100
unicast_peer {
192.168.100.101
192.168.100.99
}
state BACKUP # this node is a BACKUP
virtual_router_id 51
# Priority: give each server a different value, e.g. 100, 90, 80; the highest-priority node holds the VIP
priority 90
advert_int 1
interface ens33 # the real NIC the virtual IP is attached to
authentication {
auth_type PASS
auth_pass 111
}
virtual_ipaddress {
192.168.100.16 # the externally visible virtual IP; it must be in the same subnet as the node IPs
}
track_script {
check_haproxy
}
}
EOF
Then start it: systemctl enable keepalived.service && systemctl restart keepalived.service
Once everything is deployed, ping the virtual_ipaddress (here 192.168.100.16) from another node or a machine on the same LAN; if it answers, the virtual IP is working.
ip a shows which node currently holds the virtual IP; when the MASTER keepalived dies, the IP automatically fails over to one of the remaining BACKUP nodes, which you can test.
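The failover test can be scripted: run this watch loop on a BACKUP node, then `systemctl stop keepalived` on the MASTER, and the VIP should appear within a few advert intervals (a sketch; the VIP below is the virtual_ipaddress from the configs above):

```shell
VIP=192.168.100.16   # the virtual_ipaddress from keepalived.conf
SEEN=0
for i in 1 2 3; do
  # `ip -4 addr show` lists this node's addresses; the VIP appears only on the holder.
  if ip -4 addr show 2>/dev/null | grep -q "$VIP"; then
    echo "check $i: VIP $VIP is on this node"
    SEEN=$((SEEN+1))
  else
    echo "check $i: VIP $VIP is not on this node"
  fi
  sleep 1
done
```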
3. kubeadm init
# adjust the endpoint below to your own virtual IP and haproxy port
kubeadm init \
--control-plane-endpoint=192.168.100.16:8443 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.17.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--upload-certs
# Reset after a failed install
kubeadm reset
# List the images the install needs
kubeadm config images list
# If the images really cannot be pulled, run the commands below; if they still fail, pull and retag them manually
kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#docker.io/mirrorgooglecontainers#g' |sh -x
docker images |grep mirrorgooglecontainers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#docker.io/mirrorgooglecontainers#k8s.gcr.io#2' |sh -x
docker images |grep mirrorgooglecontainers |awk '{print "docker rmi ", $1":"$2}' |sh -x
docker pull coredns/coredns:1.6.5
docker tag coredns/coredns:1.6.5 k8s.gcr.io/coredns:1.6.5
docker rmi coredns/coredns:1.6.5
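To see what the pull/retag pipeline above actually generates, here is a dry run of the same sed rewrite on a hard-coded two-image sample (no docker or network needed):

```shell
# Same sed expressions as above: prefix each image with "docker pull" and
# swap the k8s.gcr.io registry for the docker.io/mirrorgooglecontainers mirror.
CMDS=$(printf '%s\n' \
  k8s.gcr.io/kube-apiserver:v1.17.0 \
  k8s.gcr.io/kube-controller-manager:v1.17.0 \
  | sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#docker.io/mirrorgooglecontainers#g')
echo "$CMDS"
# → docker pull docker.io/mirrorgooglecontainers/kube-apiserver:v1.17.0
# → docker pull docker.io/mirrorgooglecontainers/kube-controller-manager:v1.17.0
```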
Remember to install a network plugin (kube-flannel):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Change the image addresses in this file to lizhenliang/flannel:v0.11.0-amd64, then run:
kubectl apply -f kube-flannel.yml
After init, check its output (the original screenshot is too small to reproduce; the output contains two kubeadm join commands):
1. One join command adds worker nodes (node setup is not covered further here)
2. The other adds masters (it differs by the extra --control-plane --certificate-key flags)
Once the install finishes, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
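With the kubeconfig in place, a guarded sanity check (a sketch; on a healthy master, `kubectl get nodes` lists the control-plane nodes):

```shell
# Degrade gracefully when kubectl is not on this machine.
if command -v kubectl >/dev/null 2>&1; then
  STATUS=$(kubectl get nodes 2>&1) || true
else
  STATUS="kubectl not found in PATH"
fi
echo "$STATUS"
```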
4. Adding masters
Run the join command obtained above on master2 and master3 to add them:
kubeadm join 192.168.100.16:8443 --token ddumea.8hje4uidmi8dwduc --discovery-token-ca-cert-hash sha256:9761042298abfde6308e9a443fd2b3f71ee0119f4d9cf5877bfd16fbb84608e8 --control-plane --certificate-key 1c92ea433c008cab36e2e6bd9133b6161494c690f4ecdf8d391a0e087ee46539
5. Check the cluster
After I killed the 101 node, the cluster kept working.