I. k8s High-Availability Cluster (March 12 lecture)
Common cluster topologies:
1. Stacked etcd topology
2. External etcd topology
3. External etcd topology (load balancer = LVS + keepalived)
4. External etcd topology (load balancer placed on the nodes)
The example below uses the third layout: an external etcd topology with load balancer = LVS + keepalived.
(1) Build the HA platform: create two new virtual machines, server5 and server6, as the HA hosts.
Step 1: install haproxy
# Install haproxy on both server5 and server6
yum install -y haproxy
Edit the haproxy configuration file on server5/6.
Note: the frontend listens on port 6443 because the Kubernetes API server is exposed on port 6443; using the same port keeps the client configuration simple.
A stats page, protected by a username and password, is served on port 8000.
vim /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
#    option                  dontlognull
    option http-server-close
#    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen stats
    bind *:8000
    stats uri /status
    stats auth admin:westos
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend k8s *:6443
    mode tcp
#    acl url_static       path_beg       -i /static /images /javascript /stylesheets
#    acl url_static       path_end       -i .jpg .gif .png .css .js

#    use_backend static          if url_static
    default_backend             apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend apiserver
    mode tcp
    balance     roundrobin
    server  api1 172.25.254.2:6443 check
    server  api2 172.25.254.3:6443 check
    server  api3 172.25.254.4:6443 check
Restart the service on server5/6:
systemctl restart haproxy.service
Test in a browser:
The two hosts are independent of each other; each one can monitor the state of the control-plane hosts in the backend of the k8s cluster.
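Besides the browser, the stats page can also be scraped as CSV: haproxy serves machine-readable stats when ";csv" is appended to the stats URI, here /status;csv. A minimal sketch under that assumption — a helper that reads the CSV on stdin and prints each real backend server's health state (the curl URL in the comment is illustrative, matching the config above):

```shell
# Sketch: summarize backend health from the haproxy stats CSV.
# On an HA host you would feed it live data, e.g.:
#   curl -s -u admin:westos 'http://localhost:8000/status;csv' | check_backends
check_backends() {
    # Reads haproxy stats CSV on stdin; prints "proxy/server state" for
    # every real server (the FRONTEND/BACKEND summary rows are skipped).
    awk -F, '
        NR == 1 {                 # header line: "# pxname,svname,...,status,..."
            sub(/^# /, "")        # drop the leading "# " and re-split fields
            for (i = 1; i <= NF; i++) if ($i == "status") col = i
            next
        }
        $2 != "FRONTEND" && $2 != "BACKEND" { print $1 "/" $2 " " $col }
    '
}
```

Against the configuration above, the output would list api1, api2, and api3, each followed by UP or DOWN.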
Once the test passes, stop the service (pacemaker will take over managing haproxy in the next step).
Step 2: add high availability — configure the VIP
The default repository does not include the high-availability packages, so first add an HA repository on server5/6:
[root@server5 ~]# cat /etc/yum.repos.d/dvd.repo
[dvd]
name=rhel7.6
baseurl=http://172.25.254.50/rhel7.6
gpgcheck=0
[HighAvailability]
name=rhel7.6 HighAvailability
baseurl=http://172.25.254.50/rhel7.6/addons/HighAvailability
gpgcheck=0
Install and configure the cluster software.
# Install the cluster packages on server5/6
yum install -y pacemaker pcs psmisc policycoreutils-python
# Enable and start the pcsd service on server5/6
systemctl enable --now pcsd.service
# Set a password for the hacluster user on server5/6; it is used below to authenticate the cluster nodes
echo westos | passwd --stdin hacluster
# Authenticate the HA nodes from server5
[root@server5 ~]# pcs cluster auth server5 server6
Username: hacluster
Password:
server5: Authorized
server6: Authorized
# On server5, join the two nodes into the cluster "mycluster"
[root@server5 ~]# pcs cluster setup --name mycluster server5 server6
Destroying cluster on nodes: server5, server6...
server5: Stopping Cluster (pacemaker)...
server6: Stopping Cluster (pacemaker)...
# Start all nodes in the cluster
pcs cluster start --all
# Enable all cluster nodes to start at boot
pcs cluster enable --all
# Disable stonith-enabled to avoid verification errors
[root@server5 ~]# pcs property set stonith-enabled=false
[root@server5 ~]# crm_verify -LV
Note: if <crm_verify -LV> reports errors, the cluster will not start properly. STONITH is pacemaker's node-fencing mechanism; since no fencing device is configured here, stonith-enabled is disabled first so that the verification errors do not prevent the cluster from starting.
# On server5, create the VIP resource with IP 172.25.254.200
pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.254.200 op monitor interval=30s
# Check the resource status
[root@server5 ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: server6 (version 1.1.19-8.el7-c3c624ea3d) - partition with quorum
Last updated: Sun Mar 13 17:59:03 2022
Last change: Sun Mar 13 17:58:57 2022 by root via cibadmin on server5
# Create the haproxy resource, managed through its systemd unit
pcs resource create haproxy systemd:haproxy op monitor interval=60s
# Check pcs status
pcs status
# Check whether the haproxy service is enabled at boot
systemctl is-enabled haproxy.service
# List all systemd resource agents
pcs resource agents systemd
pcs resource describe systemd:haproxy
By default, the newly created haproxy resource may be balanced onto server6 while the VIP sits on server5, which breaks the setup. The fix is to put haproxy and vip into one group so they always run on the same node.
# Create a group named hagroup containing vip and haproxy
pcs resource group add hagroup vip haproxy
Resources in a group are colocated on one node and started in the listed order, so the VIP comes up before haproxy.
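The colocation can be checked from the text that "pcs status" prints. A hedged sketch, assuming the resource-line format shown in the status output above (resource name, agent, "Started <node>"): a helper that reads "pcs status" on stdin and prints the shared node, failing if vip and haproxy run on different nodes.

```shell
# Sketch: verify that vip and haproxy landed on the same node.
# Expected input lines (from the pcs status resource section) look like:
#   vip      (ocf::heartbeat:IPaddr2):  Started server5
#   haproxy  (systemd:haproxy):         Started server5
same_node() {
    awk '
        $1 == "vip"     { vip = $NF }   # last field is the node name
        $1 == "haproxy" { hap = $NF }
        END {
            if (vip != "" && vip == hap) { print vip; exit 0 }
            exit 1                       # different nodes, or not found
        }
    '
}
```

Once hagroup exists, `pcs status | same_node` should print a single node name.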
Test in a browser:
Failure simulation: if server5 goes down, the resources automatically fail over to server6, and users see no change in the browser.
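The "no visible change" claim can be checked from a client while the failover happens. A small sketch, not from the course material: a retry wrapper that keeps probing until the VIP answers again (the curl probe in the comment is an assumed client command, using the VIP created earlier).

```shell
# Sketch: retry a probe command while the VIP fails over.
# During the test a client might run, for example:
#   probe_with_retry 10 curl -sk https://172.25.254.200:6443/healthz
probe_with_retry() {
    attempts=$1; shift           # $1 = max attempts, rest = probe command
    i=1
    while [ "$i" -le "$attempts" ]; do
        if "$@" > /dev/null 2>&1; then
            echo "probe succeeded on attempt $i"
            return 0
        fi
        i=$((i + 1))
        sleep 1                  # brief pause while the resources move
    done
    echo "probe failed after $attempts attempts" >&2
    return 1
}
```

A clean failover shows the probe succeeding again within a few attempts, matching what the browser user experiences.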
Step 3: create the k8s cluster and join the worker nodes
Note: when switching the network plugin, first delete the files under </etc/cni/net.d/> so that the previous plugin's configuration does not interfere.
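That cleanup can be scripted a little more defensively. A sketch (the backup-directory naming is my own choice, not from the course): move the old CNI configuration aside instead of deleting it outright, so a rollback stays possible.

```shell
# Sketch: clear the old network plugin's config under /etc/cni/net.d,
# keeping a timestamped backup of whatever was there.
clean_cni_conf() {
    dir=${1:-/etc/cni/net.d}          # directory is a parameter for dry runs
    backup="${dir}.bak.$(date +%s)"
    mkdir -p "$backup"
    for f in "$dir"/*; do
        [ -e "$f" ] && mv "$f" "$backup"/
    done
    echo "moved old CNI config to $backup"
}
```

Run it on each node before installing the new network plugin; the old files stay recoverable under the .bak directory.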