Setting up a highly available Kubernetes cluster with kubeadm (tested and working)
kubeadm is a tool published by the Kubernetes community for quickly deploying a Kubernetes cluster.
It can stand up a cluster with just two commands:
# Create a master node
$ kubeadm init
# Join a node to an existing cluster
$ kubeadm join <master node IP and port>
Installation requirements
Before starting, the machines used for the Kubernetes cluster must satisfy the following:
- One or more machines running CentOS 7.x, x86_64
- Hardware: at least 2 GB of RAM, 2 CPUs, and 30 GB of disk
- Internet access for pulling images; if the servers cannot reach the internet, download the images in advance and import them on each node
- Swap disabled
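The requirements above can be checked with a short pre-flight sketch (the helper name meets_min is ours, not part of kubeadm):

```shell
#!/bin/sh
# Pre-flight sketch for the requirements above (2 CPUs, 2 GB RAM, swap off).

# meets_min ACTUAL MINIMUM -> succeeds when ACTUAL >= MINIMUM
meets_min() { [ "$1" -ge "$2" ]; }

cpus=$(nproc)
mem_mb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))

meets_min "$cpus" 2 && echo "CPU ok: $cpus" || echo "need >= 2 CPUs"
# 1700 MB rather than 2048: the kernel reserves part of a nominal 2 GB
meets_min "$mem_mb" 1700 && echo "RAM ok: ${mem_mb}MB" || echo "need >= 2 GB RAM"

# /proc/swaps contains only its header line when swap is fully disabled
[ "$(wc -l < /proc/swaps)" -le 1 ] && echo "swap off" || echo "run swapoff -a"
```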
Prepare the environment

| Role | IP |
| --- | --- |
| master1 | 192.168.3.155 |
| master2 | 192.168.3.156 |
| node1 | 192.168.3.157 |
| VIP (virtual IP) | 192.168.3.158 |
# Stop and disable the firewall (a minimal install may not ship firewalld at all)
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary
# Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
# Set each hostname according to the plan
hostnamectl set-hostname <hostname> # master1, master2, and node1 respectively
# Add hosts entries on the masters
cat >> /etc/hosts << EOF
192.168.3.158 master.k8s.io k8s-vip
192.168.3.155 master01.k8s.io master1
192.168.3.156 master02.k8s.io master2
192.168.3.157 node01.k8s.io node1
EOF
ping node1   # or ping node01.k8s.io, to confirm the entries resolve
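Since every node needs identical name resolution, the entries can also be kept in one function and appended on each machine (a sketch; the IPs and hostnames are the ones planned above):

```shell
#!/bin/sh
# One source of truth for the cluster's name resolution; append the same
# block on every node.
hosts_block() {
cat <<'EOF'
192.168.3.158 master.k8s.io k8s-vip
192.168.3.155 master01.k8s.io master1
192.168.3.156 master02.k8s.io master2
192.168.3.157 node01.k8s.io node1
EOF
}

hosts_block                      # review the entries first
# hosts_block >> /etc/hosts      # then append for real (as root)
```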
# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply
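To confirm the two keys actually took effect, they can be read back from /proc (a sketch; note the bridge keys only exist once the br_netfilter kernel module is loaded):

```shell
#!/bin/sh
# Read the two bridge keys back from /proc. They appear only after the
# br_netfilter module is loaded (modprobe br_netfilter).
for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
  path="/proc/sys/$(echo "$key" | tr . /)"
  if [ -r "$path" ]; then
    echo "$key = $(cat "$path")"    # expect 1 after sysctl --system
  else
    echo "$key: missing (modprobe br_netfilter first)"
  fi
done
```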
# Synchronize the clock
yum install ntpdate -y
ntpdate time.windows.com
Deploy keepalived on all master nodes
3.1 Install dependencies and keepalived
yum install -y conntrack-tools libseccomp libtool-ltdl
yum install -y keepalived
3.2 Configure the master nodes
master1 configuration (the interface value must match the node's actual NIC name; eno33554984 here):
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
router_id k8s
}
vrrp_script check_haproxy {
script "killall -0 haproxy"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state MASTER
interface eno33554984
virtual_router_id 51
priority 250
advert_int 1
authentication {
auth_type PASS
auth_pass ceb1b3ec013d66163d6ab
}
virtual_ipaddress {
192.168.3.158
}
track_script {
check_haproxy
}
}
EOF
master2 configuration (the same, but with state BACKUP and a lower priority):
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
router_id k8s
}
vrrp_script check_haproxy {
script "killall -0 haproxy"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state BACKUP
interface eno33554984
virtual_router_id 51
priority 200
advert_int 1
authentication {
auth_type PASS
auth_pass ceb1b3ec013d66163d6ab
}
virtual_ipaddress {
192.168.3.158
}
track_script {
check_haproxy
}
}
EOF
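The check_haproxy script above works because signal 0 delivers nothing: the call simply fails when no matching process exists, so keepalived can lower this node's priority (weight -2) once haproxy dies. A minimal demonstration of the signal-0 probe, using kill -0 on a PID (killall itself comes from the psmisc package):

```shell
#!/bin/sh
# Signal 0 tests whether a process exists without disturbing it.
alive() { kill -0 "$1" 2>/dev/null; }

alive $$ && echo "pid $$ exists"           # our own shell: always alive
alive 4194304 || echo "no such process"    # beyond the typical pid range
```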
3.3 Start and verify
Run on both master nodes:
# Start keepalived
$ systemctl start keepalived.service
# Enable it at boot
$ systemctl enable keepalived.service
# Check its status
$ systemctl status keepalived.service # master1 shown as an example
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Main PID: 2985 (keepalived)
CGroup: /system.slice/keepalived.service
├─2985 /usr/sbin/keepalived -D
├─2986 /usr/sbin/keepalived -D
└─2987 /usr/sbin/keepalived -D
Feb 06 02:42:15 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:15 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:15 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:15 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
After starting, check master1's NIC; the VIP 192.168.3.158 should be bound to it:
$ ip a s eno33554984
3: eno33554984: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:b8:e6:c1 brd ff:ff:ff:ff:ff:ff
inet 192.168.3.155/24 brd 192.168.3.255 scope global eno33554984
valid_lft forever preferred_lft forever
inet 192.168.3.158/32 scope global eno33554984
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:feb8:e6c1/64 scope link
valid_lft forever preferred_lft forever
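A quick way to see which node currently holds the VIP (useful again after a failover) is to grep the interface addresses; a sketch, with the VIP from this walkthrough:

```shell
#!/bin/sh
# Does this node hold the VIP? has_vip reads `ip -4 addr` output on stdin.
has_vip() { grep -qw "inet $1"; }

VIP=192.168.3.158
if ip -4 addr show 2>/dev/null | has_vip "$VIP"; then
  echo "this node holds $VIP"
else
  echo "VIP is elsewhere (or ip is unavailable)"
fi
```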
Deploy haproxy
4.1 Install
yum install -y haproxy
4.2 Configure
The configuration is identical on both master nodes. It declares the two master API servers as backends and binds haproxy to port 16443, so port 16443 becomes the cluster's entry point:
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxies to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
mode tcp