Architecture diagram
The master must not be a single node; we need to avoid a single point of failure.
1. Multi-node deployment
We deploy a second master on 192.168.1.9.
First, copy the files over from the existing master:
cd /opt
scp -r kubernetes 192.168.1.9:/opt/
scp -r etcd 192.168.1.9:/opt/
scp /usr/lib/systemd/system/kube-* 192.168.1.9:/usr/lib/systemd/system/
Now edit the configuration on master02.
Only two fields need to change:
cd /opt/kubernetes/cfg
vim kube-apiserver
--bind-address=192.168.1.9
--advertise-address=192.168.1.9
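The same two edits can be made with a one-shot sed instead of vim; a minimal sketch, demonstrated here on a scratch copy (on master02, point CFG at /opt/kubernetes/cfg/kube-apiserver instead; 192.168.1.39 as the old address is an assumption based on the first master in this guide):

```shell
# Scratch copy standing in for /opt/kubernetes/cfg/kube-apiserver
CFG=$(mktemp)
printf -- '--bind-address=192.168.1.39\n--advertise-address=192.168.1.39\n' > "$CFG"
# Rewrite both address flags to the new master's IP in one pass
sed -i -e 's/\(--bind-address=\)[0-9.]*/\1192.168.1.9/' \
       -e 's/\(--advertise-address=\)[0-9.]*/\1192.168.1.9/' "$CFG"
grep -- '-address=' "$CFG"
```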
The other config files all point to localhost, so no changes are needed.
Start the services directly:
systemctl start kube-apiserver
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service
Verify:
Copy the kubectl tool into the PATH:
cp /opt/kubernetes/bin/kubectl /usr/local/bin/
[root@master02 cfg]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
As you can see, adding another master node is straightforward. Done.
2. Master load balancing
We use nginx's stream module for load balancing. The nodes will no longer connect directly to kube-apiserver; instead they connect to nginx, which forwards the traffic. (For nginx high availability we must use a VIP, because the address has to be able to float between machines.) Here the nginx master is 192.168.1.21, the nginx backup is 192.168.1.111, and the VIP is 192.168.1.10.
If nginx was compiled from source without this module, we need to add it.
How to add it:
Run ./sbin/nginx -V to recover the original configure arguments,
then go into the source tree and rebuild with --with-stream appended:
cd nginx-1.12.1/
./configure --prefix=/usr/local/nginx --with-http_realip_module --with-http_sub_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_ssl_module --with-http_v2_module --with-pcre --with-stream
make
cp -rf ./objs/nginx /usr/local/nginx/sbin
/etc/init.d/nginx restart
/usr/local/nginx/sbin/nginx -V
We can now see that the stream module has been compiled in.
nginx.conf configuration:
stream {
    log_format main "$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent";
    access_log /var/log/nginx/k8s.log main;

    upstream k8s-apiserver {
        server 192.168.1.39:6443;
        server 192.168.1.9:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
Note: this is a layer-4 TCP proxy. Do not put the stream{} block inside the http{} block, otherwise nginx will report an error.
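To make the placement concrete, here is a skeleton of the top level of nginx.conf; stream{} sits alongside http{}, never inside it (the events value is just a placeholder):

```nginx
events {
    worker_connections 1024;
}

# layer-4 proxying: the upstream and server blocks shown above go here
stream {
    ...
}

# ordinary layer-7 virtual hosts stay in http{}
http {
    ...
}
```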
Reload nginx:
./sbin/nginx -s reload
The apiserver certificate also has to trust the nginx HA machines. If those IPs were not included when the certificates were first generated, the server certificate must be regenerated on the master:
in the directory where the master certificates were generated,
add 192.168.1.9, 192.168.1.21, 192.168.1.111 and the VIP 192.168.1.10 to the hosts list.
cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "192.168.1.39",
        "192.168.1.40",
        "192.168.1.41",
        "192.168.1.42",
        "192.168.1.9",
        "192.168.1.21",
        "192.168.1.10",
        "192.168.1.111",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
Overwrite the old server-key.pem and server.pem with the newly generated ones, and send them to the other master as well:
cp server-key.pem server.pem /opt/kubernetes/ssl
scp server-key.pem server.pem 192.168.1.9:/opt/kubernetes/ssl
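Before restarting anything, it is worth confirming that the new addresses really made it into the certificate's SAN list. A sketch with openssl (the openssl req line below only fabricates a stand-in server.pem so the check can be run anywhere; on the real master, skip it and point x509 at the cfssl output):

```shell
# Fabricate a stand-in cert with the expected SANs (skip on the real master)
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-key.pem -out server.pem \
    -days 1 -subj "/CN=kubernetes" \
    -addext "subjectAltName=IP:192.168.1.9,IP:192.168.1.10,IP:192.168.1.21,IP:192.168.1.111" \
    2>/dev/null
# The actual check: list the SAN entries of server.pem
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'
```

All four new IPs should appear in the output before the certificates are copied around.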
Restart the service:
systemctl restart kube-apiserver.service
If these IPs were already trusted by the certificate, the step above can be skipped. Now, in the node configuration, every reference to a master IP must be replaced with the nginx proxy's address (with keepalived this has to be the VIP, 192.168.1.10).
Note: only switch to the VIP once the keepalived setup below is complete, otherwise the address does not exist yet and the services will fail to connect. If you are not using keepalived, just point at the nginx machine's own IP.
After the change:
[root@g2 cfg]# grep 10 *
bootstrap.kubeconfig: server: https://192.168.1.10:6443
kubelet.kubeconfig: server: https://192.168.1.10:6443
kube-proxy.kubeconfig: server: https://192.168.1.10:6443
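The swap itself can be scripted; a sketch using sed, shown here on scratch copies (on a real node, run the sed line in /opt/kubernetes/cfg; 192.168.1.39 as the old single-master address is an assumption from earlier in this guide):

```shell
# Stand-in kubeconfigs so the substitution is demonstrable;
# on a node these already exist under /opt/kubernetes/cfg
dir=$(mktemp -d)
for f in bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig; do
    echo '    server: https://192.168.1.39:6443' > "$dir/$f"
done
# Point every kubeconfig at the VIP instead of the old master
sed -i 's#https://192\.168\.1\.39:6443#https://192.168.1.10:6443#' "$dir"/*.kubeconfig
grep -r 'server:' "$dir"
```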
Restart the services on the nodes:
systemctl restart kubelet.service
systemctl restart kube-proxy.service
Now we can see traffic arriving in the nginx log:
[root@server2 ~]# tail -f /var/log/nginx/k8s.log
192.168.1.42 192.168.1.9:6443 - [13/Nov/2018:15:59:21 +0800] 200 1119
192.168.1.42 192.168.1.39:6443 - [13/Nov/2018:15:59:21 +0800] 200 1118
192.168.1.40 192.168.1.39:6443 - [13/Nov/2018:15:59:21 +0800] 200 1566
192.168.1.42 192.168.1.9:6443 - [13/Nov/2018:15:59:21 +0800] 200 1566
Checking the nodes from the master also shows everything normal:
[root@master k8s-cert]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
192.168.1.40   Ready    <none>   23h   v1.12.1
192.168.1.42   Ready    <none>   23h   v1.12.1
3. Keepalived high availability
The nginx layer above made the apiservers highly available; now nginx itself needs to be made highly available.
Install:
yum install -y keepalived
Keepalived configuration on the master:
[root@server1 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        liao@liaochao.com
    }
    notification_email_from root@liaochao.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_script chk_nginx {
    script "/usr/local/sbin/check_ng.sh"   # health-check script
    interval 3
}

vrrp_instance VI_1 {
    state MASTER                # set to BACKUP on the standby
    interface eth0              # NIC name
    virtual_router_id 51        # router id, must match on both machines
    priority 100                # set to 90 on the standby
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass aminglinux>com
    }
    virtual_ipaddress {
        192.168.1.10            # the VIP
    }
    track_script {
        chk_nginx
    }
}
The nginx check script:
[root@server1 ~]# cat /usr/local/sbin/check_ng.sh
#!/bin/bash
# timestamp used in the log entries
d=$(date --date today +%Y%m%d_%H:%M:%S)
# number of running nginx processes
n=$(ps -C nginx --no-heading | wc -l)
# if nginx is down, try to start it and count again;
# if it still will not start, stop keepalived so the VIP fails over
if [ "$n" -eq 0 ]; then
    /etc/init.d/nginx start
    n2=$(ps -C nginx --no-heading | wc -l)
    if [ "$n2" -eq 0 ]; then
        echo "$d nginx down, keepalived will stop" >> /var/log/check_ng.log
        # systemctl stop keepalived
        /etc/init.d/keepalived stop
    fi
fi
Make it executable:
chmod +x /usr/local/sbin/check_ng.sh
With this script, nginx is pulled back up automatically whenever it dies; if the restart fails, keepalived is stopped so the standby takes over the VIP. That keeps the service from going down.
Start keepalived:
systemctl start keepalived
At this point, running ip a shows that the VIP has been bound.
The backup machine:
Copy the keepalived configuration and the nginx check script over to the backup:
scp /etc/keepalived/keepalived.conf 192.168.1.111:/etc/keepalived/
scp /usr/local/sbin/check_ng.sh 192.168.1.111:/usr/local/sbin/
Then change a few keepalived settings and it can be started.
The fields to change:
vim /etc/keepalived/keepalived.conf
state BACKUP            # BACKUP on the standby
interface eth0          # NIC name
virtual_router_id 51    # must match the master
priority 90             # lower than the master's 100
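The two role-specific fields can be flipped with a single sed as well; sketched here on a scratch copy (on the backup, point CONF at /etc/keepalived/keepalived.conf instead):

```shell
# Scratch copy standing in for /etc/keepalived/keepalived.conf
CONF=$(mktemp)
printf 'state MASTER\npriority 100\n' > "$CONF"
# Demote this copy of the config to the backup role
sed -i -e 's/state MASTER/state BACKUP/' -e 's/priority 100/priority 90/' "$CONF"
grep -E 'state|priority' "$CONF"
```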
Start it:
systemctl start keepalived
Testing keepalived:
Stop keepalived on the master and check whether the VIP floats over to the backup at 192.168.1.111.
Running kubectl get node on the k8s master notices nothing during the failover. The cluster now keeps running even with one master machine down.
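While testing the failover, a quick way to see which machine currently holds the VIP (run it on both 192.168.1.21 and 192.168.1.111; the ip command from iproute2 is assumed to be available):

```shell
# Report whether this host currently owns the VIP 192.168.1.10
if ip -4 addr show 2>/dev/null | grep -q 'inet 192\.168\.1\.10/'; then
    echo "VIP is on this host"
else
    echo "VIP not here"
fi
```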