Kubeadm (v1.19) High-Availability Cluster Deployment

Part 1: Resource Allocation

Note that each master node needs at least 2 CPU cores and 2 GB of RAM; there is no upper limit. If you later plan to run Prometheus monitoring or other microservices on the cluster, size the nodes accordingly.

k8s-master1   192.168.50.140
k8s-master2   192.168.50.141
k8s-master3   192.168.50.142
k8s-node1     192.168.50.143
k8s-node2     192.168.50.144
k8s-node3     192.168.50.145
VIP           192.168.50.100

Part 2: Deployment Steps

① Execute the following commands on every node, step by step and in order:

1. Set the hostname (run each command on its corresponding node)

hostnamectl set-hostname k8s-master1; bash
hostnamectl set-hostname k8s-master2; bash
hostnamectl set-hostname k8s-master3; bash
hostnamectl set-hostname k8s-node1; bash
hostnamectl set-hostname k8s-node2; bash
hostnamectl set-hostname k8s-node3; bash


2. Add hostname resolution entries on all master nodes

cat >> /etc/hosts << EOF
192.168.50.140 k8s-master1
192.168.50.141 k8s-master2
192.168.50.142 k8s-master3
192.168.50.143 k8s-node1
192.168.50.144 k8s-node2
192.168.50.145 k8s-node3
EOF


3. Pass bridged IPv4 traffic to the iptables chains:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   ## apply the configuration
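
If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is likely not loaded yet; load it (and persist the load across reboots) first:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   ## reload the module on boot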

4. Install ntpdate and synchronize the time

yum -y install ntpdate
ntpdate time.windows.com   ## sync against Microsoft's public time server
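
ntpdate is a one-shot sync; if you want the clocks to stay aligned, one option is an hourly cron entry (a sketch, any reachable NTP server will do):

(crontab -l 2>/dev/null; echo "0 * * * * /usr/sbin/ntpdate time.windows.com") | crontab -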

5. Turn off and disable swap

Temporarily: swapoff -a

Permanently: sed -ri 's/.*swap.*/#&/' /etc/fstab

Remember to run "reboot" so the permanent change takes effect.
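
A quick check that swap is really off after the reboot:

free -m   ## the Swap row should show 0 total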

6. Stop and disable firewalld

systemctl disable firewalld && systemctl stop firewalld && systemctl status firewalld


7. Disable SELinux

sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

Run "reboot" for the change to take effect.
After the reboot, verify the result: running "getenforce" should return "Disabled".
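
If you do not want to wait for the reboot, SELinux can also be relaxed for the current boot (the config edit above still makes it permanent):

setenforce 0   ## takes effect immediately; getenforce then reports "Permissive"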


8. Install Docker

Install Docker with yum; the details are not covered here.
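
For completeness, a minimal sketch of such a yum-based install (the Aliyun mirror repo here is an assumption; any Docker CE repository works):

yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable docker && systemctl start docker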

9. Add the Aliyun yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


10. Install kubeadm, kubelet, and kubectl

yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
systemctl enable kubelet


11. Install the kubectl command-completion tool

yum -y install bash-completion

For the current shell: source <(kubectl completion bash)

Permanently: echo "source <(kubectl completion bash)" >> /root/.bashrc

② Execute the following steps on all master nodes, in order:

Deploy keepalived + LVS to make the master nodes' apiserver highly available.


1. Install keepalived and ipvsadm

yum install -y socat keepalived ipvsadm conntrack

2. Edit the keepalived.conf file (/etc/keepalived/keepalived.conf) on the Master1 node

global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.50.100
    }
}
virtual_server 192.168.50.100 6443 {
    delay_loop 6
    ## rr = round-robin; the original "loadbalance" is not a valid lb_algo value
    lb_algo rr
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.50.140 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.50.141 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.50.142 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

3. Edit the keepalived.conf file on the Master2 node

global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.50.100
    }
}
virtual_server 192.168.50.100 6443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.50.140 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.50.141 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.50.142 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

4. Edit the keepalived.conf file on the Master3 node

global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 80
    priority 30
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.50.100
    }
}
virtual_server 192.168.50.100 6443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.50.140 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.50.141 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.50.142 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

5. Start keepalived on Master1, Master2, and Master3

systemctl enable keepalived && systemctl start keepalived && systemctl status keepalived

6. Verify the keepalived configuration

On master1, run "ip a" and you should see the VIP bound to the ens33 interface.
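
To filter the output down to just the VIP, a quick check on master1:

ip a show ens33 | grep 192.168.50.100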

7. Initialize the Kubernetes cluster on the master1 node

Create the initialization YAML file: vim kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
controlPlaneEndpoint: 192.168.50.100:6443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.50.140
  - 192.168.50.141
  - 192.168.50.142
  - 192.168.50.143
  - 192.168.50.144
  - 192.168.50.145
  - 192.168.50.100
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
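
The mode: ipvs setting switches kube-proxy into IPVS mode (which is why ipvsadm was installed earlier). Once the cluster is up, you can confirm it took effect by listing the IPVS rules kube-proxy creates:

ipvsadm -Ln   ## Service virtual servers show up as IPVS entries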

Apply the configuration file:

kubeadm init --config kubeadm-config.yaml   ## this step takes quite a while to run

[Note]: A successful init prints three commands (two kubeadm join commands and one starting with mkdir); copy them somewhere safe, as you will need them in the steps below.

8. On the master1 node, copy the admin kubeconfig to gain kubectl access to the cluster

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After running these three commands you can view node and pod information:

kubectl get nodes

kubectl get pods -n kube-system

9. Install the calico network plugin on the Master1 node

The calico.yaml manifest is included in my attached resources.

kubectl apply -f calico.yaml

Once this step completes, the node and pod information will display normally.
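
calico takes a minute or two to start; once its pods are Running, the master switches from NotReady to Ready. You can watch the progress with:

kubectl get pods -n kube-system -w   ## press Ctrl-C to stop watching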

 

1. If you run kubectl get cs and see the following:

Warning: v1 ComponentStatus is deprecated in v1.19+

NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused

Here the controller-manager and scheduler components show a status of Unhealthy. Resolve it as follows (leaving it alone does not affect normal use!):

① Edit the config file: vim /etc/kubernetes/manifests/kube-scheduler.yaml

Comment out the line "- --port=0".

② Edit the config file: vim /etc/kubernetes/manifests/kube-controller-manager.yaml

Comment out the line "- --port=0".

[Cause]: The insecure ports of these two pods are not enabled, so the health check fails. The services themselves are running normally; only the health-check ports are closed, so day-to-day use is unaffected.

The scheduler's port 10251 and the controller-manager's port 10252 are not open.
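
As a sketch, both edits can also be done with one sed command (these are static pod manifests, so the kubelet notices the change and restarts the pods automatically):

sed -i '/--port=0/s/^/#/' /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml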

10. Copy the certificates from the Master1 node to Master2 and Master3


(1) On Master2 and Master3, run:

cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/

(2) On Master1, run the following commands to push the certificates to Master2:

scp /etc/kubernetes/pki/ca.crt k8s-master2:/etc/kubernetes/pki/ 
scp /etc/kubernetes/pki/ca.key k8s-master2:/etc/kubernetes/pki/ 
scp /etc/kubernetes/pki/sa.key k8s-master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub k8s-master2:/etc/kubernetes/pki/ 
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key k8s-master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt k8s-master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key k8s-master2:/etc/kubernetes/pki/etcd/

(3) On Master1, run the following commands to push the certificates to Master3:

scp /etc/kubernetes/pki/ca.crt k8s-master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key k8s-master3:/etc/kubernetes/pki/ 
scp /etc/kubernetes/pki/sa.key k8s-master3:/etc/kubernetes/pki/ 
scp /etc/kubernetes/pki/sa.pub k8s-master3:/etc/kubernetes/pki/ 
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-master3:/etc/kubernetes/pki/ 
scp /etc/kubernetes/pki/front-proxy-ca.key k8s-master3:/etc/kubernetes/pki/ 
scp /etc/kubernetes/pki/etcd/ca.crt k8s-master3:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key k8s-master3:/etc/kubernetes/pki/etcd/
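
Equivalently, the sixteen scp commands above can be collapsed into a small loop (a sketch; the file list is exactly the one used above):

for host in k8s-master2 k8s-master3; do
    for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
        scp /etc/kubernetes/pki/$f $host:/etc/kubernetes/pki/
    done
    scp /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.key $host:/etc/kubernetes/pki/etcd/
done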


11. On Master2 and Master3, run the command below to join them to the Kubernetes cluster as control-plane nodes:

[Note]: The token and certificate hash are different for every installation; always use the command generated by your own kubeadm init!!!

kubeadm join 192.168.50.100:6443 --token zntgke.md2t1payaylzsbe8 \
    --discovery-token-ca-cert-hash sha256:80e60fd9b041c3798ddf20c1057b620ba6b27fd41b0c3c6f47b6f5d0cee04591 \
    --control-plane


12. On Master2 and Master3, copy the admin kubeconfig to gain kubectl access to the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


13. On Node1, Node2, and Node3, run the command below to join them to the Kubernetes cluster:

kubeadm join 192.168.50.100:6443 --token zntgke.md2t1payaylzsbe8 \
    --discovery-token-ca-cert-hash sha256:80e60fd9b041c3798ddf20c1057b620ba6b27fd41b0c3c6f47b6f5d0cee04591
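
After all of the joins have finished, verify from any master that all six nodes show up and eventually report Ready:

kubectl get nodes -o wide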

Part 3: Additional Notes

① If a broken system Pod cannot be deleted normally, force-delete it with:

kubectl delete pod <PodName> --namespace=kube-system --grace-period=0 --force

② The hostname-to-IP mapping in /etc/hosts is critical; make sure it is configured correctly!

③ keepalived performs its health check with the command below; if the response is anything other than "ok", failover is triggered. Master1, 2, and 3 have priorities 100, 50, and 30 respectively.

curl https://192.168.50.140:6443/healthz --insecure
ok

④ When suspending the virtual machines in VMware, suspend them in the order Master3, 2, 1; otherwise the VIP will drift between nodes according to the configured priorities.
