K8s High-Availability Cluster Deployment

Node information:

Hostname       IP Address    Role    OS
k8s-master01   192.168.1.1   master  CentOS 7.6
k8s-master02   192.168.1.2   master  CentOS 7.6
k8s-master03   192.168.1.3   master  CentOS 7.6
k8s-node01     192.168.1.4   node    CentOS 7.6
k8s-node02     192.168.1.5   node    CentOS 7.6

1. System Initialization (all cluster hosts)

1.1 Set each node's hostname and add mutual resolution to the hosts file. For example, on master1:

hostnamectl set-hostname k8s-master01
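For mutual resolution, each host's /etc/hosts would then carry entries matching the node table above:

```
192.168.1.1 k8s-master01
192.168.1.2 k8s-master02
192.168.1.3 k8s-master03
192.168.1.4 k8s-node01
192.168.1.5 k8s-node02
```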

1.2 Install dependency packages

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

1.3 Switch the firewall from firewalld to iptables with an empty rule set

systemctl stop firewalld && systemctl disable firewalld

Install iptables:

yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

1.4 Disable swap and SELinux

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
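The sed above comments out every /etc/fstab line containing " swap ". A safe way to see its effect is to run the same expression against a throwaway sample file (the path and contents below are illustrative, not the real fstab):

```shell
# Run the same swap-commenting sed against a sample copy, not the real /etc/fstab.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo
cat /tmp/fstab.demo   # the swap line is now commented; the root line is untouched
```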

1.5 Tune kernel parameters for Kubernetes

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

cp kubernetes.conf /etc/sysctl.d/kubernetes.conf 

sysctl -p /etc/sysctl.d/kubernetes.conf

Note: the time zone and clock must be synchronized across all hosts; if they drift, sync them with ntp.

2. Install Kubernetes Components (all cluster hosts)

2.1 Prerequisites for enabling IPVS in kube-proxy

Load the br_netfilter kernel module:

lsmod | grep br_netfilter   # check whether the module is loaded
modprobe br_netfilter       # load it for the current boot only

# make the module load persist across reboots
cat > /etc/rc.sysinit << "EOF"
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
  [ -x "$file" ] && "$file"
done
EOF

cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF

chmod 755 /etc/sysconfig/modules/br_netfilter.modules
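The rc.sysinit loop above simply runs every executable *.modules file it finds. Its mechanics can be sketched against a temporary directory (all paths and file names here are hypothetical stand-ins for /etc/sysconfig/modules):

```shell
# Demonstrate the *.modules loop against a temp directory.
MODDIR=$(mktemp -d)
printf '#!/bin/bash\necho loaded > %s/out.txt\n' "$MODDIR" > "$MODDIR/demo.modules"
chmod 755 "$MODDIR/demo.modules"
# Same test-and-execute loop as /etc/rc.sysinit:
for file in "$MODDIR"/*.modules ; do
    [ -x "$file" ] && "$file"
done
cat "$MODDIR/out.txt"   # prints "loaded"
```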

Add the IPVS modules:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

2.2 Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum -y install docker-ce-18.06.1.ce-3.el7   # pin this specific version

systemctl daemon-reload && systemctl restart docker && systemctl enable docker

2.3 Install kubeadm

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1

systemctl enable kubelet.service

3. Install keepalived

Install it on all three master nodes:

yum -y install keepalived

Configuration file on master1:

[root@k8s-master01 keepalived]# cat keepalived.conf

global_defs {
    router_id master01
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
}

On the other two masters, only router_id and priority need to change; everything else matches the file above.
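For instance, on master02 only these two lines would differ (the priority value is an arbitrary example, just lower than master01's 100):

```
    router_id master02   # unique per node
    priority 90          # lower than master01's 100
```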

Start it and enable it at boot:

service keepalived start
systemctl enable keepalived

4. Initialize the First Master Node

Run this on one master only, e.g. master1.

Create a kubeadm-config.yaml file with the following content:

cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
apiServer:
   certSANs:
   - 192.168.1.100   # the VIP
controlPlaneEndpoint: "192.168.1.100:6443" # VIP and API server port
imageRepository: registry.aliyuncs.com/google_containers
networking:
   podSubnet: "10.244.0.0/16"
   serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
   SupportIPVSProxyMode: true
mode: ipvs

Save and exit.

Run the init:

kubeadm init --config=kubeadm-config.yaml

On success, kubeadm prints two join commands. The one ending in --control-plane joins additional master nodes to the cluster; the other joins worker nodes.
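Both commands follow the usual kubeadm shape; the token and hash below are placeholders for the values printed by your own init run, not literal values:

```
# join as an additional control-plane node (master02 / master03)
kubeadm join 192.168.1.100:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane

# join as a worker node (node01 / node02)
kubeadm join 192.168.1.100:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```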

Then run the commands from the prompt:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Add the environment variable:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

5. Install the Flannel Network Plugin

wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
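To confirm the plugin came up, the flannel pods in kube-system can be checked (they take a moment to pull images on each node):

```
kubectl get pods -n kube-system | grep flannel
```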

6. Copy the Certificates (critical step)

Write a shell script:

cat cert.sh
#!/bin/bash
USER=root
CONTROL_PLANE_IPS="master2 master3"   # the other two master nodes
for host in ${CONTROL_PLANE_IPS}; do
  scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
  scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
  scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
  scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
  scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done

Then, on each of the other two masters, move the copied certificate files into place with another script:

cat cert.sh
#!/bin/bash
USER=root
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key

7. Join the Remaining Two Masters to the Cluster

On each of them, run the join command that ends in --control-plane.

Then run the commands from the prompt:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Add the environment variable:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

8. Join the Worker Nodes

Run the other join command on each worker node.

Extra: enable kubectl command completion

echo "source <(kubectl completion bash)" >> ~/.bash_profile
source /root/.bash_profile

9. Verify

kubectl get nodes
kubectl get pod -n kube-system
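If everything worked, the node list should show all five hosts in Ready state; an illustrative shape of the output (AGE will differ):

```
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   30m   v1.15.1
k8s-master02   Ready    master   21m   v1.15.1
k8s-master03   Ready    master   20m   v1.15.1
k8s-node01     Ready    <none>   10m   v1.15.1
k8s-node02     Ready    <none>   10m   v1.15.1
```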
