In the previous high-availability cluster, etcd was deployed on the same nodes as the control plane, so the two could affect each other. etcd can also be deployed separately from the control plane, which provides better stability.
Here we deploy a 3-node etcd cluster, then use this external etcd cluster to create the k8s cluster.
Node   | IP        | Remarks
etcd01 | 10.0.0.16 | etcd01 node
etcd02 | 10.0.0.17 | etcd02 node
etcd03 | 10.0.0.18 | etcd03 node
Preparation
Perform the following operations on all nodes.
1. Disable the firewall, SELinux, and swap, and configure kernel parameters
Disable the firewall
[root@etcd01 ~]# systemctl stop firewalld && systemctl disable firewalld
Disable SELinux
[root@etcd01 ~]# vi /etc/selinux/config
SELINUX=disabled
[root@etcd01 ~]# setenforce 0
Disable swap
[root@etcd01 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@etcd01 ~]# swapoff -a
Configure kernel parameters
[root@etcd01 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
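Note: writing to /proc directly takes effect immediately but does not survive a reboot, and the bridge-nf-call-iptables key only exists once the br_netfilter module is loaded. A minimal sketch to make both persistent (the file names under /etc/modules-load.d and /etc/sysctl.d are just examples):
[root@etcd01 ~]# modprobe br_netfilter
[root@etcd01 ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
[root@etcd01 ~]# echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf
[root@etcd01 ~]# sysctl --system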
2. Install docker, kubeadm, and kubelet
Configure the yum repos for docker and k8s
[root@etcd01 ~]# cd /etc/yum.repos.d && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@etcd01 ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
Install docker
[root@etcd01 ~]# yum install docker-ce-18.09.0-3.el7 -y
Install kubeadm and kubelet
[root@etcd01 ~]# yum install -y kubelet-1.15.2 kubeadm-1.15.2
3. Start docker and kubelet
Start docker. Here docker's cgroup driver needs to be changed to systemd so that it matches the kubelet's.
[root@etcd01 ~]# vi /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
[root@etcd01 ~]# systemctl enable docker && systemctl start docker
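If docker was already running before daemon.json was edited, restart it, then confirm the driver actually switched (docker info reports the active cgroup driver):
[root@etcd01 ~]# systemctl restart docker
[root@etcd01 ~]# docker info | grep -i cgroup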
Start kubelet
Note: the kubelet service fails to start with its default configuration, so here a new drop-in file is used to override the default service configuration.
[root@etcd01 ~]# cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
# Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd
Restart=always
EOF
[root@etcd01 ~]# systemctl daemon-reload
[root@etcd01 ~]# systemctl start kubelet
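The drop-in runs kubelet in standalone mode, so until the etcd static pod manifest is created in a later step there is nothing for it to run; some log noise at this stage is normal. A quick way to confirm the service itself is active:
[root@etcd01 ~]# systemctl status kubelet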
4. Import the etcd-related image files.
Link: https://pan.baidu.com/s/1iivHCMBDzKV7X4q36VNxVg
Extraction code: k00i
[root@etcd01 ~]# find . -name "*.tar" |xargs -n1 docker load -i
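If the import succeeded, the etcd and pause images should now be visible locally (a quick sanity check; exact tags depend on the tar files):
[root@etcd01 ~]# docker images | grep -E "etcd|pause"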
Configure and deploy the etcd cluster
The approach to certificates here: generate the certificates for all nodes on the etcd01 node, then copy the generated certificates out to each node.
1. Generate the etcd configuration file for each node
Put the IPs into variables for convenience later
[root@etcd01 ~]# export HOST0=10.0.0.16
[root@etcd01 ~]# export HOST1=10.0.0.17
[root@etcd01 ~]# export HOST2=10.0.0.18
Generate an etcd configuration file kubeadmcfg.yaml for each node
[root@etcd01 ~]# mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
[root@etcd01 ~]# ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
[root@etcd01 ~]# NAMES=("infra0" "infra1" "infra2")
[root@etcd01 ~]# for i in "${!ETCDHOSTS[@]}"; do
HOST=${ETCDHOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta2"
kind: ClusterConfiguration
etcd:
local:
serverCertSANs:
- "${HOST}"
peerCertSANs:
- "${HOST}"
extraArgs:
initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380,${NAMES[2]}=https://${ETCDHOSTS[2]}:2380
initial-cluster-state: new
name: ${NAME}
listen-peer-urls: https://${HOST}:2380
listen-client-urls: https://${HOST}:2379
advertise-client-urls: https://${HOST}:2379
initial-advertise-peer-urls: https://${HOST}:2380
EOF
done
After running the commands above, three etcd configuration files are generated on the etcd01 node:
/tmp/10.0.0.16/kubeadmcfg.yaml
/tmp/10.0.0.17/kubeadmcfg.yaml
/tmp/10.0.0.18/kubeadmcfg.yaml
2. Generate the CA root certificate
[root@etcd01 ~]# kubeadm init phase certs etcd-ca
This generates two CA certificate files under /etc/kubernetes/pki/etcd:
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
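Optionally, inspect the new CA with openssl to confirm its subject and validity period:
[root@etcd01 ~]# openssl x509 -in /etc/kubernetes/pki/etcd/ca.crt -noout -subject -dates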
3. Generate the etcd certificates for each node
Generate certificates for etcd03
[root@etcd01 ~]# kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
[root@etcd01 ~]# kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
[root@etcd01 ~]# kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
[root@etcd01 ~]# kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
[root@etcd01 ~]# cp -R /etc/kubernetes/pki /tmp/${HOST2}/
# Clean up all certificates other than the CA root certificate, in preparation for generating the next node's certificates
[root@etcd01 ~]# find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
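At this point /tmp/${HOST2}/pki holds a complete certificate set for etcd03; listing it is an easy way to verify before moving on:
[root@etcd01 ~]# find /tmp/${HOST2}/pki -type f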
Generate certificates for etcd02
[root@etcd01 ~]# kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
[root@etcd01 ~]# kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
[root@etcd01 ~]# kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
[root@etcd01 ~]# kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
[root@etcd01 ~]# cp -R /etc/kubernetes/pki /tmp/${HOST1}/
[root@etcd01 ~]# find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
Generate certificates for etcd01
[root@etcd01 ~]# kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
[root@etcd01 ~]# kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
[root@etcd01 ~]# kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
[root@etcd01 ~]# kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs because they are for HOST0
# Clean up certs that should not be copied off this host
[root@etcd01 ~]# find /tmp/${HOST2} -name ca.key -type f -delete
[root@etcd01 ~]# find /tmp/${HOST1} -name ca.key -type f -delete
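To double-check that each server certificate carries the right node IP in its SANs, openssl can print them (shown here for the local etcd01 copy):
[root@etcd01 ~]# openssl x509 -in /etc/kubernetes/pki/etcd/server.crt -noout -text | grep -A1 "Subject Alternative Name"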
4. Copy the contents of /tmp/${HOST1}/ and /tmp/${HOST2}/ to the etcd02 and etcd03 nodes respectively
Copy the etcd configuration file and certificates to etcd02
[root@etcd01 ~]# scp -r /tmp/10.0.0.17/* root@10.0.0.17:~
[root@etcd02 ~]# cd ~
[root@etcd02 ~]# chown -R root:root pki
[root@etcd02 ~]# mv pki /etc/kubernetes/
Copy the etcd configuration file and certificates to etcd03
[root@etcd01 ~]# scp -r /tmp/10.0.0.18/* root@10.0.0.18:~
[root@etcd03 ~]# cd ~
[root@etcd03 ~]# chown -R root:root pki
[root@etcd03 ~]# mv pki /etc/kubernetes/
5. Generate the etcd static pod manifest on each node
[root@etcd01 ~]# kubeadm init phase etcd local --config=/tmp/10.0.0.16/kubeadmcfg.yaml
[root@etcd02 ~]# kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml
[root@etcd03 ~]# kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml
This generates an etcd.yaml file under /etc/kubernetes/manifests on each node; the kubelet reads this manifest and creates the etcd static pod.
After a short wait you should see two containers running on each node: the etcd container and the pause container. Because of network restrictions in China, the etcd and pause images need to be imported in advance.
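A quick way to check that both containers are up (container names are generated by the kubelet, so exact output varies):
[root@etcd01 ~]# docker ps | grep -E "etcd|pause"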
6. Check the etcd cluster status
[root@etcd01 ~]# docker run --rm -it \
--net host \
-v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.24 etcdctl \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://10.0.0.16:2379 cluster-health
If all is well, the output shows that all three members are healthy.
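The flags above use the etcd v2 API, which is the default for etcdctl in the v3.2.24 image. An equivalent check against the v3 API (note the different flag names) would look something like:
[root@etcd01 ~]# docker run --rm -it \
--net host \
-e ETCDCTL_API=3 \
-v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.24 etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://10.0.0.16:2379 endpoint health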
Deploy the k8s cluster
Deploying a k8s cluster with an external etcd cluster is basically the same as the stacked k8s cluster described earlier; the only difference is initializing the cluster on the master01 node, so only that step is recorded here.
1. Copy the CA root certificate and the apiserver-etcd-client certificate from any one etcd node to master01. For master01, any etcd node's certificates work for connecting to the etcd cluster; etcd01's certificates are used here.
scp the certificates to master01
[root@etcd01 ~]# scp /etc/kubernetes/pki/apiserver-etcd-client.crt root@master01:/etc/kubernetes/pki/
[root@etcd01 ~]# scp /etc/kubernetes/pki/apiserver-etcd-client.key root@master01:/etc/kubernetes/pki/
[root@etcd01 ~]# scp /etc/kubernetes/pki/etcd/ca.crt root@master01:/etc/kubernetes/pki/etcd/
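If the last scp fails because /etc/kubernetes/pki/etcd does not exist on master01 yet, create the directory there first:
[root@k8s-master01 ~]# mkdir -p /etc/kubernetes/pki/etcd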
2. Configure the cluster initialization file. The only difference from the stacked setup described earlier is the etcd configuration (the etcd section in the file below).
[root@k8s-master01 ~]# vi kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.11
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "10.0.0.10:6444"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - https://10.0.0.16:2379
    - https://10.0.0.17:2379
    - https://10.0.0.18:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
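The configuration file can be sanity-checked with a dry run before the actual initialization in the next step (assuming kubeadm's --dry-run flag behaves as in v1.15; it outputs what would be done without changing the node):
[root@k8s-master01 ~]# kubeadm init --config=kubeadm-config.yaml --dry-run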
3. Initialize the cluster
[root@k8s-master01 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs
During initialization, master01 also uploads the certificates used to access etcd, so the other master nodes download them automatically when they join.
The subsequent steps are the same as for the stacked setup and are not repeated here.
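For reference, joining another master to this cluster looks like the line below; the discovery hash and certificate key are placeholders, and the real values are printed by kubeadm init above:
[root@k8s-master02 ~]# kubeadm join 10.0.0.10:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:<hash-from-init-output> \
--control-plane --certificate-key <key-from-init-output>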
https://v1-15.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/