K8S-Demo Cluster Practice 04: Deploying a Three-Node Highly Available etcd Cluster
- etcd is a distributed, strongly consistent key-value store built on the Raft consensus algorithm. It began as an open-source project at CoreOS and is licensed under the Apache License.
- From the environment variables set up in the earlier parts, you already know the cluster node names and IPs:
- master1:192.168.66.10
- master2:192.168.66.11
- master3:192.168.66.12
- Unless otherwise noted, every command in this document is executed on the master1 node.
- k8s-demo uses etcd v3.4.x.
- If you choose flanneld for cross-host networking, downgrade etcd to v3.3.x (flannel stores its configuration through etcd's v2 API, which etcd v3.4 no longer enables by default).
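The steps below rely on environment variables (MASTER_NAMES, MASTER_IPS, ETCD_DATA_DIR, ETCD_WAL_DIR, ETCD_NODES, ETCD_ENDPOINTS) defined in earlier parts of this series. A minimal sketch of what they might look like — the directory paths here are assumptions, adjust them to your own setup:

```shell
#!/bin/bash
# Assumed environment from earlier parts of this series; paths are examples.
MASTER_NAMES=(master1 master2 master3)
MASTER_IPS=(192.168.66.10 192.168.66.11 192.168.66.12)
ETCD_DATA_DIR=/data/k8s/etcd/data   # assumption: pick your own data dir
ETCD_WAL_DIR=/data/k8s/etcd/wal     # assumption: ideally on a separate disk

# Build the peer and client URL lists from the arrays above.
ETCD_NODES=""
ETCD_ENDPOINTS=""
for i in "${!MASTER_IPS[@]}"; do
  ETCD_NODES+="${MASTER_NAMES[i]}=https://${MASTER_IPS[i]}:2380,"
  ETCD_ENDPOINTS+="https://${MASTER_IPS[i]}:2379,"
done
ETCD_NODES=${ETCD_NODES%,}           # trim the trailing comma
ETCD_ENDPOINTS=${ETCD_ENDPOINTS%,}
echo "$ETCD_NODES"
```

Deriving ETCD_NODES and ETCD_ENDPOINTS from the two arrays keeps the node list in one place, so adding a fourth etcd member later only means extending the arrays.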
1. Download and distribute the etcd binaries
[root@master1 ~]# cd /opt/install/soft
[root@master1 soft]# wget https://github.com/coreos/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz
[root@master1 soft]# tar -xvf etcd-v3.4.3-linux-amd64.tar.gz
[root@master1 soft]# for node_ip in ${MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
scp /opt/install/soft/etcd-v3.4.3-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin
ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
done
2. Configure the etcd service
2.1 Prepare the service template
[root@master1 ~]# cd /opt/install/service
[root@master1 service]# cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \\
--data-dir=${ETCD_DATA_DIR} \\
--wal-dir=${ETCD_WAL_DIR} \\
--name=##NODE_NAME## \\
--cert-file=/opt/k8s/etc/etcd/cert/etcd.pem \\
--key-file=/opt/k8s/etc/etcd/cert/etcd-key.pem \\
--trusted-ca-file=/opt/k8s/etc/cert/ca.pem \\
--peer-cert-file=/opt/k8s/etc/etcd/cert/etcd.pem \\
--peer-key-file=/opt/k8s/etc/etcd/cert/etcd-key.pem \\
--peer-trusted-ca-file=/opt/k8s/etc/cert/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--listen-peer-urls=https://##NODE_IP##:2380 \\
--initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
--listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
--advertise-client-urls=https://##NODE_IP##:2379 \\
--initial-cluster-token=etcd-cluster-0 \\
--initial-cluster=${ETCD_NODES} \\
--initial-cluster-state=new \\
--auto-compaction-mode=periodic \\
--auto-compaction-retention=1 \\
--max-request-bytes=33554432 \\
--quota-backend-bytes=6442450944 \\
--heartbeat-interval=250 \\
--election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
2.2 Generate a service file for each node
[root@master1 ~]# cd /opt/install/service
[root@master1 service]# for (( i=0; i < 3; i++ ))
do
sed -e "s/##NODE_NAME##/${MASTER_NAMES[i]}/" -e "s/##NODE_IP##/${MASTER_IPS[i]}/" etcd.service.template > etcd-${MASTER_IPS[i]}.service
done
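To confirm the substitution worked, you can grep the generated files for leftover `##...##` placeholders. A self-contained sketch of the templating step (using a one-line stand-in template instead of the real unit file):

```shell
#!/bin/bash
# Reproduce the sed templating on a tiny stand-in template, then verify
# that no ##...## placeholders survive in the rendered output.
MASTER_NAMES=(master1 master2 master3)
MASTER_IPS=(192.168.66.10 192.168.66.11 192.168.66.12)
workdir=$(mktemp -d)
echo '--name=##NODE_NAME## --listen-peer-urls=https://##NODE_IP##:2380' \
  > "$workdir/etcd.service.template"

for (( i = 0; i < 3; i++ )); do
  sed -e "s/##NODE_NAME##/${MASTER_NAMES[i]}/" \
      -e "s/##NODE_IP##/${MASTER_IPS[i]}/" \
      "$workdir/etcd.service.template" > "$workdir/etcd-${MASTER_IPS[i]}.service"
done

# grep -l lists files that still contain a placeholder; none is expected.
if grep -l '##' "$workdir"/etcd-*.service; then
  echo "unrendered placeholders found"
else
  echo "all service files rendered"
fi
```

Running the same `grep -l '##' etcd-*.service` against the real generated files in /opt/install/service is a cheap sanity check before distributing them.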
2.3 Distribute the service files to the 3 master nodes
[root@master1 ~]# cd /opt/install/service
[root@master1 service]# for node_ip in ${MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
done
2.4 Start the etcd service
[root@master1 ~]# for node_ip in ${MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd "
done
- If the start command fails or hangs, first check that the etcd data and WAL directories were created successfully. Also note that with --initial-cluster-state=new the first member waits for a quorum of peers, so the start command on the first node may appear to block until the other two nodes come up.
3. Check the etcd service status
3.1 Verify that the etcd service started
[root@master1 ~]# for node_ip in ${MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl status etcd|grep Active"
done
- Make sure the status is active (running); otherwise, inspect the logs:
[root@master1 ~]# journalctl -u etcd
3.2 Check the health of the etcd service
[root@master1 ~]# for node_ip in ${MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
/opt/k8s/bin/etcdctl \
--endpoints=https://${node_ip}:2379 \
--cacert=/opt/k8s/etc/cert/ca.pem \
--cert=/opt/k8s/etc/etcd/cert/etcd.pem \
--key=/opt/k8s/etc/etcd/cert/etcd-key.pem endpoint health
done
- Expected output:
>>> 192.168.66.10
https://192.168.66.10:2379 is healthy: successfully committed proposal: took = 6.196779ms
>>> 192.168.66.11
https://192.168.66.11:2379 is healthy: successfully committed proposal: took = 7.343025ms
>>> 192.168.66.12
https://192.168.66.12:2379 is healthy: successfully committed proposal: took = 7.327491ms
3.3 Check which node is the current leader
[root@master1 ~]# /opt/k8s/bin/etcdctl \
-w table --cacert=/opt/k8s/etc/cert/ca.pem \
--cert=/opt/k8s/etc/etcd/cert/etcd.pem \
--key=/opt/k8s/etc/etcd/cert/etcd-key.pem \
--endpoints=${ETCD_ENDPOINTS} endpoint status
Expected output:
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.66.10:2379 | b2f5e996ff142369 |  3.4.3  |  20 kB  |   true    |   false    |    112    |     15     |         15         |        |
| https://192.168.66.11:2379 | 8aac12c9432579ff |  3.4.3  |  20 kB  |   false   |   false    |    112    |     15     |         15         |        |
| https://192.168.66.12:2379 | eee70ab8f420a137 |  3.4.3  |  20 kB  |   false   |   false    |    112    |     15     |         15         |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
- The current leader is 192.168.66.10.
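When scripting, the table output is awkward to work with; one option (a sketch, fed here with the sample rows instead of live etcdctl output) is to split on `|` and pick the row whose IS LEADER column is true:

```shell
#!/bin/bash
# Parse `etcdctl -w table endpoint status` output and print the leader's
# endpoint. The sample table rows from above stand in for live output here;
# in practice you would pipe the etcdctl command into the awk call.
status_table='| https://192.168.66.10:2379 | b2f5e996ff142369 | 3.4.3 | 20 kB | true | false | 112 | 15 | 15 | |
| https://192.168.66.11:2379 | 8aac12c9432579ff | 3.4.3 | 20 kB | false | false | 112 | 15 | 15 | |
| https://192.168.66.12:2379 | eee70ab8f420a137 | 3.4.3 | 20 kB | false | false | 112 | 15 | 15 | |'

# Field 2 is the endpoint and field 6 is IS LEADER (fields are |-separated);
# the header row never contains the literal word "true", so it is skipped.
leader=$(echo "$status_table" | awk -F'|' '$6 ~ /true/ { gsub(/ /, "", $2); print $2 }')
echo "leader: $leader"
```

For machine consumption, `-w json` instead of `-w table` avoids the text parsing entirely, at the cost of needing a JSON tool.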
4. Basic etcdctl commands
# Write a key
etcdctl put key value
# Read a key
etcdctl get key
# Range query: keys from key1 up to (but not including) key2
etcdctl get key1 key2
# Print only the value
etcdctl get --print-value-only key
# Read all keys with the prefix "key"
etcdctl get --prefix key
# List every key under / (keys only, no values)
etcdctl get / --prefix --keys-only
# Read everything from "key" onwards
etcdctl get --from-key key
# Read all key-value pairs
etcdctl get --from-key ""
# Delete a key
etcdctl del key
# Delete and return the deleted key-value pair
etcdctl del --prev-kv key
# Delete everything from "key" onwards
etcdctl del --prev-kv --from-key key
# Delete everything with the prefix "key"
etcdctl del --prev-kv --prefix key
# Delete all data
etcdctl del --prefix ""
- Reference: https://www.cnblogs.com/doscho/p/6252556.html
5. etcd backup and restore
# Snapshot backup (can be scheduled to run periodically)
etcdctl snapshot save /data/backup/xxxx.db
# Restore procedure
1. Stop the etcd service on every master node: systemctl stop etcd
2. Back up the contents of the etcd data directory
3. Copy the snapshot file to every etcd node
4. Run the restore command on each node (the example below is for master1; adjust --name and --initial-advertise-peer-urls for each node, and point --data-dir at that node's etcd data directory)
ETCDCTL_API=3 etcdctl snapshot restore /data/backup/etcd-snapshot-xxxxx.db \
--name master1 \
--initial-cluster "master1=https://192.168.66.10:2380,master2=https://192.168.66.11:2380,master3=https://192.168.66.12:2380" \
--initial-cluster-token etcd-cluster \
--initial-advertise-peer-urls https://192.168.66.10:2380 \
--data-dir=/var/lib/etcd/default.etcd
- Reference: https://zhuanlan.zhihu.com/p/101523337
- All of the cluster's state lives in etcd: back it up regularly, and keep a tested restore procedure and scripts.
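The backup step above can be wrapped in a small script suitable for cron. This is only a sketch: the backup directory and retention count are assumptions, the certificate paths are the ones used in this guide, and the real etcdctl call only runs when ETCD_ENDPOINTS is set and the binary exists, so the rotation logic can be tried without a live cluster:

```shell
#!/bin/bash
# Sketch of a periodic etcd backup with simple rotation (keep the newest N).
take_snapshot() {
  local backup_dir=${1:-/data/backup}   # assumption: backup location
  local keep=${2:-7}                    # assumption: retention count
  mkdir -p "$backup_dir"
  local snap="$backup_dir/etcd-snapshot-$(date +%Y%m%d-%H%M%S).db"
  if [ -n "$ETCD_ENDPOINTS" ] && [ -x /opt/k8s/bin/etcdctl ]; then
    # Take a real snapshot from the cluster (cert paths as used in this guide).
    ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
      --endpoints="$ETCD_ENDPOINTS" \
      --cacert=/opt/k8s/etc/cert/ca.pem \
      --cert=/opt/k8s/etc/etcd/cert/etcd.pem \
      --key=/opt/k8s/etc/etcd/cert/etcd-key.pem \
      snapshot save "$snap"
  else
    : > "$snap"   # no cluster configured: create a placeholder file
  fi
  # Rotation: delete everything except the newest $keep snapshots.
  ls -1t "$backup_dir"/etcd-snapshot-*.db | tail -n +$((keep + 1)) | xargs -r rm -f
  echo "$snap"
}

take_snapshot "$(mktemp -d)" 7
```

A cron entry such as `0 2 * * * /opt/k8s/bin/etcd-backup.sh` (hypothetical path) would then take one snapshot per night and keep a week's worth.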
Appendix: K8s-Demo cluster version information
Component | Version | Command |
---|---|---|
kubernetes | 1.18.5 | kubectl version |
docker-ce | 19.03.11 | docker version or rpm -qa \| grep docker |
etcd | 3.4.3 | etcdctl version |
calico | 3.13.3 | calico -v |
coredns | 1.7.0 | coredns -version |
Appendix: links to this series
K8S-Demo Cluster Practice 00: Building the Harbor image registry with security scanning
K8S-Demo Cluster Practice 01: Preparing a VMware virtual machine template
K8S-Demo Cluster Practice 02: Preparing the VMware VMs: 3 masters + 3 nodes
K8S-Demo Cluster Practice 03: Preparing the x509 certificates for HTTPS communication between cluster components
K8S-Demo Cluster Practice 04: Deploying a three-node highly available etcd cluster
K8S-Demo Cluster Practice 05: Installing kubectl and configuring the cluster administrator account
K8S-Demo Cluster Practice 06: Deploying kube-apiserver on the master nodes (3 stateless instances)
K8S-Demo Cluster Practice 07: kube-apiserver high-availability options
K8S-Demo Cluster Practice 08: Deploying a highly available kube-controller-manager cluster
K8S-Demo Cluster Practice 09: Deploying a highly available kube-scheduler cluster
K8S-Demo Cluster Practice 10: Deploying the kube-proxy component in ipvs mode
K8S-Demo Cluster Practice 11: Deploying the kubelet component
K8S-Demo Cluster Practice 12: Deploying the Calico network
K8S-Demo Cluster Practice 13: Deploying CoreDNS for the cluster
K8S-Demo Cluster Practice 14: Deploying the Metrics Server cluster monitoring service
K8S-Demo Cluster Practice 15: Deploying the Kubernetes Dashboard
K8S-Demo Cluster Practice 16: Deploying Kube-Prometheus
K8S-Demo Cluster Practice 17: Deploying the ownCloud private cloud drive (version 10.6)
K8S-Demo Cluster Practice 18: Building the first base container image in the universe
- Start by using it: hands-on practice is how you get to know k8s, and with enough accumulated experience, understanding follows naturally.
- Share what you have understood: sow your own field of blessings, and reap your own good karma.
- Aim for simplicity and clarity; the context of knowledge, such as versions and dates, is part of the knowledge itself.
- Comments and questions are welcome; I usually reply and update the documents on weekends.
- Jason@vip.qq.com 2021-01-19