K8S Cluster Maintenance: Backing Up and Restoring etcd Cluster Data

一、Background

k8s version: v1.16.6
etcd version: 3.4.3
k8s and etcd are co-located on the same three servers.
Server addresses:
172.16.1.11
172.16.1.12
172.16.1.13

二、Backing Up etcd Data

A snapshot taken from any single node of the etcd cluster is sufficient.
①、Without certificates:

export ETCD_ENDPOINTS="https://172.16.1.11:2379,https://172.16.1.12:2379,https://172.16.1.13:2379"
etcdctl --endpoints=${ETCD_ENDPOINTS} snapshot save "/home/snapshot.db"

`snapshot save` takes the path the backup file is written to.
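Before relying on a snapshot, it is worth confirming that the file is a valid, complete etcd snapshot. `snapshot status` reads the file locally, so no endpoints or certificates are needed:

```shell
# Inspect the snapshot header: hash, revision, total key count, and size.
# A truncated or corrupt file fails this check.
ETCDCTL_API=3 etcdctl snapshot status /home/snapshot.db -w table
```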
②、With certificates

[root@k8s01 autoshell]# cat /mnt/autoshell/etcd_backup.sh
#!/bin/bash
# Timestamp, used to distinguish different backups
timestamp=$(date +%Y%m%d-%H%M%S)
# Directory the backups are written to
#back_dir="/data/kubernetes/etcd/datas_bak"
back_dir="/mnt/data_backup/etcd_backup"
# etcd cluster endpoint list
#endpoints="https://172.16.1.11:2379,https://172.16.1.12:2379,https://172.16.1.13:2379"
endpoints="https://172.16.1.11:2379"
# Path to the etcd certificate
cert_file="/etc/etcd/cert/etcd.pem"
# Path to the etcd certificate key
key_file="/etc/etcd/cert/etcd-key.pem"
# Path to the CA certificate
cacert_file="/etc/kubernetes/cert/ca.pem"

mkdir -p "$back_dir"
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
--endpoints=$endpoints \
--cert=$cert_file \
--key=$key_file \
--cacert=$cacert_file \
snapshot save "$back_dir/snapshot_$timestamp.db"
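Since the script already timestamps each snapshot, it lends itself to being scheduled. A crontab entry along these lines could run it nightly and prune old files (the 02:00 schedule and 7-day retention are illustrative choices, not from the original setup):

```shell
# Run the backup every day at 02:00, then delete snapshots older than 7 days.
0 2 * * * /mnt/autoshell/etcd_backup.sh && find /mnt/data_backup/etcd_backup -name 'snapshot_*.db' -mtime +7 -delete
```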

三、Restoring etcd Cluster Data

①、Restore notes
1. Copy the backup file "snapshot_20210509-144401.db" to the other two servers.
2. Restore the backup data to the path given by --data-dir="/data/k8s/etcd_new/etcd.restore". Note: the data directory in the etcd startup configuration must be changed to the same directory; edit the file /etc/systemd/system/etcd.service.
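Step 1 above (copying the snapshot to the other two servers) could be scripted roughly as follows. The source path /usr/local/src matches the restore commands below; SSH access from k8s01 to the other nodes is assumed:

```shell
# Copy the snapshot from k8s01 to the other two etcd nodes.
for host in 172.16.1.12 172.16.1.13; do
  scp /usr/local/src/snapshot_20210509-144401.db root@${host}:/usr/local/src/
done
```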

②、Restoring data on node 172.16.1.11
On each of the subsequent servers, change --name "hostname" to that server's hostname, and change --initial-advertise-peer-urls "https://IP:2380" to that server's IP address.

export ETCDCTL_API=3
etcdctl snapshot restore /usr/local/src/snapshot_20210509-144401.db \
--data-dir="/data/k8s/etcd_new/etcd.restore" \
--name k8s01 \
--initial-cluster "k8s01=https://172.16.1.11:2380,k8s02=https://172.16.1.12:2380,k8s03=https://172.16.1.13:2380" \
--initial-cluster-token etcd-cluster \
--initial-advertise-peer-urls "https://172.16.1.11:2380"

③、Restoring data on node 172.16.1.12

export ETCDCTL_API=3
etcdctl snapshot restore /usr/local/src/snapshot_20210509-144401.db \
--data-dir="/data/k8s/etcd_new/etcd.restore" \
--name k8s02 \
--initial-cluster "k8s01=https://172.16.1.11:2380,k8s02=https://172.16.1.12:2380,k8s03=https://172.16.1.13:2380" \
--initial-cluster-token etcd-cluster \
--initial-advertise-peer-urls "https://172.16.1.12:2380"

④、Restoring data on node 172.16.1.13

export ETCDCTL_API=3
etcdctl snapshot restore /usr/local/src/snapshot_20210509-144401.db \
--data-dir="/data/k8s/etcd_new/etcd.restore" \
--name k8s03 \
--initial-cluster "k8s01=https://172.16.1.11:2380,k8s02=https://172.16.1.12:2380,k8s03=https://172.16.1.13:2380" \
--initial-cluster-token etcd-cluster \
--initial-advertise-peer-urls "https://172.16.1.13:2380"
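After running the restore on each node, a quick sanity check is that the new data directory was actually populated (paths as configured above):

```shell
# The restore should have created a member/ directory holding the snap and wal data.
ls -l /data/k8s/etcd_new/etcd.restore/member
du -sh /data/k8s/etcd_new/etcd.restore
```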

⑤、Stop the etcd service on each node

[root@k8s01 autoshell]# systemctl stop etcd

⑥、Modify the etcd configuration on each node; in this setup the settings live in the startup unit file /etc/systemd/system/etcd.service.
Three settings change:
WorkingDirectory=
--data-dir=
--wal-dir=
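If you prefer to patch the unit file in place rather than edit it by hand (useful when repeating this on all three nodes), something like the following sed commands could apply the three changes. This is a sketch: it assumes GNU sed and that each flag appears exactly once in the file; back up the file first.

```shell
# Keep a backup of the unit file, then point the three settings at the restored data.
cp /etc/systemd/system/etcd.service /etc/systemd/system/etcd.service.bak
sed -i \
  -e 's|^WorkingDirectory=.*|WorkingDirectory=/data/k8s/etcd_new|' \
  -e 's|--data-dir=[^ ]*|--data-dir=/data/k8s/etcd_new/etcd.restore|' \
  -e 's|--wal-dir=[^ ]*|--wal-dir=/data/k8s/etcd_new/wal|' \
  /etc/systemd/system/etcd.service
```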

[root@k8s01 autoshell]# cat /etc/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/data/k8s/etcd_new
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/data/k8s/etcd_new/etcd.restore \
  --wal-dir=/data/k8s/etcd_new/wal \
  --name=k8s01 \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://172.16.1.11:2380 \
  --initial-advertise-peer-urls=https://172.16.1.11:2380 \
  --listen-client-urls=https://172.16.1.11:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://172.16.1.11:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=k8s01=https://172.16.1.11:2380,k8s02=https://172.16.1.12:2380,k8s03=https://172.16.1.13:2380 \
  --initial-cluster-state=new \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
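Because the unit file was edited, systemd must reload its unit definitions before the next start, otherwise the old configuration is still in effect. On each node:

```shell
systemctl daemon-reload
```

Note also that on k8s02 and k8s03 the --name and the listen/advertise IP addresses in this unit file must match that node, just as in the restore step.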

⑦、Start the etcd service on each node (demonstrated here on the k8s01 node)

[root@k8s01 autoshell]# systemctl start etcd

Check the cluster leader:

export ETCD_ENDPOINTS="https://172.16.1.11:2379,https://172.16.1.12:2379,https://172.16.1.13:2379"
/opt/k8s/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint status 
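Besides the status table, `endpoint health` gives a quick pass/fail view of each member (same endpoints and certificates as above):

```shell
/opt/k8s/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint health
```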

四、Other K8S Cluster Services

Restart the services on the K8S master nodes:

systemctl restart kubelet
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
systemctl restart kube-proxy
systemctl restart kube-nginx
systemctl restart containerd.service

Restart the services on the K8S worker nodes:

systemctl restart kubelet
systemctl restart kube-nginx
systemctl restart kube-proxy
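Once all services are back, it is worth confirming that the restored data is what the control plane actually sees. A couple of read-only checks:

```shell
# All nodes should return to Ready once the control plane reconnects to etcd.
kubectl get nodes
# Spot-check that workloads from before the backup are present again.
kubectl get pods --all-namespaces
```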

Reference: https://www.cnblogs.com/golinux/p/12576331.html
