etcd Database Backup and Restore

1 Backing up etcd data (v2)

etcdctl backup --data-dir /var/lib/etcd/default.etcd --backup-dir /root/etcdback

2 etcd backup script

#!/bin/bash
# Daily etcd v2 backup: dump the data directory, archive it, and prune copies older than 7 days.
date_time=$(date +%Y%m%d)
etcdctl backup --data-dir /var/lib/etcd/default.etcd --backup-dir /root/etcd71-${date_time}.etcd
tar cvzf /root/etcd71-${date_time}.tar.gz -C /root etcd71-${date_time}.etcd

find /root/*.etcd -ctime +7 -exec rm -r {} \;
find /root/*.gz -ctime +7 -exec rm -r {} \;
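To run the script automatically, a crontab entry along these lines could be used (the script path and schedule are illustrative assumptions):

# Hypothetical crontab entry: run the backup script daily at 02:00
0 2 * * * /bin/bash /root/etcd_backup.sh >> /var/log/etcd_backup.log 2>&1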

3 v3 backup (snapshot)

# mkdir -p /var/lib/etcd_backup/
# ETCDCTL_API=3 etcdctl snapshot save /var/lib/etcd_backup/etcd_$(date "+%Y%m%d%H%M%S").db
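If the cluster serves clients over TLS, as the restore section below assumes, snapshot save also needs endpoint and certificate flags. A sketch, assuming the local client endpoint https://127.0.0.1:2379 and the certificate paths used later in this article:

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem \
    snapshot save /var/lib/etcd_backup/etcd_$(date "+%Y%m%d%H%M%S").db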

4 Restoring etcd data (cluster unavailable, disaster recovery)

This section describes how to quickly restore an etcd cluster when the entire cluster is unavailable.

1. First, stop the kube-apiserver service on the master nodes:

systemctl stop kube-apiserver

Make sure kube-apiserver has actually stopped; the check below should return 0.
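By analogy with the etcd check in the next step, a check along these lines should print 0 once kube-apiserver has exited (a sketch):

# ps -ef | grep kube-apiserver | grep -v grep | wc -l
0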

2. Stop the etcd service on every node in the cluster

systemctl stop etcd
# ps -ef | grep etcd | grep -v grep | wc -l
0
Make sure etcd has stopped successfully.

3. Move aside the data directory on every etcd instance

mv /var/lib/etcd/data.etcd /var/lib/etcd/data.etcd_bak

Then restore the data on each node. First copy the backup file to every etcd node, assuming the backup is stored at /var/lib/etcd_backup/backup_20180107172459.db:

scp /var/lib/etcd_backup/backup_20180107172459.db root@etcd01:/var/lib/etcd_backup/
scp /var/lib/etcd_backup/backup_20180107172459.db root@etcd02:/var/lib/etcd_backup/
scp /var/lib/etcd_backup/backup_20180107172459.db root@etcd03:/var/lib/etcd_backup/
scp /var/lib/etcd_backup/backup_20180107172459.db root@etcd04:/var/lib/etcd_backup/
scp /var/lib/etcd_backup/backup_20180107172459.db root@etcd05:/var/lib/etcd_backup/

Run the restore command on every etcd instance that needs to be restored:

ETCDCTL_API=3 etcdctl snapshot restore <backup file> \
    --name=<ETCD_NAME> \
    --data-dir=<data directory> \
    --initial-cluster=<ETCD_CLUSTER> \
    --initial-cluster-token=<ETCD_INITIAL_CLUSTER_TOKEN> \
    --initial-advertise-peer-urls=<this member's peer URL>
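As a filled-in sketch for the member named etcd01, reusing the ip1..ip5 placeholders that appear later in this article; the data directory, cluster token, and peer URL here are illustrative and must match the values in that node's etcd.conf:

ETCDCTL_API=3 etcdctl snapshot restore /var/lib/etcd_backup/backup_20180107172459.db \
    --name=etcd01 \
    --data-dir=/var/lib/etcd/data.etcd \
    --initial-cluster="etcd01=http://ip1:2380,etcd02=http://ip2:2380,etcd03=http://ip3:2380,etcd04=http://ip4:2380,etcd05=http://ip5:2380" \
    --initial-cluster-token=etcd-cluster-token \
    --initial-advertise-peer-urls=http://ip1:2380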

4. Start all etcd instances in the cluster at the same time

systemctl start etcd

5. Check the etcd cluster members and health status

etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem member list
etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
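When the restore has succeeded, cluster-health reports every member as healthy; illustrative output only (member IDs and addresses below are made up):

member 8e9e05c52164694d is healthy: got healthy result from http://ip1:2379
cluster is healthy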

6. Start the kube-apiserver service on all master nodes:

# systemctl start kube-apiserver
# systemctl status kube-apiserver

Removing an etcd node

For example, in one case we ran into, machines backed by Ceph had to be replaced with machines using local SATA disks, so the etcd instances deployed on Ceph first had to be removed from the cluster and new etcd instances added afterwards.

1. View the etcd cluster member information

etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem member list

2. Remove the specific etcd instance by its member ID

etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem member remove <member_id>

3. Stop the etcd instance that was removed from the cluster

# systemctl stop etcd
# yum remove -y etcd-xxxx

4. Verify that the etcd instance has been removed from the cluster

etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem member list

Adding a new etcd node

Run the following command on an existing etcd node to add the new node to the cluster; member add prints the settings the new node should start with:

# etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem member add <etcd_name> http://<etcd_node_address>:2380
ETCD_NAME=etcd01
ETCD_INITIAL_CLUSTER="etcd01=http://ip1:2380,etcd02=http://ip2:2380,etcd03=http://ip3:2380,etcd04=http://ip4:2380,etcd05=http://ip5:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

Note:

  • etcd_name: the ETCD_NAME value from the new node's etcd.conf configuration file
  • etcd_node_address: the ETCD_LISTEN_PEER_URLS value from the new node's etcd.conf configuration file

The new etcd node is now registered with the existing cluster. Edit the new node's configuration file /etc/etcd/etcd.conf, set ETCD_INITIAL_CLUSTER to the value printed above, and add the other related settings.
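A minimal sketch of what the new node's /etc/etcd/etcd.conf might contain, assuming the new member is named etcd06 on host ip6 (both names are illustrative):

ETCD_NAME=etcd06
ETCD_DATA_DIR="/var/lib/etcd/data.etcd"
ETCD_LISTEN_PEER_URLS="http://ip6:2380"
ETCD_LISTEN_CLIENT_URLS="http://ip6:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://ip6:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://ip6:2379"
ETCD_INITIAL_CLUSTER="etcd01=http://ip1:2380,etcd02=http://ip2:2380,etcd03=http://ip3:2380,etcd04=http://ip4:2380,etcd05=http://ip5:2380,etcd06=http://ip6:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"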

Start the new etcd node:

systemctl start etcd

Then add <new_etcd_node_name>=http://<new_etcd_node_address>:2380 to the ETCD_INITIAL_CLUSTER setting on the existing etcd nodes, and restart all etcd instances at the same time, as sketched below.
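On each existing node this amounts to editing /etc/etcd/etcd.conf and restarting etcd; the sed command below is only one possible way to do it and reuses the illustrative etcd06/ip6 names from above:

# Run on every existing etcd node: extend the cluster list, then restart etcd
sed -i 's#^ETCD_INITIAL_CLUSTER=.*#ETCD_INITIAL_CLUSTER="etcd01=http://ip1:2380,etcd02=http://ip2:2380,etcd03=http://ip3:2380,etcd04=http://ip4:2380,etcd05=http://ip5:2380,etcd06=http://ip6:2380"#' /etc/etcd/etcd.conf
systemctl restart etcd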

Updating an etcd node

ETCDCTL_API=3 etcdctl member update <member-ID> --peer-urls=http://<etcd_node_address_ip>:2380
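For example, with a member ID taken from member list (the ID and address below are illustrative):

ETCDCTL_API=3 etcdctl member update 8e9e05c52164694d --peer-urls=http://ip6:2380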

 

Reposted from: https://my.oschina.net/54188zz/blog/3054428
