Elasticsearch Cross-Cluster Data Migration

This article walks through migrating Elasticsearch data across clusters on CentOS: setting up an NFS server as shared storage, backing up the old cluster, configuring the new cluster, and restoring the data, so that the old cluster's data comes up seamlessly on the new one. It also highlights the importance of permission configuration and system parameter tuning.

1. System Environment

Elasticsearch cluster version: 6.5.4

Operating system: CentOS 6.7

2. An NFS Server for Shared Storage

2.1. Install the NFS server packages

yum install nfs-utils rpcbind -y

2.2. Configure the NFS server

Add the following export to /etc/exports and create the directory:

/data/backup_newcluster *(rw,no_root_squash,sync)

mkdir -p /data/backup_newcluster

2.3. Reload the configuration and restart the services

systemctl enable rpcbind.service
systemctl enable nfs-server.service
systemctl restart rpcbind.service
systemctl restart nfs-server.service

3. Operations on the Old Cluster

3.1. Mount the NFS backup directory on the old-cluster nodes

Every node in the old cluster mounts the shared directory /data/backup_newcluster exported by the NFS server. Here `es` is the user that runs the Elasticsearch cluster.

 #mkdir -p /data/es_backup/
 #mount -t nfs 10.6.118.110:/data/backup_newcluster /data/es_backup/  -o proto=tcp -o nolock
 #chown  es:es /data/es_backup/  -R
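To verify on each node that the mount actually took effect, /proc/mounts can be checked programmatically; a minimal sketch (the mount point /data/es_backup is the one used above):

```python
def find_mount(mounts_text, mount_point):
    """Return (device, fstype) for mount_point, or None if it is not mounted.

    mounts_text is the content of /proc/mounts: each line reads
    "device mountpoint fstype options dump pass".
    """
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[1] == mount_point:
            return fields[0], fields[2]
    return None

if __name__ == "__main__":
    with open("/proc/mounts") as f:
        entry = find_mount(f.read(), "/data/es_backup")
    if entry and entry[1].startswith("nfs"):
        print("OK: /data/es_backup mounted from", entry[0])
    else:
        print("WARNING: /data/es_backup is not an NFS mount")
```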

3.2. Edit the elasticsearch.yml file

Add the setting path.repo: ["/data/es_backup"], and grant the es user owner and group permissions on the backup directory:

cluster.name: es-cluster
node.name: wh-upos--19
node.master: True
node.data: True
path.data: /home/es/esdata
path.logs: /home/es/eslog
# Newly added setting: snapshots and backup data are stored under this directory
path.repo: ["/data/es_backup"]

network.host: 10.6.209.20
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.6.209.19", "10.6.209.21"]
transport.tcp.port: 9300
discovery.zen.minimum_master_nodes: 2

3.3. Restart the Elasticsearch service

3.4. Register the snapshot repository

Substitute your actual IP and port.

`backup` is the repository name; all subsequent backups are taken against this repository.

`location: /data/es_backup` is the backup data directory configured via path.repo in elasticsearch.yml above.

curl -XPUT 'http://10.6.209.19:9200/_snapshot/backup' -H "Content-Type: application/json" -d '{
  "type": "fs",
  "settings": {
    "location": "/data/es_backup",
    "max_restore_bytes_per_sec": "10mb",
    "max_snapshot_bytes_per_sec": "10mb",
    "chunk_size": "10mb",
    "compress": true
  }
}'
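The same repository registration can also be issued from Python's standard library, which is handy when scripting the migration; a sketch assuming the node address and paths used above:

```python
import json
import urllib.request

def repo_body(location):
    """Build the fs snapshot-repository settings matching the curl call above."""
    return {
        "type": "fs",
        "settings": {
            "location": location,
            "max_restore_bytes_per_sec": "10mb",
            "max_snapshot_bytes_per_sec": "10mb",
            "chunk_size": "10mb",
            "compress": True,
        },
    }

def register_repo(base_url, repo_name, location):
    """PUT /_snapshot/<repo_name> with the fs repository settings."""
    req = urllib.request.Request(
        f"{base_url}/_snapshot/{repo_name}",
        data=json.dumps(repo_body(location)).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# register_repo("http://10.6.209.19:9200", "backup", "/data/es_backup")
```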

3.5. Run the backup

3.5.1. Back up all indices

`backup` is the repository name registered above (loosely analogous to a MySQL database name).

`backup/back_all_index_20201229` is the name of the snapshot inside that repository (loosely analogous to a MySQL table name).

curl -XPUT  http://10.6.209.19:9200/_snapshot/backup/back_all_index_20201229?wait_for_completion=true -H "Content-Type: application/json"

Output like the following indicates the backup succeeded:

{"snapshot":{"snapshot":"back_all_index_20201229","uuid":"WrRdFhQmTJCrcyj4rLflYQ","version_id":6050499,"version":"6.5.4","indices":["device_log_2020-02","uih_chatbot_session","service_log_2020-05","device_log_2020-10","service_log_2020-02","device_log_1969-12",".kibana_5","device_log_2020-04","device_log_0201-07","device_log_2020-08","es_version","uih_uclass","device_log_2020-01",".kibana_1","removedebris","solar_failed_parsed_log","service_log_2020-03","device_log_2019-08","service_log_2020-12","device_log_1970-01","device_log_2020-06","device_log_2014-12","device_log_2019-05","service_log_1969-12","service_log_2020-01","device_log_2019-11",".monitoring-es-6-2020.12.23","device_log_1970-04","device_log_2019-12","clouduploadcheck","service_log_2021-01","device_log_2019-07",".monitoring-es-6-2020.12.22","service_log_2019-10",".monitoring-kibana-6-2020.12.24",".monitoring-kibana-6-2020.12.23","service_log_2018-08","cloudget","service_log_2019-12",".monitoring-kibana-6-2020.12.25","service_log_2020-06","device_log_2019-04","device_log_2020-07","service_log_2018-12","service_log_2010-10",".monitoring-es-6-2020.12.26","device_log_2018-11",".monitoring-es-6-2020.12.27","dicom_parse_error",".kibana_2","device_log_2020-09",".tasks","service_log_1970-01","service_log_2020-11","device_log_2019-01","service_log_2020-10","device_log_2020-05","device_log_2019-06",".monitoring-es-6-2020.12.25","uih_chatbot_business","uih_chatbot_history","clouduploadinfo",".kibana_3","report",".monitoring-es-6-2020.12.29","service_log_2019-07","service_log_2019-09","device_log_2020-12","device_log_2020-11",".kibana_4","cloudupload",".monitoring-es-6-2020.12.24","service_log_2020-09","service_log_2020-08","service_log_2019-11",".monitoring-es-6-2020.12.28","service_log_2019-08","device_log_2020-03","service_log_2013-12","service_log_2020-04","device_log_2010-10","device_log_2021-01","service_log_2020-07","uih_uclass_video"],"include_global_state":true,"state":"SUCCESS","start_time":"2020-12-29T06:09:45.301Z","start_time_in_millis":1609222185301,"end_time":"2020-12-29T07:21:46.469Z","end_time_in_millis":1609226506469,"duration_in_millis":4321168,"failures":[],"shards":{"total":352,"failed":0,"successful":352}}}
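The success criteria in that response can also be checked programmatically; a small sketch assuming the 6.x response shape shown above:

```python
def snapshot_succeeded(response):
    """True if a create-snapshot response reports state SUCCESS and no failed shards."""
    snap = response.get("snapshot", {})
    shards = snap.get("shards", {})
    return (
        snap.get("state") == "SUCCESS"
        and shards.get("failed") == 0
        and shards.get("total", 0) == shards.get("successful", -1)
    )
```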


3.5.2. Back up a single index, or several indices

List as many indices as you need to back up, comma-separated, in the `indices` field:

curl -XPUT -uuser:password  http://10.6.209.19:9200/_snapshot/backup/bak_index_20201229?wait_for_completion=true -H "Content-Type: application/json"  -d '
{
  "indices": "index1,index2,index3,index4"
}'
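Building the comma-separated indices string from a list avoids typos when many indices are involved; a minimal sketch (the index names are placeholders):

```python
import json

def partial_snapshot_body(indices):
    """Body for PUT /_snapshot/<repo>/<snapshot> restricted to the given indices."""
    if not indices:
        raise ValueError("at least one index is required")
    return json.dumps({"indices": ",".join(indices)})

# pass the result as the -d payload of the curl call above
```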

4. Deploy the New Cluster

4.1. Three nodes

Use your actual specs; in this test environment the nodes run CentOS 7.6 with Elasticsearch cluster version 6.5.4:

Node 1 (U±POC es cross-cluster migration): 10.6.118.215, 8 cores / 16 GB RAM / 200 GB disk
Node 2 (U±POC es cross-cluster migration): 10.6.118.232, 8 cores / 16 GB RAM / 200 GB disk
Node 3 (U±POC es cross-cluster migration): 10.6.118.225, 8 cores / 16 GB RAM / 200 GB disk

4.2. Elasticsearch configuration

Deployment directory: /data/elasticsearch-6.5.4; data directory: /data/es/data; log directory: /data/es/logs; backup directory: /data/es_backup (the current test-environment layout; adjust to your environment).

4.2.1. elasticsearch.yml on 10.6.118.215

cluster.name: es-new-cluster
node.name: node-118-215
node.master: True
node.data: True
path.data: /data/es/data
path.logs: /data/es/logs 
path.repo: ["/data/es_backup"]

network.host: 10.6.118.215
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.6.118.232", "10.6.118.225"]
transport.tcp.port: 9300
discovery.zen.minimum_master_nodes: 2

4.2.2. elasticsearch.yml on 10.6.118.225

cluster.name: es-new-cluster
node.name: node-118-225
node.master: True
node.data: True
path.data: /data/es/data
path.logs: /data/es/logs 
path.repo: ["/data/es_backup"]

network.host: 10.6.118.225
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.6.118.215", "10.6.118.232"]
transport.tcp.port: 9300
discovery.zen.minimum_master_nodes: 2

4.2.3. elasticsearch.yml on 10.6.118.232

cluster.name: es-new-cluster
node.name: node-118-232
node.master: True
node.data: True
path.data: /data/es/data
path.logs: /data/es/logs 
path.repo: ["/data/es_backup"]

network.host: 10.6.118.232
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.6.118.215", "10.6.118.225"]
transport.tcp.port: 9300
discovery.zen.minimum_master_nodes: 2

5. Mount the NFS Export on All Three New Nodes

This is the directory holding the old cluster's backup snapshots, i.e. mount the NFS server's /data/backup_newcluster export.

Run on all three nodes:

#yum install nfs-utils -y
#mkdir -p /data/es_backup
#mount -t nfs 10.6.118.110:/data/backup_newcluster /data/es_backup/  -o proto=tcp -o nolock

6. Fix the Ownership of All Elasticsearch Data Directories (critically important: get this wrong and you will chase baffling errors)

6.1. First check the uid and username of the old cluster's backup data; both must be kept identical on the new cluster.

On the old cluster (10.6.209.19):

The cluster runs as user es:
[es@wh-upos--19 config]$ id es
uid=1001(es) gid=1002(es) groups=1002(es)

On the new cluster (run on all three nodes):

First confirm the uid and gid are not already taken on the new cluster (check /etc/passwd): the snapshot data is owned by user es, and the gid and uid must match the old cluster exactly for the restore to work seamlessly.

# gid is 1002
groupadd -g 1002 es
# uid is 1001
useradd -u 1001 -g es es
# create all the data directories
mkdir -p /data/es_backup
mkdir -p /data/es/logs
mkdir -p /data/es/data
# fix the owner and group on every data directory
chown es:es /data/elasticsearch-6.5.4 -R && chown es:es /data/es_backup -R  && chown es:es /data/es -R
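A quick sanity check that every directory really ended up owned by es with the uid/gid carried over from the old cluster (1001/1002 above); a minimal sketch:

```python
import grp
import os
import pwd

def owner_of(path):
    """Return (user, uid, group, gid) owning path."""
    st = os.stat(path)
    return (pwd.getpwuid(st.st_uid).pw_name, st.st_uid,
            grp.getgrgid(st.st_gid).gr_name, st.st_gid)

if __name__ == "__main__":
    for d in ("/data/es_backup", "/data/es/data", "/data/es/logs"):
        if os.path.isdir(d):
            user, uid, group, gid = owner_of(d)
            ok = (user, uid, gid) == ("es", 1001, 1002)
            print(f"{d}: {user}({uid}):{group}({gid})", "OK" if ok else "MISMATCH")
```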

7. Configure the Java Environment

Adjust the path and version to your installation; if a Java environment already exists, skip this step.

# /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_77

export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
source /etc/profile

8. Tune System Parameters

8.1. /etc/security/limits.conf

*               soft    nofile          65536
*               hard    nofile          65536
*               soft    nproc           4096
*               hard    nproc           4096

8.2. /etc/sysctl.conf

vm.max_map_count=655360
vm.overcommit_memory=1

Apply the changes:

sysctl -p
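After running sysctl -p, the live values can be read back from /proc/sys to confirm they took effect; a small sketch:

```python
def read_sysctl(name):
    """Read a live sysctl value from /proc/sys (e.g. vm.max_map_count)."""
    path = "/proc/sys/" + name.replace(".", "/")
    with open(path) as f:
        return f.read().strip()

if __name__ == "__main__":
    # Elasticsearch refuses to start if vm.max_map_count is too low
    print("vm.max_map_count =", read_sysctl("vm.max_map_count"))
```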

9. Start the Elasticsearch Cluster

su - es
cd /data/elasticsearch-6.5.4/bin
./elasticsearch -d

Then check that the whole cluster is healthy: curl http://10.6.118.225:9200/_cat/health?v

10. Register the Snapshot Repository on the New Cluster

Create it as root; the repository definition is exactly the same as on the old cluster.

curl -XPUT 'http://10.6.118.225:9200/_snapshot/backup' -H "Content-Type: application/json" -d '{
  "type": "fs",
  "settings": {
    "location": "/data/es_backup",
    "max_restore_bytes_per_sec": "10mb",
    "max_snapshot_bytes_per_sec": "10mb",
    "chunk_size": "10mb",
    "compress": true
  }
}'

11. Restore the Data

`backup` is the repository pointing at the old cluster's snapshots; `back_all_index_20201229` is the snapshot created earlier.

# Restore all indices
curl -XPOST http://10.6.118.225:9200/_snapshot/backup/back_all_index_20201229/_restore
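The cluster typically goes yellow or red while shards are restored, so one way to watch progress is to poll _cat/health until the status returns to green. A sketch of parsing that output (the sample values below are illustrative):

```python
def cluster_status(cat_health_text):
    """Extract the status column from `GET /_cat/health?v` output."""
    lines = [l for l in cat_health_text.strip().splitlines() if l.strip()]
    header = lines[0].split()
    values = lines[1].split()
    return dict(zip(header, values)).get("status")
```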

12. Check the Cluster Status and the Data Restore Progress

curl http://10.6.118.225:9200/_cat/health?v

13. Other Useful Commands

# Delete an index
curl -XDELETE http://10.6.118.225:9200/service_log_2020-33
# List all indices
curl 'http://10.6.118.225:9200/_cat/indices?v'
# Show the snapshot repository
curl -XGET 'http://10.6.118.225:9200/_snapshot/backup?pretty'
# Delete a snapshot
curl -XDELETE http://10.6.118.225:9200/_snapshot/backup/back_all_index_20201229
# Check the backup status of a snapshot
curl -XGET http://10.6.118.225:9200/_snapshot/backup/back_all_index_20201229/_status?pretty
# Close an index
curl -XPOST http://10.25.177.47:9200/.kibana/_close