Using Elasticsearch's Hadoop plugin

Elasticsearch has three Hadoop-related plugins in total. The one we use here is the Hadoop HDFS Snapshot/Restore plugin, which backs up Elasticsearch data to HDFS and restores data from HDFS — that is, Elasticsearch's snapshot/restore feature. A snapshot can be restored into a different cluster, even one with a different cluster name and node count, so this also works for data migration.


Installing the plugin

bin/plugin -i elasticsearch/elasticsearch-repository-hdfs/2.0.2

The plugin version must match your Hadoop version exactly, or the later steps will fail. If the automatic installation does not work, you can download the plugin manually from the repository and place it in the Elasticsearch plugins directory.

Plugin repository address


Creating the repository configuration

curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://myhadoop:8020",
    "path": "/es",
    "conf_location": "hdfs-site.xml"
  }
}'

{"acknowledged": true}   # this response means the repository was created successfully
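You can also read back the settings of a single repository by name to confirm they were stored as intended. A sketch, using the my_backup repository registered above:

```shell
# Fetch the configuration of one specific snapshot repository
curl -XGET 'http://localhost:9200/_snapshot/my_backup'
```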

The following command lists all registered snapshot repositories:

curl http://localhost:9200/_snapshot/_all

Backing up data

curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"
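By default a snapshot covers all open indices. The snapshot API also accepts a request body that restricts which indices are included; a sketch (the index names here are hypothetical):

```shell
# Snapshot only selected indices; skip missing ones and omit cluster state
curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_2?wait_for_completion=true" -d '{
  "indices": "logs-2015,users",
  "ignore_unavailable": true,
  "include_global_state": false
}'
```

Setting include_global_state to false keeps cluster-wide state (such as templates) out of the snapshot, which is usually what you want when the goal is migrating a few indices.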


Once the backup succeeds, we can see the snapshot files in the HDFS browser at http://myhadoop:50070/explorer.html#/es
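Besides browsing HDFS, you can query the snapshot's state through the API itself:

```shell
# Show metadata for snapshot_1 (state, indices covered, start/end time)
curl -XGET 'localhost:9200/_snapshot/my_backup/snapshot_1'
```

A state of SUCCESS in the response means the snapshot completed and is safe to restore from.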

Restoring data

curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore?wait_for_completion=true"
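An index being restored must not exist as an open index in the target cluster: either close it first (POST /index_name/_close) or rename it on the way in. For migration scenarios, the restore API lets you select indices and rewrite their names; a sketch with a hypothetical logs-2015 index:

```shell
# Restore one index from the snapshot under a new name,
# leaving any existing logs-2015 index in the target cluster untouched
curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore?wait_for_completion=true" -d '{
  "indices": "logs-2015",
  "rename_pattern": "logs-(.+)",
  "rename_replacement": "restored_logs_$1"
}'
```

Here rename_pattern is a regular expression matched against each restored index name, and rename_replacement substitutes the captured groups, so logs-2015 comes back as restored_logs_2015.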