Exporting Elasticsearch data: Java export code and the elasticdump tool


Installing elasticdump on CentOS

```shell
yum install elasticdump
```

Once the installation finishes, check the help output:

```shell
[root@i-vvxxxxswtw5ne ~]# elasticdump --help
elasticdump: Import and export tools for elasticsearch
version: 2.2.0
Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]

--input
    Source location (required)
--input-index
    Source index and type
    (default: all, example: index/type)
--output
    Destination location (required)
--output-index
    Destination index and type
    (default: all, example: index/type)
--limit
    How many objects to move in batch per operation
    limit is approximate for file streams
    (default: 100)
--debug
    Display the elasticsearch commands being used
    (default: false)
--type
    What are we exporting?
    (default: data, options: [data, mapping])
--delete
    Delete documents one-by-one from the input as they are
    moved. Will not delete the source index
    (default: false)
--searchBody
    Preform a partial extract based on search results
    (when ES is the input,
    default: '{"query": { "match_all": {} } }')
--sourceOnly
    Output only the json contained within the document _source
    Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
    sourceOnly: {SOURCE}
    (default: false)
--all
    Load/store documents from ALL indexes
    (default: false)
--ignore-errors
    Will continue the read/write loop on write error
    (default: false)
--scrollTime
    Time the nodes will hold the requested search in order.
    (default: 10m)
--maxSockets
    How many simultaneous HTTP requests can we process make?
    (default: 5 [node <= v0.10.x] / Infinity [node >= v0.11.x])
--timeout
    Integer containing the number of milliseconds to wait for
    a request to respond before aborting the request. Passed
    directly to the request library. Mostly used when you don't
    care too much if you lose some data when importing
    but rather have speed.
--offset
    Integer containing the number of rows you wish to skip
    ahead from the input transport. When importing a large
    index, things can go wrong, be it connectivity, crashes,
    someone forgetting to `screen`, etc. This allows you
    to start the dump again from the last known line written
    (as logged by the `offset` in the output). Please be
    advised that since no sorting is specified when the
    dump is initially created, there's no real way to
    guarantee that the skipped rows have already been
    written/parsed. This is more of an option for when
    you want to get most data as possible in the i[output truncated]
```
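Putting the options above together, a typical workflow is to dump the mapping first and then the data. The commands below are a sketch, assuming elasticdump is installed and a cluster is reachable at `http://localhost:9200` with a hypothetical index named `my_index`:

```shell
# Export the index mapping to a local file
elasticdump \
  --input=http://localhost:9200/my_index \
  --output=/data/my_index_mapping.json \
  --type=mapping

# Export the documents themselves (batches of 1000 per scroll request)
elasticdump \
  --input=http://localhost:9200/my_index \
  --output=/data/my_index_data.json \
  --type=data \
  --limit=1000

# Restore into another cluster: import the mapping, then the data
elasticdump \
  --input=/data/my_index_mapping.json \
  --output=http://another-host:9200/my_index \
  --type=mapping
elasticdump \
  --input=/data/my_index_data.json \
  --output=http://another-host:9200/my_index \
  --type=data
```

Importing the mapping before the data matters: if the data is loaded first, Elasticsearch will dynamically infer field types, which may not match the original index.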

To export data from Elasticsearch (ES) in Java, you can proceed as follows:

1. Import the relevant Java libraries, e.g. the Elasticsearch Java client (such as the Elasticsearch High Level REST Client).
2. Create a client connected to the Elasticsearch instance, specifying its host name and port.
3. Build a search request object, specifying the index you want to export and the query conditions.
4. Send the request with the client's search method and obtain the search results.
5. Iterate over the search results and extract the data you want to export.
6. Write the exported data to a target file or another output destination.

Below is a simple example showing how to export ES data to a CSV file:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.SortBuilders;
import org.elasticsearch.search.sort.SortOrder;

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class ESDataExporter {

    public static void main(String[] args) {
        String host = "localhost";   // Elasticsearch host name
        int port = 9200;             // Elasticsearch port
        String index = "your_index"; // index to export

        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost(host, port, "http")))) {

            SearchRequest searchRequest = new SearchRequest(index);
            SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
            searchSourceBuilder.query(QueryBuilders.matchAllQuery());
            // sort by timestamp; note a plain search returns only one page of hits
            // (10 by default) — use size() or the scroll API for a full export
            searchSourceBuilder.sort(SortBuilders.fieldSort("timestamp").order(SortOrder.ASC));
            searchRequest.source(searchSourceBuilder);

            SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);

            // parse the search results and write them to a file
            try (BufferedWriter writer = new BufferedWriter(new FileWriter("output.csv"))) {
                for (SearchHit hit : searchResponse.getHits().getHits()) {
                    String source = hit.getSourceAsString();
                    writer.write(source);
                    writer.newLine();
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```
Note that the code above is only a simple example; you may need to adapt it to your own situation. You can also export to other formats, such as JSON or Excel, as needed.