Resolving the Elasticsearch 10,000-Result Query Limit

Problem Description

In paged-query scenarios, the search fails with an error as soon as the requested records go beyond the first 10,000.

Use Kibana's Dev Tools to request records 10,001 through 10,010 of the alarm index.

The query is:

GET alarm/_search
{
  "from": 10000,
  "size": 10
}
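The same request from Java looks roughly like this. This is a minimal sketch assuming the (now-deprecated) 5.x TransportClient; the host, port, and cluster settings are placeholders to adjust for your environment:

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

import java.net.InetAddress;

public class DeepPagingExample {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust host/port/cluster settings to your cluster.
        TransportClient client = new PreBuiltTransportClient(Settings.EMPTY)
                .addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName("localhost"), 9300));

        // Equivalent of: GET alarm/_search {"from": 10000, "size": 10}
        // With the default index.max_result_window (10000), from + size = 10010
        // exceeds the limit and the request fails.
        SearchResponse response = client.prepareSearch("alarm")
                .setQuery(QueryBuilders.matchAllQuery())
                .setFrom(10000)
                .setSize(10)
                .get();

        System.out.println("hits returned: " + response.getHits().getHits().length);
        client.close();
    }
}
```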

The query fails, and Elasticsearch returns the following error:

{
  "error": {
    "root_cause": [
      {
        "type": "query_phase_execution_exception",
        "reason": "Result window is too large, from + size must be less than or equal to: [10000] but was [10010]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting."
      }
    ],
    "type": "search_phase_execution_exception",
    "reason": "all shards failed",
    "phase": "query",
    "grouped": true,
    "failed_shards": [
      {
        "shard": 0,
        "index": "alarm",
        "node": "hdLJanxRTbmF52eK6-FFgg",
        "reason": {
          "type": "query_phase_execution_exception",
          "reason": "Result window is too large, from + size must be less than or equal to: [10000] but was [10010]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting."
        }
      }
    ]
  },
  "status": 500
}

Root Cause

By default, Elasticsearch only serves the first 10,000 hits of a query: from + size must not exceed the index-level setting index.max_result_window, which defaults to 10000.
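To see whether an index already has an explicit limit configured, the current value can be read from the index settings. A minimal sketch, assuming the same hypothetical 5.x TransportClient; getSetting returns null when the setting has never been changed, meaning the 10000 default applies:

```java
import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse;
import org.elasticsearch.client.transport.TransportClient;

public class CheckResultWindow {

    // Prints the configured index.max_result_window of an index, or notes that the
    // 10000 default is in effect when the setting has never been changed.
    public static void print(TransportClient client, String index) {
        GetSettingsResponse response = client.admin().indices()
                .prepareGetSettings(index)
                .get();
        String window = response.getSetting(index, "index.max_result_window");
        System.out.println("index.max_result_window = "
                + (window != null ? window : "10000 (default, not set explicitly)"));
    }
}
```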

Solution

As the error message hints, the number of accessible results can be adjusted by changing the value of index.max_result_window:

This limit can be set by changing the [index.max_result_window] index level setting.

There are two ways to do this:

Option 1 (after changing the config file, the ES service on every node in the cluster must be restarted)

Edit the config file config/elasticsearch.yml on each node of the Elasticsearch cluster.

Append the following line at the end of the file (the full setting name is index.max_result_window):

index.max_result_window: 200000000

Option 2 (recommended)

Update the setting dynamically through the index settings API. For example, to allow querying up to 200000000 records on the alarm index:

PUT alarm/_settings
{
  "max_result_window": 200000000
}
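The equivalent call from Java, again as a sketch against the same hypothetical TransportClient; index name and value are parameters:

```java
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;

public class RaiseResultWindow {

    // Equivalent of: PUT <index>/_settings {"max_result_window": <value>}
    public static void raise(TransportClient client, String index, int maxResultWindow) {
        client.admin().indices().prepareUpdateSettings(index)
                .setSettings(Settings.builder()
                        .put("index.max_result_window", maxResultWindow)
                        .build())
                .get();
    }
}
```

For the example above the call would be RaiseResultWindow.raise(client, "alarm", 200000000).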

Once the command returns successfully, run the original query again; records 10,001 through 10,010 are now returned without error.
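Raising the window is convenient, but the error message itself recommends the scroll API for walking through large result sets. A minimal sketch, once more assuming the hypothetical 5.x TransportClient; the batch size of 1000 is an arbitrary choice:

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;

public class ScrollAllHits {

    // Iterates over every document in an index in batches via the scroll API,
    // which is not subject to the from + size result window.
    public static void scrollAll(TransportClient client, String index) {
        TimeValue keepAlive = TimeValue.timeValueMinutes(1);

        SearchResponse response = client.prepareSearch(index)
                .setQuery(QueryBuilders.matchAllQuery())
                .setScroll(keepAlive)
                .setSize(1000) // documents per batch
                .get();

        while (response.getHits().getHits().length > 0) {
            for (SearchHit hit : response.getHits().getHits()) {
                System.out.println(hit.getId());
            }
            // Fetch the next batch with the scroll id from the previous response.
            response = client.prepareSearchScroll(response.getScrollId())
                    .setScroll(keepAlive)
                    .get();
        }

        // Release the scroll context on the server.
        client.prepareClearScroll().addScrollId(response.getScrollId()).get();
    }
}
```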

A related note: an Elasticsearch terms aggregation returns only 10 buckets by default, but the `size` parameter of the aggregation changes how many are returned. Below is a Java aggregation-query example that uses `size` to control the number of returned buckets (the ElasticsearchConstants values are placeholders for the connection details):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;
import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.aggregations.metrics.avg.Avg;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

import java.net.InetAddress;
import java.net.UnknownHostException;

public class ElasticsearchAggregationQueryExample {

    public static void main(String[] args) throws UnknownHostException {
        // Create the client (connection constants are placeholders).
        TransportClient client = new PreBuiltTransportClient(ElasticsearchConstants.SETTINGS)
                .addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName(ElasticsearchConstants.HOST),
                        ElasticsearchConstants.PORT));

        // Aggregation query: group by a field, then by month, then average a numeric field.
        SearchResponse response = client.prepareSearch(ElasticsearchConstants.INDEX)
                .setQuery(QueryBuilders.matchAllQuery())
                .addAggregation(
                        AggregationBuilders.terms("group_by_field_name").field("field_name").size(20)
                                .subAggregation(AggregationBuilders.dateHistogram("group_by_date")
                                        .field("date_field_name")
                                        .dateHistogramInterval(DateHistogramInterval.MONTH)
                                        .format("yyyy-MM-dd")
                                        .subAggregation(AggregationBuilders.avg("avg_field_name").field("field_name"))))
                .execute().actionGet();

        // Read the aggregation results.
        Terms groupByField = response.getAggregations().get("group_by_field_name");
        for (Terms.Bucket bucket : groupByField.getBuckets()) {
            String key = bucket.getKeyAsString();
            Histogram groupByDate = bucket.getAggregations().get("group_by_date");
            for (Histogram.Bucket dateBucket : groupByDate.getBuckets()) {
                String date = dateBucket.getKeyAsString();
                Avg avg = dateBucket.getAggregations().get("avg_field_name");
                System.out.println(key + " " + date + " " + avg.getValue());
            }
        }

        // Close the client.
        client.close();
    }
}
```

The aggregation uses `.size(20)` to set the number of returned buckets to 20; adjust the `size` value to your own needs.
