Grouping and Aggregating by Time in Elasticsearch with Java

Core code

SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
DateHistogramAggregationBuilder aggregationBuilder = AggregationBuilders
        // Aggregation name (used to fetch the buckets from the response)
        .dateHistogram("dateDownStreamRequestTime")
        // Field to bucket on
        .field("downStreamRequestTime")
        // One bucket per calendar day
        .calendarInterval(DateHistogramInterval.DAY)
        // Shift bucket boundaries by -8h so days align with UTC+8 local midnight
        .offset("-8h")
        // Emit empty buckets for days with no matching documents
        .minDocCount(0)
        // Sort buckets by key (date) ascending
        .order(BucketOrder.aggregation("_key", true));
sourceBuilder.aggregation(aggregationBuilder);
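The `.offset("-8h")` call moves each day bucket's boundary eight hours earlier in UTC, so buckets start at local midnight in UTC+8 (China Standard Time) rather than midnight UTC; in recent client versions, `.timeZone(ZoneId.of("+08:00"))` expresses the same intent more directly. The boundary arithmetic can be checked with plain `java.time` code (a standalone sketch, not part of the original example):

```java
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class OffsetDemo {
    public static void main(String[] args) {
        // 2023-05-01 02:00 in UTC+8 is 2023-04-30 18:00 UTC.
        ZonedDateTime sample = ZonedDateTime.of(2023, 5, 1, 2, 0, 0, 0, ZoneOffset.ofHours(8));
        long millis = sample.toInstant().toEpochMilli();

        long day = Duration.ofDays(1).toMillis();
        long offset = Duration.ofHours(-8).toMillis(); // offset("-8h")

        // With an offset o, day buckets start at k*day + o; the bucket containing
        // timestamp t starts at floor((t - o) / day) * day + o.
        long bucketStart = Math.floorDiv(millis - offset, day) * day + offset;

        // The bucket boundary is local midnight in UTC+8, not midnight UTC.
        System.out.println(Instant.ofEpochMilli(bucketStart).atZone(ZoneOffset.ofHours(8)));
        // → 2023-05-01T00:00+08:00
    }
}
```

Without the offset, the sample timestamp (18:00 UTC on April 30) would land in the April 30 bucket even though it is already May 1 in local time.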

Aggregation query example

@Override
public RequestResult getDailyInvokeCount(InterfaceDTO interfaceDTO) throws Exception {
    Date queryRequestLogStartTime = DateUtil.toDate(interfaceDTO.getQueryRequestLogStartTime(), "yyyy-MM-dd HH:mm:ss");
    Date queryRequestLogEndTime = DateUtil.toDate(interfaceDTO.getQueryRequestLogEndTime(), "yyyy-MM-dd HH:mm:ss");
    List<Date> betweenDates = DateUtil.getBetweenDates(queryRequestLogStartTime, queryRequestLogEndTime);

    BoolQueryBuilder queryBuilder = QueryBuilders.boolQuery();
    if (interfaceDTO.getOrderId() != null) {
        queryBuilder.must(QueryBuilders.termQuery("orderId", interfaceDTO.getOrderId()));
    }
    if (StringUtil.isNotEmpty(interfaceDTO.getPartnerId())) {
        queryBuilder.must(QueryBuilders.termQuery("partnerId", interfaceDTO.getPartnerId().toLowerCase()));
    }
    if (interfaceDTO.getServiceId() != null) {
        queryBuilder.must(QueryBuilders.termQuery("serviceId", interfaceDTO.getServiceId()));
    }
    if (StringUtil.isNotEmpty(interfaceDTO.getAppId())) {
        queryBuilder.must(QueryBuilders.termQuery("appId", interfaceDTO.getAppId().toLowerCase()));
    }
    if (interfaceDTO.getId() != null) {
        queryBuilder.must(QueryBuilders.termQuery("interfaceId", interfaceDTO.getId()));
    }
    queryBuilder.must(QueryBuilders.rangeQuery("downStreamRequestTime").from(queryRequestLogStartTime.getTime()).to(queryRequestLogEndTime.getTime()));

    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    DateHistogramAggregationBuilder aggregationBuilder = AggregationBuilders
            .dateHistogram("dateDownStreamRequestTime")
            .field("downStreamRequestTime")
            .calendarInterval(DateHistogramInterval.DAY)
            .offset("-8h")
            .minDocCount(0)
            .order(BucketOrder.aggregation("_key", true));
    sourceBuilder.aggregation(aggregationBuilder);
    sourceBuilder.query(queryBuilder);

    SearchRequest searchRequest = new SearchRequest(getEsIndices(betweenDates));
    searchRequest.indicesOptions(IndicesOptions.fromOptions(true, true, true, false));
    searchRequest.source(sourceBuilder);

    List<Map<String, Object>> listMap = new ArrayList<>();

    SearchResponse searchResponse = eslClient.search(searchRequest, RequestOptions.DEFAULT);
    Aggregations aggregations = searchResponse.getAggregations();
    if (aggregations == null) {
        return RequestResult.success(interfaceInvokeEmptyData("day", "invokeCount", listMap));
    }

    Histogram histogram = searchResponse.getAggregations().get("dateDownStreamRequestTime");
    for (Histogram.Bucket entry : histogram.getBuckets()) {
        // downStreamRequestTime is indexed as epoch millis (see the range query above),
        // so the bucket key string parses as a millis timestamp.
        String key = entry.getKeyAsString();
        String time = DateUtil.toString(new Date(Long.parseLong(key)), "yyyy-MM-dd");

        Map<String, Object> map = new LinkedHashMap<>();
        map.put("day", time);
        map.put("invokeCount", entry.getDocCount());
        listMap.add(map);
    }

    for (Date betweenDate : betweenDates) {
        String tempDate = DateUtil.toString(betweenDate, "yyyy-MM-dd");
        Map<String, Object> findMap = listMap.stream().filter(p -> p.get("day").equals(tempDate)).findAny().orElse(null);
        if (findMap == null) {
            Map<String, Object> map = new LinkedHashMap<>();
            map.put("day", tempDate);
            map.put("invokeCount", 0);
            listMap.add(map);
        }
    }
    listMap.remove(listMap.size() - 1);
    listMap = listMap.stream()
            .sorted(Comparator.comparing(map -> map.get("day").toString(), Comparator.naturalOrder()))
            .collect(Collectors.toList());

    return RequestResult.success(listMap);
}
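The post-processing at the end of the method (collect bucket rows, back-fill missing days with zero, sort by day) can be expressed more compactly with a `TreeMap`, which keeps the series sorted as it is built. A minimal standalone sketch of that step, assuming the bucket results have already been reduced to a day-to-count map (class and method names here are illustrative, not from the original code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class DailySeries {
    // Merge ES bucket counts with the full day range, filling missing days with 0.
    static List<Map<String, Object>> fill(Map<String, Long> bucketCounts, List<String> allDays) {
        // A TreeMap sorts yyyy-MM-dd keys lexicographically, which is chronological order.
        TreeMap<String, Long> series = new TreeMap<>();
        for (String d : allDays) {
            series.put(d, 0L);           // default every day in the range to zero
        }
        series.putAll(bucketCounts);     // overwrite days that actually have data

        List<Map<String, Object>> result = new ArrayList<>();
        for (Map.Entry<String, Long> e : series.entrySet()) {
            Map<String, Object> row = new LinkedHashMap<>();
            row.put("day", e.getKey());
            row.put("invokeCount", e.getValue());
            result.add(row);
        }
        return result;
    }
}
```

Because the map is keyed and sorted, no separate stream filter or final sort is needed. Note this variant does not drop the last list entry the way the original does, so apply any end-boundary exclusion before or after the merge as the report requires.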
Elasticsearch aggregation queries (Aggregations) analyze data along multiple dimensions: they can describe how values are distributed, compute statistics, and produce data for charts. In Elasticsearch, an aggregation is carried out by a specific aggregator.

To run an aggregation from Java, use the Elasticsearch Java API: create a SearchRequest with the target index and query conditions, build an AggregationBuilder that defines the aggregation's rules and parameters, attach it to the SearchRequest, and execute the search.

The following simple example queries an index and groups its documents by a field:

```
SearchRequest searchRequest = new SearchRequest("index_name");
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
TermsAggregationBuilder aggregationBuilder = AggregationBuilders.terms("group_by_field").field("field_name");
searchSourceBuilder.aggregation(aggregationBuilder);
searchRequest.source(searchSourceBuilder);
SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
Terms terms = searchResponse.getAggregations().get("group_by_field");
for (Terms.Bucket bucket : terms.getBuckets()) {
    String key = bucket.getKeyAsString();
    long count = bucket.getDocCount();
    System.out.println("key: " + key + ", count: " + count);
}
```

The code first creates a SearchRequest for the index, then builds a TermsAggregationBuilder to group documents by the given field, attaches it via the SearchSourceBuilder, and executes the search.

The response contains a Terms object holding the grouped results. Terms.getBuckets() returns the list of buckets; each Terms.Bucket carries the group key and its document count, among other information.

This is a minimal example: Elasticsearch aggregations are far more powerful, supporting many aggregator types and options that can be combined and extended to fit specific needs.
