Elasticsearch: filter documents by the given conditions, aggregate the hits into hourly buckets, then count per time period

```java
public String getHabits(String bloggerId, Integer type, String stime, String etime) {
    String result = "";
    String index = "";
    if (type == 1) {
        index = "AAAAAAAAA";
    } else if (type == 2) {
        index = "BBBBBBBBBB";
    }
    String[] types = new String[]{index};
    SearchClient client = null;

    try {
        client = searchIndexer.getSearchClient(null, types);
        // Both index types currently filter on the blogger's tw_id
        // (an fb_id filter for type 1 was disabled in an earlier version).
        client.addPrimitiveTermQuery("tw_id", bloggerId, ISIOperator.MUST);

        // Optional publish-time window: [stime, etime)
        if (StringUtils.isNotEmpty(stime) && StringUtils.isNotEmpty(etime)) {
            client.addRangeQuery("pubtime", stime, RangeCommon.GTE, etime, RangeCommon.LT, ISIOperator.MUST);
        }

        result = client.addPubtimeAgg();
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (client != null) {
            client.close();
        }
    }

    return result;
}
```
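`SearchClient` and `searchIndexer` are in-house wrappers, so the raw request they build is not shown. As a point of reference, this is a minimal sketch of the query body the method above ends up sending (field names taken from the code; the blogger id and time values are hypothetical placeholders):

```json
{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        { "term":  { "tw_id": "blogger-123" } },
        { "range": { "pubtime": { "gte": "2021-08-01 00:00:00", "lt": "2021-09-01 00:00:00" } } }
      ]
    }
  },
  "aggs": {
    "pubtime_histogram": {
      "date_histogram": {
        "field": "pubtime",
        "interval": "hour",
        "min_doc_count": 0
      }
    }
  }
}
```

`"size": 0` suppresses the hits themselves, since only the aggregation buckets are consumed.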




```java
public String addPubtimeAgg() {
    // Hourly date histogram on the publish time; minDocCount(0) keeps empty hours.
    DateHistogramBuilder dateHistogramBuilder = AggregationBuilders.dateHistogram("pubtime_histogram")
            .field("pubtime")
            .interval(DateHistogram.Interval.HOUR)
            .minDocCount(0L);

    searchbuilder.addAggregation(dateHistogramBuilder);
    if (this.query != null) {
        this.searchbuilder.setQuery(this.query);
    }
    SearchResponse sr = searchbuilder.execute().actionGet();

    // Parse the result: roll the hourly buckets up into four periods of the day.
    Histogram pubtimeHistogram = sr.getAggregations().get("pubtime_histogram");
    Map<String, Long> map = new HashMap<>();
    map.put("0-8", getBucketCount(pubtimeHistogram, 0, 8));
    map.put("8-12", getBucketCount(pubtimeHistogram, 8, 12));
    map.put("12-18", getBucketCount(pubtimeHistogram, 12, 18));
    map.put("18-24", getBucketCount(pubtimeHistogram, 18, 24));

    // Return the period with the most documents; orElse guards the empty case
    // (a bare Optional.get() would throw NoSuchElementException).
    return map.entrySet().stream()
            .max(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey)
            .orElse("");
}
```

```java
private static long getBucketCount(Histogram histogram, int startHour, int endHour) {
    long count = 0;
    for (Histogram.Bucket bucket : histogram.getBuckets()) {
        // Bucket keys are timestamps; TimeUtil.getHour extracts the hour of day.
        String hourStr = bucket.getKey();
        Integer hour = TimeUtil.getHour(hourStr);
        // Half-open interval [startHour, endHour)
        if (hour >= startHour && hour < endHour) {
            count += bucket.getDocCount();
        }
    }
    return count;
}
```
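`TimeUtil.getHour` is an in-house helper whose source is not shown. A minimal self-contained sketch of the two pure-Java pieces of this logic, assuming the bucket key is an ISO-8601 timestamp (the class and method names here are hypothetical, for illustration only):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.HashMap;
import java.util.Map;

public class HabitDemo {

    // Hour of day (0-23) from an ISO-8601 timestamp, e.g. a
    // date_histogram bucket key such as "2021-08-01T10:00:00".
    static int hourOf(String isoTimestamp) {
        return LocalDateTime.parse(isoTimestamp, DateTimeFormatter.ISO_LOCAL_DATE_TIME).getHour();
    }

    // Key of the entry with the largest value; orElse avoids the
    // NoSuchElementException that Optional.get() throws on an empty map.
    static String busiestPeriod(Map<String, Long> counts) {
        return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse("");
    }

    public static void main(String[] args) {
        System.out.println(hourOf("2021-08-01T10:05:00")); // 10

        Map<String, Long> counts = new HashMap<>();
        counts.put("0-8", 3L);
        counts.put("8-12", 7L);
        counts.put("12-18", 5L);
        counts.put("18-24", 2L);
        System.out.println(busiestPeriod(counts)); // 8-12
    }
}
```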

In Elasticsearch, aggregations can be used to compute statistics over documents, including counting occurrences. Here is an example. Suppose we have an index named "sales" containing the following documents:

```
{ "product": "A", "price": 10.0, "timestamp": "2021-08-01T10:00:00Z" }
{ "product": "B", "price": 15.0, "timestamp": "2021-08-01T10:05:00Z" }
{ "product": "A", "price": 12.0, "timestamp": "2021-08-01T10:10:00Z" }
{ "product": "C", "price": 20.0, "timestamp": "2021-08-01T10:15:00Z" }
{ "product": "A", "price": 8.0, "timestamp": "2021-08-01T10:20:00Z" }
{ "product": "B", "price": 18.0, "timestamp": "2021-08-01T10:25:00Z" }
```

To count how many times each product appears, use the following aggregation query:

```
{
  "aggs": {
    "products": {
      "terms": {
        "field": "product"
      }
    }
  }
}
```

Here "aggs" is the aggregation keyword, "products" is the name we give this aggregation, "terms" means we group by a field, and "field" specifies which field to group on.

Running the query returns:

```
{
  "aggregations": {
    "products": {
      "buckets": [
        { "key": "A", "doc_count": 3 },
        { "key": "B", "doc_count": 2 },
        { "key": "C", "doc_count": 1 }
      ]
    }
  }
}
```

"key" is the product name and "doc_count" is the number of documents it appears in.

To sort the buckets by occurrence count, add an "order" clause:

```
{
  "aggs": {
    "products": {
      "terms": {
        "field": "product",
        "order": {
          "_count": "desc"
        }
      }
    }
  }
}
```

"order" specifies the sort key, "_count" sorts by document count, and "desc" means descending order. The result contains the same buckets, now ordered by descending count, with product A first.
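Conceptually, a terms aggregation is a group-by-and-count. The same counting and descending-count ordering can be sketched in plain Java (class and method names here are hypothetical, for illustration only):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class TermsAggDemo {

    // Group-by-and-count, then order by count descending, mirroring what
    // a terms aggregation with "order": {"_count": "desc"} returns.
    static Map<String, Long> countByProduct(String... products) {
        return Arrays.stream(products)
                .collect(Collectors.groupingBy(p -> p, Collectors.counting()))
                .entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .collect(Collectors.toMap(
                        Map.Entry::getKey, Map.Entry::getValue,
                        (a, b) -> a, LinkedHashMap::new));  // keep sort order
    }

    public static void main(String[] args) {
        // Same documents as the "sales" example above.
        Map<String, Long> buckets = countByProduct("A", "B", "A", "C", "A", "B");
        System.out.println(buckets); // {A=3, B=2, C=1}
    }
}
```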
