ES: Grouping and Aggregating by Time (Trend / Line Chart Data)


DSL query

{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "logtime": {
              "from": "2022-11-08T00:00:00.000Z",
              "to": null,
              "include_lower": true,
              "include_upper": true,
              "boost": 1
            }
          }
        },
        {
          "range": {
            "logtime": {
              "from": null,
              "to": "2022-11-08T23:59:59.999999999Z",
              "include_lower": true,
              "include_upper": true,
              "boost": 1
            }
          }
        },
        {
          "term": {
            "ip": {
              "value": "127.0.0.2",
              "boost": 1
            }
          }
        }
      ],
      "adjust_pure_negative": true,
      "boost": 1
    }
  },
  "track_total_hits": 2147483647,
  "aggregations": {
    "logtime": {
      "date_histogram": {    # date_histogram 按时间进行聚合
        "field": "logtime",      # 聚合字段
        "format": "HH:mm",		 # 时间格式化格式
        "fixed_interval": "1h",   # 间隔时间
        "offset": -28800000,    是因为es默认是按照UTC的时间进行查询的,所以需要减掉8小时 -8h
        "order": {
          "_key": "asc"
        },
        "keyed": false,
        "min_doc_count": 0,
        "extended_bounds": {    # 设置范围聚合的扩展边界 设置之后查询时间为空也会返回 确			  保聚合结果始终包含完整的范围,并避免数据丢失。
          "min": "now-30d",
          "max": "now-1"
        }
      },
      "aggregations": {
        "disposal_status": {
          "terms": {
            "field": "disposal_status",    # 根据状态再次聚合
            "size": 10,
            "min_doc_count": 1,
            "shard_min_doc_count": 0,
            "show_term_doc_count_error": false,
            "order": [
              {
                "_count": "desc"
              },
              {
                "_key": "asc"
              }
            ]
          }
        }
      }
    }
  }
}
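
For reference, the query portion of this DSL (the two logtime ranges plus the ip term filter) can be assembled with the Java high-level REST client roughly as below. This is a minimal sketch: the dates and the ip value are simply the ones from the DSL above, and searchSourceBuilder is the builder the aggregation gets attached to later in the implementation.

// Sketch: build the bool query shown in the DSL above
BoolQueryBuilder boolQuery = QueryBuilders.boolQuery()
        // logtime >= start of the day
        .must(QueryBuilders.rangeQuery("logtime").gte("2022-11-08T00:00:00.000Z"))
        // logtime <= end of the day
        .must(QueryBuilders.rangeQuery("logtime").lte("2022-11-08T23:59:59.999999999Z"))
        // exact match on the ip field
        .must(QueryBuilders.termQuery("ip", "127.0.0.2"));

SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder()
        .size(0)                                    // no hits needed, only the aggregation buckets
        .trackTotalHitsUpTo(Integer.MAX_VALUE)      // track_total_hits: 2147483647 in the DSL
        .query(boolQuery);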

Java implementation

DateHistogramAggregationBuilder formatAgg = AggregationBuilders.dateHistogram(PortrayalConstant.LOG_TIME)
                // field to aggregate on
                .field(PortrayalConstant.LOG_TIME)
                // ES buckets in UTC by default, so shift back 8 hours for UTC+8
                .offset("-8h")
                // also return buckets whose document count is 0
                .minDocCount(0L);

        if (StringUtils.isNotEmpty(eventAlarmDto.getSearchType()) && "1".equals(eventAlarmDto.getSearchType())) {
            // searchType "1": the query covers a single day, so bucket by hour
            formatAgg.fixedInterval(DateHistogramInterval.HOUR);
            formatAgg.format("HH:mm");
        } else {
            // otherwise the query covers a longer range, so bucket by day
            formatAgg.fixedInterval(DateHistogramInterval.DAY);
            formatAgg.format("MM-dd")
                    .extendedBounds(new LongBounds("now-30d", "now"));
        }

        TermsAggregationBuilder disposalAgg = AggregationBuilders.terms(PortrayalConstant.DISPOSAL_STATUS)
                .field(PortrayalConstant.DISPOSAL_STATUS);
        formatAgg.subAggregation(disposalAgg);
        searchSourceBuilder.aggregation(formatAgg);
        SearchRequest searchRequest = new SearchRequest(new String[]{PortrayalConstant.ES_EVENT_INDEX}, searchSourceBuilder);

        List<EventTrendVo> eventTrendVos = new ArrayList<>();
        try {
            SearchResponse response = inciDentEsMapper.search(searchRequest, RequestOptions.DEFAULT);
            Aggregations aggregations = response.getAggregations();
            ParsedDateHistogram dateHistogram = aggregations.get(PortrayalConstant.LOG_TIME);
            for (Histogram.Bucket bucket : dateHistogram.getBuckets()) {
                EventTrendVo trendVo = new EventTrendVo();
                String date = bucket.getKeyAsString();
                long total = bucket.getDocCount();
                trendVo.setLogTime(date);
                trendVo.setTotal(total);
                ParsedTerms statusTerms = bucket.getAggregations().get(PortrayalConstant.DISPOSAL_STATUS);
                for (Terms.Bucket statusBucket : statusTerms.getBuckets()) {
                    String status = statusBucket.getKeyAsString();
                    switch (status) {
                        case "0":
                            trendVo.setNotDisposed(statusBucket.getDocCount());
                            break;
                        case "1":
                            trendVo.setDisposed(statusBucket.getDocCount());
                            break;
                        case "2":
                            trendVo.setDisposal(statusBucket.getDocCount());
                            break;
                        default:
                            trendVo.setDisposal(0L);
                            break;
                    }
                }
                eventTrendVos.add(trendVo);
            }
        } catch (IOException e) {
            // assumed ending (the original snippet stops above): surface the search failure
            throw new RuntimeException("ES trend aggregation query failed", e);
        }
        return eventTrendVos;
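
The EventTrendVo carrier class is not shown in the post. Judging from the setters used in the loop, it is presumably a plain POJO along the following lines; the field names are inferred from the setter calls, so treat them as an assumption.

// Hypothetical shape of EventTrendVo, inferred from the setter calls above
public class EventTrendVo {
    private String logTime;    // bucket key formatted as HH:mm or MM-dd
    private long total;        // total documents in the time bucket
    private long notDisposed;  // count of disposal_status "0"
    private long disposed;     // count of disposal_status "1"
    private long disposal;     // count of disposal_status "2"

    public void setLogTime(String logTime) { this.logTime = logTime; }
    public void setTotal(long total) { this.total = total; }
    public void setNotDisposed(long notDisposed) { this.notDisposed = notDisposed; }
    public void setDisposed(long disposed) { this.disposed = disposed; }
    public void setDisposal(long disposal) { this.disposal = disposal; }
}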
