Notes from 2019-12-24 — Integrating Elasticsearch with Spring Boot and Using It

Ways to integrate Elasticsearch with Spring Boot

The Spring Boot reference documentation describes the available Elasticsearch clients here:
https://docs.spring.io/spring-boot/docs/2.1.11.RELEASE/reference/html/boot-features-nosql.html#boot-features-elasticsearch

The documentation covers three integration approaches: REST clients, Jest, and Spring Data. REST and Spring Data are the most common; this article focuses mainly on the Spring Data approach.

The native client

Elasticsearch's own Java client is verbose and not object-oriented, which makes it inconvenient to use directly:

@Test
void testRestInsert() throws IOException {
    // 1. connect to the REST endpoint
    HttpHost http = new HttpHost("192.168.204.209", 9200, "http");
    RestClientBuilder restClientBuilder = RestClient.builder(http); // REST client builder
    RestHighLevelClient restHighLevelClient = new RestHighLevelClient(restClientBuilder); // high-level client

    // 2. build the request
    //BulkRequest bulkRequest = new BulkRequest(); // for bulk operations
    IndexRequest indexRequest = new IndexRequest("test_rest", "_doc", "1");
    Map<String, Object> skuMap = new HashMap<>();
    skuMap.put("name","法拉利 LaFerrari Aperta");
    skuMap.put("brandName","法拉利");
    skuMap.put("categoryName","超级跑车");
    Map<String, Object> spec = new HashMap<>();
    spec.put("动力","963匹");
    spec.put("扭矩","880N/m");
    spec.put("车长","4975mm");
    spec.put("重量","1250kg");
    skuMap.put("spec",spec);
    skuMap.put("createTime","2017-08-10");
    skuMap.put("price",43000000);
    skuMap.put("saleNum",209);
    skuMap.put("commentNum",6128746);
    indexRequest.source(skuMap);
    //bulkRequest.add(indexRequest); // for bulk operations
    // 3. execute and read the response
    IndexResponse indexResponse = restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
    //BulkResponse bulkResponse = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT); // for bulk operations
    int status = indexResponse.status().getStatus();
    System.out.println(status);
    restHighLevelClient.close();
}

Notice that with this client we assemble the request by hand as a Map that mirrors the raw JSON command, and what comes back is likewise close to the raw response (essentially a JSON string), so we still have to pull the fields we care about out of it ourselves.
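
For illustration, here is a minimal sketch of reading that document back and parsing the raw _source JSON ourselves. It assumes the same RestHighLevelClient and host as above and uses Jackson (already on the classpath of a Spring Boot project) for the parsing; the GetRequest coordinates match the document indexed above.

// uses org.elasticsearch.action.get.GetRequest / GetResponse,
// com.fasterxml.jackson.databind.ObjectMapper and com.fasterxml.jackson.core.type.TypeReference
@Test
void testRestGet() throws IOException {
    HttpHost http = new HttpHost("192.168.204.209", 9200, "http");
    RestHighLevelClient restHighLevelClient = new RestHighLevelClient(RestClient.builder(http));

    // fetch the document we just indexed
    GetRequest getRequest = new GetRequest("test_rest", "_doc", "1");
    GetResponse getResponse = restHighLevelClient.get(getRequest, RequestOptions.DEFAULT);

    // the _source comes back as a JSON string that we have to parse ourselves
    // (getSourceAsMap() would also work, but the point is that the client hands us raw data)
    String sourceJson = getResponse.getSourceAsString();
    Map<String, Object> source = new ObjectMapper()
            .readValue(sourceJson, new TypeReference<Map<String, Object>>() {});
    System.out.println(source.get("name"));

    restHighLevelClient.close();
}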

REST

The Spring Boot documentation first describes the REST clients: the low-level RestClient and the RestHighLevelClient used in the example above, for which Spring Boot provides basic auto-configuration.

Jest

It then describes the Jest client, a third-party HTTP client for Elasticsearch that Spring Boot can also auto-configure when it is on the classpath.

Spring Data

Finally it describes Spring Data Elasticsearch, which provides repositories and an ElasticsearchTemplate in the same style as the other Spring Data modules.

The Spring Data Elasticsearch reference documentation:
https://docs.spring.io/spring-data/elasticsearch/docs/3.2.3.RELEASE/reference/html/

Spring Data Elasticsearch is the client this article focuses on; it is also the most widely used of the three.

Configuring Spring Data

First, add the required dependency:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

Then write a configuration class.

We need a class that extends ElasticsearchConfigurationSupport; through it we can configure the Elasticsearch connection parameters.
Below is our ElasticsearchConfigurationSupport subclass. Its three bean methods build the Elasticsearch client, expose an ElasticsearchTemplate, and configure the entity mapper, wiring Elasticsearch into our Spring Boot project.

@Configuration
public class TransportClientConfig extends ElasticsearchConfigurationSupport {

  @Bean
  public Client elasticsearchClient() throws UnknownHostException {
    Settings settings = Settings.builder().put("cluster.name", "taibai").build();
    TransportClient client = new PreBuiltTransportClient(settings);
    client.addTransportAddress(new TransportAddress(InetAddress.getByName("192.168.204.209"), 9300));
    return client;
  }

  @Bean(name = {"elasticsearchOperations", "elasticsearchTemplate"})
  public ElasticsearchTemplate elasticsearchTemplate() throws UnknownHostException {
      return new ElasticsearchTemplate(elasticsearchClient(), entityMapper());
  }

  @Bean
  @Override
  public EntityMapper entityMapper() {
    ElasticsearchEntityMapper entityMapper = new ElasticsearchEntityMapper(elasticsearchMappingContext(),
  	  new DefaultConversionService());
    entityMapper.setConversions(elasticsearchCustomConversions());
    return entityMapper;
  }
}

Creating an entity class for the index

Create an entity class for the index you want to work with. The @Document annotation configures the index-level settings; the mandatory @Id annotation marks the id field; the remaining fields are configured with @Field, whose analyzer attribute selects the analyzer (only meaningful for text fields) and whose type attribute sets the field type.

@Document(indexName = "test_springdata_es",type = "_doc",shards = 1,replicas = 0,
        createIndex = true,useServerConfiguration = false,versionType = VersionType.EXTERNAL)
public class Item {
    @Override
    public String toString() {
        return "Item{" +
                "id=" + id +
                ", name='" + name + '\'' +
                ", sex='" + sex + '\'' +
                ", age=" + age +
                ", interest=" + interest +
                ", childItem=" + childItem +
                '}';
    }

    @Id
    private long id;

    @Field(analyzer = "ik_max_word",type = FieldType.Text)
    private String name;

    @Field(type = FieldType.Keyword)
    private String sex;

    @Field(type = FieldType.Integer)
    private Integer age;

    @Field(type = FieldType.Keyword)
    private List<String> interest;

    @Field(type = FieldType.Object)
    private ChildItem childItem;

    public Item(){}
    public Item(long id, String name, String sex, Integer age, List<String> interest, ChildItem childItem) {
        this.id = id;
        this.name = name;
        this.sex = sex;
        this.age = age;
        this.interest = interest;
        this.childItem = childItem;
    }

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getSex() {
        return sex;
    }

    public void setSex(String sex) {
        this.sex = sex;
    }

    public Integer getAge() {
        return age;
    }

    public void setAge(Integer age) {
        this.age = age;
    }

    public List<String> getInterest() {
        return interest;
    }

    public void setInterest(List<String> interest) {
        this.interest = interest;
    }

    public ChildItem getChildItem() {
        return childItem;
    }

    public void setChildItem(ChildItem childItem) {
        this.childItem = childItem;
    }
}

With the entity class above we can create the index and its mapping.

The ElasticsearchTemplate bean we registered earlier in the ElasticsearchConfigurationSupport subclass can now be injected and used here. Below it creates the index and, if needed, the mapping:

@Autowired
private ElasticsearchTemplate elasticsearchTemplate;

@Test
void createIndex(){
    // create the index
    elasticsearchTemplate.createIndex(Item.class);
    // put the mapping for the entity
    //elasticsearchTemplate.putMapping(Item.class);
}

Creating a mapper (repository) interface for the index

To operate on the data in the index we still need a mapper, much like a MyBatis mapper. Again we never implement it ourselves: we only declare an interface, and Spring together with Spring Data Elasticsearch supplies the implementation. The mapper can then be injected and called like any ordinary bean.

Below is the mapper for the entity class we just created. It only needs to extend the ElasticsearchRepository interface, with the entity class and the type of its id as the generic parameters.

public interface ItemMapper extends ElasticsearchRepository<Item, Long> {

}
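
The remaining examples work with a Goods product entity through a goodsMapper in exactly the same way. The Goods class itself is not reproduced in this post; a minimal sketch of its mapper, assuming a Goods entity annotated like Item above and a String id (to match the deleteById/findById calls used later), looks like this:

public interface GoodsMapper extends ElasticsearchRepository<Goods, String> {

}

It is injected like any other bean, e.g. @Autowired private GoodsMapper goodsMapper;
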
Using Spring Data

The sections below walk through the basic API. By this point Elasticsearch is integrated with Spring and the index entity class and its mapper exist.

Inserting or updating data

The mapper's save method (provided by the parent interface, so we do not write it) inserts or updates a single document:

// insert one document; saving again with the same id updates it
@Test
void insertGoods() {
	SimpleDateFormat simpleDateFormat=new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
	String str = simpleDateFormat.format(new Date());
	Goods goods=new Goods(1025332689,"","OPPO R17新年版 2500万美颜拍照 6.4英寸水滴屏 光感屏幕指纹 6G+128G 全网通 移动联通电信4G 双卡双待手机",
			37400,20,100,"https://m.360buyimg.com/mobilecms/s720x720_jfs/t1/10441/9/5525/162976/5c177debEaf815b43/3aa7d4dc182cc4d9.jpg!q70.jpg.webp",
			"https://m.360buyimg.com/mobilecms/s720x720_jfs/t1/10441/9/5525/162976/5c177debEaf815b43/3aa7d4dc182cc4d9.jpg!q70.jpg.webp",
			10,str,str,"10000243333000",558,"手机","OPPO","{'颜色': '王者荣耀定制版', '版本': 'R17'}",1,1,1,4L);
	System.out.println(goodsMapper.save(goods));
}

We can also add a version field to the entity class, annotated with @Version, to get optimistic concurrency control.
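
A minimal sketch of what such a field could look like inside the entity class (the class layout and field names are assumptions, since the Goods class is not shown in this post):

@Document(indexName = "goods", type = "_doc")
public class Goods {

    @Id
    private String id;

    // Elasticsearch increments the version on every update; saving a stale copy then fails,
    // which gives us optimistic concurrency control
    @Version
    private Long version;

    // ... other fields plus getters and setters omitted
}

The @Version annotation here is org.springframework.data.annotation.Version.
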
The saveAll method inserts a whole collection of entities into the index:

// bulk insert / bulk update
@Test
void insertGoodsList() {

	List<Goods> list=new ArrayList<>();
	SimpleDateFormat simpleDateFormat=new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
	String str = simpleDateFormat.format(new Date());
	Goods goods=new Goods(1001332689,"","OPPO R17新年版 2500万美颜拍照 6.4英寸水滴屏 光感屏幕指纹 6G+128G 全网通 移动联通电信4G 双卡双待手机",
			37400,10000,100,"https://m.360buyimg.com/mobilecms/s720x720_jfs/t1/10441/9/5525/162976/5c177debEaf815b43/3aa7d4dc182cc4d9.jpg!q70.jpg.webp",
			"https://m.360buyimg.com/mobilecms/s720x720_jfs/t1/10441/9/5525/162976/5c177debEaf815b43/3aa7d4dc182cc4d9.jpg!q70.jpg.webp",
			10,str,str,"10000243333000",558,"手机","OPPO","{'颜色': '王者荣耀定制版', '版本': 'R17'}",1,1,1);

	Goods goods2=new Goods(1001368912,"","OPPO R17新年版 2500万美颜拍照 6.4英寸水滴屏 光感屏幕指纹 6G+128G 全网通 移动联通电信4G 双卡双待手机",
			37400,10000,100,"https://m.360buyimg.com/mobilecms/s720x720_jfs/t1/10441/9/5525/162976/5c177debEaf815b43/3aa7d4dc182cc4d9.jpg!q70.jpg.webp",
			"https://m.360buyimg.com/mobilecms/s720x720_jfs/t1/10441/9/5525/162976/5c177debEaf815b43/3aa7d4dc182cc4d9.jpg!q70.jpg.webp",
			10,str,str,"10000243333000",558,"手机","OPPO","{'颜色': '王者荣耀定制版', '版本': 'R17'}",1,1,1);


	Goods goods3=new Goods(100001402792L,"","OPPO R17新年版 2500万美颜拍照 6.4英寸水滴屏 光感屏幕指纹 6G+128G 全网通 移动联通电信4G 双卡双待手机",
			37400,10000,100,"https://m.360buyimg.com/mobilecms/s720x720_jfs/t1/10441/9/5525/162976/5c177debEaf815b43/3aa7d4dc182cc4d9.jpg!q70.jpg.webp",
			"https://m.360buyimg.com/mobilecms/s720x720_jfs/t1/10441/9/5525/162976/5c177debEaf815b43/3aa7d4dc182cc4d9.jpg!q70.jpg.webp",
			10,str,str,"10000243333000",558,"手机","OPPO","{'颜色': '王者荣耀定制版', '版本': 'R17'}",1,1,1);

	list.add(goods);
	list.add(goods2);
	list.add(goods3);
	goodsMapper.saveAll(list);
}

Deleting data

deleteById deletes the document with the given id; deleteAll deletes every document in the mapped index (or, given a collection, just those entities):

// delete data
@Test
void deleteGoods() {
	// delete a single document
//		goodsMapper.deleteById("1001332689");
	// delete everything, or pass a collection to delete only those entities
	goodsMapper.deleteAll();
}

Querying all documents

Spring Data also provides the very convenient Pageable abstraction for paging and sorting:

// query data
@Test
void queryGoods() {
	// fetch a single document by id
//		Optional<Goods> goods = goodsMapper.findById("100001402792");
//		System.out.println(goods.get());

//		System.out.println("====================================");
	// fetch all documents
//		Iterable<Goods> goodsAll = goodsMapper.findAll();
//		Iterator<Goods> goodsIterator = goodsAll.iterator();
//		int count=0;
//		while (goodsIterator.hasNext()){
//			Goods goods1 = goodsIterator.next();
//			System.out.println(goods1);
//			count++;
//		}
//		System.out.println(count);

//		System.out.println("====================================");
	// paging and sorting
	// PageRequest.of(page, size, sort): page is a zero-based page index (so 1 is the second page),
	// not the number of documents to skip; 100 is the page size
	Pageable pageable=PageRequest.of(1,100,Sort.by(Sort.Direction.ASC, "num"));
	Page<Goods> goodsPage = goodsMapper.findAll(pageable);
	Iterator<Goods> goodsIterator = goodsPage.iterator();
	int count=0;
	while (goodsIterator.hasNext()){
		Goods goods1 = goodsIterator.next();
		System.out.println(goods1);
		count++;
	}
	System.out.println(count);
}

Checking whether a document exists

existsById checks whether a document with the given id exists:

@Test
void exists(){
	// check whether a document with this id exists
	boolean exists = goodsMapper.existsById("文档ID");
	System.out.println(exists);
}

Structured queries

term query

A term query does not analyze the search term and is an exact-match query; the TermQueryBuilder class builds the query object:

// term query
@Test
void termGoods(){
	// mainly for exact matches on values such as numbers, dates, booleans, or not_analyzed (keyword) strings
	// the search term itself is not analyzed, so it must equal one of the terms produced when the document was indexed
	TermQueryBuilder termQueryBuilder=new TermQueryBuilder("name","2018");
	Pageable pageable=PageRequest.of(0,100);
	Iterable<Goods> goods = goodsMapper.search(termQueryBuilder,pageable);
	Iterator<Goods> goodsIterator = goods.iterator();
	int count=0;
	while (goodsIterator.hasNext()){
		Goods goods1 = goodsIterator.next();
		System.out.println(goods1);
		count++;
	}
	System.out.println(count);
}

A terms query can match against several values at once:

// terms query
@Test
void termsGoods(){
	// terms is similar to term, but allows several match values to be specified
	// a document matches if the field matches any one of the given values (OR semantics)
	TermsQueryBuilder termsQueryBuilder=new TermsQueryBuilder("name","2018","最新","女鞋");
	Pageable pageable=PageRequest.of(0,100);
	Iterable<Goods> goods = goodsMapper.search(termsQueryBuilder,pageable);
	Iterator<Goods> goodsIterator = goods.iterator();
	int count=0;
	while (goodsIterator.hasNext()){
		Goods goods1 = goodsIterator.next();
		System.out.println(goods1);
		count++;
	}
	System.out.println(count);
}

range query

A range query; RangeQueryBuilder wraps the bounds to search within:

// range query
@Test
void rangeGoods(){
	// range query: price greater than 20 and less than 10000
	RangeQueryBuilder rangeQueryBuilder=new RangeQueryBuilder("price");
	rangeQueryBuilder.gt(20);
	rangeQueryBuilder.lt(10000);
	Pageable pageable=PageRequest.of(0,100);
	Iterable<Goods> goods = goodsMapper.search(rangeQueryBuilder,pageable);
	Iterator<Goods> goodsIterator = goods.iterator();
	int count=0;
	while (goodsIterator.hasNext()){
		Goods goods1 = goodsIterator.next();
		System.out.println(goods1);
		count++;
	}
	System.out.println(count);
}

exists query

Checks whether documents contain a given field. Note that this is different from the existsById call earlier: that checked whether a document exists, this checks whether a field exists.

// exists query
@Test
void existsGoods(){
	// an exists query finds documents that contain (or, combined with must_not, do not contain) a given field,
	// similar to an IS NOT NULL check in SQL
	// a document is returned if it contains the field
	ExistsQueryBuilder existsQueryBuilder=new ExistsQueryBuilder("category_name");
	Pageable pageable=PageRequest.of(0,100);
	Iterable<Goods> goods = goodsMapper.search(existsQueryBuilder,pageable);
	Iterator<Goods> goodsIterator = goods.iterator();
	int count=0;
	while (goodsIterator.hasNext()){
		Goods goods1 = goodsIterator.next();
		System.out.println(goods1);
		count++;
	}
	System.out.println(count);
}

match query

A match query analyzes the search text first and then matches on the resulting terms, in contrast to the term query above:

// match query
@Test
void matchGoods(){
	// a match query first analyzes the search text and then matches each resulting term,
	// so unlike term's exact search, match is an analyzed (full-text) search
	// given an exact value such as a number, date, boolean, or not_analyzed string, match searches for exactly that value
	MatchQueryBuilder matchQueryBuilder=new MatchQueryBuilder("name","2018年最新女鞋");
	Pageable pageable=PageRequest.of(0,100);
	Iterable<Goods> goods = goodsMapper.search(matchQueryBuilder,pageable);
	Iterator<Goods> goodsIterator = goods.iterator();
	int count=0;
	while (goodsIterator.hasNext()){
		Goods goods1 = goodsIterator.next();
		System.out.println(goods1);
		count++;
	}
	System.out.println(count);
}

bool query

A compound query that combines multiple conditions and can be nested to express fairly complex searches.

// bool query combined with a filter clause
@Test
void boolGoods(){
//		a bool query combines the results of several sub-queries with boolean logic; its clauses are:
//		must     : every condition must match, like AND
//		must_not : the condition must not match, like NOT
//		should   : at least one condition should match, like OR
	BoolQueryBuilder boolQueryBuilder=new BoolQueryBuilder();
	// search for 2018 women's shoes priced between 1000 and 2000, excluding blue, only black or red
	RangeQueryBuilder rangeQueryBuilder=new RangeQueryBuilder("price");
	rangeQueryBuilder.lte(2000);
	rangeQueryBuilder.gte(1000);
	MatchQueryBuilder matchQueryBuilder=new MatchQueryBuilder("name","2018女鞋");

	MatchQueryBuilder matchQueryBuilder2=new MatchQueryBuilder("spec","蓝色");
	boolQueryBuilder.must(rangeQueryBuilder);
	boolQueryBuilder.must(matchQueryBuilder);
	boolQueryBuilder.mustNot(matchQueryBuilder2);

	MatchQueryBuilder matchQueryBuilder3=new MatchQueryBuilder("spec","黑色 红色");
	boolQueryBuilder.must(matchQueryBuilder3);

	TermQueryBuilder termsQueryBuilder=new TermQueryBuilder("num",10000);
	boolQueryBuilder.filter(termsQueryBuilder);

//		Pageable pageable=PageRequest.of(0,100);
	Iterable<Goods> goods = goodsMapper.search(boolQueryBuilder);
	Iterator<Goods> goodsIterator = goods.iterator();
	int count=0;
	while (goodsIterator.hasNext()){
		Goods goods1 = goodsIterator.next();
		System.out.println(goods1);
		count++;
	}
	System.out.println(count);
}

Derived query methods

We can also declare derived query methods, so that we do not have to write this kind of query logic by hand. For example, a method that searches by the name field and returns a paged result: we simply declare it in the mapper interface, never implement it, and make sure the method name follows Spring Data's naming conventions (keywords such as findBy, And, Or, Between, LessThan, GreaterThan, Like, Containing, OrderBy, and so on); see the sketch below.
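
A minimal sketch of such a method, added to the GoodsMapper sketched earlier, together with a test; the method name findByName and the name field are assumptions, since the Goods class is not shown in this post:

public interface GoodsMapper extends ElasticsearchRepository<Goods, String> {

    // Spring Data derives a query on the name field from the method name; no implementation is needed
    Page<Goods> findByName(String name, Pageable pageable);
}

@Test
void testFindByName() {
    Pageable pageable = PageRequest.of(0, 10);
    Page<Goods> page = goodsMapper.findByName("女鞋", pageable);
    for (Goods g : page) {
        System.out.println(g);
    }
    System.out.println(page.getTotalElements());
}

Spring Data parses the method name and builds the corresponding query, so no @Query annotation or implementation is required.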

Aggregations

The aggregation results have to be extracted from the raw response with the dedicated aggregation API, which is where the ElasticsearchTemplate we injected earlier comes in.

Average, max, min, sum
// avg / max / min / sum aggregation
@Test
void testAggregation(){
    AvgAggregationBuilder avgAggregationBuilder = AggregationBuilders.avg("taibai").field("price");
//        MaxAggregationBuilder maxAggregationBuilder = AggregationBuilders.max("taibai").field("price");
    SearchQuery searchQuery = new NativeSearchQueryBuilder().addAggregation(avgAggregationBuilder)
            .withIndices("goods")
            .withTypes("_doc")
            .build();
    Aggregations aggregations = elasticsearchTemplate.query(searchQuery, new ResultsExtractor<Aggregations>() {

        @Override
        public Aggregations extract(SearchResponse response) {
            return response.getAggregations();
        }
    });
    Aggregation aggregation1 = aggregations.asList().get(0);
    InternalNumericMetricsAggregation.SingleValue singleValue= (InternalNumericMetricsAggregation.SingleValue) aggregation1;
    double value = singleValue.value();
    System.out.println(value);
}

Distinct count (cardinality)
// cardinality (distinct value count) aggregation
@Test
void testAggregationCardinality(){
    CardinalityAggregationBuilder aggregationBuilder = AggregationBuilders.cardinality("taibai").field("price");
    SearchQuery searchQuery = new NativeSearchQueryBuilder().addAggregation(aggregationBuilder)
            .withIndices("goods")
            .withTypes("_doc")
            .build();
    Aggregations aggregations = elasticsearchTemplate.query(searchQuery, new ResultsExtractor<Aggregations>() {

        @Override
        public Aggregations extract(SearchResponse response) {
            return response.getAggregations();
        }
    });
    Aggregation aggregation1 = aggregations.getAsMap().get("taibai");
    InternalNumericMetricsAggregation.SingleValue singleValue= (InternalNumericMetricsAggregation.SingleValue) aggregation1;
    double value = singleValue.value();
    System.out.println(value);
}

Extended stats
// extended_stats aggregation (count, min, max, avg, sum, variance, standard deviation)
@Test
void testAggregationextended_stats(){
    ExtendedStatsAggregationBuilder extendedStatsAggregationBuilder = AggregationBuilders.extendedStats("taibai").field("price");
    SearchQuery searchQuery = new NativeSearchQueryBuilder().addAggregation(extendedStatsAggregationBuilder)
            .withIndices("goods")
            .withTypes("_doc")
            .build();
    Aggregations aggregations = elasticsearchTemplate.query(searchQuery, new ResultsExtractor<Aggregations>() {

        @Override
        public Aggregations extract(SearchResponse response) {
            return response.getAggregations();
        }
    });
    Aggregation aggregation1 = aggregations.getAsMap().get("taibai");
    InternalNumericMetricsAggregation.MultiValue multiValue= (InternalNumericMetricsAggregation.MultiValue ) aggregation1;
    System.out.println(multiValue.value("max"));
}

terms aggregation
// terms (bucket) aggregation
@Test
void testAggregationTerms(){
    TermsAggregationBuilder termsAggregationBuilder = AggregationBuilders.terms("taibai").field("price");
    SearchQuery searchQuery = new NativeSearchQueryBuilder().addAggregation(termsAggregationBuilder)
            .withIndices("goods")
            .withTypes("_doc")
            .build();
    Aggregations aggregations = elasticsearchTemplate.query(searchQuery, new ResultsExtractor<Aggregations>() {

        @Override
        public Aggregations extract(SearchResponse response) {
            return response.getAggregations();
        }
    });
    Aggregation aggregation1 = aggregations.getAsMap().get("taibai");
    Terms term1 = (Terms)aggregation1;
    List<? extends Terms.Bucket> buckets = term1.getBuckets();
    for (Terms.Bucket bucket : buckets) {
        System.out.println(bucket.getKey()+"||"+bucket.getDocCount());
    }
}

top_hits aggregation (top matching documents per bucket)
// top_hits sub-aggregation inside a terms aggregation
@Test
void testAggregationtop_hits(){
    TermsAggregationBuilder termsAggregationBuilder = AggregationBuilders.terms("taibai").field("price");
    TopHitsAggregationBuilder topHitsAggregationBuilder = AggregationBuilders.topHits("top").size(3);
    termsAggregationBuilder.subAggregation(topHitsAggregationBuilder);
    SearchQuery searchQuery = new NativeSearchQueryBuilder().addAggregation(termsAggregationBuilder)
            .withIndices("goods")
            .withTypes("_doc")
            .build();
    Aggregations aggregations = elasticsearchTemplate.query(searchQuery, new ResultsExtractor<Aggregations>() {

        @Override
        public Aggregations extract(SearchResponse response) {
            return response.getAggregations();
        }
    });
    Aggregation aggregation1 = aggregations.getAsMap().get("taibai");
    Terms term1 = (Terms)aggregation1;
    List<? extends Terms.Bucket> buckets = term1.getBuckets();
    for (Terms.Bucket bucket : buckets) {
        System.out.println(bucket.getKey()+"||"+bucket.getDocCount());
        Aggregation aggregation = bucket.getAggregations().getAsMap().get("top");
        TopHits topHits= (TopHits) aggregation;
        Iterator<SearchHit> iterator = topHits.getHits().iterator();
        while (iterator.hasNext()){
            SearchHit next = iterator.next();
            Object object=JSONObject.parse(next.getSourceAsString());
            System.out.println(object);
        }
    }
}



A combined example

The example below, run against a bank index, buckets documents by age range, then by gender within each range, and computes the average balance for each gender bucket.

@Test
void testAggregationLast(){
    RangeAggregationBuilder rangeAggregationBuilder = AggregationBuilders.range("age_range").field("age")
            .addRange(20,30).addRange(30,40).addRange(40,50);
    TermsAggregationBuilder termsAggregationBuilder = AggregationBuilders.terms("gender_group").field("gender.keyword");
    AvgAggregationBuilder aggregationBuilder = AggregationBuilders.avg("balance_avg").field("balance");
    termsAggregationBuilder.subAggregation(aggregationBuilder);
    rangeAggregationBuilder.subAggregation(termsAggregationBuilder);
    SearchQuery searchQuery = new NativeSearchQueryBuilder().addAggregation(rangeAggregationBuilder)
            .withIndices("bank")
            .withTypes("_doc")
            .build();
    Aggregations aggregations = elasticsearchTemplate.query(searchQuery, new ResultsExtractor<Aggregations>() {

        @Override
        public Aggregations extract(SearchResponse response) {
            return response.getAggregations();
        }
    });
    Aggregation aggregation1 = aggregations.getAsMap().get("age_range");
    Range range = (Range)aggregation1;
    List<? extends Range.Bucket> buckets = range.getBuckets();
    for (Range.Bucket bucket : buckets) {
        System.out.println(bucket.getKeyAsString()+"--"+bucket.getDocCount());
        Aggregation gender_group = bucket.getAggregations().getAsMap().get("gender_group");
        Terms terms=(Terms)gender_group;
        List<? extends Terms.Bucket> buckets1 = terms.getBuckets();
        for (Terms.Bucket bucket1 : buckets1) {
            System.out.println(bucket1.getKeyAsString()+"--"+bucket1.getDocCount());
            Aggregation balance_avg = bucket1.getAggregations().getAsMap().get("balance_avg");
            Avg avg= (Avg) balance_avg;
            System.out.println(avg.getValue());
        }
    }
}