Elasticsearch Tutorial (12): Storing a Tree Structure in ES, Integrated with Spring Data Elasticsearch

If you are not using Spring Data Elasticsearch, you can refer to the implementation in my other post:
Elasticsearch Notes (14): An Elasticsearch utility class with tree-structure support

1 Preface

At work we often run into parent-child data: provinces/cities/counties, departments and their staff, data dictionaries, menu trees, and so on.
The table structure for this kind of data has to be designed carefully, otherwise queries become expensive.
The usual designs for parent-child tables are the following:

Left/right values (nested set)

Each node stores a left and a right value. Querying descendants is convenient (for example, the number of descendants is simply (right - left - 1) / 2), but maintaining the values when inserting or moving nodes is painful, and the data is not intuitive to read.

parentId

Each row stores the id of its parent node, which makes querying direct children easy, but finding all descendants requires recursion, which is slow; see the sketch below.
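
A rough sketch of why this gets slow (the Node and NodeRepository types here are hypothetical, just for illustration): counting a subtree issues one query per node visited, so deep or wide trees mean many round trips.

import java.util.List;

// Hypothetical types, only for this sketch.
class Node {
    String id;
    String parentId;
}

interface NodeRepository {
    List<Node> findByParentId(String parentId); // returns the direct children
}

class ParentIdOnlyCounter {

    // With only parentId available, counting all descendants means one query
    // per node visited: the recursion fans out level by level over the subtree.
    static long countDescendants(NodeRepository repo, String parentId) {
        long count = 0;
        for (Node child : repo.findByParentId(parentId)) {
            count += 1 + countDescendants(repo, child.id);
        }
        return count;
    }
}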

parentId+path

Building on option 2, add a path column that stores the ids along the route from the root down to the node, separated by commas.
For example, given a structure like the following:
[Figure: example tree with Level-1 "2" → Level-2 "2-1" → Level-3 "2-1-1"]
the path of "Level-3 2-1-1" is "2,2-1,2-1-1".
To count all descendants of "Level-1 2", you might do this:
In MySQL: select count(1) from tablename where path like '2,%'
In ES, you might write:

GET tree/_search
{ "query": {
    "prefix" : { "path" : "2," }
  }
}

But when the data volume is large, MySQL's LIKE and ES's prefix query become slow.
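
For comparison with the Spring Data code later in this post, the same prefix query can also be built with the Elasticsearch QueryBuilders API (a sketch; path is the comma-separated field described above):

import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class PathPrefixExample {

    // Equivalent of the prefix DSL above: match every document whose path
    // starts with "2,", i.e. every descendant of node "2".
    public static QueryBuilder descendantsOfNode2() {
        return QueryBuilders.prefixQuery("path", "2,");
    }
}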

Optimized parentId+path

In ES, map path as a nested field with two sub-fields, id and level:

  # Here ancestorId (ancestor ID) plays the role of path
  "ancestorId": {
    "type": "nested",
    "properties": {
      "id":    { "type": "keyword" },
      "level": { "type": "keyword" }
    }
  }

那"三级 2-1-1"的ancestorId值为:

[
    {
        "level":1,
        "id":"2"
    },
    {
        "level":2,
        "id":"2-1"
    },
    {
        "level":3,
        "id":"2-1-1"
    }
]
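
The tests later in this post build this list by hand. A small helper along these lines (hypothetical, not part of the original code) captures the rule: a child's ancestorId is its parent's ancestorId plus one entry for the parent itself.

import java.util.ArrayList;
import java.util.List;

public class AncestorIdHelper {

    // Path and PiggDataNode are the entity classes defined later in this post.
    // A child's ancestorId = the parent's ancestorId + the parent itself.
    public static List<Path> childAncestorId(PiggDataNode parent) {
        List<Path> result = new ArrayList<>();
        if (parent.getAncestorId() != null) {
            result.addAll(parent.getAncestorId());
        }
        result.add(new Path(parent.getLevel(), parent.getId()));
        return result;
    }
}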

2 Validating the idea

The idea above is only a guess based on my limited knowledge, so let's first verify that it actually works.

Below is the design of a generic data-dictionary index:

PUT /pigg_data_node


PUT /pigg_data_node/_mapping/_doc
{
    "properties":{
        "id":{
            "type":"keyword"
        },
        "parentId":{
            "type":"keyword"
        },
        "ancestorId":{
            "type":"nested",
            "properties":{
                "id":{
                    "type":"keyword"
                },
                "level":{
                    "type":"keyword"
                }
            }
        },
        "level":{
            "type":"keyword"
        },
        "dataName":{
            "type":"keyword"
        },
        "dataValue":{
            "type":"keyword"
        },
        "dataType":{
            "type":"keyword"
        },
        "extendInfo":{
            "type":"keyword",
            "index":false
        },
        "order":{
            "type":"keyword"
        },
        "remark":{
            "type":"keyword",
            "index":false
        },
        "status":{
            "type":"keyword"
        },
        "createTime":{
            "type":"date"
        },
        "creator":{
            "type":"keyword"
        },
        "updateTime":{
            "type":"date"
        },
        "updater":{
            "type":"keyword"
        }
    }
}
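
Since the entities below are annotated with @Document and @Field, the index and mapping can presumably also be created from the Java side instead of with the manual PUT calls above. A sketch against the Spring Data Elasticsearch 3.x API used in the rest of this post (the initializer class itself is an assumption of mine):

import javax.annotation.PostConstruct;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.stereotype.Component;

// Optional alternative to the manual PUT calls: derive the index settings and
// mapping from the annotations on PiggDataNode (defined below).
@Component
public class PiggDataNodeIndexInitializer {

    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    @PostConstruct
    public void createIndexAndMapping() {
        if (!elasticsearchTemplate.indexExists(PiggDataNode.class)) {
            elasticsearchTemplate.createIndex(PiggDataNode.class); // settings from @Document
            elasticsearchTemplate.putMapping(PiggDataNode.class);  // mapping from @Field
        }
    }
}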

Write a DSL query to test counting a node's direct children (dropping the outer level term would count all of its descendants instead):

GET /pigg_data_node/_doc/_count
{
    "query":{
        "bool":{
            "must":[
                {
                    "term":{
                        "level":{
                            "value":2
                        }
                    }
                },
                {
                    "nested":{
                        "query":{
                            "bool":{
                                "must":[
                                    {
                                        "term":{
                                            "ancestorId.id":{
                                                "value":"5e9dddad551f2215bee100e8"
                                            }
                                        }
                                    },
                                    {
                                        "term":{
                                            "ancestorId.level":{
                                                "value":1
                                            }
                                        }
                                    }
                                ]
                            }
                        },
                        "path":"ancestorId"
                    }
                }
            ]
        }
    }
}

Define the entity classes

1 Base entity class PiggBaseEntity

package com.pigg.study.tree.common.entity;

import lombok.Data;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

import java.util.Date;

@Data
public class PiggBaseEntity {

    @Id
    @Field(type = FieldType.Keyword)
    private String id;

    @Field(type = FieldType.Keyword)
    private String status = "1";

    @Field(type = FieldType.Keyword)
    private String creator;

    @Field(type = FieldType.Date)
    private Date createTime;

    @Field(type = FieldType.Keyword)
    private String updater;

    @Field(type = FieldType.Date)
    private Date updateTime;
}

2 Dictionary entity PiggDataNode

package com.pigg.study.tree.common.entity;

import com.pigg.study.tree.entity.Path;
import lombok.Data;
import org.springframework.data.annotation.Transient;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

import java.util.List;

@Data
@Document(indexName = "pigg_data_node", type = "_doc", shards = 1, replicas = 0)
public class PiggDataNode extends PiggBaseEntity{

    @Field(type = FieldType.Keyword)
    private String parentId;

    @Field(type = FieldType.Nested)
    private List<Path> ancestorId;

    @Field(type = FieldType.Keyword)
    private short level;

    @Field(type = FieldType.Keyword)
    private String dataName;

    @Field(type = FieldType.Keyword)
    private String dataValue;

    @Field(type = FieldType.Keyword)
    private String dataType;

    @Field(type = FieldType.Keyword)
    private short order;

    @Field(type = FieldType.Keyword, index = false)
    private String remark;

    @Field(type = FieldType.Keyword, index = false)
    private String extendInfo;

    @Transient
    private List<PiggDataNode> children;
}

3 Path entity class

package com.pigg.study.tree.entity;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

@Data
@NoArgsConstructor
@AllArgsConstructor
public class Path {

    @Field(type = FieldType.Keyword)
    private short level;

    @Field(type = FieldType.Keyword)
    private String id;
}
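
The PiggDataNodeService used below is not shown in full in this post; presumably it delegates to a Spring Data repository. A minimal sketch of what such a repository might look like (the interface name and the derived query are my assumptions, not part of the original code):

import java.util.List;

import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

// Hypothetical repository behind PiggDataNodeService: save/saveAll/findById
// come from ElasticsearchRepository, and findByParentId is a derived query
// generated from the method name.
public interface PiggDataNodeRepository extends ElasticsearchRepository<PiggDataNode, String> {

    // Direct children of a node.
    List<PiggDataNode> findByParentId(String parentId);
}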

4 Service-layer methods: counting descendant nodes

    /**
     * Count direct child nodes (active ones only).
     * @param parentId the parent node, expressed as a Path (level + id)
     * @return the number of direct children
     */
    @Override
    public Long countByParentId(Path parentId) {
        return countByAncestorIdWithCondition(parentId, true, "1", null);
    }

    /**
     * Count direct child nodes (all statuses).
     * @param parentId the parent node, expressed as a Path (level + id)
     * @return the number of direct children
     */
    @Override
    public Long countByParentIdOfAllStatus(Path parentId) {
        return countByAncestorIdWithCondition(parentId, true,null, null);
    }

    /**
     * Count all descendant nodes (active ones only).
     * @param ancestorId the ancestor node, expressed as a Path (level + id)
     * @return the number of descendants
     */
    @Override
    public Long countByAncestorId(Path ancestorId) {
        return countByAncestorIdWithCondition(ancestorId, false,"1", null);
    }

    /**
     * Count all descendant nodes (all statuses).
     * @param ancestorId the ancestor node, expressed as a Path (level + id)
     * @return the number of descendants
     */
    @Override
    public Long countByAncestorIdOfAllStatus(Path ancestorId) {
        return countByAncestorIdWithCondition(ancestorId, false,null, null);
    }

    /**
     * Count nodes by ancestor id plus optional conditions.
     * @param ancestorId    the ancestor node, expressed as a Path (level + id)
     * @param onlyNextLevel true to count only direct children, false to count all descendants
     * @param status        only count nodes with this status (null/empty to ignore status)
     * @param queryBuilder  an extra filter to apply (may be null)
     * @return the matching document count
     */
    @Override
    public Long countByAncestorIdWithCondition(Path ancestorId, Boolean onlyNextLevel, String status, QueryBuilder queryBuilder) {
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();

        if (onlyNextLevel) {
            boolQueryBuilder.filter(QueryBuilders.termQuery("level", ancestorId.getLevel() + 1));
        }

        if (!StringUtils.isEmpty(status)) {
            boolQueryBuilder.filter(QueryBuilders.termQuery("status", status));
        }

        if (queryBuilder != null) {
            boolQueryBuilder.filter(queryBuilder);
        }

        BoolQueryBuilder boolQueryBuilderForNested = QueryBuilders.boolQuery();
        boolQueryBuilderForNested.filter(QueryBuilders.termQuery("ancestorId.id", ancestorId.getId()));
        boolQueryBuilderForNested.filter(QueryBuilders.termQuery("ancestorId.level", ancestorId.getLevel()));

        boolQueryBuilder.filter(QueryBuilders.nestedQuery("ancestorId", boolQueryBuilderForNested, ScoreMode.None));

        SearchQuery searchQuery = new NativeSearchQueryBuilder()
                .withQuery(
                        boolQueryBuilder
                ).build();
        System.out.println(searchQuery.getQuery().toString());
        return elasticsearchTemplate.count(searchQuery, PiggDataNode.class);
    }
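
The save methods called by the tests below are not shown in the post. They presumably just delegate to the repository; a sketch of what they might look like, sitting in the same implementation class as the count methods above (piggDataNodeRepository refers to the hypothetical repository sketched earlier):

    // Hypothetical delegation to the repository; the real service may also
    // fill in audit fields such as creator/updater.
    @Override
    public PiggDataNode save(PiggDataNode dataNode) {
        if (dataNode.getCreateTime() == null) {
            dataNode.setCreateTime(new Date());
        }
        return piggDataNodeRepository.save(dataNode);
    }

    @Override
    public Iterable<PiggDataNode> save(List<PiggDataNode> dataNodes) {
        return piggDataNodeRepository.saveAll(dataNodes);
    }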

5 Test methods

import com.pigg.study.tree.Application;
import com.pigg.study.tree.common.entity.PiggDataNode;
import com.pigg.study.tree.dao.TestDao;
import com.pigg.study.tree.entity.Path;
import com.pigg.study.tree.entity.TestEntity;
import com.pigg.study.tree.service.PiggDataNodeService;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Optional;


@RunWith(SpringRunner.class)
@SpringBootTest(classes = Application.class)
public class Test {

    @Autowired
    TestDao testDao;

    @Autowired
    PiggDataNodeService piggDataNodeService;

    @org.junit.Test
    public void test(){
        //create
        TestEntity testEntity1 = new TestEntity();
        testEntity1.setId("4");
        testEntity1.setName("test4444");
        testDao.save(testEntity1);

        Boolean ifExist = testDao.existsById("5");
        System.out.println(ifExist);
    }

    @org.junit.Test
    public void testAdd(){
        PiggDataNode dataNode = new PiggDataNode();
        dataNode.setLevel((short) 1);
        dataNode.setDataName("部门-1");
        dataNode.setDataValue("dept-1");
        dataNode.setDataType("dept");
        dataNode.setOrder((short) 1);
        piggDataNodeService.save(dataNode);
        System.out.println(dataNode.getId());
    }

    @org.junit.Test
    public void testAddList(){
        List<PiggDataNode> dataNodes = new ArrayList<>();

        for (int i = 1; i< 10; i++){
            PiggDataNode dataNode = new PiggDataNode();
            dataNode.setLevel((short) 1);
            dataNode.setDataName("部门-" + i);
            dataNode.setDataValue("dept-"+ i);
            dataNode.setDataType("dept");
            dataNode.setOrder((short) i);
            dataNodes.add(dataNode);
        }
        piggDataNodeService.save(dataNodes);

        dataNodes.stream().forEach(a -> System.out.println(a.getId()));
    }

    @org.junit.Test
    public void testAddList2(){
        List<PiggDataNode> dataNodes = new ArrayList<>();
        String parentId = "5e9dddad551f2215bee100e8";

        for (int i = 1; i< 10; i++){
            PiggDataNode dataNode = new PiggDataNode();
            dataNode.setLevel((short) 2);
            dataNode.setDataName("部门-1-" + i);
            dataNode.setDataValue("dept-1-"+ i);
            dataNode.setDataType("dept");
            dataNode.setOrder((short) i);
            dataNode.setParentId(parentId);

            dataNode.setCreateTime(new Date());

            List<Path> ancestorId = new ArrayList<>();
            ancestorId.add(new Path((short) 1, parentId));
            dataNode.setAncestorId(ancestorId);

            dataNodes.add(dataNode);
        }
        piggDataNodeService.save(dataNodes);

        dataNodes.stream().forEach(a -> System.out.println(a.getId()));
    }

    @org.junit.Test
    public void testAddList3(){
        List<PiggDataNode> dataNodes = new ArrayList<>();
        String parentId = "5e9ddf58551f630a85e96a2c";

        for (int i = 1; i< 10; i++){
            PiggDataNode dataNode = new PiggDataNode();
            dataNode.setLevel((short) 3);
            dataNode.setDataName("部门-1-1-" + i);
            dataNode.setDataValue("dept-1-1-"+ i);
            dataNode.setDataType("dept");
            dataNode.setOrder((short) i);
            dataNode.setParentId(parentId);

            List<Path> ancestorId = new ArrayList<>();
            ancestorId.add(new Path((short) 1, "5e9dddad551f2215bee100e8"));
            ancestorId.add(new Path((short) 2, "5e9ddf58551f630a85e96a2c"));
            dataNode.setAncestorId(ancestorId);

            dataNodes.add(dataNode);
        }
        piggDataNodeService.save(dataNodes);

        dataNodes.stream().forEach(a -> System.out.println(a.getId()));
    }

    @org.junit.Test
    public void testCountByAncestorId() {
        String parentId = "5e9dddad551f2215bee100e8";
        Long count = piggDataNodeService.countByAncestorId(new Path((short) 1, parentId));
        System.out.println(count);
    }

    @org.junit.Test
    public void testCountByParentId() {
        String parentId = "5e9dddad551f2215bee100e8";
        Long count = piggDataNodeService.countByParentId(new Path((short) 1, parentId));
        System.out.println(count);
    }

}