Microservices -- Elasticsearch

Install with Docker

Install ES

1. Pull the images

docker pull elasticsearch:7.4.2
docker pull kibana:7.4.2


Create the host directories that will be bind-mounted into the container:

mkdir -p /mydata/elasticsearch/config
mkdir -p /mydata/elasticsearch/data
mkdir -p /mydata/elasticsearch/plugins

2. Allow access from any IP

echo "http.host: 0.0.0.0" >> /mydata/elasticsearch/config/elasticsearch.yml

3. Grant permissions, otherwise ES cannot start

chmod -R 777 /mydata/elasticsearch/ 

Without this, startup fails with:

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
{"type": "server", "timestamp": "2023-06-17T16:03:40,668Z", "level": "WARN", "component": "o.e.b.ElasticsearchUncaughtExceptionHandler", "cluster.name": "elasticsearch", "node.name": "2d91ee3cba44", "message": "uncaught exception in thread [main]", 
"stacktrace": ["org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125) ~[elasticsearch-cli-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.4.2.jar:7.4.2]",
"Caused by: org.elasticsearch.ElasticsearchException: failed to bind service",
"at org.elasticsearch.node.Node.<init>(Node.java:614) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.node.Node.<init>(Node.java:255) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:221) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:221) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.4.2.jar:7.4.2]",
"... 6 more",
"Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes",
"at sun.nio.fs.UnixException.translateToIOException(UnixException.java:90) ~[?:?]",
"at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]",
"at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]",
"at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:389) ~[?:?]",
"at java.nio.file.Files.createDirectory(Files.java:693) ~[?:?]",
"at java.nio.file.Files.createAndCheckIsDirectory(Files.java:800) ~[?:?]",
"at java.nio.file.Files.createDirectories(Files.java:786) ~[?:?]",
"at org.elasticsearch.env.NodeEnvironment.lambda$new$0(NodeEnvironment.java:272) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:209) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:269) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.node.Node.<init>(Node.java:275) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.node.Node.<init>(Node.java:255) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:221) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:221) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.4.2.jar:7.4.2]",
"... 6 more"] }

4. Start ES

docker run --name es -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms64m -Xmx512m" \
-v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
-v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
-d elasticsearch:7.4.2

9200 -> port for external HTTP access
9300 -> port used by other ES nodes in a cluster

-e ES_JAVA_OPTS="-Xms64m -Xmx512m" \ sets the initial and maximum heap size for ES; in a test environment, the default heap is too large and ES may fail to start without this.
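After the container starts, you can verify that ES is reachable with a plain GET to port 9200 (abnlch.fun is the host used throughout this article; substitute your own):

GET http://abnlch.fun:9200

If ES started correctly, this returns a small JSON document with the node name, cluster name, and version 7.4.2.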

Install Kibana

docker run --name kibana -e ELASTICSEARCH_HOSTS=http://abnlch.fun:9200 -p 5601:5601 \
-d kibana:7.4.2

Using ES

Concepts, by analogy with MySQL

index -> database (index is also used as a verb, meaning "to insert")
type -> table
document -> row/record

Basic commands

_cat queries status information about ES

GET /_cat/nodes: list all nodes
GET /_cat/health: check ES health
GET /_cat/master: show the master node
GET /_cat/indices: list all indices (like show databases;)
Example:

http://abnlch.fun:9200/_cat/nodes

Result:

127.0.0.1 16 95 1 0.01 0.04 0.05 dilm * e9dab79d4a95

Adding data

PUT request

http://abnlch.fun:9200/customer/external/1

Request body

{
"name": "John Doe"
}

customer -> index
external -> type
1 -> unique id of the document
request body -> doc

Response

{
    "_index": "customer", 
    "_type": "external",
    "_id": "1",
    "_version": 1,
    "result": "created",
    "_shards": {  //分片,集群下使用
        "total": 2,
        "successful": 1,
        "failed": 0
    },
    "_seq_no": 0,
    "_primary_term": 1
}

Both PUT and POST work.
POST inserts. If no id is given, one is generated automatically; if an id is given, it inserts or updates that document and bumps the version.
PUT can insert or update, but it must always carry an id; omitting the id is an error. Because PUT requires an id, it is normally used for updates.
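As a sketch of the difference, a POST without an id always creates a new document (the body reuses the sample document from above):

POST http://abnlch.fun:9200/customer/external

{
	"name": "John Doe"
}

Each repetition of this request creates another document with a freshly generated _id, while repeating the PUT above keeps writing to document 1 and only bumps its _version.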

Retrieving data

GET request

http://abnlch.fun:9200/customer/external/1
{
    "_index": "customer",
    "_type": "external",
    "_id": "1",
    "_version": 1,
    "_seq_no": 0,		#并发控制字段,每次更新就会+1,用来做乐观锁
    "_primary_term": 1, #同上,主分片重新分配,如重启,就会变化
    "found": true, 		#找到了数据
    "_source": { 		#数据
        "name": "John Doe"
    }
}
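The two concurrency-control fields above can be used for optimistic locking: pass the values you read back as conditions on the next write. A sketch (0 and 1 are the values from the response above; in practice use whatever your last read returned):

PUT http://abnlch.fun:9200/customer/external/1?if_seq_no=0&if_primary_term=1

{
	"name": "John Doe"
}

If another writer has updated the document in the meantime, its _seq_no no longer matches and ES rejects the write with a 409 conflict instead of silently overwriting it.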

Updating data

POST http://abnlch.fun:9200/customer/external/1/_update

{
	"doc":{
		"name": "John Doew"
	}
}

POST http://abnlch.fun:9200/customer/external/1

{
	"name": "John Doe2"
}

PUT http://abnlch.fun:9200/customer/external/1

{
	"name": "John Doe"
}

Differences:
POST with _update compares against the current document; if nothing changed, nothing is written and the version is not incremented.
PUT (and POST without _update) always re-saves the document and increments the version, without comparing against the existing data.

Choose by scenario:
for heavy concurrent updates, skip _update;
for heavy concurrent reads with occasional updates, use _update, which compares before rewriting and recomputing.
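The comparison behaviour of _update can be observed directly: send the same _update twice in a row. Assuming the document already holds exactly this value, the second response should report "result": "noop" with an unchanged _version:

POST http://abnlch.fun:9200/customer/external/1/_update

{
	"doc":{
		"name": "John Doew"
	}
}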

Deleting data

DELETE request

http://abnlch.fun:9200/customer/external/1

You can also delete the whole index:

http://abnlch.fun:9200/customer

A type cannot be deleted.
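Although a type cannot be deleted directly, its documents can be removed with the _delete_by_query API. A sketch that deletes every document in the customer index (narrow the query to restrict it):

POST http://abnlch.fun:9200/customer/_delete_by_query

{
	"query": {
		"match_all": {}
	}
}

The index itself and its mapping survive; only the matching documents are removed.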

bulk (batch) API

Test with Kibana, using Dev Tools.

POST customer/external/_bulk
{"index":{"_id":"1"}}
{"name": "John Doe" }
{"index":{"_id":"2"}}
{"name": "Jane Doe" }

This inserts two documents: POST customer/external/_bulk starts a bulk insert; index is the insert action; _id sets the unique id, and the following line is the document to insert; lines three and four are the second document.

Result

{
  "took" : 5, #花费多少毫秒
  "errors" : false, #是否失败
  "items" : [ #保存的数据信息
    {
      "index" : {  #保存的单个数据信息
        "_index" : "customer",
        "_type" : "external",
        "_id" : "1",
        "_version" : 2,
        "result" : "updated",
        "_shards" : {
          "total" : 2,
          "successful" : 1,
          "failed" : 0
        },
        "_seq_no" : 1,
        "_primary_term" : 1,
        "status" : 200
      }
    },
    {
      "index" : {
        "_index" : "customer",
        "_type" : "external",
        "_id" : "2",
        "_version" : 1,
        "result" : "created",
        "_shards" : {
          "total" : 2,
          "successful" : 1,
          "failed" : 0
        },
        "_seq_no" : 2,
        "_primary_term" : 1,
        "status" : 201
      }
    }
  ]
}

A more complex example:

POST /_bulk   # no index or type specified, so the actions address the whole ES instance
{ "delete": { "_index": "website", "_type": "blog", "_id": "123" }}
{ "create": { "_index": "website", "_type": "blog", "_id": "123" }}
{ "title":"My first blog post" }
{ "index":{ "_index": "website", "_type": "blog" }}
{ "title":"My second blog post" }
{ "update": { "_index": "website", "_type": "blog", "_id": "123"} }
{ "doc" : {"title" : "My updated blog post"} }

Response

{
  "took" : 138,
  "errors" : false,
  "items" : [
    {
      "delete" : {
        "_index" : "website",
        "_type" : "blog",
        "_id" : "123",
        "_version" : 1,
        "result" : "not_found",
        "_shards" : {
          "total" : 2,
          "successful" : 1,
          "failed" : 0
        },
        "_seq_no" : 0,
        "_primary_term" : 1,
        "status" : 404  #没有该index所以操作失败
      }
    },
    {
      "create" : {
        "_index" : "website",
        "_type" : "blog",
        "_id" : "123",
        "_version" : 2,
        "result" : "created",
        "_shards" : {
          "total" : 2,
          "successful" : 1,
          "failed" : 0
        },
        "_seq_no" : 1,
        "_primary_term" : 1,
        "status" : 201
      }
    },
    {
      "index" : {
        "_index" : "website",
        "_type" : "blog",
        "_id" : "IibkzYgBRFr4yr84sp51",
        "_version" : 1,
        "result" : "created",
        "_shards" : {
          "total" : 2,
          "successful" : 1,
          "failed" : 0
        },
        "_seq_no" : 2,
        "_primary_term" : 1,
        "status" : 201
      }
    },
    {
      "update" : {
        "_index" : "website",
        "_type" : "blog",
        "_id" : "123",
        "_version" : 3,
        "result" : "updated",
        "_shards" : {
          "total" : 2,
          "successful" : 1,
          "failed" : 0
        },
        "_seq_no" : 3,
        "_primary_term" : 1,
        "status" : 200
      }
    }
  ]
}

The bulk API executes all actions sequentially, in order. If any single action fails, it continues processing the remaining actions after it.

Test data

Official test data; load it with:
POST bank/account/_bulk
(request body: the sample account documents)

Search

Search API

ES supports two basic ways to search:

  • One is sending the search parameters in the REST request URI (URI + query parameters).
    Example:

    GET bank/_search?q=*&sort=account_number:asc
    

    Result

    {
      "took" : 16,
      "timed_out" : false,
      "_shards" : {
        "total" : 1,
        "successful" : 1,
        "skipped" : 0,
        "failed" : 0
      },
      "hits" : { #查询结果
        "total" : {
          "value" : 1000,  #查到了有1000条数据
          "relation" : "eq" #检索关系,等值检索
        },
        "max_score" : null, #最大得分,这里用的等职检索所以没有最大得分
        "hits" : [ #检索的数据
          {
            "_index" : "bank",
            "_type" : "account",
            "_id" : "0",
            "_score" : null,
            "_source" : {
              "account_number" : 0,
              "balance" : 16623,
              "firstname" : "Bradshaw",
              "lastname" : "Mckenzie",
              "age" : 29,
              "gender" : "F",
              "address" : "244 Columbus Place",
              "employer" : "Euron",
              "email" : "bradshawmckenzie@euron.com",
              "city" : "Hobucken",
              "state" : "CO"
            },
            "sort" : [
              0
            ]
       	},	
          ...
        ]
      }
    }
    
    
  • The other is sending them in the REST request body (URI + request body).
    Example:

    GET bank/_search
    {
    	"query": { #查询条件
    		"match_all": {}
    	},
    	"sort": [ #排序条件
    		{
    			"account_number": {
    				"order": "desc"
    			}
    		}
    	]
    }
    

Query DSL

The request body used in the second Search API style above is the Query DSL.

Syntax:
the typical structure of a query statement:

{
	QUERY_NAME: {
		ARGUMENT: VALUE,
		ARGUMENT: VALUE,...
	}
}

If the query targets a specific field, the structure is:

{
	QUERY_NAME: {
		FIELD_NAME: {
			ARGUMENT: VALUE,
			ARGUMENT: VALUE,...
		}
	}
}

For example, pagination:

GET bank/_search
{
	"query": {
	  
	    "match_all": {}
	  
	},
	"sort":{
	  "balance":"desc"
	},
	"from": 0, #从哪开始
	"size":5 ,   #一页数据
	"_source":["balance"]  #返回字段只有"balance"
}
Common Query DSL

match [match query]
GET bank/_search
{
	"query": {
		"match": {
			"address": "mill road"
		}
	}
}
This finds all records whose address contains mill or road or mill road, with a relevance score for each.

match_phrase [phrase match]
The value to match is treated as a single unit (not analyzed into separate terms):
GET bank/_search
{
	"query": {
		"match_phrase": {
			"address": "mill road"
		}
	}
}
Finds all records whose address contains the exact phrase mill road, with relevance scores.

multi_match [multi-field match]
GET bank/_search
{
	"query": {
		"multi_match": {
			"query": "mill road",
			"fields": ["state","address"]
		}
	}
}
Finds records where state or address contains mill or road or mill road, with relevance scores.

bool [compound query]
bool is used for compound queries:
a compound clause can combine any other query clauses, including other compound clauses, which means compound clauses can nest inside each other and express very complex logic.
Example one:
must: the document must satisfy all of the conditions listed under must
GET bank/_search
{
  "query":{
    "bool": {
      "must": [
        {"match": {
          "gender": "m"
        }},
        {
          "match": {
            "address": "mill"
          }
        }
      ],
      "must_not": [
        {"match": {
          "age": "38"
        }}
      ]
    }
  }
}
should: conditions the document should satisfy; satisfying them raises the document's relevance score but does not change the result set. However, if the query contains only should with a single rule, that condition becomes the default match condition and does change the results.
GET bank/_search
{
  "query":{
    "bool": {
      "must": [
        {
          "match": {
            "address": "mill"
          }
        }
      ],
      "should": [
        {"match": {
          "address": "lane"
        }}
      ]
    }
  }
}
Not every query needs to produce a score, especially queries used only for filtering documents. To avoid computing scores, Elasticsearch automatically detects these situations and optimizes query execution.

GET bank/_search
{
  "query":{
    "bool": {
      "must": [
        {
          "match": {
            "address": "mill"
          }
        }
      ],
      "filter": {
        "range": {
          "balance": {
            "gte": 10000,
            "lte": 20000
          }
        }
      }
    }
  }
}
Because there is a must clause, the results carry relevance scores.



GET bank/_search
{
  "query":{
    "bool": {
      "filter": {
        "range": {
          "balance": {
            "gte": 10000,
            "lte": 20000
          }
        }
      }
    }
  }
}
With only a filter, the results carry no relevance scores.

term [exact-value query]
Like match, term matches the value of a field, but: use match for full-text (text) fields and term for exact matching on other, non-text fields.
GET bank/_search
{
  "query":{
    "bool": {
      "must": [
        {
          "term": {
            "age": "39"
          }
        }
      ]
    }
  }
}
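For exact matching against a text field, you can instead query the keyword sub-field that the default mapping creates (the Mapping section below shows the address.keyword sub-field). A sketch:

GET bank/_search
{
  "query": {
    "match": {
      "address.keyword": "mill road"
    }
  }
}

Unlike match on address, this only hits documents whose address value is exactly the string "mill road", so against the sample data it will likely return nothing.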
aggregations (running aggregations)
Aggregations let you group data and extract statistics from it. The simplest aggregations roughly correspond to SQL GROUP BY and the SQL aggregate functions. In Elasticsearch, a single search can return hits and aggregation results at the same time, kept separate within one response. This is powerful and efficient: you can run a query and multiple aggregations and get all of their results back in one round trip, through one concise, simplified API, avoiding extra network hops.

# Find everyone whose address contains mill; return their age distribution and average age

Query
GET bank/_search
{
  "query":{"bool": {"must": [{"match": {"address": "mill"}}]}},
  "aggs":{ 		#设置聚合
    "ageAgg":{	#年龄分布聚合
      "terms": {
        "field": "age",
        "size": 10
      }
    },
    "ageAvg":{  #年龄平均聚合
      "avg": {
        "field": "age"
      }
    }
  }
}


Result
{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {... },
  "hits" : { ... },
  "aggregations" : { #所有聚合结果
    "ageAgg" : {    #年龄分布聚合
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : 38,
          "doc_count" : 2
        },
        {
          "key" : 28,
          "doc_count" : 1
        },
        {
          "key" : 32,
          "doc_count" : 1
        }
      ]
    },
    "ageAvg" : { #年龄平均聚合
      "value" : 34.0
    }
  }
}


Beyond this, aggregations can also be nested (sub-aggregations).
# aggregate by age, and compute the average balance of the people in each age bucket
GET bank/account/_search
{
	"query": {
		"match_all": {}
	},
	"aggs": {
		"age_avg": {
			"terms": {
				"field": "age",
				"size": 1000
			},
			"aggs": { #聚合里面写聚合
				"banlances_avg": {
					"avg": {
						"field": "balance"
					}
				}
			}
		}
	},
	"size": 1000
}

Partial result
"buckets" : [
        {
          "key" : 31,
          "doc_count" : 61,
          "banlances_avg" : {
            "value" : 28312.918032786885
          }
        },
        ...
]        

Mapping

Mapping defines how a document and the fields it contains are stored and indexed. For example, mappings are used to define:

  • which string fields should be treated as full-text fields;

  • which fields contain numbers, dates, or geo locations;

  • whether all fields of the document should be indexed (the _all setting);

  • date formats;

  • custom rules for dynamically added fields.

Usage:
view the mapping of the bank index:

GET bank/_mapping

Result

{
  "bank" : {
    "mappings" : {
      "properties" : {
        "account_number" : {
          "type" : "long" #long类型
        },
        "address" : {
          "type" : "text", #text全文检索类型
          "fields" : {
            "keyword" : {
              "type" : "keyword", #keyword类型不支持全文检索,所以使用address.keyword来match是必须全部一样才行
              "ignore_above" : 256
            }
          }
        },
       ...
      }
} 

Common field types

Common operations

1. Create a mapping

PUT /my-index
{
	"mappings": {
		"properties": {
			"age":{ "type": "integer" },
			"email":{ "type": "keyword"},
			"name":{ "type": "text"}
		}
	}
}

2. Add a new field mapping

PUT /my-index/_mapping
{
	"properties": {
		"employee-id": {
			"type": "keyword",
			"index": false
		}
	}
}

3. Update a mapping
An existing field mapping cannot be updated. To change it, you must create a new index and migrate the data.
Step 1: create the new index

PUT /new_bank
{
  "mappings": {
    "properties" : {
        "account_number" : {
          "type" : "long"
        },
        "address" : {
          "type" : "text"
        },
        "age" : {
          "type" : "long"
        },
        "balance" : {
          "type" : "long"
        },
        "city" : {
          "type" : "keyword"
        },
        "email" : {
          "type" : "keyword"
        },
        "employer" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "firstname" : {
          "type" : "text"
        },
        "gender" : {
          "type" : "keyword"
        },
        "lastname" : {
          "type" : "keyword"
        },
        "state" : {
          "type" : "keyword"
        }
      }
  }
}

Step 2: data migration, as follows.

4. Data migration

ES 7.0 and later
First create new_bank with the correct mapping, then migrate the data like this:

POST _reindex   # fixed form; in new versions an index has no types
{
	"source": {
		"index": "bank"
	},
	"dest": {
		"index": "new_bank"
	}
}

ES 6.0 and earlier
Migrate the data that lives under a type of the old index:

POST _reindex
{
	"source": { 	# old-version indices have types; when migrating from an old version, specify "type" in "source"
		"index": "bank",
		"type": "account"
	},
	"dest": {
		"index": "new_bank"
	}
}

You can check whether an index still has a type with GET /bank/_search.

Tokenization

ik analyzer download link
Upload it to the /mydata/elasticsearch/plugins/ik directory
unzip elasticsearch-analysis-ik-7.4.2.zip to extract it

Test:

POST _analyze
{
  "analyzer": "ik_smart",
  "text": "我是中国人"
}

POST _analyze
{
	"analyzer": "ik_max_word",
	"text": "我是中国人"
}

Custom dictionary

Serve the custom word file over HTTP; in this setup it lives under an nginx web root at /mydata/nginx/html and is reachable on port 8081:

cd /mydata/nginx/html
mkdir ik
cd ik
vim fenci.txt
>六麻了

cd /mydata/elasticsearch/plugins/ik/config
vim IKAnalyzer.cfg.xml

Set the config file to:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>IK Analyzer extension configuration</comment>
<!-- users can configure their own extension dictionary here -->
<entry key="ext_dict"></entry>
<!-- users can configure their own extension stopword dictionary here -->
<entry key="ext_stopwords"></entry>
<!-- users can configure a remote extension dictionary here: point it at your own word file -->
<entry key="remote_ext_dict">http://abnlch.fun:8081/ik/fenci.txt</entry>
<!-- users can configure a remote extension stopword dictionary here -->
<!-- <entry key="remote_ext_stopwords">words_location</entry> -->
</properties>

Then restart ES:

docker restart es

Test

POST _analyze
{
  "analyzer": "ik_smart",
  "text": "六麻了"
}

Using ES from Java

1. Spring Boot integration

<dependency>
	<groupId>org.elasticsearch.client</groupId>
	<artifactId>elasticsearch-rest-high-level-client</artifactId>
	<version>7.4.2</version>
</dependency>

spring-boot-dependencies pins the Elasticsearch version through the elasticsearch.version property,
so override that property in this module:

<properties>
    <java.version>8</java.version>
    <elasticsearch.version>7.4.2</elasticsearch.version>
</properties>

2. Configuration

@SpringBootConfiguration
public class ESConfig {
	public static final RequestOptions COMMON_OPTIONS;
    static {
        RequestOptions.Builder builder = RequestOptions.DEFAULT.toBuilder();
        COMMON_OPTIONS = builder.build();
    }
    
    @Bean
    public RestHighLevelClient client() {
        RestHighLevelClient client = new RestHighLevelClient(RestClient.builder(
                new HttpHost("abnlch.fun", 9200, "http")));
        return client;
    }
}

3. Usage
Following the official documentation:

	@Test
    void indexData() throws IOException {
        IndexRequest indexRequest = new IndexRequest("users"); //create the IndexRequest
        indexRequest.id("1"); //set the document id
        String jsonString = JSON.toJSONString(new User("1Stack1",12));
        indexRequest.source(jsonString, XContentType.JSON); //the document to save to ES

        IndexResponse index = client.index(indexRequest, ESConfig.COMMON_OPTIONS); //execute the index operation
        System.out.println(index);
    }
    
    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    class User{
        private String name;
        private Integer age;
    }

Documentation for the individual operations

Complex search
Find everyone whose address contains mill, aggregate them by age, and compute the average balance per age bucket:

GET bank/account/_search
{
	"query": {
		"match": {
			"address": {
				"query": "mill"
			}
		}
	},
	"aggregations": {
		"age_avg": {
			"terms": {
				"field": "age",
				"size": 1000
			},
			"aggregations": {
				"banlances_avg": {
					"avg": {
						"field": "balance"
					}
				}
			}
		}
	}
}

    @Test
    void searchData() throws IOException {
        SearchRequest searchRequest = new SearchRequest();
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        //use the query statements to populate searchSourceBuilder
        //build the query
        MatchQueryBuilder matchQueryBuilder = QueryBuilders.matchQuery("address", "mill");
        searchSourceBuilder.query(matchQueryBuilder);

        //build the aggregations
        AvgAggregationBuilder banlancesAvg = AggregationBuilders
                .avg("banlances_avg").field("balance"); //sub-aggregation
        TermsAggregationBuilder ageAvg = AggregationBuilders
                .terms("age_avg").field("age").size(1000)
                .subAggregation(banlancesAvg); //subAggregation() attaches the sub-aggregation
        searchSourceBuilder.aggregation(ageAvg);

        //attach the source to the searchRequest
        searchRequest.source(searchSourceBuilder);
        System.out.println(searchSourceBuilder);
        
        SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT); //execute the search
        
        System.out.println(searchResponse);
    }