1. For a Maven project, add the following dependencies to the pom.xml
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.17.5</version>
</dependency>
<!-- Elasticsearch client -->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.17.5</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
    <version>7.17.5</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>x-pack-transport</artifactId>
    <version>7.17.5</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.9</version>
</dependency>
2. Connecting to Elasticsearch from Java with username/password authentication
private RestHighLevelClient client;

final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
// "elastic" is the username; "123456" is that user's password
credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("elastic", "123456"));
// "http://127.0.0.1:9200" is the Elasticsearch host address and port
final HttpHost[] httpHosts = Arrays.stream(new String[]{"http://127.0.0.1:9200"})
        .map(HttpHost::create).toArray(HttpHost[]::new);
client = new RestHighLevelClient(RestClient.builder(httpHosts)
        .setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
            @Override
            public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                httpClientBuilder.disableAuthCaching();
                return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
            }
        }));
System.out.println("client:" + client);
client.close();
What httpClientBuilder.disableAuthCaching() does: it disables preemptive authentication.
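Since RestHighLevelClient implements Closeable, the manual client.close() above can be replaced with try-with-resources, which closes the client even when a request throws. A minimal sketch, assuming the same credentialsProvider and host as in the snippet above:

```java
// Sketch only: reuses the credentialsProvider and host defined above.
try (RestHighLevelClient client = new RestHighLevelClient(
        RestClient.builder(HttpHost.create("http://127.0.0.1:9200"))
                .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder
                        .disableAuthCaching()
                        .setDefaultCredentialsProvider(credentialsProvider)))) {
    System.out.println("client:" + client);
} // close() is called automatically here, even if an exception was thrown
```

HttpClientConfigCallback has a single method, so a lambda can replace the anonymous class.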
3. Before creating an index, check whether it already exists
GetIndexRequest request = new GetIndexRequest("user");
boolean exists = client.indices().exists(request, RequestOptions.DEFAULT);
System.out.println("user index exists = " + exists);
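For completeness (the later steps list theirs), the DSL equivalent of this exists check is a HEAD request: it returns 200 if the index exists and 404 otherwise.

DSL equivalent:
HEAD /user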
4. Create the index together with its settings and mappings
// Create the Request object; the index to be created is named "user"
CreateIndexRequest request = new CreateIndexRequest("user");
// Set the request's index settings
request.settings(Settings.builder()
        .put("auto_expand_replicas", "0-all")
        .put("max_result_window", "1000000")
        .put("max_inner_result_window", "1000000"));
// Define the index mapping as a JSON string
request.mapping(
        "{\n" +
        "  \"properties\": {\n" +
        "    \"message\": {\n" +
        "      \"type\": \"text\"\n" +
        "    }\n" +
        "  }\n" +
        "}", XContentType.JSON);
// Execute and get the Response object
CreateIndexResponse createIndexResponse = client.indices().create(request, RequestOptions.DEFAULT);
// Response status
boolean acknowledged = createIndexResponse.isAcknowledged();
System.out.println("user index created: " + acknowledged);
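CreateIndexRequest.mapping(...) also accepts a Map, which avoids escaping JSON by hand. A small sketch building the same one-field mapping as the JSON string above (the UserMapping class name is illustrative, not from the original):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper (name is illustrative): builds the same mapping as the
// JSON string above, but as nested Maps.
public class UserMapping {
    static Map<String, Object> messageTextMapping() {
        Map<String, Object> field = new HashMap<>();
        field.put("type", "text");        // "message" is a text field
        Map<String, Object> properties = new HashMap<>();
        properties.put("message", field);
        Map<String, Object> mapping = new HashMap<>();
        mapping.put("properties", properties);
        // {"properties": {"message": {"type": "text"}}}
        return mapping;
    }
}
```

It would then be passed as request.mapping(UserMapping.messageTextMapping()) instead of the JSON string.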
auto_expand_replicas (from the official docs):
Auto-expand the number of replicas based on the number of data nodes in the cluster. Set to a dash-delimited lower and upper bound (e.g. 0-5) or use all for the upper bound (e.g. 0-all). Defaults to false (i.e. disabled). Note that the auto-expanded number of replicas only takes allocation filtering rules into account, but ignores other allocation rules such as total shards per node, and this can lead to the cluster health becoming YELLOW if the applicable rules prevent all the replicas from being allocated. If the upper bound is all then shard allocation awareness and cluster.routing.allocation.same_shard.host are ignored for this index.
max_result_window (from the official docs):
The maximum value of from + size for searches to this index. Defaults to 10000. Search requests take heap memory and time proportional to from + size, and this limits that memory. See Scroll or Search After for a more efficient alternative to raising this.
max_inner_result_window (from the official docs):
The maximum value of from + size for inner hits definitions and top hits aggregations to this index. Defaults to 100. Inner hits and top hits aggregations take heap memory and time proportional to from + size, and this limits that memory.
To learn more about Elasticsearch index settings, see the official documentation: Index modules | Elasticsearch Guide [7.17]: https://www.elastic.co/guide/en/elasticsearch/reference/7.17/index-modules.html
DSL equivalent:
PUT /index
{
  "settings": {
    "index": {
      "auto_expand_replicas": "0-all",
      "max_result_window": "1000000",
      "max_inner_result_window": "1000000"
    }
  },
  "mappings": {
    "properties": {
      "message": {
        "type": "text"
      }
    }
  }
}
Kibana screenshot:
5. Query index details
// Query the "user" index
GetIndexRequest request = new GetIndexRequest("user");
// To query all indices instead:
// GetIndexRequest request = new GetIndexRequest("*");
GetIndexResponse getIndexResponse = client.indices().get(request, RequestOptions.DEFAULT);
// Print the response details
System.out.println(getIndexResponse.getAliases());   // aliases
System.out.println(getIndexResponse.getMappings());  // mappings
System.out.println(getIndexResponse.getSettings());  // settings
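getSettings() returns a map keyed by index name, so an individual setting can be read back. A sketch, assuming the "user" index created in step 4:

```java
// Sketch: read one setting back for the "user" index created earlier.
// Note the "index." prefix on the stored setting key.
Settings userSettings = getIndexResponse.getSettings().get("user");
System.out.println(userSettings.get("index.max_result_window"));
```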
DSL equivalent:
GET /index
Kibana screenshot:
6. Delete an index
// Build the delete request
DeleteIndexRequest request = new DeleteIndexRequest("user");
// Execute and get the Response object
AcknowledgedResponse response = client.indices().delete(request, RequestOptions.DEFAULT);
// Response status
System.out.println("user index deleted: " + response.isAcknowledged());
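Deleting an index that does not exist throws an ElasticsearchStatusException (index_not_found_exception), so the exists check from step 3 can serve as a guard; a sketch:

```java
// Sketch: only delete "user" if it exists, avoiding index_not_found_exception
if (client.indices().exists(new GetIndexRequest("user"), RequestOptions.DEFAULT)) {
    AcknowledgedResponse response =
            client.indices().delete(new DeleteIndexRequest("user"), RequestOptions.DEFAULT);
    System.out.println("user index deleted: " + response.isAcknowledged());
}
```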
DSL equivalent:
DELETE /index
Kibana screenshot:
Finally, posts like this take real effort to write, so your support is much appreciated.