Create the index:
PUT /my_alias_v2
{
  "settings": {
    "index": {
      "number_of_shards": 2,
      "number_of_replicas": 1
    }
  }
}
PUT my_alias_v2/docs/_mapping
{
  "properties": {
    "id": {"type": "long"},
    "name": {"type": "text"},
    "counter": {"type": "integer"},
    "tags": {"type": "text"}
  }
}
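Optionally, the mapping can be read back to confirm the field types were applied as intended:
GET my_alias_v2/_mapping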
Index four documents:
POST my_alias_v2/docs/_bulk
{"index": {"_id": 1}}
{"id":1, "name": "张三", "counter":"10", "tags":["red", "black"]}
{"index": {"_id": 2}}
{"id":2, "name": "李四", "counter":"20", "tags":["green", "purple"]}
POST my_alias_v2/docs/_bulk
{"index": {"_id": 3}}
{"id":3, "name": "haxi", "counter":"10", "tags":["red", "black"]}
{"index": {"_id": 4}}
{"id":4, "name": "xiha", "counter":"20", "tags":["green", "purple"]}
Query a document:
GET my_alias_v2/_search
{
  "query": {
    "term": {
      "id": {
        "value": 4
      }
    }
  }
}
Delete a document (by query):
POST my_alias_v2/docs/_delete_by_query
{
  "query": {
    "term": {
      "id": {
        "value": 4
      }
    }
  }
}
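To double-check the delete, running the same term query through the _count API should now report 0 matches:
GET my_alias_v2/_count
{
  "query": {
    "term": {
      "id": {
        "value": 4
      }
    }
  }
}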
Check the index; after the delete there is one document marked as deleted:
GET _cat/shards/my_alias_v2?v
Looking at the segments, they also show the delete-marked document:
GET _cat/segments/my_alias_v2?v
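The delete-marked documents are easier to spot if the cat output is narrowed to the relevant columns (the exact column set can vary by Elasticsearch version):
GET _cat/segments/my_alias_v2?v&h=index,shard,prirep,segment,docs.count,docs.deleted,size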
Force merge the index
After force merging the segments, the primary shard no longer has any delete-marked documents, but the replica shard ends up with one more segment. At that point the primary and replica shards have different segment counts and different store sizes. What causes this?
POST my_alias_v2/_forcemerge?max_num_segments=1
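One way to look into the mismatch described above is to compare the two shard copies side by side; something like the following (column names assume a reasonably recent _cat/shards API) shows doc count, store size and segment count per copy:
GET _cat/shards/my_alias_v2?v&h=index,shard,prirep,docs,store,segments.count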
Perform a commit and clear the translog, to make sure all data has been written to disk:
POST my_alias_v2/_flush
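To confirm the flush actually persisted everything, the translog statistics should show no (or almost no) uncommitted operations afterwards; assuming the translog metric of the stats API is available in your version:
GET my_alias_v2/_stats/translog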
Re-running the entire flow from scratch, whether with one primary/replica shard pair or two, the force-merge size mismatch no longer shows up. I still can't figure out why.
Reference: using forcemerge in Elasticsearch to reclaim the disk space occupied by deleted files
https://blog.csdn.net/tiancityycf/article/details/115736887