Running Elasticsearch 7.4.2, Logstash suddenly started failing to write to an index, reporting errors in the Logstash log.
Error message:
[2020-06-17T12:01:18,241][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"bizlog-wxxcx-2020.06.17", :_type=>"_doc", :_routing=>nil}, #<LogStash::Event:0x5b51884a>], :response=>{"index"=>{"_index"=>"bizlog-wxxcx-2020.06.17", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [5000]/[5000] maximum shards open;"}}}}
Cause:
Elasticsearch 7 and later limits the number of open shards per node to 1000 by default (`cluster.max_shards_per_node`). The error occurs because the cluster has reached its total shard limit, so the shards for the new daily index cannot be created.
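The [5000]/[5000] in the error is the cluster-wide cap: `cluster.max_shards_per_node` multiplied by the number of data nodes. A quick sketch of that arithmetic, assuming this cluster has 5 data nodes (inferred from the error message, not stated in the log):

```shell
# Cluster-wide shard cap = cluster.max_shards_per_node * number of data nodes.
# Assumptions: 5 data nodes, and the Elasticsearch 7.x default of 1000 shards per node.
data_nodes=5
default_cap=$((1000 * data_nodes))   # the [5000] cap seen in the error
raised_cap=$((10000 * data_nodes))   # cap after raising the setting to 10000
echo "default cap: $default_cap"
echo "raised cap:  $raised_cap"
```

So the daily index, which needs 2 new shards, is rejected once the 5000-shard cap is hit.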
Solution:
Raise the cluster's per-node shard limit:
PUT /_cluster/settings
{
  "transient": {
    "cluster": {
      "max_shards_per_node": 10000
    }
  }
}
That resolves the error. Note that "transient" settings are lost on a full cluster restart; use "persistent" instead if the change should survive a restart.
Alternatively, apply the setting via curl:
curl -XPUT -H "Content-Type:application/json" http://localhost:9200/_cluster/settings -d '{"transient":{"cluster":{"max_shards_per_node":2000}}}'
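To confirm the new value took effect, you can read the setting back (a sketch, assuming Elasticsearch is reachable on localhost:9200; `include_defaults` makes the value appear even when it has not been overridden):

```shell
# Query the cluster settings and filter down to max_shards_per_node.
curl -s 'http://localhost:9200/_cluster/settings?include_defaults=true&filter_path=**.max_shards_per_node'
```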