Environment
Ubuntu 14.04
Elasticsearch 2.4.1
CPU: 4-core i5
RAM: 4 GB
Background
Running a search against Elasticsearch through elasticsearch-php failed with an out-of-memory error:

[Elasticsearch\Common\Exceptions\ServerErrorResponseException]
out_of_memory_error: Java heap space

[Elasticsearch\Common\Exceptions\ServerErrorResponseException]
{"error":{"root_cause":[{"type":"out_of_memory_error","reason":"Java heap space"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"api-dev-2016-11-18","node":"zuUJrHUXRDWm1RX_D-0OjQ","reason":{"type":"out_of_memory_error","reason":"Java heap space"}}]},"status":500}
Solution
Edit bin/elasticsearch under the Elasticsearch install directory and add ES_HEAP_SIZE=5G,
or alternatively set ES_JAVA_OPTS="-Xms5g -Xmx5g" (Xms is the initial heap size, Xmx the maximum).
Restart Elasticsearch, then run ps -ef | grep elasticsearch to check whether the change took effect: if the process arguments contain -Xms5G -Xmx5G, the new setting is active.
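The two settings above can be sketched as environment variables. This is a minimal sketch, not the startup script itself; HEAP is a placeholder value, and on this post's 4 GB machine 5g is in fact too large (see the supplement below):

```shell
# Hedged sketch: two equivalent ways Elasticsearch 2.x picks up heap settings.
# HEAP is a placeholder; size it to your actual RAM.
HEAP=5g

# Option 1: the startup script derives -Xms/-Xmx from ES_HEAP_SIZE.
export ES_HEAP_SIZE="$HEAP"

# Option 2: pass the JVM flags directly (Xms = initial heap, Xmx = max heap).
export ES_JAVA_OPTS="-Xms${HEAP} -Xmx${HEAP}"

echo "$ES_JAVA_OPTS"
```

Keeping Xms equal to Xmx avoids heap-resize pauses at runtime; after a restart, the flags should appear in the ps -ef output as described above.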
Supplement
Physical memory is also a hard cap, and hitting it produces this same OOM error. That was my situation: a single index with over 1,000,000 documents on a machine with only 4 GB of RAM, little of which is left over for Elasticsearch, so raising the heap setting any further does nothing. Awkward.
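The official Elasticsearch guidance is to give the heap roughly half of physical RAM, leaving the rest for the OS page cache that Lucene depends on. A small sketch of that rule of thumb on Linux (it falls back to this post's 4 GB figure when /proc/meminfo is unavailable):

```shell
# Rule of thumb: heap ~= 50% of physical RAM, the other half stays
# with the OS for the filesystem cache.
MEM_KB=$(grep MemTotal /proc/meminfo 2>/dev/null | awk '{print $2}')
MEM_KB=${MEM_KB:-4194304}    # fallback: assume the 4 GB box from this post
HEAP_MB=$((MEM_KB / 2 / 1024))
echo "suggested heap: ${HEAP_MB}m"
```

On a 4 GB machine this suggests about 2 GB of heap, which is why the 5 GB setting above could never take effect here.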
It also helps to check node status regularly with curl "localhost:9200/_nodes/stats":
if jvm.mem.heap_used_percent stays above 75 for long stretches, memory is short; raise the heap where you can, and add physical RAM where you must.
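A quick sketch of pulling jvm.mem.heap_used_percent out of the stats response. The STATS string is a minimal made-up sample of the _nodes/stats shape; in practice, pipe the real curl output in instead:

```shell
# Hedged sketch: extract jvm.mem.heap_used_percent with plain grep/cut.
# STATS is a trimmed, hypothetical sample of the _nodes/stats response.
STATS='{"nodes":{"zuUJrHUXRDWm1RX_D-0OjQ":{"jvm":{"mem":{"heap_used_percent":82}}}}}'
PCT=$(printf '%s' "$STATS" | grep -o '"heap_used_percent":[0-9]*' | cut -d: -f2)
echo "heap used: ${PCT}%"

# The 75 threshold mentioned above: sustained values beyond it mean the
# heap is too small for the workload.
if [ "$PCT" -ge 75 ]; then
  echo "heap pressure: grow the heap or add RAM"
fi
```

For real monitoring a JSON-aware tool is sturdier than grep, but this shows the field to watch.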
See the heap-sizing notes in the official Elasticsearch documentation. One particular recommendation: to head off this problem, tune this value on every machine in the cluster according to its hardware; the default of 512m-1g stops being enough once the data volume grows.