Checking the Kibana logs showed that the Elasticsearch engine did not have enough RAM to operate normally: an in-flight query needed more memory than the current heap could provide, so the request could not be served and the circuit breaker tripped with:
Data too large, data for [<http_request>] would be [124171416/118.4mb], which is larger than the lim
This happened because I had originally started ES in a Docker container with the JVM heap capped at -Xmx128m:
docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e ES_JAVA_OPTS="-Xms64m -Xmx128m" \
  -v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
  -d elasticsearch:7.4.2
Fix 1:
Restart the ES container in Docker.
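Assuming the container name `elasticsearch` from the run command above, the restart is a single command:

```shell
# Restart the existing container; the JVM is relaunched,
# which frees the heap and resets the circuit-breaker accounting.
docker restart elasticsearch
```

Note that this only buys time: with the same -Xmx128m cap, the error will come back once queries accumulate memory again.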
Fix 2:
Delete the ES container and start a fresh one, specifying a larger heap in the startup command.
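Before re-running with a bigger heap, the old container has to be stopped and removed (otherwise the name `elasticsearch` is already taken). The index data survives this because it lives in the bind-mounted /mydata/elasticsearch/data directory on the host, not inside the container:

```shell
docker stop elasticsearch   # stop the running container
docker rm elasticsearch     # remove it so the name can be reused
```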
docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e ES_JAVA_OPTS="-Xms256m -Xmx512m" \
  -v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
  -d elasticsearch:7.4.2
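To confirm the new heap limit actually took effect, the `_cat/nodes` API can report the configured maximum (this assumes ES is reachable on localhost:9200 as in the port mapping above):

```shell
# After the container is healthy, this should print roughly 512mb
curl -s 'http://localhost:9200/_cat/nodes?h=heap.max'
```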