2.0 Elasticsearch 7.1: Installing ES on Linux and Fixing Common Problems (external-access edition)

Download and extract

Every version is available here:
https://elasticsearch.cn/download/

[root@localhost hadoop]# tar -zxf jdk-11.0.8_linux-x64_bin.tar.gz -C /usr/local/java
[root@localhost hadoop]# tar -zxf elasticsearch-7.1.0-linux-x86_64.tar.gz -C /usr/local
[root@localhost hadoop]# tar -zxf filebeat-7.1.0-linux-x86_64.tar.gz -C /usr/local
[root@localhost hadoop]# tar -zxf kibana-7.1.0-linux-x86_64.tar.gz -C /usr/local
[root@localhost hadoop]# tar -zxf logstash-7.1.0.tar.gz -C /usr/local

JVM configuration

Edit the JVM settings in the ES directory, config/jvm.options (7.1 defaults to a 1 GB heap).
Recommended: set Xmx and Xms to the same value; Xmx should not exceed 50% of the machine's memory, and never more than 30 GB.
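The sizing rule above can be sketched as a small shell snippet that prints the two lines to put in config/jvm.options for the current machine (the 50% rule and 30 GB cap come from the text; everything else is just a sketch):

```shell
# Print Xms/Xmx per the rule: equal sizes, half of physical RAM, capped at 30 GB.
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
heap_mb=$((mem_mb / 2))
if [ "$heap_mb" -gt 30720 ]; then heap_mb=30720; fi
printf -- '-Xms%sm\n-Xmx%sm\n' "$heap_mb" "$heap_mb"
```

Copy the two printed lines into config/jvm.options in place of the default -Xms1g/-Xmx1g.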

vim /etc/profile
export JAVA_HOME=/usr/local/java/jdk-11.0.8
export PATH=$JAVA_HOME/bin:$PATH

Run source /etc/profile afterwards so the current shell picks it up. (The JAVA_HOME path matches where the JDK tarball was extracted above.)

Problem 1: java.lang.RuntimeException: can not run elasticsearch as root

ES refuses to run as root, so hand the install directory to a non-root user (hadoop throughout this post) and switch to it:
[root@iZbp1bb2egi7w0ueys548pZ local]# chown -R hadoop elasticsearch-7.1.0
[root@iZbp1bb2egi7w0ueys548pZ etc]# su hadoop

Start it in the background (from the bin directory): ./elasticsearch -d

Problem 2: ES cannot be reached from outside the host

elasticsearch.yml

network.host: 192.168.188.100
# the machine's own address, or network.host: 0.0.0.0 to listen on all interfaces
# Set a custom port for HTTP:
#
http.port: 9200

Restart ES after changing these (network settings are static), then check from another machine with curl http://192.168.188.100:9200.

Problem 3: ES still not reachable from outside after fixing a pile of issues

Try turning off the firewall:

# check firewall status
systemctl status firewalld.service
# stop the firewall until the next boot
systemctl stop firewalld.service
# keep firewalld from starting at boot
systemctl disable firewalld.service

(On a machine you care about, prefer leaving firewalld on and opening only the ES ports: firewall-cmd --permanent --add-port=9200/tcp followed by firewall-cmd --reload.)

Problem 4: failed to obtain node locks

Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/local/elasticsearch-7.1.0/data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?

A previously started instance was not fully killed. Find it and stop it:

[hadoop@fly elasticsearch-7.1.0]$ jps
23696 Elasticsearch
24314 Jps
[hadoop@fly elasticsearch-7.1.0]$ kill -9 23696
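If jps is not on the PATH (it ships with the JDK, not with ES), a pgrep-based variant of the same cleanup works too. The class name below is the standard ES main class; the rest is a sketch, and it is shown as a dry run:

```shell
# Find a stale Elasticsearch JVM by its main class. The [E] bracket trick
# keeps pgrep from matching this command itself.
pid=$(pgrep -f 'org\.elasticsearch\.bootstrap\.[E]lasticsearch' || true)
if [ -n "$pid" ]; then
  echo "stale Elasticsearch pid: $pid"   # then: kill $pid (kill -9 only as a last resort)
else
  echo "no stale Elasticsearch process found"
fi
```

Prefer a plain kill (SIGTERM) over kill -9 when possible, so ES can release its node lock cleanly.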

Problem 5: seccomp unavailable (I did not hit this one on 7.1)

Fix: add to elasticsearch.yml:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false

Problem 6: max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]

vim /etc/security/limits.conf — replace *** with the user that starts ES (hadoop in my case):
*** hard nofile 80000
*** soft nofile 80000

To Increase the File Descriptor Limit (Linux) — of the reference steps below I only needed step 6: if the new limit does not take effect, remember to reboot. After the reboot, ulimit -Hn showed the new value immediately.

1.	Display the current hard limit of your machine.
The hard limit is the maximum server limit that can be set without tuning the kernel parameters in proc file system.
$ ulimit -aH
core file size (blocks)       unlimited
data seg size (kbytes)        unlimited
file size (blocks)            unlimited
max locked memory (kbytes)    unlimited
max memory size (kbytes)      unlimited
open files                    1024
pipe size (512 bytes)         8
stack size (kbytes)           unlimited
cpu time (seconds)            unlimited
max user processes            4094
virtual memory (kbytes)       unlimited
2.	Edit the /etc/security/limits.conf and add the lines:
*     soft   nofile  1024
*     hard   nofile  65535 
3.	Edit the /etc/pam.d/login by adding the line:
session required /lib/security/pam_limits.so
4.	Use the system file limit to increase the file descriptor limit to 65535.
The system file limit is set in /proc/sys/fs/file-max .
echo 65535 > /proc/sys/fs/file-max
5.	Use the ulimit command to raise the soft limit to the hard limit specified in /etc/security/limits.conf.
ulimit -n 65535
6.	Restart your system.
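Whichever route you take, verify what the shell actually received — log out and back in as the ES user first (or reboot, as noted above):

```shell
# Soft and hard open-file limits of the current shell; for ES these
# should end up at least 65536 (80000 with the limits.conf lines above).
ulimit -Sn
ulimit -Hn
```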

Problem 7: max virtual memory areas vm.max_map_count [65530] is too low

vim /etc/sysctl.conf 
vm.max_map_count = 262144

Then run sysctl -p to apply it.
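To confirm the kernel picked up the change:

```shell
# Read the live value; it should print 262144 once sysctl -p has applied
# the setting above (the pre-change default is 65530).
cat /proc/sys/vm/max_map_count
```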

Problem 8: the default discovery settings are unsuitable…, at least one of […] must be configured

Fix: enable these in elasticsearch.yml:

node.name: node-1
cluster.initial_master_nodes: ["node-1"]
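For a single node those two lines are enough. On a real cluster, the same bootstrap check is satisfied by also listing seed hosts; a hypothetical three-node sketch (the IPs and node names are placeholders, not from this walkthrough):

```yaml
node.name: node-1
discovery.seed_hosts: ["192.168.188.100", "192.168.188.101", "192.168.188.102"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
```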