Note: Elasticsearch cannot be started as root, so the whole installation below is done as a normal (non-root) user.
1. Create a normal user
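Step 1 has no commands in the original notes; a minimal sketch, assuming the user is named yao as in the rest of the document (run as root on every node):

```shell
# Run as root on hadoop1, hadoop2 and hadoop3.
useradd yao    # create the unprivileged user with a home directory
passwd yao     # set its password interactively
```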
2. Set up passwordless SSH login between the cluster nodes for that user
[yao@hadoop1 ~]$ ssh-keygen
Press Enter at every prompt to accept the defaults.
[yao@hadoop1 ~]$ cd .ssh/
[yao@hadoop1 .ssh]$ scp id_rsa.pub yao@192.168.1.61:/home/yao/.ssh/id_rsa.pub.hadoop1
[yao@hadoop1 .ssh]$ scp id_rsa.pub yao@192.168.1.62:/home/yao/.ssh/id_rsa.pub.hadoop1
On hadoop2 and hadoop3, respectively:
[yao@hadoop3 .ssh]$ touch authorized_keys
[yao@hadoop3 .ssh]$ chmod 600 authorized_keys
[yao@hadoop3 .ssh]$ cat id_rsa.pub.hadoop1 >> authorized_keys
From hadoop1, log in to hadoop2 and hadoop3 to verify:
[yao@hadoop1 .ssh]$ ssh yao@hadoop2
Passwordless login succeeded.
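The manual scp/touch/chmod/cat sequence above can also be done in one step per host with ssh-copy-id, which appends the public key to authorized_keys and sets the permissions itself (hostnames assumed to resolve as in the document):

```shell
# Run on hadoop1 after ssh-keygen; repeat for each target host.
ssh-copy-id yao@hadoop2
ssh-copy-id yao@hadoop3
```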
3. Install the JDK under the normal user
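Step 3 is not spelled out in the original notes; a minimal sketch, assuming a JDK 8 tarball has already been downloaded to the home directory (the archive name and unpacked directory are placeholder examples, not from the original). Elasticsearch 6.4 requires Java 8 or later:

```shell
# Unpack the JDK under the normal user's home directory (no root needed).
tar -zxvf jdk-8u181-linux-x64.tar.gz   # archive name is an example

# Point JAVA_HOME at it for this user only.
cat >> ~/.bashrc <<'EOF'
export JAVA_HOME=$HOME/jdk1.8.0_181
export PATH=$JAVA_HOME/bin:$PATH
EOF
source ~/.bashrc
java -version   # verify the JDK is picked up
```

Repeat on every node, since each Elasticsearch instance needs its own JDK.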
4. Install Elasticsearch 6.4
[yao@hadoop1 ~]$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.tar.gz
[yao@hadoop1 ~]$ tar -zxvf elasticsearch-6.4.0.tar.gz
[yao@hadoop1 ~]$ cd elasticsearch-6.4.0/
[yao@hadoop1 elasticsearch-6.4.0]$ vim config/elasticsearch.yml
(The values below are the ones used on the 192.168.1.62 node; node.name and network.host must be adjusted on each machine.)
cluster.name: es
node.name: node3
node.master: true
node.data: true
path.data: /home/yao/elasticsearch-6.4.0/data
path.logs: /home/yao/elasticsearch-6.4.0/logs
network.host: 192.168.1.62
transport.tcp.port: 9300
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.60:9300","192.168.1.61:9300","192.168.1.62:9300"]
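Only node.name and network.host differ between the three nodes; the rest of elasticsearch.yml is shared verbatim. A small hypothetical helper (not part of the original notes) makes that explicit by printing the per-node lines:

```shell
# Print the two settings that must differ on each Elasticsearch node;
# every other line in elasticsearch.yml can be copied unchanged.
node_overrides() {
  name="$1"; ip="$2"
  printf 'node.name: %s\n' "$name"
  printf 'network.host: %s\n' "$ip"
}

node_overrides node1 192.168.1.60
node_overrides node2 192.168.1.61
node_overrides node3 192.168.1.62
```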
[yao@hadoop1 bin]$ ./elasticsearch &
5. Turn off the firewall so the nodes can be reached from a browser
[root@hadoop2 ~]# systemctl stop firewalld
[root@hadoop2 ~]# systemctl disable firewalld
6. Check in a browser whether the installation succeeded
192.168.1.60:9200
192.168.1.61:9200
192.168.1.62:9200
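The same check can be done from the shell with curl (IPs as in the document):

```shell
# Basic node info: returns JSON with the node name, cluster_name
# and the Elasticsearch version.
curl http://192.168.1.60:9200

# Cluster-wide view: once all three nodes have joined,
# number_of_nodes should be 3 and status green or yellow.
curl 'http://192.168.1.60:9200/_cluster/health?pretty'
```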
Problems encountered:
Problem 1:
[2018-09-05T10:42:56,077][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
Cause: Elasticsearch cannot be run with root privileges.
Solution:
Run Elasticsearch as the normal user instead. Make sure that user owns the installation directory, e.g.:
[root@hadoop1 ~]# chown -R yao:yao /home/yao/elasticsearch-6.4.0
(The old workarounds of starting with bin/elasticsearch -Des.insecure.allow.root=true, or setting ES_JAVA_OPTS="-Des.insecure.allow.root=true" in bin/elasticsearch, only applied to Elasticsearch 2.x; the option was removed in 5.x, so on 6.4 a non-root user is the only fix.)
Then restart Elasticsearch as that user.
Problem 2:
Reference: https://blog.csdn.net/feinifi/article/details/73633235?utm_source=itdadao&utm_medium=referral
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Cause: the per-process file descriptor limit (4096) and the virtual memory map count (65530) are both too low and must be raised.
Solution:
[root@hadoop1 logs]# vim /etc/security/limits.conf
yao soft nofile 65536
yao hard nofile 65536
Log out and back in as yao so the new limits take effect, then verify:
[yao@hadoop1 ~]$ ulimit -Hn
65536
[root@hadoop1 yao]# vim /etc/sysctl.conf
vm.max_map_count=655360
[root@hadoop1 yao]# sysctl -p
vm.max_map_count = 655360
Problem solved.