Installing and configuring a distributed Elasticsearch cluster, Kibana, head, and cerebro with Docker

There are three nodes: node-master (192.168.152.45), node-data1 (192.168.152.39), and node-data2 (192.168.152.29).

On each node, install Elasticsearch with Docker:

docker pull docker.elastic.co/elasticsearch/elasticsearch:6.2.3

Master node

On the node-master node, create a new configuration file:

bootstrap.memory_lock: false
bootstrap.system_call_filter: false

cluster.name: my-application
node.name: node-master
node.master: true
node.data: false
network.publish_host: 192.168.152.45
network.host: 0.0.0.0
http.max_content_length: 1024mb
http.cors.enabled: true
http.cors.allow-origin: "*" 
discovery.zen.ping_timeout: 10s 
discovery.zen.fd.ping_timeout: 10000s
discovery.zen.fd.ping_retries: 10

network.publish_host is the publish address. It must be unique per node and is the address the cluster nodes use to communicate with each other.

Run ES with the following command:

docker run  --name esmaster --ulimit nofile=65536:131072 -p 9200:9200 -p 9300:9300 -v /home/iie4bu/ddy/docker-elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /home/iie4bu/ddy/docker-elasticsearch/data:/usr/share/elasticsearch/data  -e "ES_JAVA_OPTS=-Xms5120m -Xmx5120m" docker.elastic.co/elasticsearch/elasticsearch:6.2.3

To run it in the background, add the -d option.
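As a convenience, the master config above can be written to disk and sanity-checked in one short script before starting the container. This is a sketch, not part of the original setup; the paths mirror the article (abridged config shown):

```shell
# Write the master-node elasticsearch.yml (abridged to the role-relevant
# settings) and verify it before starting the container.
mkdir -p ./docker-elasticsearch
cat > ./docker-elasticsearch/elasticsearch.yml <<'EOF'
cluster.name: my-application
node.name: node-master
node.master: true
node.data: false
network.publish_host: 192.168.152.45
network.host: 0.0.0.0
EOF

# A dedicated master must not hold data; catch typos before boot.
grep -q '^node.master: true' ./docker-elasticsearch/elasticsearch.yml
grep -q '^node.data: false' ./docker-elasticsearch/elasticsearch.yml
echo "master config ok"
```

The docker run command above then bind-mounts this file into the container.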

Data node

The data node configuration file is as follows:

bootstrap.memory_lock: false
bootstrap.system_call_filter: false

cluster.name: my-application
node.name: node-data1
node.master: false
node.data: true
network.publish_host: 192.168.152.39
network.host: 0.0.0.0
http.max_content_length: 1024mb
discovery.zen.ping.unicast.hosts: ["192.168.152.45"]
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.zen.ping_timeout: 10s
discovery.zen.fd.ping_timeout: 10000s
discovery.zen.fd.ping_retries: 10

Run the Docker command:

docker run  --name es-data1 --ulimit nofile=65536:131072 -p 9200:9200 -p 9300:9300 -v /home/iie4bu/ddy/docker-elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /home/iie4bu/ddy/docker-elasticsearch/data:/usr/share/elasticsearch/data  -e "ES_JAVA_OPTS=-Xms5120m -Xmx5120m" docker.elastic.co/elasticsearch/elasticsearch:6.2.3

To run it in the background, add the -d option.
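Assuming node-data2 mirrors node-data1's config, the two data-node files differ only in node.name and network.publish_host, so they can be generated from one template. This helper is a convenience sketch, not from the original article; the name/IP pairs are the article's hosts:

```shell
# Generate a per-node elasticsearch.yml from the shared data-node
# template; only node.name and network.publish_host vary per host.
write_data_config() {
  local name="$1" ip="$2" out="$3"
  cat > "$out" <<EOF
cluster.name: my-application
node.name: $name
node.master: false
node.data: true
network.publish_host: $ip
network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["192.168.152.45"]
EOF
}

write_data_config node-data1 192.168.152.39 ./es-data1.yml
write_data_config node-data2 192.168.152.29 ./es-data2.yml
```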

After starting ES on each node, check with docker container ls:

iie4bu@swarm-manager:~$ docker container ls
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED              STATUS              PORTS                                            NAMES
869fcd7708ed        docker.elastic.co/elasticsearch/elasticsearch:6.2.3   "/usr/local/bin/dock…"   About a minute ago   Up About a minute   0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   esmaster
iie4bu@swarm-worker1:~$ docker container ls
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED              STATUS              PORTS                                            NAMES
c8c97f4d0685        docker.elastic.co/elasticsearch/elasticsearch:6.2.3   "/usr/local/bin/dock…"   About a minute ago   Up About a minute   0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   es-data1
iie4bu@swarm-worker2:~$ docker container ls
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED              STATUS              PORTS                                            NAMES
518ce01ecdd7        docker.elastic.co/elasticsearch/elasticsearch:6.2.3   "/usr/local/bin/dock…"   About a minute ago   Up About a minute   0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   es-data2

Installing elasticsearch-head

Pull the image:

iie4bu@swarm-manager:~$ docker pull tobias74/elasticsearch-head

Run the container:

iie4bu@swarm-manager:~$ docker run -d --name es-head -p 9100:9100 --restart always tobias74/elasticsearch-head

Viewing the cluster with elasticsearch-head

The cluster shows up and runs successfully.

Installing Kibana

Pull the Kibana image:

docker pull docker.elastic.co/kibana/kibana:6.2.3

Run Kibana:

docker run -e ELASTICSEARCH_URL=http://192.168.152.45:9200 -p 5601:5601 --name kibana docker.elastic.co/kibana/kibana:6.2.3


Running two ES instances on one data node

On node-data1 (192.168.152.39), create another Elasticsearch node, node-data3.

The elasticsearch.yml file is as follows:

cluster.name: my-application
node.name: node-data3
node.master: false
node.data: true
transport.tcp.port: 9301
http.port: 9201
network.publish_host: 192.168.152.39
network.host: 0.0.0.0
http.max_content_length: 1024mb
discovery.zen.ping.unicast.hosts: ["192.168.152.45:9300"]
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.zen.ping_timeout: 10s
discovery.zen.fd.ping_timeout: 10000s
discovery.zen.fd.ping_retries: 10

network.bind_host: 192.168.152.39 is not set here: since ES runs inside Docker, there is no need to bind to the host IP.

Note that transport.tcp.port and http.port must be set here, because node-data1 and node-data3 run on the same server; otherwise the following error occurs:

[2019-07-11T08:11:50,250][INFO ][o.e.d.z.ZenDiscovery     ] [node-data3] failed to send join request to master [{node-master}{ahgcLIkTT32KVcZORZzUZQ}{CueMB9y5TVWEi_h7B0I9QA}{192.168.152.45}{192.168.152.45:9300}{ml.machine_memory=67430027264, ml.max_open_jobs=20, ml.enabled=true}], reason [RemoteTransportException[[node-master][172.17.0.2:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[node-data3][192.168.152.39:9300] handshake failed. unexpected remote node {node-data1}{jji1eahBReWUu-IfBLEMyg}{Jo_QU2nlS8Oea0JundbRMQ}{192.168.152.39}{192.168.152.39:9300}{ml.machine_memory=67430027264, ml.max_open_jobs=20, ml.enabled=true}]; ]

Run the command:

docker run  --name es-data3 --ulimit nofile=65536:131072 -p 9201:9201 -p 9301:9301 -v /home/iie4bu/ddy/docker-elasticsearch2/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /home/iie4bu/ddy/docker-elasticsearch2/data:/usr/share/elasticsearch/data  -e "ES_JAVA_OPTS=-Xms5120m -Xmx5120m" docker.elastic.co/elasticsearch/elasticsearch:6.2.3


This starts two ES nodes on a single server.
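The same two-containers-on-one-host layout can also be sketched as a docker-compose file. This is a hypothetical equivalent of the two docker run commands, not something the original setup used; the config and data paths follow the article:

```yaml
# Hypothetical docker-compose sketch of the two ES containers on node-data1.
version: "2"
services:
  es-data1:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    ports: ["9200:9200", "9300:9300"]
    ulimits:
      nofile: { soft: 65536, hard: 131072 }
    environment:
      - "ES_JAVA_OPTS=-Xms5120m -Xmx5120m"
    volumes:
      - /home/iie4bu/ddy/docker-elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /home/iie4bu/ddy/docker-elasticsearch/data:/usr/share/elasticsearch/data
  es-data3:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    ports: ["9201:9201", "9301:9301"]
    ulimits:
      nofile: { soft: 65536, hard: 131072 }
    environment:
      - "ES_JAVA_OPTS=-Xms5120m -Xmx5120m"
    volumes:
      - /home/iie4bu/ddy/docker-elasticsearch2/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /home/iie4bu/ddy/docker-elasticsearch2/data:/usr/share/elasticsearch/data
```

Each container keeps its own config file, so the transport.tcp.port and http.port overrides for node-data3 stay in its elasticsearch.yml.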

Problems encountered during ES installation:

Problem 1:

[temuser@n1 data-node1]$ docker run  --name esmaster --ulimit nofile=65536:131072 -p 9202:9202 -p 9302:9302 -v /home/temuser/ddy/elasticsearch/master/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /home/temuser/ddy/elasticsearch/master/data:/usr/share/elasticsearch/data  -e "ES_JAVA_OPTS=-Xms5120m -Xmx5120m" docker.elastic.co/elasticsearch/elasticsearch:6.2.3
[2019-07-13T05:01:57,878][INFO ][o.e.n.Node               ] [node-master2] initializing ...
[2019-07-13T05:01:57,890][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-master2] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Failed to create node environment
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.2.3.jar:6.2.3]
	at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.3.jar:6.2.3]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.3.jar:6.2.3]
Caused by: java.lang.IllegalStateException: Failed to create node environment
	at org.elasticsearch.node.Node.<init>(Node.java:267) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.3.jar:6.2.3]
	... 6 more
Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84) ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
	at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384) ~[?:?]
	at java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_161]
	at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) ~[?:1.8.0_161]
	at java.nio.file.Files.createDirectories(Files.java:767) ~[?:1.8.0_161]
	at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:204) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.3.jar:6.2.3]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.3.jar:6.2.3]
	... 6 more

The cause is insufficient permissions on /home/temuser/ddy/elasticsearch/master/data on the host, even though the error log reports the path inside the container, /usr/share/elasticsearch/data/nodes.

Solution: chmod 777 /home/temuser/ddy/elasticsearch/master/data
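The fix can be verified locally before restarting the container. This sketch uses a local stand-in path rather than the article's:

```shell
# Reproduce the fix for the AccessDeniedException: the bind-mounted data
# directory must be writable by the elasticsearch user inside the
# container (uid 1000 in the official 6.x image).
mkdir -p ./es-master-data
chmod 777 ./es-master-data

# Confirm the mode actually changed before restarting the container.
stat -c '%a' ./es-master-data    # prints 777 on GNU stat
```

A narrower alternative to mode 777 is chown -R 1000:1000 on the host directory, since the official image runs ES as uid 1000.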

Problem 2:

Accessing Elasticsearch prompts for a login:

Option 1:

The cause is that x-pack is installed by default in the Docker image. Uninstall x-pack:

Steps:

  1. Enter the container: docker exec -it es1 /bin/bash
  2. Uninstall the x-pack plugin: ./bin/elasticsearch-plugin remove x-pack
  3. Delete its configuration files (they are protected, so they must be removed manually):
     cd /usr/share/elasticsearch/config
     rm -rf x-pack
  4. Restart the container: docker restart es1

Option 2 (recommended):

Disable x-pack instead:

The elasticsearch.yml file is as follows:

cluster.name: my-application2
node.name: node-master2
node.master: true
node.data: false
transport.tcp.port: 9312
http.port: 9212
network.publish_host: 192.168.152.39
network.host: 0.0.0.0
http.max_content_length: 1024mb
discovery.zen.ping.unicast.hosts: ["192.168.152.39:9312"]
http.cors.enabled: true
http.cors.allow-origin: "*" 
discovery.zen.ping_timeout: 10s 
discovery.zen.fd.ping_timeout: 10000s
discovery.zen.fd.ping_retries: 10

xpack.security.enabled: false
xpack.monitoring.enabled: false
xpack.graph.enabled: false
xpack.watcher.enabled: false

The kibana.yml file is as follows:

server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.152.39:9212"

xpack.security.enabled: false
xpack.monitoring.enabled: false
xpack.graph.enabled: false
xpack.reporting.enabled: false

Run Kibana:

docker run -d --name kibana -v /home/vincent/docker/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml -p 5601:5601 docker.elastic.co/kibana/kibana:6.2.3 

Reference: https://www.cnblogs.com/blogjun/articles/8072751.html

Installing cerebro

docker pull lmenezes/cerebro

Then run it directly:

docker run -d -p 9000:9000 --name cerebro --restart always lmenezes/cerebro

Installing the IK analyzer

Modify the elasticsearch.yml file as follows:

cluster.name: my-application
node.name: node-data4
node.master: false
node.data: true
# network.bind_host: 192.168.171.29
transport.tcp.port: 9301
http.port: 9201
network.publish_host: 192.168.171.29
network.host: 0.0.0.0
http.max_content_length: 1024mb
discovery.zen.ping.unicast.hosts: ["192.168.171.45:9300","192.168.171.45:9301","192.168.171.45:9302"]
http.cors.enabled: true
http.cors.allow-origin: "*" 
discovery.zen.ping_timeout: 10s 
discovery.zen.fd.ping_timeout: 10000s
discovery.zen.fd.ping_retries: 10

Copy the IK analyzer to /home/iie4bu/ddy/elasticsearch-analysis-ik-6.2.3 on the server.

Then run the command:

docker run -d --name esmaster --ulimit nofile=65536:131072 -p 9200:9200 -p 9300:9300 -v /home/iie4bu/ddy/docker-elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /home/iie4bu/ddy/elasticsearch-analysis-ik-6.2.3:/usr/share/elasticsearch/plugins/ -v /home/iie4bu/ddy/docker-elasticsearch/data:/usr/share/elasticsearch/data  -e "ES_JAVA_OPTS=-Xms5120m -Xmx5120m" docker.elastic.co/elasticsearch/elasticsearch:6.2.3

After changing the ES plugins directory, Kibana's plugins directory must also be mounted, otherwise Kibana will not come up:

docker run -d --name kibana -v /home/iie4bu/ddy/docker-kibana/kibana.yml:/usr/share/kibana/config/kibana.yml -v /home/iie4bu/ddy/docker-kibana/plugins:/usr/share/kibana/plugins/ -p 5601:5601 docker.elastic.co/kibana/kibana:6.2.3 
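Once both containers are back up, the plugin can be exercised through the _analyze API, for example from Kibana's Dev Tools console. ik_max_word and ik_smart are the analyzer names the IK plugin registers; the sample text here is only illustrative:

```
POST /_analyze
{
  "analyzer": "ik_max_word",
  "text": "中华人民共和国"
}
```

ik_max_word produces the finest-grained segmentation, while ik_smart produces the coarsest; if the response lists multi-character Chinese tokens rather than single characters, the plugin was loaded correctly.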

 
