Building an Elasticsearch Cluster with Docker
Cluster plan: three nodes.
# Prepare 3 es nodes (the es defaults are http 9200, tcp 9300)
- http 9201 tcp:9301 node-1 elasticsearch.yml
- http 9202 tcp:9302 node-2 elasticsearch.yml
- http 9203 tcp:9303 node-3 elasticsearch.yml
- Notes
- Every node must use the same cluster name: cluster.name
- Every node needs a unique name: node.name
- Enable remote connections on every node: network.host: 0.0.0.0
- Specify the IP address used for inter-node communication: network.publish_host
- Change the http port and the tcp transport port: http.port / transport.tcp.port
- List the nodes used for cluster discovery: discovery.seed_hosts, identical on node-1, node-2, node-3 (see the note after this list)
- Master-eligible nodes allowed to bootstrap the cluster: cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
- Minimum number of nodes that must join before the cluster recovers its state: gateway.recover_after_nodes: 2
- Enable cross-origin access on every node: http.cors.enabled: true, http.cors.allow-origin: "*"
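Note: the configuration files below use the legacy discovery.zen.* setting names, which Elasticsearch 7.x still accepts but flags with deprecation warnings. If you prefer the 7.x-native names, the equivalent settings (a sketch using the same hosts as below) are:
# 7.x replacement for discovery.zen.ping.unicast.hosts
discovery.seed_hosts: ["172.30.38.46:9301","172.30.38.46:9302","172.30.38.46:9303"]
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
# discovery.zen.minimum_master_nodes is ignored in 7.x; the voting quorum is managed automatically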
Configuration files
cluster
node-1
config/elasticsearch.yml
# Cluster name
cluster.name: es-cluster
# Node name
node.name: node-1
# Publish address: the single address advertised to the other nodes in the cluster so that they can reach this node. An elasticsearch node may be bound to several addresses, but it has exactly one publish address.
# IP of the Docker host
network.publish_host: 172.30.38.46
# Accept remote connections; sets bind_host and publish_host together
network.host: 0.0.0.0
# HTTP port exposed to clients
http.port: 9201
# TCP port used for inter-node communication
transport.tcp.port: 9301
# Minimum number of master-eligible nodes (guards against split brain, usually n/2 + 1 where n is the cluster size); deprecated and ignored since 7.x
discovery.zen.minimum_master_nodes: 2
# Nodes that a freshly started node contacts for discovery (a newly added node must also list itself)
discovery.zen.ping.unicast.hosts: ["172.30.38.46:9301","172.30.38.46:9302","172.30.38.46:9303"]
# Master-eligible nodes (by node name) for the initial cluster bootstrap; required in 7.x
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
# Enable CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
node-2
config/elasticsearch.yml
# Cluster name
cluster.name: es-cluster
# Node name
node.name: node-2
# Publish address: the single address advertised to the other nodes in the cluster so that they can reach this node. An elasticsearch node may be bound to several addresses, but it has exactly one publish address.
network.publish_host: 172.30.38.46
# Accept remote connections; sets bind_host and publish_host together
network.host: 0.0.0.0
# HTTP port exposed to clients
http.port: 9202
# TCP port used for inter-node communication
transport.tcp.port: 9302
# Minimum number of master-eligible nodes (guards against split brain, usually n/2 + 1 where n is the cluster size); deprecated and ignored since 7.x
discovery.zen.minimum_master_nodes: 2
# Nodes that a freshly started node contacts for discovery (a newly added node must also list itself)
discovery.zen.ping.unicast.hosts: ["172.30.38.46:9301","172.30.38.46:9302","172.30.38.46:9303"]
# Master-eligible nodes (by node name) for the initial cluster bootstrap; required in 7.x
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
# Enable CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
node-3
config/elasticsearch.yml
# Cluster name
cluster.name: es-cluster
# Node name
node.name: node-3
# Publish address: the single address advertised to the other nodes in the cluster so that they can reach this node. An elasticsearch node may be bound to several addresses, but it has exactly one publish address.
network.publish_host: 172.30.38.46
# Accept remote connections; sets bind_host and publish_host together
network.host: 0.0.0.0
# HTTP port exposed to clients
http.port: 9203
# TCP port used for inter-node communication
transport.tcp.port: 9303
# Minimum number of master-eligible nodes (guards against split brain, usually n/2 + 1 where n is the cluster size); deprecated and ignored since 7.x
discovery.zen.minimum_master_nodes: 2
# Nodes that a freshly started node contacts for discovery (a newly added node must also list itself)
discovery.zen.ping.unicast.hosts: ["172.30.38.46:9301","172.30.38.46:9302","172.30.38.46:9303"]
# Master-eligible nodes (by node name) for the initial cluster bootstrap; required in 7.x
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
# Enable CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
Create the directories /elk/escluster-kibana-compose/node-1, /elk/escluster-kibana-compose/node-2, and /elk/escluster-kibana-compose/node-3. In each of them, create a config folder and place the corresponding elasticsearch.yml from above inside it.
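A minimal host-prep sketch, assuming the paths above: the official image runs Elasticsearch as uid 1000, so the mounted data directories must be writable by that user, and Elasticsearch's bootstrap checks require vm.max_map_count of at least 262144 on the Docker host.
# create config, data and plugin directories for all three nodes
mkdir -p /elk/escluster-kibana-compose/node-{1,2,3}/{config,data,plugins/ik-7.6.0}
# the elasticsearch user inside the official image is uid 1000
chown -R 1000:1000 /elk/escluster-kibana-compose/node-{1,2,3}
# required by Elasticsearch's bootstrap checks
sysctl -w vm.max_map_count=262144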
/elk/escluster-kibana-compose/kibana.yml
server.name: kibana
server.port: 5606
server.host: "0"
elasticsearch.hosts: [ "http://es01:9201","http://es02:9202","http://es03:9203" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
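Note that elasticsearch.hosts points at the compose service names (es01, es02, es03), which resolve only on the escluster network defined in the docker-compose.yml below, and the ports match each node's http.port. Once the stack is up, the in-network resolution can be checked like this (an optional sketch, assuming curl is available inside the kibana image):
# resolves via Docker's embedded DNS on the shared compose network
docker-compose exec kibana curl -s http://es01:9201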
The docker-compose.yml file
version: "2"
networks:
  escluster:
services:
  es01:
    image: elasticsearch:7.6.0
    ports:
      - "9201:9201"
      - "9301:9301"
    networks:
      - "escluster"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - /elk/escluster-kibana-compose/node-1/data:/usr/share/elasticsearch/data
      - /elk/escluster-kibana-compose/node-1/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /elk/escluster-kibana-compose/node-1/plugins/ik-7.6.0:/usr/share/elasticsearch/plugins/ik-7.6.0
  es02:
    image: elasticsearch:7.6.0
    ports:
      - "9202:9202"
      - "9302:9302"
    networks:
      - "escluster"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - /elk/escluster-kibana-compose/node-2/data:/usr/share/elasticsearch/data
      - /elk/escluster-kibana-compose/node-2/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /elk/escluster-kibana-compose/node-2/plugins/ik-7.6.0:/usr/share/elasticsearch/plugins/ik-7.6.0
  es03:
    image: elasticsearch:7.6.0
    ports:
      - "9203:9203"
      - "9303:9303"
    networks:
      - "escluster"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - /elk/escluster-kibana-compose/node-3/data:/usr/share/elasticsearch/data
      - /elk/escluster-kibana-compose/node-3/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /elk/escluster-kibana-compose/node-3/plugins/ik-7.6.0:/usr/share/elasticsearch/plugins/ik-7.6.0
  kibana:
    image: kibana:7.6.0
    ports:
      - "5606:5606"
    networks:
      - "escluster"
    volumes:
      - /elk/escluster-kibana-compose/kibana.yml:/usr/share/kibana/config/kibana.yml
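The compose file mounts an IK analyzer (Chinese word segmentation) plugin directory into each node. One way to populate it before starting the stack (a sketch; the URL assumes the medcl/elasticsearch-analysis-ik GitHub releases, and the plugin version must exactly match the Elasticsearch version):
# download the IK release matching ES 7.6.0 and unpack it for each node
wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.6.0/elasticsearch-analysis-ik-7.6.0.zip
for n in node-1 node-2 node-3; do
  unzip -o elasticsearch-analysis-ik-7.6.0.zip -d /elk/escluster-kibana-compose/$n/plugins/ik-7.6.0
done
If you do not need IK, simply drop the plugins volume lines from the compose file.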
Run docker-compose
# start the stack in the background
docker-compose up -d
# tail the logs
docker-compose logs -f
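Before hitting the HTTP endpoints, docker-compose ps gives a quick view of whether all four containers are still running:
docker-compose ps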
Once startup completes, visit each node:
http://172.30.38.46:9201/
http://172.30.38.46:9202/
http://172.30.38.46:9203/
{
"name": "node-1",
"cluster_name": "es-cluster",
"cluster_uuid": "gC1ZHP8uSOqnqof0-Rdlcg",
"version": {
"number": "7.6.0",
"build_flavor": "default",
"build_type": "docker",
"build_hash": "7f634e9f44834fbc12724506cc1da681b0c3b1e3",
"build_date": "2020-02-06T00:09:00.449973Z",
"build_snapshot": false,
"lucene_version": "8.4.0",
"minimum_wire_compatibility_version": "6.8.0",
"minimum_index_compatibility_version": "6.0.0-beta1"
},
"tagline": "You Know, for Search"
}
{
"name": "node-2",
"cluster_name": "es-cluster",
"cluster_uuid": "gC1ZHP8uSOqnqof0-Rdlcg",
"version": {
"number": "7.6.0",
"build_flavor": "default",
"build_type": "docker",
"build_hash": "7f634e9f44834fbc12724506cc1da681b0c3b1e3",
"build_date": "2020-02-06T00:09:00.449973Z",
"build_snapshot": false,
"lucene_version": "8.4.0",
"minimum_wire_compatibility_version": "6.8.0",
"minimum_index_compatibility_version": "6.0.0-beta1"
},
"tagline": "You Know, for Search"
}
{
"name": "node-3",
"cluster_name": "es-cluster",
"cluster_uuid": "gC1ZHP8uSOqnqof0-Rdlcg",
"version": {
"number": "7.6.0",
"build_flavor": "default",
"build_type": "docker",
"build_hash": "7f634e9f44834fbc12724506cc1da681b0c3b1e3",
"build_date": "2020-02-06T00:09:00.449973Z",
"build_snapshot": false,
"lucene_version": "8.4.0",
"minimum_wire_compatibility_version": "6.8.0",
"minimum_index_compatibility_version": "6.0.0-beta1"
},
"tagline": "You Know, for Search"
}
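The root endpoint only proves that each node is up. To confirm that the three nodes actually formed a single cluster, query the _cat/nodes and _cluster/health APIs on any node:
# list cluster members; expect three rows, with * marking the elected master
curl "http://172.30.38.46:9201/_cat/nodes?v"
# cluster-wide health; number_of_nodes should be 3
curl "http://172.30.38.46:9201/_cluster/health?pretty"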
Visit Kibana
http://172.30.38.46:5606/
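As a final smoke test (the index name test-index is an arbitrary choice), create an index with one replica per shard and check that the shards are spread across all three nodes:
curl -X PUT "http://172.30.38.46:9201/test-index" -H 'Content-Type: application/json' -d'
{
  "settings": { "number_of_shards": 3, "number_of_replicas": 1 }
}'
# a shard's primary and its replica are never allocated to the same node
curl "http://172.30.38.46:9201/_cat/shards/test-index?v"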