This cluster is installed with Docker.
Version: Elasticsearch 7.6.0
Preparation:
Create the directories on the host:
mkdir -p /home/docker/es-docker
mkdir -p /home/tools/es/data/es1
mkdir -p /home/tools/es/data/es2
mkdir -p /home/tools/es/data/es3
mkdir -p /home/tools/es/config
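One host-level prerequisite worth checking before starting (not listed in the original steps, but Elasticsearch's bootstrap checks enforce it once the node binds to a non-loopback address): vm.max_map_count must be at least 262144. A quick check script:

```shell
#!/bin/sh
# Elasticsearch's bootstrap checks require vm.max_map_count >= 262144.
REQUIRED=262144
CURRENT=$(cat /proc/sys/vm/max_map_count)
if [ "$CURRENT" -lt "$REQUIRED" ]; then
    # Raise it for the running system; add "vm.max_map_count=262144"
    # to /etc/sysctl.conf to make the change survive a reboot.
    echo "vm.max_map_count=$CURRENT is too low, run: sysctl -w vm.max_map_count=$REQUIRED"
else
    echo "vm.max_map_count=$CURRENT is sufficient"
fi
```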
cd /home/docker/es-docker
vi docker-compose.yml
The contents of docker-compose.yml:
version: '2.2'
networks:
  esnet:
    driver: bridge
services:
  es1:
    image: elasticsearch:7.6.0
    container_name: es1
    hostname: es1
    volumes:
      - /home/tools/es/data/es1:/usr/share/elasticsearch/data # data volume (note: the host directory's owner/group must be changed to elasticsearch)
      - /home/tools/es/config/es1.yml:/usr/share/elasticsearch/config/elasticsearch.yml # config file mount
      - /etc/localtime:/etc/localtime # sync the host time zone
    environment:
      - TZ=Asia/Shanghai
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # heap size; a common rule of thumb is half of physical RAM. This machine has 8 GB and runs 3 ES nodes, so each gets 512m
    ulimits: # lock memory to keep performance stable
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9201:9201"
      - "9301:9301"
    networks:
      - esnet
  es2:
    image: elasticsearch:7.6.0
    container_name: es2
    hostname: es2
    volumes:
      - /home/tools/es/data/es2:/usr/share/elasticsearch/data # data volume (note: the host directory's owner/group must be changed to elasticsearch)
      - /home/tools/es/config/es2.yml:/usr/share/elasticsearch/config/elasticsearch.yml # config file mount
      - /etc/localtime:/etc/localtime
    environment:
      - TZ=Asia/Shanghai
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9202:9202"
      - "9302:9302"
    networks:
      - esnet
  es3:
    image: elasticsearch:7.6.0
    container_name: es3
    hostname: es3
    volumes:
      - /home/tools/es/data/es3:/usr/share/elasticsearch/data # data volume (note: the host directory's owner/group must be changed to elasticsearch)
      - /home/tools/es/config/es3.yml:/usr/share/elasticsearch/config/elasticsearch.yml # config file mount
      - /etc/localtime:/etc/localtime
    environment:
      - TZ=Asia/Shanghai
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9203:9203"
      - "9303:9303"
    networks:
      - esnet
The three configuration files es1.yml, es2.yml and es3.yml (created under /home/tools/es/config) look like this:
es1.yml:
cluster.name: "es-cluster"
node.name: es1
network.host: 0.0.0.0
http.port: 9201 # HTTP port for external clients
transport.tcp.port: 9301 # TCP port for inter-node communication
http.cors.enabled: true
http.cors.allow-origin: "*"
bootstrap.memory_lock: true
xpack.monitoring.collection.enabled: true # defaults to false; true enables collection of monitoring data
node.master: true # whether this node is master-eligible; defaults to true. The first machine in the cluster becomes master, and a new master is elected if it goes down
node.data: true # whether this node stores index data; defaults to true
discovery.zen.ping.unicast.hosts: ["es1:9301","es2:9302","es3:9303"] # deprecated alias of discovery.seed_hosts in 7.x
discovery.zen.minimum_master_nodes: 2 # ensures nodes can see N other master-eligible nodes; defaults to 1. For larger clusters set it to (master_eligible_nodes / 2) + 1 to avoid split-brain. Note that 7.x ignores this setting and manages the master quorum automatically
cluster.initial_master_nodes: es1
es2.yml:
cluster.name: "es-cluster"
node.name: es2
network.host: 0.0.0.0
http.port: 9202
transport.tcp.port: 9302
http.cors.enabled: true
http.cors.allow-origin: "*"
bootstrap.memory_lock: true
node.master: true
node.data: true
xpack.monitoring.collection.enabled: true # defaults to false; true enables collection of monitoring data
discovery.zen.ping.unicast.hosts: ["es1:9301","es2:9302","es3:9303"]
discovery.zen.minimum_master_nodes: 2
cluster.initial_master_nodes: es1
es3.yml:
cluster.name: "es-cluster"
node.name: es3
network.host: 0.0.0.0
http.port: 9203
transport.tcp.port: 9303
http.cors.enabled: true
http.cors.allow-origin: "*"
bootstrap.memory_lock: true
node.master: true
node.data: true
xpack.monitoring.collection.enabled: true # defaults to false; true enables collection of monitoring data
discovery.zen.ping.unicast.hosts: ["es1:9301","es2:9302","es3:9303"]
discovery.zen.minimum_master_nodes: 2
cluster.initial_master_nodes: es1
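Since the three files differ only in the node name and two port numbers, they can also be generated with a small loop instead of hand-edited. A sketch (CONF_DIR defaults to the config path used above):

```shell
#!/bin/sh
# Generate es1.yml, es2.yml and es3.yml; only node.name and the ports vary.
CONF_DIR=${CONF_DIR:-/home/tools/es/config}
mkdir -p "$CONF_DIR"
for i in 1 2 3; do
    cat > "$CONF_DIR/es$i.yml" <<EOF
cluster.name: "es-cluster"
node.name: es$i
network.host: 0.0.0.0
http.port: 920$i
transport.tcp.port: 930$i
http.cors.enabled: true
http.cors.allow-origin: "*"
bootstrap.memory_lock: true
node.master: true
node.data: true
xpack.monitoring.collection.enabled: true
discovery.zen.ping.unicast.hosts: ["es1:9301","es2:9302","es3:9303"]
discovery.zen.minimum_master_nodes: 2
cluster.initial_master_nodes: es1
EOF
done
echo "wrote es1.yml es2.yml es3.yml to $CONF_DIR"
```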
The data directories need write permission for the container, otherwise startup fails:
chmod 777 /home/tools/es/data/es1
chmod 777 /home/tools/es/data/es2
chmod 777 /home/tools/es/data/es3
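chmod 777 works but is broad; the elasticsearch user inside the official image runs as uid/gid 1000, so changing ownership (as the compose-file comment above suggests) is a tighter alternative. A sketch, falling back to chmod when not running as root:

```shell
#!/bin/sh
# The elasticsearch user in the official image has uid:gid 1000:1000.
DATA_DIR=${DATA_DIR:-/home/tools/es/data}
for i in 1 2 3; do
    mkdir -p "$DATA_DIR/es$i"
    chown -R 1000:1000 "$DATA_DIR/es$i" 2>/dev/null \
        || chmod -R 777 "$DATA_DIR/es$i"   # fall back if chown is not permitted
done
echo "prepared $DATA_DIR/es1 es2 es3"
```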
Now enter /home/docker/es-docker and start the cluster (docker-compose up -d is an equivalent way to run it detached):
cd /home/docker/es-docker
nohup docker-compose up &
Tail the log to confirm the nodes start up cleanly:
tail -f nohup.out
Check that the containers are running:
docker ps -a
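Even after docker ps shows all three containers, the cluster itself may still be forming. A small poll loop against the HTTP port can confirm health before moving on (a sketch; wait_for_es is a hypothetical helper, and port 9201 is es1's HTTP port from the config above):

```shell
#!/bin/sh
# Poll _cluster/health until the status is green or yellow, or time out.
wait_for_es() {
    url=${1:-http://localhost:9201}
    timeout=${2:-60}
    elapsed=0
    until curl -fsS "$url/_cluster/health" 2>/dev/null \
            | grep -Eq '"status":"(green|yellow)"'; do
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "cluster not healthy after ${timeout}s"
            return 1
        fi
        sleep 2
        elapsed=$((elapsed + 2))
    done
    echo "cluster is up: $url"
}
```

Run it as `wait_for_es http://localhost:9201 60` once the containers are started.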
[root@localhost es-docker]# curl -X GET "localhost:9201/_cluster/settings?pretty"
{
"persistent" : { },
"transient" : { }
}
Installation complete.