Quickly Deploying an Elasticsearch Cluster (3 Nodes) with Docker-Compose


As a search engine, the basic requirement we place on Elasticsearch is to store massive amounts of data while returning the information we want in a very short time. So the first thing we need to guarantee is Elasticsearch's high availability. What is high availability? It usually means reducing, by design, the time during which a system cannot provide service. If a system can always provide service, we say its availability is 100%. If it goes down at some point, for example a website going offline for a while, we say it is temporarily unavailable. To guarantee Elasticsearch's high availability, then, we should minimize the time Elasticsearch is unavailable.

 

Preface

For any given index, Elasticsearch has a dedicated indicator of index health, with three levels:

  • green. All primary and replica shards have been allocated. Your cluster is 100% operational.
  • yellow. All primary shards have been allocated, but at least one replica is missing. No data has been lost, so search results are still complete. However, your high availability is weakened to some degree; if more shards disappear, you will start losing data. Think of yellow as a warning that warrants prompt investigation.
  • red. At least one primary shard (and all of its replicas) is missing. This means you are missing data: searches can only return partial results, and write requests routed to that shard will return an exception.

If you only have one host, your index health will also be yellow: with a single host, the cluster has no other host on which to place replicas, so that is an unhealthy state, and this alone makes a cluster well worth having. Moreover, since it is a cluster, storage space is pooled: with a fixed amount of storage per host, a cluster offers more storage than a single host and can hold correspondingly more data.
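
You can check this status at any time with the cluster health API. For example, from the host (using the 9200:9200 port mapping defined in the compose file below):

curl http://localhost:9200/_cluster/health?pretty
# The response includes "status": "green" | "yellow" | "red",
# along with node counts and the number of unassigned shards.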

 

Docker-Compose Configuration File

version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: elasticsearch1
    environment:
      - node.name=elasticsearch1
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=elasticsearch1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms256M -Xmx256M"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - network.host=_eth0_
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          cpus: '1'
          memory: 256M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 10s
    volumes:
      - type: volume
        source: logs
        target: /var/log
      - type: volume
        source: esdata1
        target: /usr/share/elasticsearch/data
    networks:
      - elastic
      - ingress
    ports:
      - 9200:9200
      - 9300:9300
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: elasticsearch2
    environment:
      - node.name=elasticsearch2
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=elasticsearch1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms256M -Xmx256M"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - network.host=_eth0_
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          cpus: '1'
          memory: 256M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 10s
    volumes:
      - type: volume
        source: logs
        target: /var/log
      - type: volume
        source: esdata2
        target: /usr/share/elasticsearch/data
    networks:
      - elastic
      - ingress
    ports:
      - 9201:9200
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: elasticsearch3
    environment:
      - node.name=elasticsearch3
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=elasticsearch1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms256M -Xmx256M"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - network.host=_eth0_
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          cpus: '1'
          memory: 256M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 10s
    volumes:
      - type: volume
        source: logs
        target: /var/log
      - type: volume
        source: esdata3
        target: /usr/share/elasticsearch/data
    networks:
      - elastic
      - ingress
    ports:
      - 9202:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    environment:
      SERVER_NAME: localhost
      ELASTICSEARCH_HOSTS: http://elasticsearch1:9200
    ports:
      - 5601:5601
    volumes:
      - type: volume
        source: logs
        target: /var/log
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          cpus: '1'
          memory: 256M
      restart_policy:
        condition: on-failure
        delay: 30s
        max_attempts: 3
        window: 120s
    networks:
      - elastic
      - ingress
  auditbeat:
    image: docker.elastic.co/beats/auditbeat:7.8.0
    command: auditbeat -e -strict.perms=false
    user: root
    environment:
      - setup.kibana.host=kibana:5601
      - output.elasticsearch.hosts=["elasticsearch:9200"]
    cap_add: ['AUDIT_CONTROL', 'AUDIT_READ']
    pid: "host"
    volumes:
    #   - ${PWD}/configs/auditbeat.docker.yml:/usr/share/auditbeat/auditbeat.yml
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - elastic
  metricbeat:
    image: docker.elastic.co/beats/metricbeat:7.8.0
    # command: --strict.perms=false
    environment:
      - setup.kibana.host=kibana:5601
      - output.elasticsearch.hosts=["elasticsearch:9200"]
    cap_add:
      - AUDIT_CONTROL
      - AUDIT_READ
    volumes:
      # - ${PWD}/configs/metricbeat.docker.yml:/usr/share/metricbeat/metricbeat.yml
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      - /proc:/hostfs/proc:ro
      - /:/hostfs:ro
    networks:
      - elastic

  heartbeat:
    image: docker.elastic.co/beats/heartbeat:7.8.0
    command: --strict.perms=false
    environment:
      - setup.kibana.host=kibana:5601
      - output.elasticsearch.hosts=["elasticsearch:9200"]
    # volumes:
    #   - ${PWD}/configs/heartbeat.docker.yml:/usr/share/heartbeat/heartbeat.yml
    networks:
      - elastic

  packetbeat:
    image: docker.elastic.co/beats/packetbeat:7.8.0
    command: --strict.perms=false
    environment:
      - setup.kibana.host=kibana:5601
      - output.elasticsearch.hosts=["elasticsearch:9200"]
    cap_add:
      - NET_RAW
      - NET_ADMIN
    # volumes:
    #   - ${PWD}/configs/packetbeat.docker.yml:/usr/share/packetbeat/packetbeat.yml
    networks:
      - elastic

  filebeat:
    image: docker.elastic.co/beats/filebeat:7.8.0
    command: --strict.perms=false
    environment:
      - setup.kibana.host=kibana:5601
      - output.elasticsearch.hosts=["elasticsearch:9200"]
    ports:
      - 9000:9000
    volumes:
      # - ${PWD}/configs/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - elastic
  apmserver:
    image: docker.elastic.co/apm/apm-server:7.8.0
    command: --strict.perms=false
    ports:
      - 8200:8200
      - 8201:8200
    environment:
      - apm-server.host=0.0.0.0
      - setup.kibana.host=kibana:5601
      - output.elasticsearch.hosts=["elasticsearch:9200"]
    # volumes:
    #   - ${PWD}/configs/apm-server.yml:/usr/share/apm-server/apm-server.yml
    networks:
      - elastic
  app-search:
    image: docker.elastic.co/app-search/app-search:7.6.2
    ports:
      - 3002:3002
    environment:
      secret_session_key: supersecretsessionkey
      elasticsearch.host: http://elasticsearch1:9200/
      allow_es_settings_modification: "true"
    networks:
      - elastic
  nginx:
    image: nginx:latest
    ports:
      - 8881:80
    volumes:
      - ${PWD}/nginx-config/:/etc/nginx/conf.d/
    command: /bin/bash -c "nginx -g 'daemon off;'"
    ulimits:
      nproc: 65535
    networks:
      - ingress
volumes:
  esdata1:
  esdata2:
  esdata3:
  logs:

networks:
  elastic:
  ingress:

# configs:
#   auditbeat_config:
#     file: configs/auditbeat.docker.yml
#   filebeat_config:
#     file: configs/filebeat.docker.yml
#   heartbeat_config:
#     file: configs/heartbeat.docker.yml
#   metricbeat_config:
#     file: configs/metricbeat.docker.yml
#   packetbeat_config:
#     file: configs/packetbeat.docker.yml

Note: save this YAML file as docker-compose.yml in the directory you will start the Elastic stack from. In the same directory, create an nginx-config directory containing a configuration file named nginx.conf with the contents shown below.
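
One host-level prerequisite worth checking before the first start: on Linux, Elasticsearch's bootstrap checks require vm.max_map_count to be at least 262144, otherwise the nodes exit during startup:

# Run on the Docker host; without it the ES containers fail with
# "max virtual memory areas vm.max_map_count [65530] is too low".
sudo sysctl -w vm.max_map_count=262144
# Persist the setting across reboots:
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf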

 

Nginx.conf Configuration File

upstream kibana {
    server kibana:5601;
}

upstream elasticsearch {
    server elasticsearch1:9200;
    server elasticsearch2:9200;
    server elasticsearch3:9200;
}

server {
    listen 80;

    location / {
        proxy_pass http://kibana;
        proxy_redirect off;
    }
    
    location /elasticsearch {
        proxy_pass            http://elasticsearch/;
        proxy_read_timeout    90;
        proxy_connect_timeout 90;
        proxy_set_header      Host $host;
        proxy_set_header      X-Real-IP $remote_addr;
        proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header      Connection "Keep-Alive";
        proxy_set_header      Proxy-Connection "Keep-Alive";
        proxy_set_header      Proxy "";
    }
}
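
Once the stack is running, the proxy can be exercised from the host through the 8881:80 mapping defined in the compose file, for example:

# Kibana through the proxy:
curl -I http://localhost:8881/
# Elasticsearch root info through the proxy (requests are balanced
# across the three upstream nodes):
curl http://localhost:8881/elasticsearch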


Directory Structure

Note: only docker-compose.yml and the nginx-config directory are required. An images directory, .gitignore, or README.md are unnecessary, though adding them does no harm.
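
Putting the notes together, the minimal layout is:

.
├── docker-compose.yml
└── nginx-config/
    └── nginx.conf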

 

Starting the Application

Run the following command:

docker-compose up -d

Note: the first run may take a while because the images have to be downloaded; subsequent starts are much faster.
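
To confirm that every container came up, the usual docker-compose commands work (service names as defined in the compose file above):

# Show the status of each service:
docker-compose ps
# Follow the first Elasticsearch node's logs while the cluster forms:
docker-compose logs -f elasticsearch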

 

Cluster Information

Run the following command:

curl http://localhost:9200/_nodes?pretty=true

Once the stack is up, run the command above to inspect the cluster's node information.
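
For a more compact view, the _cat APIs are handy:

curl http://localhost:9200/_cat/nodes?v
# One line per node; an asterisk in the master column marks the elected master.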

 

 

Source code: https://github.com/Miazzy/xdata-elasticsearch-service

 

