Deploying an Elasticsearch cluster with docker-compose

The script was written by a colleague; thanks, Fei.

Scenario: a Docker-based deployment. One script covers three modes: standalone, multi-server cluster, and single-host (pseudo) cluster.
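Which mode the script picks is driven entirely by the IPS array. A minimal sketch of the selection logic, extracted from the script's main() (the IP addresses are placeholders):

```shell
#!/bin/bash
# Mode selection as in main(): zero or one entry -> standalone, otherwise
# cluster. If all entries are the same host, the cluster runs on a single
# machine with staggered ports (9200/9201/9202 and 9300/9301/9302).
IPS=(192.168.100.180 192.168.100.181 192.168.100.182)

if [ ${#IPS[@]} -le 1 ]; then
    MODE=standalone
else
    MODE=cluster
fi
echo "deploy mode: ${MODE}"
```

With three distinct IPs as above, this prints `deploy mode: cluster`; emptying the array or leaving a single entry switches it to standalone.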

#!/bin/bash
# install elasticsearch
# Recommended user id is 1001
# Use "id -u" to check normal user id

# Local config
DEPLOY_DIR="/home/elasticsearch"
# data dir default value is ${DEPLOY_DIR}/data
DATA_DIR=""
# Leave empty for standalone mode; a cluster has 3 members
IPS=(192.168.100.180 192.168.100.181 192.168.100.182)

# Check cluster status after deployment:
#  curl -XGET http://127.0.0.1:9200/_cluster/stats?pretty
#  curl -XGET http://localhost:9200/_cat/health?v

ES_VERSION="7.17.20-debian-12-r1"
ES_IMAGE="bitnami/elasticsearch:${ES_VERSION}"
TAR_NAME="elasticsearch-${ES_VERSION}.tar"
SCRIPT_NAME=$(basename "${0}")

# usage: check_data_dir [suffix]
check_data_dir(){
    while :
    do
        if [ -z "${DATA_DIR}" ];then
            echo "Data directory is not set"
        fi
        echo "    Use default directory: ${DEPLOY_DIR}${1}/data"
        read -p "Confirm[y/n]: " Confirm
        case ${Confirm} in 
        Y|y|"yes"|"YES")
            DATA_DIR=${DEPLOY_DIR}${1}/data
            break
            ;;
        N|n|no|NO)
            echo
            echo "Please modify script ${SCRIPT_NAME}, the variable DATA_DIR"
            exit 66
            ;;
        *)
            echo "Invalid input"
            ;;
        esac
    done
}


# usage: check_user [suffix]
# UID is a bash built-in variable; the bitnami image runs as uid 1001,
# e.g. usermod -u 1001 testuser
check_user(){
    if [ ${UID} -ne 1001 -a ${UID} -ne 0 ];then
        echo "Please use the user whose id is 1001 or 0."
        echo "Current user id is ${UID}."
        exit 88
    fi
    mkdir -p "${DATA_DIR}" && chown 1001:1001 "${DATA_DIR}"
    cd "${DEPLOY_DIR}${1}/" || exit 99
}


deploy_standalone(){
    cat > docker-compose.yml <<EOF
version: '2'

services:
  elasticsearch:
    image: ${ES_IMAGE}
    container_name: elasticsearch
    restart: always
    environment:
      LANG: en_US.UTF-8
      TZ: Asia/Shanghai       
      ELASTICSEARCH_HTTP_PORT_NUMBER: 9200
    volumes:
      - ${DATA_DIR}:/bitnami/elasticsearch/data
    ports:
      - 9200:9200
EOF
    # If the host has Internet access, this block can be commented out;
    # with a local image tarball it enables offline (LAN) deployment.
    docker images | grep -q ${ES_VERSION}
    if [ $? -ne 0 ];then
        if [ -f ${TAR_NAME} ];then
            docker load < ${TAR_NAME}
        else
            echo "No image found on localhost."
            exit 77
        fi
    fi
    
    # For Docker Compose v2, either add alias docker-compose='docker compose'
    # to /etc/profile and run "source /etc/profile", or call "docker compose" directly.
    docker-compose up -d
    docker-compose logs -f
}


deploy_cluster(){

    #pwd
    #stand-alone mode cluster
    if [ "${IPS[0]}" == "${IPS[1]}" ];then
        N=${1}
        CLUSTER=${IPS[0]}:9300,${IPS[1]}:9301,${IPS[2]}:9302
    else
        N=0
        CLUSTER=${IPS[0]},${IPS[1]},${IPS[2]}
    fi
    c=${1}
    cat>docker-compose.yml<<EOF
version: '2'

services:
  elasticsearch${1}:
    image: ${ES_IMAGE}
    container_name: elasticsearch${1}
    restart: always
    environment:
      - LANG=en_US.UTF-8
      - TZ=Asia/Shanghai       
      - ELASTICSEARCH_HTTP_PORT_NUMBER=920${N}
      - ELASTICSEARCH_TRANSPORT_PORT_NUMBER=930${N}
      - ELASTICSEARCH_ADVERTISED_HOSTNAME=${IPS[c]}
      - ELASTICSEARCH_BIND_ADDRESS=0.0.0.0
      - ELASTICSEARCH_CLUSTER_NAME=MVcluster
      - ELASTICSEARCH_CLUSTER_HOSTS=${CLUSTER}
      - ELASTICSEARCH_NODE_NAME=node${N}
      - ELASTICSEARCH_HEAP_SIZE=1g
    volumes:
      - ${DATA_DIR}:/bitnami/elasticsearch/data
    ports:
      - 920${N}:920${N}
      - 930${N}:930${N}
      
EOF
 
    docker images | grep -q ${ES_VERSION}
    if [ $? -ne 0 ];then
        if [ -f ${TAR_NAME} ];then
            docker load < ${TAR_NAME}
        else
            echo "No image found on localhost."
            exit 77
        fi
    fi
    
    docker-compose up -d
    #docker-compose logs -f
}

main(){

    if [ ${#IPS[@]} -le 1 ];then
        MODE=standalone
    else
        MODE=cluster
    fi
    echo "    deploy mode:${MODE}"
    
    if [ "${MODE}" == "standalone" ];then
        check_data_dir
        check_user
        deploy_standalone
    else
        c=0
        for ip in ${IPS[@]}
        do
            hostname -I | grep -qw "${ip}"
            if [ $? -eq 0 ];then
                check_data_dir ${c}
                check_user ${c}
                deploy_cluster ${c}
            fi
            ((c += 1))
        done
    fi
}

main
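After the containers are up on each host, the curl checks noted near the top of the script can be wrapped in a small polling helper. A sketch, assuming curl is installed and the node serves HTTP on the given URL (`wait_for_es` and `es_status_ok` are hypothetical helper names, not part of the script):

```shell
#!/bin/bash
# Return success only for the two healthy cluster states.
es_status_ok() {
    case "$1" in
        green|yellow) return 0 ;;
        *)            return 1 ;;
    esac
}

# Hypothetical helper: poll _cat/health until the cluster is green/yellow.
wait_for_es() {
    local url="${1:-http://127.0.0.1:9200}"
    local i status
    for i in $(seq 1 30); do
        status=$(curl -s "${url}/_cat/health?h=status" | tr -d '[:space:]')
        if es_status_ok "${status}"; then
            echo "cluster is ${status}"
            return 0
        fi
        sleep 2
    done
    echo "cluster did not become healthy" >&2
    return 1
}
```

For the pseudo-cluster mode, pass the staggered port explicitly, e.g. `wait_for_es http://127.0.0.1:9201`.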

Watch the file encoding: inspect the script with `cat -A script-name` to spot stray characters.
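One common pitfall that `cat -A` exposes is Windows CRLF line endings, which show up as `^M$` at the end of each line. A quick sketch for detecting them (the file names are illustrative):

```shell
#!/bin/bash
# A CR byte anywhere in a shell script usually means it was edited on
# Windows and will fail with errors like "$'\r': command not found".
has_crlf() {
    grep -q $'\r' "$1"
}

printf 'echo hi\r\n' > /tmp/crlf_demo.sh   # CRLF endings
printf 'echo hi\n'   > /tmp/lf_demo.sh     # Unix LF endings

has_crlf /tmp/crlf_demo.sh && echo "CRLF found in crlf_demo.sh"
has_crlf /tmp/lf_demo.sh   || echo "lf_demo.sh is clean"
```

If CRLF endings turn up, `sed -i 's/\r$//' script-name` (or `dos2unix`, where installed) converts the file in place.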

A related note on extending this to a full ELK stack with Docker Compose:

1. Create a new directory and a docker-compose.yml file in it.
2. Define three services in docker-compose.yml: Elasticsearch, Logstash, and Kibana.
3. Set the container image, ports, and other options for each service.
4. Start the ELK stack with Docker Compose.
5. Configure index patterns and dashboards in Kibana to view and analyze log data.

An example docker-compose.yml for an ELK stack:

```yaml
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elk
  logstash:
    image: docker.elastic.co/logstash/logstash:7.14.0
    container_name: logstash
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - 5044:5044
    networks:
      - elk
  kibana:
    image: docker.elastic.co/kibana/kibana:7.14.0
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    ports:
      - 5601:5601
    networks:
      - elk
volumes:
  esdata1:
    driver: local
networks:
  elk:
```

The example defines the three services, each with its own image, ports, and options, plus a network named "elk" so the services can reach each other. To start the stack, run the following in the directory containing docker-compose.yml:

```
docker-compose up
```

This starts the Elasticsearch, Logstash, and Kibana containers on the "elk" network. Once they are up, open the Kibana web UI, which listens on port 5601 by default, to configure index patterns and dashboards. To ship log data, send it to port 5044, where Logstash listens.