Setting Up an ELK Environment with Docker

Note: this guide uses version 6.5.4 of the ELK stack.

1. Setting up Elasticsearch

1.1 Pull the Elasticsearch Docker image

docker pull docker.elastic.co/elasticsearch/elasticsearch:6.5.4

1.2 Run the Elasticsearch image

1.2.1 Single-node mode

docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.5.4
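If you prefer to run the node in the background, a detached variant of the same command might look like the sketch below; the container name `elasticsearch` and the volume name `esdata` are arbitrary choices for illustration, not from the original post.

```shell
# Sketch: detached single-node run with a named data volume so the
# indices survive container removal (names are assumptions).
docker run -d --name elasticsearch \
  -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -v esdata:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:6.5.4
```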

1.2.2 Run multiple nodes with Docker Compose; here is a three-node configuration based on the one in the official documentation. docker-compose.yml:

version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.zen.ping.unicast.hosts=es02,es03
      - discovery.zen.minimum_master_nodes=2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.zen.ping.unicast.hosts=es01,es03
      - discovery.zen.minimum_master_nodes=2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.zen.ping.unicast.hosts=es01,es02
      - discovery.zen.minimum_master_nodes=2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic

volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local

networks:
  elastic:
    driver: bridge
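Before starting the containers, note that Elasticsearch in Docker requires the host kernel's vm.max_map_count to be at least 262144, or the nodes fail their bootstrap checks. A Linux host-side sketch (not in the original post):

```shell
# Host prerequisite (Linux, run as root): Elasticsearch in Docker
# requires vm.max_map_count >= 262144.
sysctl -w vm.max_map_count=262144
# Persist the setting across reboots:
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
```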

Start the Elasticsearch cluster:

docker-compose up

Check that it started:

curl -X GET "localhost:9200/_cluster/health?pretty"

The response looks like the following (this sample is from a single-node run); a status of yellow or green means the cluster is up:

{
  "cluster_name" : "docker-cluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 15,
  "active_shards" : 15,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 5,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 75.0
}
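For a scriptable check you can extract just the status field. The snippet below is a minimal sketch that parses a health response with grep; the JSON is inlined for illustration, and in practice you would pipe in `curl -s localhost:9200/_cluster/health` instead.

```shell
# Minimal sketch: pull the "status" field out of a _cluster/health
# response and accept only green or yellow. The pattern tolerates
# both compact and ?pretty-formatted JSON.
health='{"cluster_name":"docker-cluster","status":"yellow","timed_out":false}'
status=$(printf '%s' "$health" | grep -o '"status" *: *"[a-z]*"' | cut -d'"' -f4)
if [ "$status" = "green" ] || [ "$status" = "yellow" ]; then
  echo "cluster ok: $status"
else
  echo "cluster unhealthy: $status"
fi
```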

Or open the following URL in a browser:

http://192.168.1.102:9200/

The response looks like this:

{
  "name" : "HRvQXOm",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "NuCEs7izS1KFUKXeDhY8TQ",
  "version" : {
    "number" : "6.5.4",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "d2ef93d",
    "build_date" : "2018-12-17T21:17:40.758843Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Stop Elasticsearch:

docker-compose down

2. Setting up Logstash

2.1 Pull the Logstash image

docker pull docker.elastic.co/logstash/logstash:6.5.4

2.2 Prepare the Logstash configuration files

2.2.1 Create the logstash/config and logstash/pipeline directories

mkdir -p logstash/config
mkdir -p logstash/pipeline

2.2.2 Create and edit the logstash.conf file

cd logstash/pipeline
vi logstash.conf

Add the following configuration:

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://<host-IP>:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
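The beats input above expects a shipper such as Filebeat to send events to port 5044. A minimal Filebeat configuration pointing at this pipeline might look like the following sketch; the log path and the `logstash-host` address are placeholders, not from the original post.

```yaml
# Minimal Filebeat sketch (paths and host are assumptions).
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log
output.logstash:
  hosts: ["logstash-host:5044"]
```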

2.2.3 Create and edit the logstash.yml, pipelines.yml, jvm.options, log4j2.properties, and startup.options files

cd logstash/config
touch logstash.yml pipelines.yml jvm.options log4j2.properties startup.options

logstash.yml contents:

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://<host-IP>:9200

pipelines.yml contents:

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"

jvm.options contents:

## JVM configuration

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms1g
-Xmx1g

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################

## GC configuration
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## Locale
# Set the locale language
#-Duser.language=en

# Set the locale country
#-Duser.country=US

# Set the locale variant, if any
#-Duser.variant=

## basic

# set the I/O temp directory
#-Djava.io.tmpdir=$HOME

# set to headless, just in case
-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one
#-Djna.nosys=true

# Turn on JRuby invokedynamic
-Djruby.compile.invokedynamic=true
# Force Compilation
-Djruby.jit.threshold=0

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${LOGSTASH_HOME}/heapdump.hprof

## GC logging
#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime

# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${LS_GC_LOG_FILE}

# Entropy source for randomness
-Djava.security.egd=file:/dev/urandom

log4j2.properties contents:

status = error
name = LogstashPropertiesConfig

appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n

appender.json_console.type = Console
appender.json_console.name = json_console
appender.json_console.layout.type = JSONLayout
appender.json_console.layout.compact = true
appender.json_console.layout.eventEol = true

rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console

startup.options contents:

################################################################################
# These settings are ONLY used by $LS_HOME/bin/system-install to create a custom
# startup script for Logstash and is not used by Logstash itself. It should
# automagically use the init system (systemd, upstart, sysv, etc.) that your
# Linux distribution uses.
#
# After changing anything here, you need to re-run $LS_HOME/bin/system-install
# as root to push the changes to the init script.
################################################################################

# Override Java location
#JAVACMD=/usr/bin/java

# Set a home directory
LS_HOME=/usr/share/logstash

# logstash settings directory, the path which contains logstash.yml
LS_SETTINGS_DIR=/etc/logstash

# Arguments to pass to logstash
LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"

# Arguments to pass to java
LS_JAVA_OPTS=""

# pidfiles aren't used the same way for upstart and systemd; this is for sysv users.
LS_PIDFILE=/var/run/logstash.pid

# user and group id to be invoked as
LS_USER=logstash
LS_GROUP=logstash

# Enable GC logging by uncommenting the appropriate lines in the GC logging
# section in jvm.options
LS_GC_LOG_FILE=/var/log/logstash/gc.log

# Open file limit
LS_OPEN_FILES=16384

# Nice level
LS_NICE=19

# Change these to have the init script named and described differently
# This is useful when running multiple instances of Logstash on the same
# physical box or vm
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"

# If you need to run a command or script before launching Logstash, put it
# between the lines beginning with `read` and `EOM`, and uncomment those lines.
###
## read -r -d '' PRESTART << EOM
## EOM

2.3 Build the Logstash Docker image

cd logstash
vi Dockerfile

Dockerfile contents:

FROM docker.elastic.co/logstash/logstash:6.5.4
RUN rm -f /usr/share/logstash/pipeline/logstash.conf
ADD pipeline/ /usr/share/logstash/pipeline/
ADD config/ /usr/share/logstash/config/

Build the image with docker build; here, for example, it is tagged docker.elastic.co/jack/logstash:6.5.4:

docker build -t docker.elastic.co/jack/logstash:6.5.4 .
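Before running it for real, you can ask Logstash to validate the pipeline configuration baked into the image and exit. This is a sketch, not from the original post; depending on the image's entrypoint you may need to adjust how the command is passed.

```shell
# Sketch: validate the pipeline config inside the freshly built image.
# --config.test_and_exit checks the config and exits without starting
# the pipeline.
docker run --rm docker.elastic.co/jack/logstash:6.5.4 \
  logstash --config.test_and_exit -f /usr/share/logstash/pipeline/logstash.conf
```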

Run the newly built image. Publish port 5044 so Beats shippers outside the container can reach the input, and 9600 for the Logstash API:

docker run -d --name logstash -p 5044:5044 -p 9600:9600 docker.elastic.co/jack/logstash:6.5.4

Check the startup logs with docker logs:

docker logs -f logstash

On a successful start the log ends with lines like:

pipelines=>[:main], :non_running_pipelines=>[]}
[2020-06-09T16:00:43,537][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2020-06-09T16:00:43,984][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

3. Setting up Kibana

3.1 Pull the Kibana Docker image

docker pull kibana:6.5.4

3.2 Create the kibana.yml configuration file

mkdir -p kibana/config
cd kibana/config
touch kibana.yml

kibana.yml contents:

---
# Default Kibana configuration from kibana-docker.

server.name: kibana
server.host: "0"
elasticsearch.url: http://<host-IP>:9200
xpack.monitoring.ui.container.elasticsearch.enabled: true

3.3 Build the Kibana image

cd kibana
touch Dockerfile

Dockerfile contents:

FROM kibana:6.5.4

RUN rm -f /usr/share/kibana/config/kibana.yml

COPY config/kibana.yml /usr/share/kibana/config/

Build the image; here, for example, it is tagged docker.elastic.co/jack/kibana:6.5.4:

docker build -t docker.elastic.co/jack/kibana:6.5.4 .

Run the Kibana image:

docker run -d --name kibana -p 5601:5601 docker.elastic.co/jack/kibana:6.5.4

3.4 Verify Kibana

Open http://localhost:5601/ in a browser; if the Kibana console loads, the installation succeeded.
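From the command line, Kibana's status API offers a scriptable alternative to the browser check; the sketch below assumes Kibana is reachable on localhost:5601.

```shell
# Sketch: query Kibana's status API and show the overall state
# (expects a running Kibana on localhost:5601).
curl -s http://localhost:5601/api/status | grep -o '"state":"[a-z]*"' | head -n 1
```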

Feel free to leave a comment if you run into any problems.
