Docker elasticsearch + filebeat + logstash + kibana, one-command startup with docker-compose: a log management system installed locally (detailed guide)

Approach (offline, local setup)

1. Install Docker and docker-compose (install these yourself; any reasonably recent version will do)

Install Docker:

Centos7安装Docker (玩物丧志的快乐, CSDN blog)

Install docker-compose:

安装docker-compose的两种方式 (沙漠之鹰, CSDN blog)
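A quick way to confirm both tools are installed and on the PATH before continuing:

docker -v
docker-compose -v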

2. Create the Docker network (this begins the ELKF setup)

Adjust the system parameters first.

Edit /etc/sysctl.conf and add:

vm.max_map_count=655360  

Edit /etc/security/limits.conf and add:

*               soft     nofile        65535
*               hard     nofile        65535
*               soft     nproc         65535
*               hard     nproc         65535
*               soft     memlock       unlimited
*               hard     memlock       unlimited
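To apply the kernel parameter without a reboot, reload sysctl and verify it; the limits.conf changes take effect for new login sessions:

sysctl -p
sysctl vm.max_map_count        # should print vm.max_map_count = 655360
ulimit -n                      # in a new session, should print 65535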

Create the network and verify it:

docker network create elk
docker network ls
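Optionally, inspect the new network to confirm its driver and subnet:

docker network inspect elk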
3. Write a docker-compose.yml file to define and run the services

Create a main directory to hold the installation files, then enter it and create the compose file:

mkdir /root/elkf

cd /root/elkf

vi /root/elkf/docker-compose.yml

version: "3"                                            #docker-compose版本

services:                                                # the services to define and run
  nginx:
    restart: always
    image: nginx                                    
    container_name: nginx
    hostname: nginx
    ports:                                                # host port : container port mappings
     - 80:80
    volumes:                                              # host path : container path mounts
     - /var/log/nginx:/var/log/nginx

  filebeat:
    restart: always
    depends_on:
     - "nginx"
    build:
      context: ./filebeat
      dockerfile: Dockerfile
    container_name: filebeat
    hostname: filebeat
    volumes:
     - /var/log/nginx:/var/log/nginx

  elasticsearch:
    restart: always
    depends_on:
     - "nginx"
    build:
      context: ./elasticsearch
      dockerfile: Dockerfile
    container_name: elasticsearch
    hostname: elasticsearch
    ports:
     - 9200:9200
     - 9300:9300
    volumes:
     - /var/log/elasticsearch:/var/log/elasticsearch

  logstash:
    restart: always
    depends_on:
     - "nginx"
    build:
      context: ./logstash
      dockerfile: Dockerfile
    container_name: logstash
    hostname: logstash
    ports:
     - 5044:5044
    volumes:
     - /opt/logstash/conf:/opt/logstash/conf

  kibana:
    restart: always
    depends_on:
     - "nginx"
    build:
      context: ./kibana
      dockerfile: Dockerfile
    container_name: kibana
    hostname: kibana
    ports:
     - 5601:5601
    

networks:                                             # attach all services to the pre-created elk network
  default:
    external:
      name: elk
 

This file defines the four ELKF services, plus nginx as the log source.
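Before building anything, it is worth letting docker-compose parse and validate the file; this catches YAML indentation mistakes early:

docker-compose config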

4. Build the images

Package download URL:

http://www.haojiang.online/other/download.tar.gz   (note: all four installation packages are already bundled here)

1) Build the elasticsearch image

cd /root/elkf/

mkdir elasticsearch

cd elasticsearch

Create the Dockerfile and add the following:

vi Dockerfile

FROM centos:7.9.2009
MAINTAINER wzlu
RUN yum -y install java-1.8.0-openjdk vim telnet lsof
ADD elasticsearch-6.1.0.tar.gz /usr/local/
# Note: a bare "RUN cd ..." has no effect (each RUN starts a fresh shell),
# so all paths below are absolute.
RUN mkdir -p /data/behavior/log-node1
RUN mkdir /var/log/elasticsearch
COPY elasticsearch.yml /usr/local/elasticsearch-6.1.0/config/
# Elasticsearch refuses to run as root, so run it as a dedicated user.
RUN useradd es && chown -R es:es /usr/local/elasticsearch-6.1.0
RUN chmod +x /usr/local/elasticsearch-6.1.0/bin/*
RUN chown -R es:es /var/log/elasticsearch/
RUN chown -R es:es /data/behavior/log-node1
EXPOSE 9200
EXPOSE 9300
CMD su es /usr/local/elasticsearch-6.1.0/bin/elasticsearch
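docker-compose builds this image automatically in step 5, but once the config file and package below are in place it can also be built and checked by hand from this directory (the tag elkf_elasticsearch is just an illustrative name):

docker build -t elkf_elasticsearch .
docker images | grep elkf_elasticsearch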

Create an elasticsearch.yml file with the following contents:

vi elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-elk
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0

#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
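Once the whole stack is running (step 5), the node can be checked from the host; the JSON response should report the cluster.name (my-elk) and node.name (node-1) configured above:

curl http://localhost:9200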

Then place the elasticsearch package (elasticsearch-6.1.0.tar.gz) into this directory.

Directory layout under elasticsearch/ (screenshot omitted): Dockerfile, elasticsearch.yml, elasticsearch-6.1.0.tar.gz

2) Build the logstash image

cd /root/elkf/

mkdir logstash

cd logstash

Write the logstash pipeline configuration. Note that this file lives on the host, in the directory that docker-compose.yml mounts into the container:

 mkdir -p /opt/logstash/conf

vim /opt/logstash/conf/logstash-nginx-log.conf

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats { port => 5044 }
}
 
filter {
    date { match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ] }
    if "nginx-access" in [tags] {
        grok {
            match => {
                "message" => '%{IP:remote_addr} - (%{WORD:remote_user}|-) \[%{HTTPDATE:time_local}\] "%{WORD:method} %{NOTSPACE:request} HTTP/%{NUMBER}" %{NUMBER:status} %{NUMBER:body_bytes_sent} %{QS} %{QS:http_user_agent}'
            }
        }
        urldecode {
            all_fields => true
        }
        date {
            match => [ "time_local", "dd/MMM/YYYY:HH:mm:ss Z" ]
        }
    }
}
 
output {
  if "nginx-access" in [tags] {
    elasticsearch {
      hosts => [ "elasticsearch:9200" ]
      manage_template => false
      index => "nginx-access-%{+YYYY.MM.dd}"
    }
  }
}
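For reference, the grok pattern above targets nginx's default combined log format. An access-log line such as the following (illustrative) yields the fields remote_addr, remote_user, time_local, method, request, status, body_bytes_sent and http_user_agent:

192.168.25.1 - - [10/Oct/2022:13:55:36 +0800] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0"

The pipeline syntax can also be checked without starting a full pipeline, for example inside the running container (-t is short for --config.test_and_exit):

/usr/local/logstash-6.1.0/bin/logstash -f /opt/logstash/conf/logstash-nginx-log.conf -t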
 

Create the Dockerfile and add the following:

vi Dockerfile 

FROM centos:7.9.2009
MAINTAINER wzlu
RUN yum -y install java-1.8.0-openjdk vim telnet lsof
ADD logstash-6.1.0.tar.gz /usr/local/
# run.sh starts logstash with the pipeline config mounted from the host
ADD run.sh /run.sh
RUN chmod 755 /*.sh
EXPOSE 5044
CMD ["/run.sh"]

Write the startup script, run.sh (referenced by the Dockerfile above):

#!/bin/bash
/usr/local/logstash-6.1.0/bin/logstash -f /opt/logstash/conf/logstash-nginx-log.conf

Note: the config path in run.sh must match where the logstash configuration file was stored earlier; that host directory is mounted into the container by the volumes entry in docker-compose.yml.

Then place the logstash package (logstash-6.1.0.tar.gz) into this directory.

Directory layout under logstash/ (screenshot omitted): Dockerfile, run.sh, logstash-6.1.0.tar.gz

3) Build the kibana image

cd /root/elkf/

mkdir kibana

cd kibana

Create the Dockerfile and add the following:

vi Dockerfile

FROM centos:7.9.2009
MAINTAINER wzlu
RUN yum -y install java-1.8.0-openjdk vim telnet lsof
ADD kibana-6.1.0-linux-x86_64.tar.gz /usr/local/
# overwrite the default config with the one created below
COPY kibana.yml /usr/local/kibana-6.1.0-linux-x86_64/config/
EXPOSE 5601
CMD ["/usr/local/kibana-6.1.0-linux-x86_64/bin/kibana"]

Create the kibana.yml file and add the following (it can be copied as-is):

vi kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
server.name: "kibana"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://elasticsearch:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"

Then place the kibana package (kibana-6.1.0-linux-x86_64.tar.gz) into this directory.

Directory layout under kibana/ (screenshot omitted): Dockerfile, kibana.yml, kibana-6.1.0-linux-x86_64.tar.gz
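Once the stack is running, Kibana's status API gives a quick health check from the host:

curl http://localhost:5601/api/status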

4) Build the filebeat image

cd /root/elkf/

mkdir filebeat

cd filebeat

Create the Dockerfile and add the following:

vi Dockerfile

FROM centos:7.9.2009
MAINTAINER wzlu
# filebeat itself is a standalone Go binary; the packages below just mirror the
# other images and help with in-container debugging
RUN yum -y install java-1.8.0-openjdk vim telnet lsof
ADD filebeat-6.1.0-linux-x86_64.tar.gz /usr/local/
COPY filebeat.yml /usr/local/filebeat-6.1.0-linux-x86_64
ADD run.sh /run.sh
RUN chmod 755 /*.sh
CMD ["/run.sh"]

Write the startup script, run.sh:

#!/bin/bash
/usr/local/filebeat-6.1.0-linux-x86_64/filebeat -e -c /usr/local/filebeat-6.1.0-linux-x86_64/filebeat.yml

Create the filebeat.yml file and add the following:

vi filebeat.yml

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/nginx/access.log
    #- c:\programdata\elasticsearch\logs\*

  tags: ["nginx-access"]
  # "clean_*" is documentation shorthand, not a literal option name; the concrete
  # settings are clean_inactive and clean_removed, so clean_removed is used here.
  clean_removed: true
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["nginx-access"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["logstash:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
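Filebeat ships with built-in self-checks; once the container is up (step 5) they can be run inside it to validate the configuration and the connection to logstash:

docker exec -it filebeat /usr/local/filebeat-6.1.0-linux-x86_64/filebeat test config -c /usr/local/filebeat-6.1.0-linux-x86_64/filebeat.yml
docker exec -it filebeat /usr/local/filebeat-6.1.0-linux-x86_64/filebeat test output -c /usr/local/filebeat-6.1.0-linux-x86_64/filebeat.yml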

Place the filebeat offline package (filebeat-6.1.0-linux-x86_64.tar.gz) into this directory.

Directory layout under filebeat/ (screenshot omitted): Dockerfile, run.sh, filebeat.yml, filebeat-6.1.0-linux-x86_64.tar.gz

5. One-command deployment and startup

Directory layout under /root/elkf/ (screenshot omitted): docker-compose.yml plus the elasticsearch/, filebeat/, kibana/ and logstash/ subdirectories.

First pull a fresh nginx image:

docker pull nginx

Then deploy everything with a single docker-compose command (note: check first that the mapped host ports are not already in use):

 docker-compose up -d

Check container status:

docker-compose ps
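If any container is restarting or has exited, its logs usually show why (substitute any service name):

docker-compose logs -f elasticsearch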

Generate some log traffic so that filebeat has access-log lines to ship (replace 192.168.25.100 with your own host IP; plain HTTP needs no -k flag):

watch -n 2 curl http://192.168.25.100
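After a minute or so, the daily index created by the logstash output (nginx-access-YYYY.MM.dd) should appear in Elasticsearch:

curl 'http://localhost:9200/_cat/indices?v'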

6. Log in to Kibana and view the logs

Open http://<host IP>:5601 in a browser.

After Kibana loads, create an index pattern matching nginx-access-* (Management → Index Patterns), then open Discover to view the incoming nginx logs (screenshots omitted).

The ELKF logging system is now up and running.

If you spot any mistakes or problems, feel free to contact the author; corrections are welcome. Original content, thanks for your support!
