Deploying a Simple ELK Stack with Docker

System Environment

Aliyun, 1 vCPU / 2 GB RAM, CentOS 7.5

Firewall and Security Policy

Cloud server security group rules
External access to Docker containers

As shown in the figure.

If you run into problems, refer to the official documentation.
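
If firewalld is also running on the host, the stack's default ports can be opened as follows. This is only a sketch: the port numbers are assumptions based on the standard Elastic Stack defaults, so match them to whatever your security group actually exposes.

sudo firewall-cmd --permanent --add-port=9200/tcp   # Elasticsearch HTTP
sudo firewall-cmd --permanent --add-port=5601/tcp   # Kibana
sudo firewall-cmd --permanent --add-port=5044/tcp   # Logstash Beats input
sudo firewall-cmd --reload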

Install Docker Compose

Docker Compose official documentation
1. Download

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

To install a different version, replace 1.29.2 accordingly.

2. Grant execute permission

sudo chmod +x /usr/local/bin/docker-compose

3. Verify the installation

$ docker-compose --version
docker-compose version 1.29.2, build 1110ad01

If the installation fails, refer to the official documentation.

Download docker-elk

Clone it into /app/docker-elk:

git clone https://github.com/deviantony/docker-elk.git /app/docker-elk

The structure of this open-source project:

├── .env
├── docker-compose.yml
├── docker-stack.yml
├── elasticsearch
│   ├── config
│   │   └── elasticsearch.yml
│   └── Dockerfile
├── extensions
│   ├── apm-server
│   ├── app-search
│   ├── curator
│   └── logspout
├── kibana
│   ├── config
│   │   └── kibana.yml
│   └── Dockerfile
├── LICENSE
├── logstash
│   ├── config
│   │   └── logstash.yml
│   ├── Dockerfile
│   └── pipeline
│       └── logstash.conf
└── README.md

The main configuration files:
1. docker-compose.yml
2. elasticsearch.yml
3. kibana.yml
4. logstash.yml
5. .env
6. logstash.conf

Configure ELK

Increase vm.max_map_count

vim /etc/sysctl.conf

Add the following line:
vm.max_map_count=262144

Then apply it:

sysctl -p
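
To confirm the new value is in effect:

sysctl vm.max_map_count
# expected output: vm.max_map_count = 262144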

Set the ELK version (optional)

vim .env

The current user should have sudo privileges. After making changes, run docker-compose build.
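
A minimal .env sketch, assuming you want to pin the stack to a specific 7.x release; both the variable name and the version below are assumptions, so check the variable actually used in the file for your revision of docker-elk:

ELK_VERSION=7.12.1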

Download the mssql-jdbc driver

https://docs.microsoft.com/en-us/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server?view=sql-server-2017

curl -o ~/sqljdbc_9.2.1.0_chs.tar.gz https://download.microsoft.com/download/4/c/3/4c31fbc1-62cc-4a0b-932a-b38ca31cd410/sqljdbc_9.2.1.0_chs.tar.gz

Extract it into /app/docker-elk:

tar zxvf ~/sqljdbc_9.2.1.0_chs.tar.gz -C /app/docker-elk
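
The bind mount configured later expects the driver at the path below, so it is worth checking that the archive extracted where expected:

ls /app/docker-elk/sqljdbc_9.2/chs/mssql-jdbc-9.2.1.jre11.jar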

Configure docker-compose.yml

Open docker-compose.yml:

vim docker-compose.yml
  • Under the elasticsearch service's volumes, add bind mounts for its configuration files (a consolidated sketch of all the compose edits in this list follows after it):
- type: bind
  source: ./elasticsearch/config/elasticsearch.yml
  target: /usr/share/elasticsearch/config/elasticsearch.yml
  read_only: true
- type: bind
  source: ./elasticsearch/config/jvm.options.d/jvm.options
  target: /usr/share/elasticsearch/config/jvm.options.d/jvm.options
  read_only: true
  • The default ELASTIC_PASSWORD under elasticsearch is changeme; it will be changed later.
  • Under elasticsearch, add an automatic restart policy:
restart: on-failure
  • Under the logstash service's volumes, add the following bind mounts:
- type: bind
  source: ./sqljdbc_9.2/chs/mssql-jdbc-9.2.1.jre11.jar
  target: /usr/share/logstash/lib/sqljdbc_9.2/chs/mssql-jdbc-9.2.1.jre11.jar
  read_only: true
- type: bind
  source: ./logstash/config/logstash.yml
  target: /usr/share/logstash/config/logstash.yml
  read_only: true
- type: bind
  source: ./logstash/pipeline
  target: /usr/share/logstash/pipeline
  read_only: true
- type: bind
  source: ./logstash/config/log4j2.properties
  target: /usr/share/logstash/config/log4j2.properties
  read_only: true
- type: bind
  source: ./logstash/foobar.sql
  target: /usr/share/logstash/foobar.sql
  read_only: true
- type: bind
  source: ./logstash/config/pipelines.yml
  target: /usr/share/logstash/config/pipelines.yml
  read_only: true
  • Under logstash, add:
user: root
restart: always
  • Under kibana, add:
restart: always
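
Putting the bullets above together, the edited services in docker-compose.yml end up looking roughly like this. It is only a sketch showing the keys touched in this list; keep the image, ports, networks, environment, and other entries already present in the upstream file:

elasticsearch:
  restart: on-failure
  environment:
    ELASTIC_PASSWORD: changeme
  volumes:
    - type: bind
      source: ./elasticsearch/config/elasticsearch.yml
      target: /usr/share/elasticsearch/config/elasticsearch.yml
      read_only: true
    - type: bind
      source: ./elasticsearch/config/jvm.options.d/jvm.options
      target: /usr/share/elasticsearch/config/jvm.options.d/jvm.options
      read_only: true

logstash:
  user: root
  restart: always
  volumes:
    - type: bind
      source: ./sqljdbc_9.2/chs/mssql-jdbc-9.2.1.jre11.jar
      target: /usr/share/logstash/lib/sqljdbc_9.2/chs/mssql-jdbc-9.2.1.jre11.jar
      read_only: true
    # ...plus the logstash.yml, pipeline, log4j2.properties, foobar.sql,
    # and pipelines.yml bind mounts listed above

kibana:
  restart: always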

Edit the Configuration Files

  1. elasticsearch.yml
vim ./elasticsearch/config/elasticsearch.yml

Add the following settings:

node.name: "node-1"
discovery.zen.fd.ping_timeout: 600s
discovery.zen.fd.ping_retries: 10
path.logs: /usr/share/elasticsearch/logs
## CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: "Content-Type,Content-Length,Authorization"
http.cors.allow-credentials: true
## license type
xpack.license.self_generated.type: basic
## 
xpack.monitoring.collection.enabled: false
xpack.monitoring.enabled: false
  2. jvm.options
    This file is bound into the container from ./elasticsearch/config/jvm.options.d/jvm.options; add the flag so the JVM uses G1GC:
-XX:+UseG1GC
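    On the 1 vCPU / 2 GB host used here, it may also be worth capping the heap in the same file; the values below are an assumption and should be tuned to your machine:
-Xms512m
-Xmx512m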
  3. logstash.yml
    Add or modify the following settings:
log.level: info
path.logs: /usr/share/logstash/logs
log.format: plain
xpack.monitoring.enabled: false
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: foobar
  4. logstash/config/log4j2.properties
status = error
name = LogstashPropertiesConfig

appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n

appender.json_console.type = Console
appender.json_console.name = json_console
appender.json_console.layout.type = JSONLayout
appender.json_console.layout.compact = true
appender.json_console.layout.eventEol = true

appender.rolling.type = RollingFile
appender.rolling.name = plain_rolling
appender.rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 30
appender.rolling.avoid_pipelined_filter.type = PipelineRoutingFilter

appender.json_rolling.type = RollingFile
appender.json_rolling.name = json_rolling
appender.json_rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.json_rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.json_rolling.policies.type = Policies
appender.json_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.json_rolling.policies.time.interval = 1
appender.json_rolling.policies.time.modulate = true
appender.json_rolling.layout.type = JSONLayout
appender.json_rolling.layout.compact = true
appender.json_rolling.layout.eventEol = true
appender.json_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.json_rolling.policies.size.size = 100MB
appender.json_rolling.strategy.type = DefaultRolloverStrategy
appender.json_rolling.strategy.max = 30
appender.json_rolling.avoid_pipelined_filter.type = PipelineRoutingFilter

appender.routing.type = PipelineRouting
appender.routing.name = pipeline_routing_appender
appender.routing.pipeline.type = RollingFile
appender.routing.pipeline.name = appender-${ctx:pipeline.id}
appender.routing.pipeline.fileName = ${sys:ls.logs}/pipeline_${ctx:pipeline.id}.log
appender.routing.pipeline.filePattern = ${sys:ls.logs}/pipeline_${ctx:pipeline.id}.%i.log.gz
appender.routing.pipeline.layout.type = PatternLayout
appender.routing.pipeline.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.routing.pipeline.policy.type = SizeBasedTriggeringPolicy
appender.routing.pipeline.policy.size = 100MB
appender.routing.pipeline.strategy.type = DefaultRolloverStrategy
appender.routing.pipeline.strategy.max = 30

rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
rootLogger.appenderRef.rolling.ref = ${sys:ls.log.format}_rolling
rootLogger.appenderRef.routing.ref = pipeline_routing_appender

# Slowlog

appender.console_slowlog.type = Console
appender.console_slowlog.name = plain_console_slowlog
appender.console_slowlog.layout.type = PatternLayout
appender.console_slowlog.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n

appender.json_console_slowlog.type = Console
appender.json_console_slowlog.name = json_console_slowlog
appender.json_console_slowlog.layout.type = JSONLayout
appender.json_console_slowlog.layout.compact = true
appender.json_console_slowlog.layout.eventEol = true

appender.rolling_slowlog.type = RollingFile
appender.rolling_slowlog.name = plain_rolling_slowlog
appender.rolling_slowlog.fileName = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}.log
appender.rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling_slowlog.policies.type = Policies
appender.rolling_slowlog.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling_slowlog.policies.time.interval = 1
appender.rolling_slowlog.policies.time.modulate = true
appender.rolling_slowlog.layout.type = PatternLayout
appender.rolling_slowlog.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.rolling_slowlog.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling_slowlog.policies.size.size = 100MB
appender.rolling_slowlog.strategy.type = DefaultRolloverStrategy
appender.rolling_slowlog.strategy.max = 30

appender.json_rolling_slowlog.type = RollingFile
appender.json_rolling_slowlog.name = json_rolling_slowlog
appender.json_rolling_slowlog.fileName = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}.log
appender.json_rolling_slowlog.filePattern = ${sys:ls.logs}/logstash-slowlog-${sys:ls.log.format}-%d{yyyy-MM-dd}-%i.log.gz
appender.json_rolling_slowlog.policies.type = Policies
appender.json_rolling_slowlog.policies.time.type = TimeBasedTriggeringPolicy
appender.json_rolling_slowlog.policies.time.interval = 1
appender.json_rolling_slowlog.policies.time.modulate = true
appender.json_rolling_slowlog.layout.type = JSONLayout
appender.json_rolling_slowlog.layout.compact = true
appender.json_rolling_slowlog.layout.eventEol = true
appender.json_rolling_slowlog.policies.size.type = SizeBasedTriggeringPolicy
appender.json_rolling_slowlog.policies.size.size = 100MB
appender.json_rolling_slowlog.strategy.type = DefaultRolloverStrategy
appender.json_rolling_slowlog.strategy.max = 30

logger.slowlog.name = slowlog
logger.slowlog.level = trace
logger.slowlog.appenderRef.console_slowlog.ref = ${sys:ls.log.format}_console_slowlog
logger.slowlog.appenderRef.rolling_slowlog.ref = ${sys:ls.log.format}_rolling_slowlog
logger.slowlog.additivity = false

logger.licensereader.name = logstash.licensechecker.licensereader
logger.licensereader.level = error

# Silence http-client by default
logger.apache_http_client.name = org.apache.http
logger.apache_http_client.level = fatal

# Deprecation log
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_plain_rolling
appender.deprecation_rolling.fileName = ${sys:ls.logs}/logstash-deprecation.log
appender.deprecation_rolling.filePattern = ${sys:ls.logs}/logstash-deprecation-%d{yyyy-MM-dd}-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.deprecation_rolling.policies.time.interval = 1
appender.deprecation_rolling.policies.time.modulate = true
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c]%notEmpty{[%X{pipeline.id}]}%notEmpty{[%X{plugin.id}]} %m%n
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 100MB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 30

logger.deprecation.name = org.logstash.deprecation, deprecation
logger.deprecation.level = WARN
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_plain_rolling
logger.deprecation.additivity = false

logger.deprecation_root.name = deprecation
logger.deprecation_root.level = WARN
logger.deprecation_root.appenderRef.deprecation_rolling.ref = deprecation_plain_rolling
logger.deprecation_root.additivity = false

  5. logstash/config/pipelines.yml
    Add the following:
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/logstash.conf"
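    If more pipeline configuration files are added later, each one gets its own entry; a sketch with a hypothetical second file:
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/logstash.conf"
- pipeline.id: nightly_import
  path.config: "/usr/share/logstash/pipeline/nightly_import.conf"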
  6. logstash.conf
input {
  jdbc {
        jdbc_driver_library => "/usr/share/logstash/lib/sqljdbc_9.2/chs/mssql-jdbc-9.2.1.jre11.jar"
        jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        jdbc_connection_string => "jdbc:sqlserver://serverip:1433;databaseName=someDB;"
        jdbc_user => "user"
        jdbc_password => "password"
        schedule => "30 * * * *"
        jdbc_default_timezone => "Asia/Shanghai"
        jdbc_page_size => "500"
        record_last_run => "true"
        #use_column_value => "true"
        #tracking_column => "LastModificationTime"
        last_run_metadata_path => "/usr/share/logstash/config/last_value"
        lowercase_column_names => "false"
        tracking_column_type => "timestamp"
        clean_run => "false"
        statement_filepath => "/usr/share/logstash/foobar.sql"
        #statement => "SELECT * FROM Table WITH(NOLOCK) WHERE (LastModificationTime >= :sql_last_value OR CreationTime >= :sql_last_value) AND IsDeleted=0"
    }
}
filter {
    mutate {
        #add_field => { "Id" => "%{Name}-%{Age}-%{Tel}-%{Address}-%{Time}" }
    }

    mutate {
        rename => { "Field1" => "Column1" }
    }
}
output {
    elasticsearch {
        index => "sample_index"
        document_id => "%{Id}"
        hosts => ["elasticsearch:9200"]
        user => "elastic"
        password => "password"
        ecs_compatibility => disabled
    }
}
For details on the jdbc input and other plugins, see the official documentation.
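
The file referenced by statement_filepath holds the query Logstash runs on each schedule tick. A sketch along the lines of the commented-out statement above; the table and column names are placeholders:

SELECT *
FROM SomeTable WITH(NOLOCK)
WHERE (LastModificationTime >= :sql_last_value OR CreationTime >= :sql_last_value)
  AND IsDeleted = 0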

  7. kibana.yml
monitoring.ui.container.elasticsearch.enabled: false
xpack.monitoring.enabled: false
## UI localization
i18n.locale: "zh-CN"
## security
xpack.security.encryptionKey: "something_at_least_32_characters"
xpack.reporting.encryptionKey: "a_random_string"
xpack.security.session.idleTimeout: "20m"
xpack.security.session.lifespan: "30d"

elasticsearch.username: elastic
## ELASTIC_PASSWORD
elasticsearch.password: foobar
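
The two encryption keys only need to be random strings of at least 32 characters; one way to generate them on the host (assuming openssl is installed):

openssl rand -hex 32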

Start the Containers

Start the containers in the background:

cd /app/docker-elk
docker-compose up -d


Commands to view the elasticsearch, kibana, and logstash logs:

docker-compose logs elasticsearch
docker-compose logs kibana
docker-compose logs logstash 

Check container status

docker ps
docker inspect container-id

Stop the containers

docker stop container-id

docker-compose down

For details on docker-compose commands, see the official documentation.

Reset Passwords

Run the following in /app/docker-elk:

docker-compose exec -T elasticsearch bin/elasticsearch-setup-passwords auto --batch
Changed password for user apm_system
PASSWORD apm_system = YkELBJGOT6AxqsPqsi7I

Changed password for user kibana
PASSWORD kibana = FxRwjm5KRYvHhGEnYTM9

Changed password for user logstash_system
PASSWORD logstash_system = A4f5VOfjVWSdi0KAZWGu

Changed password for user beats_system
PASSWORD beats_system = QnW8xxhnn7LMlA7vuI7B

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = OvjEGR13wjkOkIbWqaEM

Changed password for user elastic
PASSWORD elastic = PGevNMuv7PhVnaYg7vJw

Using the generated passwords, update ELASTIC_PASSWORD in docker-compose.yml, and update the elastic user's password in kibana/config/kibana.yml, logstash/config/logstash.yml, and logstash/pipeline/logstash.conf.
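
For example, using the sample elastic password from the output above, the affected settings become:

# docker-compose.yml (elasticsearch service)
ELASTIC_PASSWORD: PGevNMuv7PhVnaYg7vJw
# kibana/config/kibana.yml
elasticsearch.password: PGevNMuv7PhVnaYg7vJw
# logstash/config/logstash.yml
xpack.monitoring.elasticsearch.password: PGevNMuv7PhVnaYg7vJw
# logstash/pipeline/logstash.conf (elasticsearch output)
password => "PGevNMuv7PhVnaYg7vJw"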

Enable TLS

Add the following to elasticsearch.yml:

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.type: PKCS12
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-stack-ca.p12
xpack.security.transport.ssl.truststore.path: elastic-stack-ca.p12
xpack.security.transport.ssl.truststore.type: PKCS12

Enter the Elasticsearch container (named es here; check docker ps for your actual container name):

sudo docker exec -it es /bin/bash

Inside the container, run:

./bin/elasticsearch-certutil ca

Press Enter through the prompts, accepting the default output file name (elastic-stack-ca.p12) and an empty password. When it finishes, copy the file to the host.

exit

On the host, run:

docker cp es:/usr/share/elasticsearch/elastic-stack-ca.p12 .

Set up the bind mount in docker-compose.yml:

- type: bind
  source: ./elastic-stack-ca.p12
  target: /usr/share/elasticsearch/config/elastic-stack-ca.p12
  read_only: true

When finished, restart the containers:

docker-compose restart
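
To confirm the restarted cluster is up and that authentication works (substitute the elastic password generated earlier):

curl -u elastic:your_password http://localhost:9200/_cluster/health?pretty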

See the official documentation.

Using elasticsearch-head

On your local machine, run:

git clone git@github.com:mobz/elasticsearch-head.git

In the project root, run:

npm install
npm start

The app listens on localhost:9100 by default.
Open it and, in the connection address field, enter: http://your_host_ip:9200/?auth_user=elastic&auth_password=your_password

See the official documentation.

View index information: (screenshot)

Query: (screenshot)
