Building an ELK log analysis system with docker-compose and Spring Boot
ELK stands for Elasticsearch, Logstash, and Kibana; version 7.17.7 is used here.
Logstash acts as the log collector and writes log entries into Elasticsearch;
Elasticsearch provides storage and search;
Kibana is the query front end for Elasticsearch, providing a friendly graphical interface.
Setting up the ELK environment
Here we start ELK as a group of containers with docker-compose, so please have docker and docker-compose installed beforehand.
Create the /data/elk directory.
Create the docker-compose configuration file in the elk directory:
docker-compose.yml
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.17.7
    container_name: elasticsearch
    privileged: true
    environment:
      # set the cluster name to elasticsearch
      - cluster.name=elasticsearch
      # start in single-node mode
      - discovery.type=single-node
      # set the JVM heap size
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - TZ=Asia/Shanghai
    volumes:
      - $PWD/elasticsearch/data:/usr/share/elasticsearch/data
      - $PWD/elasticsearch/plugins:/usr/share/elasticsearch/plugins
      - $PWD/elasticsearch/logs:/usr/share/elasticsearch/logs
    hostname: elasticsearch
    restart: always
    ports:
      - "9200:9200"
      - "9300:9300"
  kibana:
    image: kibana:7.17.7
    container_name: kibana
    environment:
      # with this setting Kibana uses a Chinese UI; the same option can also be set in the config file
      - I18N_LOCALE=zh-CN
      - TZ=Asia/Shanghai
    volumes:
      - $PWD/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    hostname: kibana
    depends_on:
      - elasticsearch # start after elasticsearch
    restart: always
    ports:
      - "5601:5601"
  logstash:
    image: logstash:7.17.7
    container_name: logstash
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - $PWD/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - $PWD/logstash/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml
      - $PWD/logstash/pipeline:/usr/share/logstash/pipeline
      # - $PWD/logstash/tmp:/tmp
    hostname: logstash
    restart: always
    depends_on:
      - elasticsearch # start after elasticsearch
    ports:
      - "4560:4560"
Create the Kibana configuration file:
/data/elk/kibana/config/kibana.yml
# Default Kibana configuration for docker target
server.host: '0.0.0.0'
server.shutdownTimeout: '5s'
elasticsearch.hosts: ['http://elasticsearch:9200']
monitoring.ui.container.elasticsearch.enabled: true
Create the Logstash configuration files.
/data/elk/logstash/config/logstash.yml (this file is optional; leave it out to use the defaults)
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
/data/elk/logstash/config/pipelines.yml
# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"
The following pipeline configuration receives the log events sent by the Spring Boot application and writes them into Elasticsearch. If needed, you can create several logstash.conf pipeline configurations in the /data/elk/logstash/pipeline directory.
Indices are named after the Spring Boot application name plus the date; the spring.application.name field is sent up by the Spring Boot logback configuration (keep the application name lowercase, since Elasticsearch index names must be lowercase). A quick way to smoke-test the pipeline follows the config below.
/data/elk/logstash/pipeline/logstash.conf
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "%{spring.application.name}-%{+YYYY.MM.dd}"
  }
}
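Before wiring up Spring Boot, the pipeline can be smoke-tested by hand: the tcp input with the json_lines codec simply expects newline-delimited JSON on port 4560. Below is a minimal Java sketch, assuming Logstash is reachable on localhost:4560; the field values are made-up test data.

import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class LogstashSmokeTest {
    public static void main(String[] args) throws Exception {
        // json_lines means one JSON document per line, terminated by \n
        String event = "{\"spring.application.name\":\"demo-app\","
                + "\"logLevel\":\"INFO\","
                + "\"message\":\"hello from the smoke test\"}";
        try (Socket socket = new Socket("localhost", 4560);
             PrintWriter out = new PrintWriter(
                     new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8))) {
            out.print(event + "\n"); // the newline terminates the json_lines event
            out.flush();
        }
    }
}

If this works, an index named demo-app-<date> shows up in Elasticsearch a few seconds later.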
If the three ELK images fail to download, configure a registry mirror in China, such as Alibaba's or Docker's official China mirror.
/etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
Create the directories that Elasticsearch mounts from the host, and adjust their permissions.
Make sure to grant 775 or 777 permissions, otherwise Elasticsearch will fail to start.
root@huangliuyu:/data/elk# pwd
/data/elk
root@huangliuyu:/data/elk# mkdir -p elasticsearch/data elasticsearch/plugins elasticsearch/logs
root@huangliuyu:/data/elk# chmod -R 775 elasticsearch
root@huangliuyu:/data/elk# ls
docker-compose.yml elasticsearch kibana logstash
Start and stop the container group
# in the /data/elk directory
## start
root@huangliuyu:/data/elk# docker compose up -d
[+] Running 4/4
⠿ Network elk_default Created 0.0s
⠿ Container elasticsearch Started 0.7s
⠿ Container kibana Started 1.9s
⠿ Container logstash Started 1.8s
root@huangliuyu:/data/elk# docker ps
CONTAINER ID   IMAGE                COMMAND                  CREATED          STATUS          PORTS                                                                          NAMES
660cd104c166 bf8838e621a6 "/usr/local/bin/dock…" 17 seconds ago Up 14 seconds 5044/tcp, 0.0.0.0:4560->4560/tcp, :::4560->4560/tcp, 9600/tcp logstash
6bd52d2cbf14 47c5b6ca1535 "/bin/tini -- /usr/l…" 17 seconds ago Up 14 seconds 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp kibana
9dc4e9a3ca7c ec0817395263 "/bin/tini -- /usr/l…" 17 seconds ago Up 15 seconds 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp elasticsearch
da9778c17d05 kartoza/postgis:13 "/bin/sh -c /scripts…" 5 months ago Up 3 hours 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp postgis-server
root@huangliuyu:/data/elk#
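Once the containers are up, it is worth confirming that Elasticsearch actually answers before sending any log traffic. A quick check with the JDK's built-in HTTP client (a sketch; it assumes the port mapping above and no authentication, which matches this single-node setup):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ElasticsearchPing {
    public static void main(String[] args) throws Exception {
        // GET / returns the cluster name and version when Elasticsearch is up
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // expect 200
        System.out.println(response.body());
    }
}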
## stop the container group
root@huangliuyu:/data/elk# docker-compose down
Stopping logstash ... done
Stopping kibana ... done
Stopping elasticsearch ... done
Removing logstash ... done
Removing kibana ... done
Removing elasticsearch ... done
Removing network elk_default
root@huangliuyu:/data/elk#
Configure the Spring Boot application to send logs to Logstash
(1) Add the logstash-logback-encoder dependency
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.2</version>
</dependency>
(2) Add a LOGSTASH appender to the logback configuration to send logs to Logstash, which writes them into Elasticsearch.
If there is no logback configuration yet, create one first, then add the Logstash settings.
logback-spring.xml (use the -spring variant so that Spring Boot can resolve <springProperty>; a plain logback.xml is loaded before Spring and cannot see these properties)
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- read spring.application.name into the appName variable for use below -->
    <springProperty scope="context" name="appName" source="spring.application.name"/>
    <springProperty scope="context" name="namespace" source="spring.cloud.nacos.discovery.namespace"/>
    <!-- output to logstash -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- a reachable logstash log collection endpoint -->
        <destination>127.0.0.1:4560</destination>
        <!-- <providers> belongs to the composite encoder; you can adapt the encoder to your own needs.
             spring.application.name is emitted from the pattern so Logstash can build the index name -->
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "spring.application.name": "${appName}",
                        "createTime": "%d{yyyy-MM-dd HH:mm:ss.SSS}",
                        "traceId": "%X{X-B3-TraceId:-}",
                        "spanId": "%X{X-B3-SpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "logLevel": "%level",
                        "serviceName": "${appName}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "line": "%L",
                        "message": "%message",
                        "namespace": "${namespace}"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
    <root level="INFO">
        <!-- attach the LOGSTASH appender at the desired log level -->
        <appender-ref ref="LOGSTASH" />
    </root>
</configuration>
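To see the whole chain work end to end, a trivial Spring Boot entry point that logs a line on startup is enough. This is only a sketch; the class name is made up, and spring.application.name must be set (for example in application.yml) for the index name to resolve.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class ElkDemoApplication {

    private static final Logger log = LoggerFactory.getLogger(ElkDemoApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(ElkDemoApplication.class, args);
        // This INFO line goes through the LOGSTASH appender and should appear
        // in the <application name>-<date> index shortly afterwards.
        log.info("application started, shipping logs to Logstash");
    }
}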
Start the Spring Boot application; if all goes well, logs will flow through the Logstash pipeline into Elasticsearch.
Viewing logs in Kibana
Visit Kibana at http://localhost:5601.
In the left navigation bar:
(1) Stack Management --> Index Management
Here you can see that the logs have been written into Elasticsearch.
(2) Stack Management --> Index Patterns
Create a matching index pattern here, after which the logs can be browsed in Discover.
(3) Discover
You can also query the results through Dev Tools or by calling the Elasticsearch HTTP API directly.
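If you prefer scripting the query over clicking through Dev Tools, the same search can be sent straight to the Elasticsearch HTTP API. A sketch with the JDK HTTP client; the index pattern elk-demo-* and the match on message are illustrative values, not part of the setup above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LogSearch {
    public static void main(String[] args) throws Exception {
        // match query against the message field across all dated indices of the app
        String query = "{ \"query\": { \"match\": { \"message\": \"started\" } } }";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/elk-demo-*/_search"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(query))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // hits.hits holds the matching log documents
    }
}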