Collecting Spring Boot Logs with ELK
ELK stands for Elasticsearch, Logstash, and Kibana; combined, the three can be used to build a logging system. This article documents how to use ELK to collect the logs produced by a Spring Boot application.
What Elasticsearch, Logstash, and Kibana Do
- Elasticsearch: stores the log data.
- Logstash: collects logs; the Spring Boot application sends its logs to Logstash, and Logstash forwards them to Elasticsearch.
- Kibana: provides a web UI for visualizing and exploring the logs.
Installing Elasticsearch
- Pull the Elasticsearch image:
docker pull elasticsearch:7.6.2
- Raise the kernel's vm.max_map_count limit, otherwise Elasticsearch may fail to start because the value is too small:
sysctl -w vm.max_map_count=262144
- Start the Elasticsearch service (a quick health check is shown at the end of this section):
docker run -p 9200:9200 -p 9300:9300 --name elasticsearch \
-e "discovery.type=single-node" \
-e "cluster.name=elasticsearch" \
-v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
-v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
-d elasticsearch:7.6.2
- If startup fails because the container has no permission to access /usr/share/elasticsearch, change the permissions of the host directory /mydata/elasticsearch/data/ and restart Elasticsearch:
chmod 777 /mydata/elasticsearch/data/
- Install the IK Analyzer Chinese tokenizer plugin and restart:
docker exec -it elasticsearch /bin/bash
# the following command must be run inside the container
elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.6.2/elasticsearch-analysis-ik-7.6.2.zip
exit
docker restart elasticsearch
Note: installing the Elasticsearch plugin offline
- 1. Download elasticsearch-analysis-ik-7.6.2.zip:
https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.6.2/elasticsearch-analysis-ik-7.6.2.zip
- 2. Upload the zip file to the Linux host.
- 3. Copy the uploaded file into the elasticsearch container:
docker cp elasticsearch-analysis-ik-7.6.2.zip elasticsearch:/
- 4. Install the plugin:
docker exec -it elasticsearch /bin/bash
elasticsearch-plugin install file:///elasticsearch-analysis-ik-7.6.2.zip
exit
docker restart elasticsearch
- If the firewall has not been disabled, open port 9200:
firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --reload
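Once Elasticsearch is up, a quick check from the host confirms that the service responds and that the IK plugin is loaded. This is a minimal sketch, assuming the default port mapping above; the sample sentence is just placeholder text:
# Elasticsearch should answer with its cluster and version info
curl http://localhost:9200
# "analysis-ik" should show up in the plugin list
curl http://localhost:9200/_cat/plugins
# Exercise the IK analyzer on a sample sentence
curl -H 'Content-Type: application/json' -X POST http://localhost:9200/_analyze -d '{"analyzer":"ik_smart","text":"中文分词测试"}'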
Installing the Logstash Docker Image
- 1. Pull the Logstash image:
docker pull logstash:7.6.2
- 2. Add the Logstash configuration file logstash.conf:
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
    type => "manage"
  }
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4561
    codec => json_lines
    type => "star"
  }
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4562
    codec => json_lines
    type => "love"
  }
}
filter {
  if [type] == "record" {
    mutate {
      remove_field => "port"
      remove_field => "host"
      remove_field => "@version"
    }
    json {
      source => "message"
      remove_field => ["message"]
    }
  }
}
output {
  elasticsearch {
    hosts => "es:9200"
    index => "leinfty-%{type}-%{+YYYY.MM.dd}"
  }
}
- 3. Create /mydata/logstash and copy logstash.conf into that directory:
mkdir /mydata/logstash
- 4. Start Logstash:
docker run --name logstash -p 4560:4560 -p 4561:4561 -p 4562:4562 \
--link elasticsearch:es \
-v /mydata/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-d logstash:7.6.2
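Before wiring up the application, you can sanity-check the pipeline by pushing a hand-written JSON line into one of the TCP inputs (port 4560, type "manage", as configured above) and then looking for the resulting index. This is only a sketch; the field names in the test event are arbitrary, and the redirect relies on bash's /dev/tcp feature:
# Send one JSON line to the "manage" input (requires bash)
echo '{"message":"hello from shell","level":"INFO"}' > /dev/tcp/localhost/4560
# After a few seconds a leinfty-manage-* index should exist
curl 'http://localhost:9200/_cat/indices/leinfty-*?v'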
Installing Kibana
- 1. Pull the Kibana image:
docker pull kibana:7.6.2
- 2. Start Kibana:
docker run --name kibana -p 5601:5601 \
--link elasticsearch:es \
-e "elasticsearch.hosts=http://es:9200" \
-d kibana:7.6.2
- 3. If the firewall has not been disabled, open port 5601:
firewall-cmd --zone=public --add-port=5601/tcp --permanent
firewall-cmd --reload
- 4. Switch the Kibana UI to Chinese:
docker exec -it kibana bash
cd config
vi kibana.yml
- 5. Add the following to kibana.yml:
i18n.locale: "zh-CN"
- 6. Visit http://xxxx:5601 to test.
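Kibana can take a minute to become ready; besides opening the browser, its status endpoint gives a quick check from the command line (localhost here assumes you are on the host running the container):
# The overall state in the response should eventually report "green"
curl http://localhost:5601/api/status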
Integrating Spring Boot with Logstash
Add the Logstash dependency:
<!-- Logstash integration -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
Add a logback-spring.xml configuration file so that Logback output is shipped to Logstash:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <!-- application name -->
    <property name="APP_NAME" value="leinfty-love"/>
    <!-- directory where log files are written -->
    <property name="LOG_FILE_PATH" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/logs}"/>
    <contextName>${APP_NAME}</contextName>
    <!-- appender that writes a daily rolling log file -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE_PATH}/${APP_NAME}-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
    </appender>
    <!-- appender that ships logs to Logstash -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- host and port of the reachable Logstash log-collection endpoint -->
        <destination>ip:4562</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="LOGSTASH"/>
    </root>
</configuration>
Add the following configuration to application.yml and run a test:
logging:
  file:
    path: /var/logs
  level:
    root: info
  config: classpath:logback-spring.xml
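With the dependency and the two configuration files in place, starting the application and letting it log a few lines should be enough to ship events. A rough end-to-end check, sketched under the assumptions that a Maven wrapper is present (use whatever you normally run) and that port 4562 maps to type "love" in logstash.conf:
# Start the application
./mvnw spring-boot:run
# On the ELK host: a leinfty-love-* index should appear once the first events arrive
curl 'http://localhost:9200/_cat/indices/leinfty-*?v'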
Viewing the Collected Logs
- 1. In Kibana, create an index pattern matching the collected indices (e.g. leinfty-*), then browse the logs in Discover.
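If nothing shows up in Kibana, it can help to query Elasticsearch directly and confirm that documents are actually being indexed; a minimal check might be:
# Pull a few raw documents from the Logstash-created indices
curl 'http://localhost:9200/leinfty-*/_search?size=3&pretty'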
Access Control
- Enter the Elasticsearch container:
docker exec -it elasticsearch bash
- Edit the configuration:
vi config/elasticsearch.yml
- Enable the security settings:
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
- Restart the Elasticsearch container:
exit
docker restart elasticsearch
- Set the passwords:
docker exec -it elasticsearch bash
bin/elasticsearch-setup-passwords interactive
Follow the prompts to enter a password for each built-in user.
- Enter the Kibana container:
docker exec -it kibana bash
- Configure Kibana's connection to Elasticsearch:
vi config/kibana.yml
elasticsearch.username: "elastic"
elasticsearch.password: "xxx"
- Restart the Kibana container:
docker restart kibana
- Configure Logstash's connection to Elasticsearch:
vi /mydata/logstash/logstash.conf
output {
  elasticsearch {
    hosts => "es:9200"
    user => "elastic"
    password => "xxx"
    index => "leinfty-%{type}-%{+YYYY.MM.dd}"
  }
}
- Restart Logstash:
docker restart logstash
- Verify that you can log in with the account.
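A quick way to confirm from the command line that security is now enforced (replace xxx with the password you chose during elasticsearch-setup-passwords):
# Without credentials the request should be rejected with HTTP 401
curl -i http://localhost:9200
# With the elastic superuser it should succeed again
curl -u elastic:xxx http://localhost:9200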