Preface
ELK Log Monitoring

I. Installing ELK

1. Downloading the packages

The required packages are listed below (example):
logstash-7.6.1
elasticsearch-7.6.1
elasticsearch-head-master(可不安装)
kibana-7.6.1-windows-x86_64
Download link: https://pan.baidu.com/s/1-uGtRC7cQ0pIEO_7n-26TA
Extraction code: 4728
2. Unpacking and running the packages

All packages are portable (no installer); just unpack them into a directory of your choice.

1) In the bin directory of elasticsearch-7.6.1, open cmd and run .\elasticsearch.bat
2) In the elasticsearch-head-master directory, open cmd and run npm run start
3) In the bin directory of kibana-7.6.1-windows-x86_64, open cmd and run .\kibana.bat
4) In the bin directory of logstash-7.6.1, open cmd and run .\logstash.bat -f ../config/logstash.conf
3. Result screenshots

1) elasticsearch:
Open 127.0.0.1:9200 in a browser
2) elasticsearch-head-master:
Open 127.0.0.1:9100
3) kibana:
Open 127.0.0.1:5601
4) logstash
If all of the above respond, every component has started successfully.
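As a quick alternative to opening each address in a browser, a small Java sketch can confirm that each service is accepting TCP connections on its port (9200 for Elasticsearch, 9100 for elasticsearch-head, 5601 for Kibana). The helper name `isListening` is ours, not part of any ELK tooling:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if something accepts a TCP connection on host:port
    // within the given timeout (in milliseconds).
    static boolean isListening(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        int[] ports = {9200, 9100, 5601}; // Elasticsearch, head plugin, Kibana
        for (int port : ports) {
            System.out.println(port + " listening: " + isListening("127.0.0.1", port, 500));
        }
    }
}
```

The output depends on which services are actually running on your machine.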
II. Writing application logs to a .log file and viewing them in ELK

Configuration files

1. Add to pom.xml:
<!-- logstash -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.9</version>
</dependency>
2. logback-spring.xml
<?xml version="1.0" encoding="UTF-8" ?>
<configuration>
<!-- console output -->
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <!-- formatted output: %d = date; %thread = thread name; %-5level = level, padded to 5 characters; %msg = log message; %n = newline -->
        <pattern>
            <![CDATA[%d{yyyy-MM-dd HH:mm:ss} [%thread] [%class:%line] %-5level %logger - %msg%n ]]>
        </pattern>
    </encoder>
</appender>
<!-- ELK log output -->
<appender name="FILE"
          class="ch.qos.logback.core.rolling.RollingFileAppender">
    <!-- active log file; this is the path logstash.conf tails -->
    <file>F:\ELK\logs\elk.log</file>
    <rollingPolicy
            class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <!-- name pattern for rolled log files; %d and %i are required by this policy -->
        <fileNamePattern>F:\ELK\logs\elk.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
        <!-- days of log history to keep -->
        <maxHistory>30</maxHistory>
        <!-- maximum size of a single log file -->
        <maxFileSize>10MB</maxFileSize>
    </rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
    <!-- ELK formatted output: %d = date, %thread = thread name, %-5level = level padded to 5 characters, %msg = log message, %n = newline -->
    <!-- <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level-->
    <!-- %logger{50} - %msg %n</pattern>-->
<pattern>
{
"project": "wxpublic",
"level": "%level",
"service": "${APP_NAME:-}",
"pid": "${PID:-}",
"thread": "%thread",
"class": "%logger",
"message": "%msg",
"stack_trace": "%exception{20}"
}
</pattern>
</encoder>
</appender>
<appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- must match the port of the tcp input in logstash.conf (4560), not Kibana's port -->
    <destination>127.0.0.1:4560</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
    <keepAliveDuration>5 minutes</keepAliveDuration>
</appender>
<logger name="com.example" level="INFO" additivity="false">
<appender-ref ref="STDOUT"/>
<appender-ref ref="FILE"/>
</logger>
<root level="INFO">
    <appender-ref ref="STDOUT"/>
    <appender-ref ref="FILE"/>
    <appender-ref ref="logstash"/>
</root>
</configuration>
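For reference, the &lt;pattern&gt; in the FILE appender renders each log event as a small JSON object. Below is a minimal Java sketch of what one such line in elk.log looks like; the field values are stand-ins for what the %level, %thread, %logger and %msg conversion words would produce at runtime, and the class and method names are ours:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LogLineSketch {
    // Build one JSON log line with the same field names as the
    // <pattern> in logback-spring.xml (values are placeholders).
    static String render(String level, String thread, String clazz, String msg) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("project", "wxpublic");
        fields.put("level", level);    // %level
        fields.put("thread", thread);  // %thread
        fields.put("class", clazz);    // %logger
        fields.put("message", msg);    // %msg
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : fields.entrySet()) {
            if (!first) sb.append(", ");
            sb.append('"').append(e.getKey()).append("\": \"").append(e.getValue()).append('"');
            first = false;
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        System.out.println(render("INFO", "main", "com.example.Demo", "elk test success"));
    }
}
```

Because each event is a single JSON object, logstash can forward it to Elasticsearch as a structured document rather than a raw text line.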
3. logstash.conf

Create a logstash-springboot.conf file in the config directory of logstash-7.6.1, replacing the existing logstash.conf there.
input {
    file {
        type => "info_log"
        path => "F:/ELK/logs/elk.log"
    }
    tcp {
        mode => "server"
        host => "0.0.0.0"
        port => 4560
        codec => json_lines
    }
}
output {
    elasticsearch {
        # Elasticsearch address and port
        hosts => "127.0.0.1:9200"
        # Elasticsearch index name (user-defined)
        index => "mylog"
    }
    stdout {
        # print events as JSON lines
        codec => json_lines
    }
}
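The json_lines codec used by the tcp input expects each event as one JSON object terminated by a newline, which is the framing LogstashTcpSocketAppender emits. A rough Java sketch of that framing (the class and method names are ours, and the event contents are illustrative, not the encoder's real field set):

```java
import java.util.ArrayList;
import java.util.List;

public class JsonLinesFraming {
    // Frame events the way the json_lines codec expects:
    // one JSON object per line, each terminated by '\n'.
    static String frame(List<String> jsonEvents) {
        return String.join("\n", jsonEvents) + "\n";
    }

    // Split a received stream back into individual events.
    static List<String> unframe(String stream) {
        List<String> events = new ArrayList<>();
        for (String line : stream.split("\n")) {
            if (!line.isEmpty()) {
                events.add(line);
            }
        }
        return events;
    }

    public static void main(String[] args) {
        String stream = frame(List.of(
                "{\"level\":\"INFO\",\"message\":\"elk test success\"}",
                "{\"level\":\"ERROR\",\"message\":\"boom\"}"));
        System.out.println(unframe(stream).size()); // prints 2
    }
}
```

This newline framing is why the port in the logback TCP appender must be the tcp input's 4560: Kibana's 5601 speaks HTTP and would not parse these lines.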
Restart logstash with the new configuration file:

.\logstash.bat -f ../config/logstash-springboot.conf
Testing the result

Write a simple test endpoint:
@RestController
public class ElkTestController {
    private static final Logger logger = LoggerFactory.getLogger(ElkTestController.class);

    @ApiOperation(value = "Test ELK functionality")
    @GetMapping("/elktest")
    public String elkTest() {
        logger.info("elk test success");
        return "success";
    }
}
Access the endpoint with Postman
Elasticsearch result
Configuring the mylog index in Kibana
1) Click "Connect to your Elasticsearch index"
2) Enter your index name (the one configured in the conf file) and click Next
The results are displayed