ELK + SLF4J + Spring Boot log visualization and querying

Implementing log querying with Spring Boot + ELK

The perennial pain points of log querying are oversized log files and request data interleaved across them, which makes it tedious to locate a specific request. With some rare downtime at work, I integrated ELK so that log details can be queried by date, endpoint URI, and user.

I. Installing and deploying elasticsearch + logstash + filebeat + kibana (unified version: 7.7.0)

1. Install and deploy Elasticsearch with Docker

Pull the image

docker pull elasticsearch:7.7.0

Create the directories to mount

mkdir -p /data/elk/elasticsearch/data
mkdir -p /data/elk/elasticsearch/config
mkdir -p /data/elk/elasticsearch/plugins
# Create the configuration file
echo "http.host: 0.0.0.0" >> /data/elk/elasticsearch/config/elasticsearch.yml
# The mounted data directory needs write permission
chmod -R 777 /data/elk/elasticsearch/data

Create and start the container

docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
-v /data/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /data/elk/elasticsearch/data:/usr/share/elasticsearch/data \
-v /data/elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
-d elasticsearch:7.7.0

Visit http://ip:9200; you should see the Elasticsearch cluster info JSON.

2. Install and deploy Logstash with Docker

Pull the image

docker pull logstash:7.7.0

Start Logstash temporarily (to copy its data out into persistent directories)

docker run -d --name=logstash logstash:7.7.0
mkdir -p /data/elk/logstash/config/conf.d
docker cp logstash:/usr/share/logstash/config /data/elk/logstash/
docker cp logstash:/usr/share/logstash/data /data/elk/logstash/
docker cp logstash:/usr/share/logstash/pipeline /data/elk/logstash/
# Grant permissions
chmod 777 -R /data/elk/logstash

Edit the Elasticsearch address in the configuration file so it points at your own instance (in the Docker image's default logstash.yml this is the xpack.monitoring.elasticsearch.hosts entry).

vi /data/elk/logstash/config/logstash.yml

Remove the temporary container and start Logstash again with the mounted directories

docker rm -f logstash
docker run \
--name logstash \
--restart=always \
-p 5044:5044 \
-p 9600:9600 \
-v /data/elk/logstash/config:/usr/share/logstash/config \
-v /data/elk/logstash/data:/usr/share/logstash/data \
-v /data/elk/logstash/pipeline:/usr/share/logstash/pipeline \
-d logstash:7.7.0

3. Install and deploy Filebeat with Docker

Filebeat is installed on the application server and reads the log files directly.

Pull the image

	docker pull elastic/filebeat:7.7.0

Start temporarily

	docker run -d --name=filebeat elastic/filebeat:7.7.0

Copy out the data files

	docker cp filebeat:/usr/share/filebeat /data/
	chmod 777 -R /data/filebeat
	chmod go-w /data/filebeat/filebeat.yml

Edit the configuration file (output to Logstash)

vi /data/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages/*.log

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

processors:
- add_cloud_metadata: ~
- add_docker_metadata: ~

output.logstash:
  hosts: ["192.168.247.181:5044"]
  # Index naming (request-*/backend-*) is handled in the Logstash pipeline output below.

Restart Filebeat

docker rm -f filebeat

/data/logs is the directory where the project's log files are stored.

docker run \
  --name=filebeat \
  --restart=always \
  -v /data/filebeat:/usr/share/filebeat \
  -v /data/logs:/var/log/messages \
  -d elastic/filebeat:7.7.0

Add a Filebeat pipeline configuration to Logstash

The log-format parsing can be adapted to your own project. Here, the request-header logs go into the request index and the logs written while a request is being processed go into the backend index.

The logback output pattern must correspond to the grok parsing patterns used below:
<pattern>[%X{appName:-admin}] [%X{nodeName:-127.0.0.1}] [%d{DEFAULT}] [%p] [%t] [traceId:%X{traceId:-0}] [user:%X{userName:-root}] [ip:%X{ip:-127.0.0.1}] [%c:%L] %m%n</pattern>
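
The pattern above reads appName, nodeName, traceId, userName, and ip from the SLF4J MDC, but the article does not show how those values get there. Below is a minimal sketch of a servlet filter that could populate them; the class name and header names are assumptions for illustration, not part of the original project.

// A minimal sketch (assumption, not taken from the original project) of how the MDC fields
// used by the logback pattern above could be populated per request.
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.UUID;

@Component
public class MdcLogFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain chain)
            throws ServletException, IOException {
        try {
            // Reuse an upstream trace id if a gateway already set one, otherwise generate a new one.
            String traceId = request.getHeader("X-Trace-Id");
            MDC.put("traceId", traceId != null ? traceId : UUID.randomUUID().toString());
            MDC.put("ip", request.getRemoteAddr());
            // In a real project the user would come from the security context;
            // a request header is used here only to keep the sketch short.
            String user = request.getHeader("X-User-Name");
            if (user != null) {
                MDC.put("userName", user);
            }
            chain.doFilter(request, response);
        } finally {
            // Clear the MDC so values do not leak across pooled worker threads.
            MDC.clear();
        }
    }
}

appName and nodeName change rarely, so they can be put into the MDC once at application startup or left to the defaults embedded in the pattern (%X{appName:-admin}, %X{nodeName:-127.0.0.1}).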

vi /data/elk/logstash/config/conf.d/beat.conf

input {
    beats {
        port => 5044
    }
}

filter {

    mutate {
        copy => { "message" => "logmessage" }
    }

    grok { 
        match =>{
           "message" => [
                "\A\[%{NOTSPACE:moduleName}\]\s+\[%{NOTSPACE:nodeName}\]\s+\[%{DATA:datetime}]\s+\[%{NOTSPACE:level}]\s+\[%{NOTSPACE:thread}\]\s+\[traceId:%{NOTSPACE:traceId}\]\s+\[user:%{NOTSPACE:user}\]\s+\[ip:%{NOTSPACE:ip}\]\s+\[%{NOTSPACE:localhost}\]\s+%{GREEDYDATA:message}[\n|\t](?<exception>(?<exceptionType>^.*[Exception|Error]):\s+(?<exceptionMessage>(.)*?)[\n|\t](?<stacktrace>(?m)(.*)))",
                "\A\[%{NOTSPACE:moduleName}\]\s+\[%{NOTSPACE:nodeName}\]\s+\[%{DATA:datetime}]\s+\[%{NOTSPACE:level}]\s+\[%{NOTSPACE:thread}\]\s+\[traceId:%{NOTSPACE:traceId}\]\s+\[user:%{NOTSPACE:user}\]\s+\[ip:%{NOTSPACE:ip}\]\s+\[%{NOTSPACE:localhost}\]\s+(?<message>(?m).*)" 
          ]
        }
        overwrite => [ "message" ]
    }

    date {
        # The grok patterns above capture the log time into the "datetime" field.
        match => ["datetime", "yyyy-MM-dd HH:mm:ss,SSS"]
    }

    ruby {
        code => 'event.set("document_type", "request") if event.get("message").include?"requestUri"'
    }

    if [document_type] == 'request' {
      json {
        source => "message"
        skip_on_invalid_json => true
        remove_field => ["message"]
      }
    }

}

output {
      if [document_type] == 'request' {
        elasticsearch {
          hosts => "192.168.247.181:9200"
          index => "request-%{+YYYY.MM.dd}"
          sniffing => true
          manage_template => false
          template_overwrite => true
        }
      } else {
        elasticsearch {
          hosts => "192.168.247.181:9200"
          index => "backend-%{+YYYY.MM.dd}"
          sniffing => true
          manage_template => false
          template_overwrite => true
        }
      }
}

Specify the pipeline configuration file path in logstash.yml

vi /data/elk/logstash/config/logstash.yml
# paths inside the Docker container
path.config: /usr/share/logstash/config/conf.d/*.conf
path.logs: /usr/share/logstash/logs

Restart Logstash

docker restart logstash

4. Install Kibana with Docker to view the logs

Pull the image

docker pull kibana:7.7.0

Configuration file

Note: replace the Elasticsearch IP with your own.

mkdir -p /data/elk/kibana/config/
vi /data/elk/kibana/config/kibana.yml
The contents are as follows:
#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://192.168.247.181:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true

Start Kibana

docker run -d \
  --name=kibana \
  --restart=always \
  -p 5601:5601 \
  -v /data/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml \
  kibana:7.7.0

Visit http://ip:5601

The request* index

The backend* index

II. Integrating with Spring Boot for visual querying

1. Add dependencies to pom.xml

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.1.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.1.0</version>
</dependency>

2. Elasticsearch configuration class

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ElasticsearchConfiguration {

    private String host = "192.168.247.181";

    private int port = 9200;

    private int connTimeout = 3000;

    private int socketTimeout = 5000;

    private int connectionRequestTimeout = 500;

    @Bean(destroyMethod = "close", name = "client")
    public RestHighLevelClient initRestClient() {
        RestClientBuilder builder = RestClient.builder(new HttpHost(host, port))
                .setRequestConfigCallback(requestConfigBuilder -> requestConfigBuilder
                .setConnectTimeout(connTimeout)
                .setSocketTimeout(socketTimeout)
                .setConnectionRequestTimeout(connectionRequestTimeout));
        return new RestHighLevelClient(builder);
    }
}

3. Querying the data
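
The two endpoints below are shown as method excerpts. They assume a controller roughly like the following skeleton, which injects the RestHighLevelClient bean defined above; the class name and request mapping are assumptions, and the imports reflect the fastjson, commons-lang3, and Elasticsearch query APIs the methods use.

import com.alibaba.fastjson.JSON;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.web.bind.annotation.*;

import javax.annotation.Resource;

@Slf4j
@RestController
@RequestMapping("/log")  // assumed mapping, not from the original article
public class LogSearchController {

    @Resource(name = "client")
    private RestHighLevelClient client;

    // Elasticsearch query-builder imports omitted for brevity;
    // the searchRequest(...) and searchBackend(...) methods below go here.
}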

	@PostMapping("searchRequest")
    public Map<String, Object> searchRequest(@RequestBody RequestLogForm param) throws Exception {
        SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
        if (StringUtils.isNotBlank(param.getModuleName())) {
            boolQueryBuilder.must(QueryBuilders.termQuery("moduleName.keyword", param.getModuleName()));
        }
        if (StringUtils.isNotBlank(param.getRequestUri())) {
            boolQueryBuilder.must(QueryBuilders.matchPhraseQuery("requestUri", param.getRequestUri()));
        }
        if (StringUtils.isNotBlank(param.getTraceId())) {
            boolQueryBuilder.must(QueryBuilders.termQuery("traceId.keyword", param.getTraceId()));
        }
        if (StringUtils.isNotBlank(param.getUser())) {
            boolQueryBuilder.must(QueryBuilders.termQuery("user.keyword", param.getUser()));
        }
        if (param.getBeginTime() != null || param.getEndTime() != null) {
            RangeQueryBuilder rangeQueryBuilder = QueryBuilders.rangeQuery("@timestamp");
            if (param.getBeginTime() != null) {
                rangeQueryBuilder.gte(new java.util.Date(param.getBeginTime().getTime() + 3600 * 1000));
            }
            if (param.getEndTime() != null) {
                rangeQueryBuilder.lte(new java.util.Date(param.getEndTime().getTime() + 3600 * 1000));
            }
            boolQueryBuilder.must(rangeQueryBuilder);
        }

        sourceBuilder.query(boolQueryBuilder);
        sourceBuilder.from((param.getPage() - 1) * param.getLimit());
        sourceBuilder.size(param.getLimit());

        sourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS));
        sourceBuilder.sort("@timestamp", SortOrder.DESC);

        SearchRequest searchRequest = new SearchRequest();
        searchRequest.source(sourceBuilder);
        // This index holds one record per request, so the filtered records represent the request-level call history
        searchRequest.indices("request-*");

        log.info("ES Query: \n{}", sourceBuilder.toString());
        SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
        log.info("ES Response: \n{}", JSON.toJSONString(searchResponse));
        List<RequestLog> list = new ArrayList<>();
        for (SearchHit documentFields : searchResponse.getHits().getHits()) {
            RequestLog requestLog = JSON.parseObject(documentFields.getSourceAsString(), RequestLog.class);
            list.add(requestLog);
        }
        Map<String, Object> result = new HashMap<>(4);
        result.put("code", 0);
        result.put("message", "SUCCESS");
        result.put("data", list);
        result.put("count", searchResponse.getHits().getTotalHits().value);
        return result;
    }
	@GetMapping("searchBackend/{traceId}")
    public String searchBackend(@PathVariable String traceId) throws Exception {
        SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
        boolQueryBuilder.must(QueryBuilders.termQuery("traceId.keyword", traceId));
        sourceBuilder.query(boolQueryBuilder);
        sourceBuilder.from(0);
        // Elasticsearch's maximum result window size; keep it consistent with the Elasticsearch configuration
        sourceBuilder.size(10000);
        sourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS));
        sourceBuilder.sort("datetime.keyword", SortOrder.ASC);
        sourceBuilder.sort("log.offset", SortOrder.ASC);

        SearchRequest searchRequest = new SearchRequest();
        searchRequest.source(sourceBuilder);
        // This index holds the logs written while a request is being processed, so one request has multiple records
        searchRequest.indices("backend-*");

        log.info("ES Query: \n" + sourceBuilder.toString());
        SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
        for (ShardSearchFailure ssf : searchResponse.getShardFailures()) {
            System.err.println(ssf.reason());
        }

        StringBuffer concatMessage = new StringBuffer();
        for (SearchHit documentFields : searchResponse.getHits().getHits()) {
            BackendLog backendLog = JSON.parseObject(documentFields.getSourceAsString(), BackendLog.class);
            concatMessage.append(backendLog.getDatetime() + " " + backendLog.getLocalhost() + " " + backendLog.getMessage() + "<br/>");
            concatMessage.append(" <br/>");
        }
        return concatMessage.toString();
    }
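
The controllers reference RequestLogForm, RequestLog, and BackendLog, which the article does not define. The following is a rough sketch inferred from the getters used above and from the grok captures; the exact field sets are assumptions, and Lombok's @Data is used only to keep the sketch short.

// Hypothetical DTO sketches inferred from the fields used in the controllers above.
import lombok.Data;

import java.util.Date;

// Query form posted to searchRequest (fields inferred from the getters used above).
@Data
public class RequestLogForm {
    private String moduleName;
    private String requestUri;
    private String traceId;
    private String user;
    private Date beginTime;
    private Date endTime;
    private Integer page = 1;
    private Integer limit = 20;
}

// One document from the request-* index (field set is an assumption).
@Data
class RequestLog {
    private String moduleName;
    private String requestUri;
    private String traceId;
    private String user;
    private String ip;
}

// One document from the backend-* index (fields inferred from the grok captures).
@Data
class BackendLog {
    private String datetime;
    private String localhost;
    private String message;
    private String traceId;
    private String level;
}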