Logging System

This article describes how to build a distributed log processing system: logback unifies the log format, filebeat collects the log files, kafka serves as the message middleware, Storm parses the logs and stores them in ES, and Kibana provides log search and viewing. The system addresses multi-node log querying, error statistics, and similar needs, providing efficient, scalable log management.

Contents

I. Purpose

II. Solution and Related Technologies

III. Solution Details

3.1 Unified Log Format: logback

3.2 Log Collection: filebeat

3.3 Log Messaging: kafka

3.4 Log Parsing + Storage: Storm + ES/Redis/MongoDB

3.5 Log Search and Viewing: Kibana

I. Purpose

Handle log processing across the many nodes of a distributed system: log querying, order-state-transition logs, error-log statistics, interface response-time statistics, microservice circuit-breaker statistics, and so on.

II. Solution and Related Technologies

The solution is shown in the diagram below; technologies used: ELK + Storm:

III. Solution Details

3.1 Unified Log Format: logback

       3.1.1 Project log configuration:

              a. Define the log file name format: LOG_xxx.date.serviceName-port.log
                       e.g.: LOG_INFO.2021-01-21.instance-demo-9011.log
              b. Define the log files:
                       Common log:        LOG_COMMON.2021-01-21.instance-demo-9011.log
                       Info log:          LOG_INFO.2021-01-21.instance-demo-9011.log
                       Error log:         LOG_ERROR.2021-01-21.instance-demo-9011.log
                       Feign call log:    LOG_FEIGN.2021-01-21.instance-demo-9011.log
                       Gateway call log:  LOG_GATEWAY.2021-01-21.instance-demo-9011.log
                  Note: the common log contains all log output, so its file grows too large; it is not collected.
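The naming convention above can be sketched in plain Java (a minimal illustration; the `LogFileName` class and its method are hypothetical helpers, not part of the project):

```java
import java.time.LocalDate;

public class LogFileName {
    // Build a file name following the LOG_xxx.date.serviceName-port.log convention
    static String fileName(String type, LocalDate date, String service, int port) {
        // LocalDate.toString() yields ISO yyyy-MM-dd, matching the examples above
        return String.format("LOG_%s.%s.%s-%d.log", type, date, service, port);
    }

    public static void main(String[] args) {
        System.out.println(fileName("INFO", LocalDate.of(2021, 1, 21), "instance-demo", 9011));
        // LOG_INFO.2021-01-21.instance-demo-9011.log
    }
}
```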


      3.1.2 Define the log content:

        a. LOG_FEIGN content:
                Request:  local IP|method name|REQUEST |HTTP method|request URL |^request headers^request body^request body length
                Response: local IP|method name|RESPONSE|status code|elapsed time|^response headers^response body^response body length

                Note: at most 2000 characters of the response body are read, to guard against oversized bodies.
            Example:
                请求:2021-03-22 15:20:18.293| INFO [,789c516ebabe6012,b6aae94c943d1698,false]|instance-demo|789c516ebabe6012|b6aae94c943d1698|172.17.0.5|FeignMongoDbService#listAll()|REQUEST|POST|http://instance-test/instance-test/test/mongodb/listAll|^bodyLength:0
                响应:2021-03-22 15:20:18.325| INFO [,789c516ebabe6012,b6aae94c943d1698,false]|instance-demo|789c516ebabe6012|b6aae94c943d1698|172.17.0.5|FeignMongoDbService#listAll()|RESPONSE|200|32ms|^connection:keep-alive^content-type:application/json^date:Mon, 22 Mar 2021 15:20:17 GMT^keep-alive:timeout=60^transfer-encoding:chunked^{"success":true,"code":"200",......}]}^bodyLength:1962

        b. LOG_GATEWAY content:
                Request:  local IP|request ID|mobile|token|client type|app version|User-Agent|REQUEST |HTTP method |request URL|route IP:port|^request headers^request params^request body
                Response: local IP|request ID|mobile|token|client type|app version|User-Agent|RESPONSE|status code|elapsed time|^response headers^response body^response body length

            Example:
                请求:2021-03-22 15:20:18.514| INFO [,789c516ebabe6012,789c516ebabe6012,false]|instance-gateway|789c516ebabe6012|789c516ebabe6012|172.17.0.5|0c8dee3b-1024|[13726987313]|[dafrgtrhyhjyutkuikiuk]|[andorid]|[4.2.3]|[Jmeter]|REQUEST|POST|http://172.17.0.5:9013/instance-demo/feign/log/listAll |172.17.0.5:9013|^Connection:keep-alive^Mobile:13726987313^UserToken:dafrgtrhyhjyutkuikiuk^Channel:andorid^AppVersion:4.2.3^User-Agent:Jmeter^Content-Length:9^Content-Type:text/plain; charset=UTF-8^Host:127.0.0.1:8000^{}
                响应:2021-03-22 15:20:18.514| INFO [,789c516ebabe6012,789c516ebabe6012,false]|instance-gateway|789c516ebabe6012|789c516ebabe6012|172.17.0.5|0c8dee3b-1024|[13726987313]|[dafrgtrhyhjyutkuikiuk]|[andorid]|[4.2.3]|[Jmeter]|RESPONSE|200|0ms|^transfer-encoding:chunked^TraceId:789c516ebabe6012^Content-Type:application/json^Date:Mon, 22 Mar 2021 15:20:17 GMT^{"success":true,"code":"200",......}]}^bodyLength:1946

        c. LOG_INFO and LOG_ERROR content:
                local IP|mobile|^message content
            Example:
                LOG_INFO: 2021-03-22 15:20:18.515| INFO [,789c516ebabe6012,789c516ebabe6012,false]|instance-gateway|789c516ebabe6012|789c516ebabe6012|172.17.0.5|[13726987313]|^http://172.17.0.5:9013/instance-demo/feign/log/listAll: 288ms
                LOG_ERROR: 2021-03-22 15:20:18.323|ERROR [,789c516ebabe6012,1c36db7dba2f9d02,false]|instance-test|789c516ebabe6012|1c36db7dba2f9d02|172.17.0.5|[13726987313]|^MongoDbController.listAll(),^java.lang.NumberFormatException: null
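Since the fields are `|`-delimited, a line in any of these formats can be split back into its parts the same way the Storm bolt later does. A minimal sketch with a shortened sample line (the `LogLineSplit` class is illustrative, not project code):

```java
public class LogLineSplit {
    // Split a "|"-delimited log line and pull out a field by index
    static String field(String line, int i) {
        return line.split("\\|")[i].trim();  // "|" must be escaped in a regex
    }

    public static void main(String[] args) {
        // Shortened LOG_INFO-style line: timestamp|level block|app|traceId|spanId|ip|mobile|^message
        String line = "2021-03-22 15:20:18.515| INFO [,789c516ebabe6012,789c516ebabe6012,false]"
                + "|instance-gateway|789c516ebabe6012|789c516ebabe6012|172.17.0.5|[13726987313]|^288ms";
        System.out.println(field(line, 1).split(" ")[0]); // log level: INFO
        System.out.println(field(line, 2));               // application name: instance-gateway
        System.out.println(field(line, 3));               // traceId
        System.out.println(field(line, 6));               // mobile: [13726987313]
    }
}
```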

 

3.2 Log Collection: filebeat

Notes: a. filebeat's input is the logback log files; its output is kafka;

           b. LOG_COMMON is excluded from collection: it contains all log output, so the file is too large;

           c. fields defines the log type.

filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /logs/LOG_INFO.*.log
  fields: 
    log_type: LOG_INFO
- type: log
  enabled: true
  paths:
    - /logs/LOG_ERROR.*.log
  fields: 
    log_type: LOG_ERROR
- type: log
  enabled: true
  paths:
    - /logs/LOG_FEIGN.*.log
  fields: 
    log_type: LOG_FEIGN
- type: log
  enabled: true
  paths:
    - /logs/LOG_GATEWAY.*.log
  fields: 
    log_type: LOG_GATEWAY
    
setup.template.settings:
  index.number_of_shards: 3
  
output.kafka:
  enabled: true
  hosts: ["192.168.1.3:9092"]
  topic: message-log

3.3 Log Messaging: kafka

The snippet below is a log record collected by filebeat; message is the raw log line, and fields holds the fields defined in the filebeat configuration.

{
  "@timestamp": "2021-03-20T05:40:52.332Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.10.0"
  },
  "message": "2021-03-20 05:40:44.946| INFO [,c1881bf682187216,ba2d7fe1db57a88c,false]|instance-demo|c1881bf682187216|ba2d7fe1db57a88c|172.17.0.5|13040666379|-4501317081998265834|^[{\"appId\":\"miguvideo\",\"createBy\":\"1\",\"createTime\":1615384888518,\"endTime\":1615384841885,\"id\":\"6048d138253fce4584a6281c\",\"jobId\":\"1\",\"mgdbId\":\"900000203\",\"planPlayId\":\"1\",\"poolId\":\"1\",\"startTime\":1615384841885,\"updateFrequency\":2,\"updateNumber\":3,\"upperLimit\":4},{\"appId\":\"miguvideo\",\"createBy\":\"1\",\"createTime\":1615385023913,\"endTime\":1615384841885,\"id\":\"6048d1bf253fce4584a6281d\",\"jobId\":\"2\",\"mgdbId\":\"900000308\",\"planPlayId\":\"2\",\"poolId\":\"2\",\"startTime\":1615384841885,\"updateFrequency\":3,\"updateNumber\":3,\"upperLimit\":2},{\"appId\":\"miguvideo\",\"createBy\":\"1\",\"createTime\":1615385142045,\"endTime\":1615384841885,\"id\":\"6048d236eabb306dfc12f638\",\"jobId\":\"1\",\"mgdbId\":\"900000240\",\"planPlayId\":\"1\",\"poolId\":\"1\",\"startTime\":1615384841885,\"updateFrequency\":2,\"updateNumber\":3,\"upperLimit\":4},{\"appId\":\"203023\",\"createBy\":\"12\",\"createTime\":1615814906128,\"endTime\":1615814873867,\"id\":\"604f60fa64f425261c93a5b8\",\"jobId\":\"45\",\"mgdbId\":\"45\",\"planPlayId\":\"45\",\"poolId\":\"45\",\"startTime\":1615814873867,\"updateFrequency\":45,\"updateNumber\":5,\"upperLimit\":45},{\"appId\":\"203023\",\"createBy\":\"12\",\"createTime\":1615814983938,\"endTime\":1615814873867,\"id\":\"604f6147c5c9846061bae594\",\"jobId\":\"45\",\"mgdbId\":\"45\",\"planPlayId\":\"45\",\"poolId\":\"45\",\"startTime\":1615814873867,\"updateFrequency\":45,\"updateNumber\":5,\"upperLimit\":45},{\"appId\":\"203023\",\"createBy\":\"12\",\"createTime\":1615815163252,\"endTime\":1615814873867,\"id\":\"604f61fbc5c9846061bae595\",\"jobId\":\"45\",\"mgdbId\":\"45\",\"planPlayId\":\"45\",\"poolId\":\"45\",\"startTime\":1615814873867,\"updateFrequency\":45,\"updateNumber\":5,\"upperLimit\":45}]",
  "fields": {
    "log_type": "LOG_INFO"
  },
  "input": {
    "type": "log"
  },
  "ecs": {
    "version": "1.6.0"
  },
  "host": {
    "mac": [
      "02:42:ac:11:00:05"
    ],
    "hostname": "07fca6b5bcc5",
    "architecture": "x86_64",
    "os": {
      "codename": "Core",
      "platform": "centos",
      "version": "7 (Core)",
      "family": "redhat",
      "name": "CentOS Linux",
      "kernel": "5.4.72-microsoft-standard-WSL2"
    },
    "id": "7797686064c141d3a56031c11979690c",
    "name": "07fca6b5bcc5",
    "containerized": true,
    "ip": [
      "172.17.0.5"
    ]
  },
  "agent": {
    "hostname": "07fca6b5bcc5",
    "ephemeral_id": "073624d2-af87-43a5-b252-6e109f561bc2",
    "id": "f1c7a2c3-928f-4a99-af8a-88fee9ba28e2",
    "name": "07fca6b5bcc5",
    "type": "filebeat",
    "version": "7.10.0"
  },
  "log": {
    "offset": 1918337,
    "file": {
      "path": "/logs/LOG_INFO.2021-03-20.instance-demo-9011.log"
    }
  }
}
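Of this envelope, the Storm topology only needs `message` and `fields.log_type`; the bolt in the next section deserializes it with fastjson. As a dependency-free illustration of where the log type sits, it can even be pulled out with a regex (a sketch only; real code should use a JSON parser, as the project does):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LogTypeExtract {
    // Pull fields.log_type out of the filebeat JSON envelope (illustration only)
    static String logType(String json) {
        Matcher m = Pattern.compile("\"log_type\"\\s*:\\s*\"(\\w+)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String envelope = "{\"message\":\"a log line\",\"fields\":{\"log_type\":\"LOG_INFO\"}}";
        System.out.println(logType(envelope));
        // LOG_INFO
    }
}
```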

3.4 Log Parsing + Storage: Storm + ES/Redis/MongoDB

The code below is the Storm bolt that parses the logs and writes them into ES. Note the index name format: applicationName-date, e.g. instance-demo-2021.03.19.

package com.migu.storm.bolt;

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import com.log.config.Constant;
import com.log.enumeration.FeignLogType;
import com.migu.storm.entity.*;
import com.migu.storm.util.DateUtil;
import org.apache.http.HttpHost;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.bulk.BackoffPolicy;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;

import java.util.Date;
import java.util.Map;
import java.util.Properties;
import java.util.function.BiConsumer;

/**
 * Index name format: applicationName-date, e.g. instance-demo-2021.03.19
 * @description Writes logs into ES
 * @author tcm
 * @version 1.0.0
 * @date 2020/12/24 16:17
 **/
public class LogESBolt extends BaseBasicBolt {
    private Properties properties;

    // ES bulk writes: thread-safe batch processor
    private BulkProcessor bulkProcessor;

    public LogESBolt(Properties properties) {
        this.properties = properties;
    }

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        // Fields shared by all log types (the filebeat envelope)
        String logFileBeatStr = tuple.getStringByField("logFileBeatContent");
        LogFileBeatContent logFileBeatContent = JSON.parseObject(logFileBeatStr, LogFileBeatContent.class);
        // Raw log line
        String message = tuple.getStringByField("message");

        // Index - default application name
        Date timestamp = new Date();
        String index = properties.getProperty("OTHER") + "-" + DateUtil.dateToString(timestamp);
        // Split the log line into fields
        String[] logs = message.trim().split("\\|");
        // Info log
        if (Constant.LogType.LOG_INFO.equals(logFileBeatContent.getLogType())) {
            LogInfo logInfo = new LogInfo();
            logInfo.setLogFileBeatContent(logFileBeatContent);
            logInfo.setMessage(message);

            if (logs.length >= 7) {
                timestamp = DateUtil.stringToDate(logs[0].trim());
                logInfo.setTimestamp(DateUtil.dateToString(timestamp, "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"));
                logInfo.setLogLevel(logs[1].trim().split(" ")[0].trim());
                logInfo.setApplicationName(logs[2].trim());
                logInfo.setTraceId(logs[3].trim());
                logInfo.setSpanId(logs[4].trim());
                logInfo.setServerIp(logs[5].trim());
                logInfo.setMobile(logs[6].trim());

                // Index is the application name
                index = logInfo.getApplicationName() + "-" + DateUtil.dateToString(timestamp);
            }

            JSONObject jsonObject = JSON.parseObject(JSON.toJSONString(logInfo));
            bulkProcessor.add(new IndexRequest(index).source(jsonObject));
        }
        // Error log
        else if (Constant.LogType.LOG_ERROR.equals(logFileBeatContent.getLogType())) {
            LogError logError = new LogError();
            logError.setLogFileBeatContent(logFileBeatContent);
            logError.setMessage(message);

            if (logs.length >= 7) {
                timestamp = DateUtil.stringToDate(logs[0].trim());
                logError.setTimestamp(DateUtil.dateToString(timestamp, "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"));
                logError.setLogLevel(logs[1].trim().split(" ")[0].trim());
                logError.setApplicationName(logs[2].trim());
                logError.setTraceId(logs[3].trim());
                logError.setSpanId(logs[4].trim());
                logError.setServerIp(logs[5].trim());
                logError.setMobile(logs[6].trim());

                // Index is the application name
                index = logError.getApplicationName() + "-" + DateUtil.dateToString(timestamp);
            }
            JSONObject jsonObject = JSON.parseObject(JSON.toJSONString(logError));
            bulkProcessor.add(new IndexRequest(index).source(jsonObject));
        }
        // Feign log
        else if (Constant.LogType.LOG_FEIGN.equals(logFileBeatContent.getLogType())) {
            LogCommon logCommon = new LogCommon();
            logCommon.setLogFileBeatContent(logFileBeatContent);
            logCommon.setMessage(message);

            if (logs.length >= 10) {
                timestamp = DateUtil.stringToDate(logs[0].trim());
                logCommon.setTimestamp(DateUtil.dateToString(timestamp, "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"));
                logCommon.setLogLevel(logs[1].trim().split(" ")[0].trim());
                logCommon.setApplicationName(logs[2].trim());
                logCommon.setTraceId(logs[3].trim());
                logCommon.setSpanId(logs[4].trim());
                logCommon.setServerIp(logs[5].trim());

                // Index is the application name
                index = logCommon.getApplicationName() + "-" + DateUtil.dateToString(timestamp);

                String logCommonStr = JSON.toJSONString(logCommon);
                String methodName = logs[6].trim();
                String logType = logs[7].trim();
                // Convert to a feign request
                if (FeignLogType.REQUEST.name().equals(logType)) {
                    LogFeignRequest logFeignRequest = JSON.parseObject(logCommonStr, LogFeignRequest.class);
                    logFeignRequest.setMethodName(methodName);
                    logFeignRequest.setLogType(logType);
                    logFeignRequest.setMethod(logs[8].trim());
                    logFeignRequest.setUrl(logs[9].trim());

                    JSONObject jsonObject = JSON.parseObject(JSON.toJSONString(logFeignRequest));
                    bulkProcessor.add(new IndexRequest(index).source(jsonObject));
                }
                // Convert to a feign response
                else if (FeignLogType.RESPONSE.name().equals(logType)) {
                    LogFeignResponse logFeignResponse = JSON.parseObject(logCommonStr, LogFeignResponse.class);
                    logFeignResponse.setMethodName(methodName);
                    logFeignResponse.setLogType(logType);
                    logFeignResponse.setStatus(logs[8].trim());
                    logFeignResponse.setElapsedTime(logs[9].trim());

                    JSONObject jsonObject = JSON.parseObject(JSON.toJSONString(logFeignResponse));
                    bulkProcessor.add(new IndexRequest(index).source(jsonObject));
                }
            } else {
                JSONObject jsonObject = JSON.parseObject(JSON.toJSONString(logCommon));
                bulkProcessor.add(new IndexRequest(index).source(jsonObject));
            }
        }
        // Gateway log
        else if (Constant.LogType.LOG_GATEWAY.equals(logFileBeatContent.getLogType())) {
            LogCommon logCommon = new LogCommon();
            logCommon.setLogFileBeatContent(logFileBeatContent);
            logCommon.setMessage(message);

            // Gateway request lines have 16 "|"-separated fields (logs[15] is the route IP:port),
            // so require at least 16 to avoid an ArrayIndexOutOfBoundsException below
            if (logs.length >= 16) {
                timestamp = DateUtil.stringToDate(logs[0].trim());
                logCommon.setTimestamp(DateUtil.dateToString(timestamp, "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"));
                logCommon.setLogLevel(logs[1].trim().split(" ")[0].trim());
                logCommon.setApplicationName(logs[2].trim());
                logCommon.setTraceId(logs[3].trim());
                logCommon.setSpanId(logs[4].trim());
                logCommon.setServerIp(logs[5].trim());

                // Index is the application name
                index = logCommon.getApplicationName() + "-" + DateUtil.dateToString(timestamp);

                String logCommonStr = JSON.toJSONString(logCommon);
                String requestId = logs[6].trim();
                String mobile = logs[7].trim();
                String token = logs[8].trim();
                String channel = logs[9].trim();
                String appVersion = logs[10].trim();
                String userAgent = logs[11].trim();
                String logType = logs[12].trim();
                // Convert to a gateway request
                if (FeignLogType.REQUEST.name().equals(logType)) {
                    LogGatewayRequest logGatewayRequest = JSON.parseObject(logCommonStr, LogGatewayRequest.class);
                    logGatewayRequest.setRequestId(requestId);
                    logGatewayRequest.setMobile(mobile);
                    logGatewayRequest.setToken(token);
                    logGatewayRequest.setChannel(channel);
                    logGatewayRequest.setAppVersion(appVersion);
                    logGatewayRequest.setUserAgent(userAgent);
                    logGatewayRequest.setLogType(logType);
                    logGatewayRequest.setMethod(logs[13].trim());
                    logGatewayRequest.setUrl(logs[14].trim());
                    logGatewayRequest.setRouteIp(logs[15].trim());

                    JSONObject jsonObject = JSON.parseObject(JSON.toJSONString(logGatewayRequest));
                    bulkProcessor.add(new IndexRequest(index).source(jsonObject));
                }
                // Convert to a gateway response
                else if (FeignLogType.RESPONSE.name().equals(logType)) {
                    LogGatewayResponse logGatewayResponse = JSON.parseObject(logCommonStr, LogGatewayResponse.class);
                    logGatewayResponse.setRequestId(requestId);
                    logGatewayResponse.setMobile(mobile);
                    logGatewayResponse.setToken(token);
                    logGatewayResponse.setChannel(channel);
                    logGatewayResponse.setAppVersion(appVersion);
                    logGatewayResponse.setUserAgent(userAgent);
                    logGatewayResponse.setLogType(logType);
                    logGatewayResponse.setStatus(logs[13].trim());
                    logGatewayResponse.setElapsedTime(logs[14].trim());

                    JSONObject jsonObject = JSON.parseObject(JSON.toJSONString(logGatewayResponse));
                    bulkProcessor.add(new IndexRequest(index).source(jsonObject));
                }
            } else {
                JSONObject jsonObject = JSON.parseObject(JSON.toJSONString(logCommon));
                bulkProcessor.add(new IndexRequest(index).source(jsonObject));
            }
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {

    }

    /**
     * Preparation before execute()
     * @param stormConf
     * @param context basic information about the Topology
     */
    @Override
    public void prepare(Map stormConf, TopologyContext context) {
        super.prepare(stormConf, context);

        // ES bulk writer: BulkProcessor
        this.bulkProcessor = bulkProcessor();
    }

    /**
     * ES bulk writer: BulkProcessor
     * @return
     */
    public BulkProcessor bulkProcessor() {
        System.out.println("properties: " + JSON.toJSONString(properties));
        // ES client
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost(
                        properties.getProperty("ES_SERVER"),
                        Integer.parseInt(properties.getProperty("ES_PORT")),
                        "http"))
        );

        // BiConsumer functional interface: submits the bulk request asynchronously
        BiConsumer<BulkRequest, ActionListener<BulkResponse>> bulkConsumer =
                (request, bulkListener) -> client.bulkAsync(request, RequestOptions.DEFAULT, bulkListener);

        return BulkProcessor.builder(bulkConsumer, new BulkProcessor.Listener() {

            @Override
            public void beforeBulk(long executionId, BulkRequest request) {
                int i = request.numberOfActions();
                System.out.println("Number of actions in ES bulk request: " + i);
            }

            @Override
            public void afterBulk(long l, BulkRequest bulkRequest, BulkResponse bulkResponse) {
            }

            @Override
            public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
                System.out.println("ES bulk write failed; messages will be re-consumed");
            }

        }) // flush after this many actions
          .setBulkActions(1000)
           // flush after the batch reaches this size
          .setBulkSize(new ByteSizeValue(100, ByteSizeUnit.MB))
           // flush at this fixed interval
          .setFlushInterval(TimeValue.timeValueSeconds(3))
           // number of concurrent requests
          .setConcurrentRequests(2)
           // retry/backoff policy
          .setBackoffPolicy(BackoffPolicy.exponentialBackoff(TimeValue.timeValueMillis(100), 3))
          .build();
    }
}
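The index name the bolt builds, applicationName-yyyy.MM.dd, can be reproduced with the JDK alone (a sketch: the project's `DateUtil.dateToString` helper is assumed to produce this pattern; the `IndexName` class below is illustrative, not project code):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class IndexName {
    // ES index name: applicationName-yyyy.MM.dd, e.g. instance-demo-2021.03.19
    static String indexFor(String applicationName, LocalDate date) {
        return applicationName + "-" + date.format(DateTimeFormatter.ofPattern("yyyy.MM.dd"));
    }

    public static void main(String[] args) {
        System.out.println(indexFor("instance-demo", LocalDate.of(2021, 3, 19)));
        // instance-demo-2021.03.19
    }
}
```

Daily indices like this keep each index small and make retention simple: old logs can be dropped by deleting whole indices.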

The Storm topology for the logs is shown in the figure below:

The existing log indices are shown in the figure below:

3.5 Log Search and Viewing: Kibana

Note: the trace ID is added to each response's headers, so the traceId can be used to retrieve every log entry for that request, and to view order-state-transition and other logs across different instances of the same service.
