Monitoring Microservice API Latency with the ELK Logging Platform

I recently built a message platform, and in the run-up to launch the thing we cared about most was the latency of the push APIs. Our container platform already ships logs continuously to the ELK stack via Filebeat, but those logs are a jumble of everything, so there was no way to filter and aggregate just the API logs we cared about.

After mulling it over for a few days, inspiration struck: why not split the logs in two, separating the miscellaneous day-to-day logs from the API-latency logs we actually want to watch? The latency entries get their own dedicated timecost log.

1. Modify the Logback configuration to define a custom timeCostLog logger and its output format.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>

<configuration scan="true">
    <include resource="org/springframework/boot/logging/logback/base.xml"/>


    <!-- Dedicated rolling-file appender for the API-latency entries -->
    <appender name="TimeCostLogAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <append>true</append>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/timeCost.%d.log</fileNamePattern>
            <maxHistory>90</maxHistory>
        </rollingPolicy>
        <encoder>
            <!-- Raw message only: each line of timeCost.log is one standalone JSON object -->
            <pattern>%msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <logger name="javax.activation" level="WARN"/>
    <!-- The name must match the one passed to LoggerFactory.getLogger(...);
         additivity="false" keeps these entries out of the main application log -->
    <logger name="timeCostLog" additivity="false" level="INFO">
        <appender-ref ref="TimeCostLogAppender"/>
    </logger>

    <!-- https://logback.qos.ch/manual/configuration.html#shutdownHook and https://jira.qos.ch/browse/LOGBACK-1090 -->
    <shutdownHook class="ch.qos.logback.core.hook.DelayingShutdownHook"/>

    <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
        <resetJUL>true</resetJUL>
    </contextListener>

</configuration>
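Application code then obtains this logger from SLF4J by name. A minimal sketch (TimeCostLogDemo is a hypothetical class, for illustration only; the gateway post filter in step 2 does the same thing):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TimeCostLogDemo {
    // The string must match <logger name="timeCostLog"> in logback.xml
    private static final Logger TIME_COST_LOG = LoggerFactory.getLogger("timeCostLog");

    public static void main(String[] args) {
        // With the %msg%n pattern, this argument becomes one raw line in timeCost.log
        TIME_COST_LOG.info("{\"url\":\"/demo\",\"cost\":42}");
    }
}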

 

2. Since this is a microservice setup, we want latency measured starting from the gateway, so two Zuul filters were added there: TimeCostPreFilter and TimeCostPostFilter.

package com.ht.msgcenter.gateway.gateway.timecost;

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import com.netflix.zuul.exception.ZuulException;
import org.springframework.cloud.netflix.zuul.filters.support.FilterConstants;

public class TimeCostPreFilter extends ZuulFilter {
    public static final String START_TIME_KEY = "start_time";

    @Override
    public String filterType() {
        return FilterConstants.PRE_TYPE;
    }

    @Override
    public int filterOrder() {
        return 0;
    }

    /**
     * Decides whether this filter runs; return true so every request is timed.
     */
    @Override
    public boolean shouldFilter() {
        return true;
    }

    @Override
    public Object run() throws ZuulException {
        // Stash the request start time in the Zuul context for the post filter to read
        long startTime = System.currentTimeMillis();
        RequestContext.getCurrentContext().set(START_TIME_KEY, startTime);
        return null;
    }

}

package com.ht.msgcenter.gateway.gateway.timecost;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.ht.msgcenter.gateway.service.dto.timeCostDTO;
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.netflix.zuul.filters.support.FilterConstants;

import java.text.SimpleDateFormat;
import java.util.Date;


public class TimeCostPostFilter extends ZuulFilter {
    // Same context key the pre filter used; same logger name as in logback.xml
    private static final String START_TIME_KEY = TimeCostPreFilter.START_TIME_KEY;
    private final Logger timeCostLog = LoggerFactory.getLogger("timeCostLog");

    @Override
    public String filterType() {
        return FilterConstants.POST_TYPE;
    }

    @Override
    public int filterOrder() {
        return 0;
    }

    @Override
    public boolean shouldFilter() {
        return true;
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        Long startTime = (Long) ctx.get(START_TIME_KEY);
        if (startTime == null) {
            // The pre filter did not run for this request, so there is nothing to time
            return null;
        }
        String url = ctx.getRequest().getRequestURL().toString();
        String method = ctx.getRequest().getMethod();
        int result = ctx.getResponseStatusCode();
        SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        ObjectMapper mapper = new ObjectMapper();
        try {
            // Serialize one JSON object per request; %msg%n in logback.xml keeps it to one line
            String logs = mapper.writeValueAsString(new timeCostDTO(
                    url, df.format(new Date()), method, result,
                    System.currentTimeMillis() - startTime));
            timeCostLog.info(logs);
        } catch (JsonProcessingException e) {
            timeCostLog.warn("failed to serialize time-cost entry", e);
        }
        return null;
    }

}

package com.ht.msgcenter.gateway.config;

import com.ht.msgcenter.gateway.gateway.timecost.TimeCostPostFilter;
import com.ht.msgcenter.gateway.gateway.timecost.TimeCostPreFilter;
import com.netflix.zuul.ZuulFilter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TimeCostFilterConfig {
    @Bean
    public ZuulFilter timeCostPreFilter() {
        return new TimeCostPreFilter();
    }

    @Bean
    public ZuulFilter timeCostPostFilter() {
        return new TimeCostPostFilter();
    }
}
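The timeCostDTO the post filter serializes is not shown in the original post. A minimal sketch of what it might look like, assuming the constructor argument order used in run() and field names chosen for illustration (the lowercase class name follows the original import, though TimeCostDTO would be more conventional):

package com.ht.msgcenter.gateway.service.dto;

public class timeCostDTO {
    private String url;     // full request URL
    private String time;    // formatted completion timestamp
    private String method;  // HTTP method
    private int result;     // response status code
    private long cost;      // elapsed milliseconds, gateway in to gateway out

    public timeCostDTO(String url, String time, String method, int result, long cost) {
        this.url = url;
        this.time = time;
        this.method = method;
        this.result = result;
        this.cost = cost;
    }

    // Getters are required for Jackson serialization
    public String getUrl() { return url; }
    public String getTime() { return time; }
    public String getMethod() { return method; }
    public int getResult() { return result; }
    public long getCost() { return cost; }
}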

Once the project starts up with this in place, it produces the two kinds of logs.

3. Our project uses Filebeat for log collection; the exact configuration is below.

Note that there are two inputs: gateway.log is the original stream of miscellaneous logs, while timeCost*.log is our custom API-latency log.

json.keys_under_root tells Filebeat to parse each line's JSON keys as top-level fields; otherwise the whole line is treated as a single string (see the sample line after the config).

In the output section, each source value is routed to its own Elasticsearch index.

filebeat.inputs:
- type: log
  enabled: true   
  paths:
     - /logs/gateway.log
  encoding: utf8
  fields:
     source: app

- type: log
  enabled: true   
  paths:
     - /logs/timeCost*.log
  encoding: utf8
  json.keys_under_root: true
  json.overwrite_keys: true
  fields:
     source: timecost
#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false


setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"


#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.


output.elasticsearch:
  hosts: ["10.1.9.29:9201","10.1.9.77:9200","10.1.9.78:9200"]
  index: "cs-msgcenter-%{[fields.source]}-%{+yyyy.MM.dd}"
  indices:
    - index: "cs-msgcenter-app-%{+yyyy.MM.dd}"
      when.equals:
        fields:
          source: app
    - index: "cs-msgcenter-timecosts-%{+yyyy.MM.dd}"
      when.equals:
        fields:
          source: timecost
  
setup.template.name: "cs-msgcenter"
setup.template.pattern: "cs-msgcenter-*"
#setup.template.enabled: false
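To make the json.* settings concrete: given the DTO sketch above, one line of timeCost.log would look something like the following (the values are made up for illustration). With json.keys_under_root enabled, url, time, method, result, and cost become top-level fields in the Elasticsearch document instead of one opaque string.

{"url":"http://gateway/msg/push","time":"2019-08-01 10:15:30.123","method":"POST","result":200,"cost":87}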

Once Filebeat has shipped the logs into Elasticsearch, we can log in to Kibana and query them.

Then we can build some dashboards on top of them to watch the APIs' TPS in real time.
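As a starting point for such a board, a date_histogram aggregation over the timecost index buckets requests per second. A minimal sketch of the query (index name taken from the Filebeat output above; @timestamp is added by Filebeat, and the interval syntax assumes Elasticsearch 6.x):

GET cs-msgcenter-timecosts-*/_search
{
  "size": 0,
  "aggs": {
    "tps": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "1s"
      }
    }
  }
}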

 
