Sleuth + Zipkin + Logback: Distributed Tracing with Persistent Monitoring (Detailed Guide)

Part 1: Sleuth Distributed Tracing

1. Add the dependencies

        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-sleuth</artifactId>
            <version>3.1.1</version>
        </dependency>
        
         <!-- Do not import the plain access-log dependency on its own; it prevents the access log from being written -->
        <dependency>
            <groupId>net.rakugakibox.spring.boot</groupId>
            <artifactId>logback-access-spring-boot-starter</artifactId>
            <version>2.7.1</version>
        </dependency>
        
         <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.2.3</version>
        </dependency>

2. TraceConfig configuration

import brave.Tracer;
import brave.Tracing;
// Note: the SamplerProperties package differs across Sleuth versions;
// in Sleuth 3.x it lives under org.springframework.cloud.sleuth.autoconfig
import org.springframework.cloud.sleuth.autoconfig.SamplerProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class TraceConfig {

    // Sampling settings
    @Primary
    @Bean
    public SamplerProperties samplerProperties() {
        SamplerProperties samplerProperties = new SamplerProperties();
        // Sampling probability; defaults to 0.1, i.e. 10% of requests are sampled
        samplerProperties.setProbability(1.0F);
        // Rate limit: at most x samples per second
        samplerProperties.setRate(100);
        return samplerProperties;
    }

    // Builds the Tracer; without this bean, the traceId in the response
    // and the traceId in the log files may differ
    @Primary
    @Bean
    public Tracer tracer() {
        return Tracing.newBuilder()
                .localServiceName("untraced-service")
                .build()
                .tracer();
    }
}
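To make the probability setting above concrete, here is a minimal, self-contained sketch of how probability-based sampling behaves. This is only an illustration, not Sleuth's actual sampler; the class name ProbabilitySampler is invented for the example:

```java
import java.util.Random;

// Illustrative sketch: each new trace is kept with the configured
// probability, so 1.0 samples every request and 0.1 roughly one in ten.
public class ProbabilitySampler {
    private final float probability;
    private final Random random;

    public ProbabilitySampler(float probability, Random random) {
        this.probability = probability;
        this.random = random;
    }

    // Decide whether a new trace should be recorded.
    public boolean isSampled() {
        return random.nextFloat() < probability;
    }
}
```

With setProbability(1.0F) as in TraceConfig, every request is traced.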

3. Logging configuration

3.1. Access log - request logging

3.1.1. XML configuration
<configuration>
  <!-- it is always a good idea to activate OnConsoleStatusListener -->
  <statusListener class="ch.qos.logback.core.status.OnConsoleStatusListener"/>
  <springProperty scope="context" name="applicationName" source="spring.application.name"/>

  <property name="ACCESS_LOG_FILE"
            value="${ACCESS_LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/access.log}"/>
  <conversionRule conversionWord="trace" converterClass="me.xxx.config.log.AccessTraceIdConverter" />

  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${ACCESS_LOG_FILE}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${ACCESS_LOG_FILE}.%d{yyyy-MM-dd}.log.zip</fileNamePattern>
    </rollingPolicy>
    <encoder>
      <pattern>
        [%t{yyyy-MM-dd'T'HH:mm:ss.SSSX}]|%trace|%localIP|%clientHost|%i{X-Real-IP}|%user|"%requestURL"|%statusCode|%bytesSent|%elapsedTime|"%i{Referer}"|"%i{User-Agent}"|%requestContent|
      </pattern>
    </encoder>
  </appender>

  <appender-ref ref="FILE"/>
</configuration>

3.1.2. AccessTraceIdConverter implementation
import brave.propagation.TraceContext;
import ch.qos.logback.access.pattern.AccessConverter;
import ch.qos.logback.access.spi.IAccessEvent;
import org.springframework.util.StringUtils;

public class AccessTraceIdConverter extends AccessConverter {
    @Override
    public String convert(IAccessEvent accessEvent) {
        // Brave's servlet filter stores the current TraceContext as a request attribute
        TraceContext spanContext = (TraceContext) accessEvent.getRequest().getAttribute(TraceContext.class.getName());

        String traceId = null;
        String spanId = null;

        if (null != spanContext) {
            traceId = spanContext.traceIdString();
            spanId = spanContext.spanIdString();
        }

        StringBuilder stringBuilder = new StringBuilder("[");
        stringBuilder.append(StringUtils.isEmpty(traceId) ? "----------------" : traceId);
        stringBuilder.append(",");
        stringBuilder.append(StringUtils.isEmpty(spanId) ? "----------------" : spanId);
        stringBuilder.append("]");
        return stringBuilder.toString();
    }
}

3.2. App log - service-internal logging

3.2.1. XML configuration
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
  <!--  <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>-->
  <jmxConfigurator/>
  <statusListener class="ch.qos.logback.core.status.OnConsoleStatusListener"/>
  <springProperty scope="context" name="applicationName" source="spring.application.name"/>
  <conversionRule conversionWord="trace" converterClass="me.xxx.config.log.ClassTraceIdConverter" />

  <property name="LOG_FILE"
    value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/app.log}"/>

  <appender name="APP-LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_FILE}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
      <FileNamePattern>${LOG_PATH}/app.log.%d{yyyy-MM-dd}.%i.log</FileNamePattern>
      <maxFileSize>100MB</maxFileSize>
      <maxHistory>30</maxHistory>
    </rollingPolicy>

    <encoder>
      <pattern>
        [%date{yyyy-MM-dd'T'HH:mm:ss.SSSX}]|%trace|%-4level|%5.5thread|%20.20logger{36}:%4L|%msg%n
      </pattern>
    </encoder>

  </appender>

  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <Pattern>
        [%date{yyyy-MM-dd'T'HH:mm:ss.SSSX}]%trace|%-4level|%5.5thread|%20.20logger{36}:%4L|%msg%n
      </Pattern>
    </encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="APP-LOG"/>
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>

3.2.2. ClassTraceIdConverter implementation
import java.util.Map;

import ch.qos.logback.classic.pattern.ClassicConverter;
import ch.qos.logback.classic.spi.ILoggingEvent;
import org.springframework.util.StringUtils;

public class ClassTraceIdConverter extends ClassicConverter {

    @Override
    public String convert(ILoggingEvent iLoggingEvent) {
        // Sleuth puts traceId and spanId into the logging MDC
        Map<String, String> mdcMap = iLoggingEvent.getMDCPropertyMap();
        String traceId = mdcMap.get("traceId");
        if (StringUtils.isEmpty(traceId)) {
            return "";
        }

        StringBuilder stringBuilder = new StringBuilder("[");
        stringBuilder.append(traceId);
        stringBuilder.append(",");

        String spanId = mdcMap.get("spanId");
        if (StringUtils.isEmpty(spanId)) {
            stringBuilder.append("----------------");
        } else {
            stringBuilder.append(spanId);
        }

        stringBuilder.append("]");
        return stringBuilder.toString();
    }

}
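Both converters emit the same bracketed [traceId,spanId] format, with sixteen dashes standing in for a missing id. That formatting can be factored into a pure helper and verified in isolation; the TraceIdFormat class below is illustrative, not part of the original code:

```java
// Pure helper reproducing the converters' output format:
// "[traceId,spanId]", with 16 dashes for any missing id.
public class TraceIdFormat {
    private static final String PLACEHOLDER = "----------------";

    public static String format(String traceId, String spanId) {
        StringBuilder sb = new StringBuilder("[");
        sb.append(traceId == null || traceId.isEmpty() ? PLACEHOLDER : traceId);
        sb.append(",");
        sb.append(spanId == null || spanId.isEmpty() ? PLACEHOLDER : spanId);
        sb.append("]");
        return sb.toString();
    }
}
```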

4. LogBackConfig configuration

Writes the traceId into the response header.

import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import brave.propagation.TraceContext;
import ch.qos.logback.access.servlet.TeeFilter;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LogBackConfig {
    @Bean
    public FilterRegistrationBean<TeeFilter> filterRegistrationBean() {
        FilterRegistrationBean<TeeFilter> bean = new FilterRegistrationBean<>();
        bean.setFilter(new MyTeeFilter());
        bean.addUrlPatterns("/*");
        return bean;
    }
}

class MyTeeFilter extends TeeFilter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain filterChain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;

        // Only set the header if the caller did not already propagate one
        if (null == req.getHeader("X-B3-TraceId")) {
            HttpServletResponse res = (HttpServletResponse) response;

            TraceContext spanContext = (TraceContext) request.getAttribute(TraceContext.class.getName());
            if (spanContext != null) {
                String traceId = spanContext.traceIdString();
                if (null != traceId) {
                    res.addHeader("X-B3-TraceId", traceId);
                }
            }
        }

        super.doFilter(request, response, filterChain);
    }

}
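The X-B3-TraceId value set by the filter is, per the B3 propagation format, a 16- or 32-character lower-hex string (a 64- or 128-bit trace id). A small validator sketch, useful for sanity-checking headers on the caller side; the B3TraceId class is invented for this example:

```java
// Validates a B3 trace id: 16 or 32 lower-hex characters
// (64-bit or 128-bit trace ids).
public class B3TraceId {
    public static boolean isValid(String id) {
        if (id == null || (id.length() != 16 && id.length() != 32)) {
            return false;
        }
        for (int i = 0; i < id.length(); i++) {
            char c = id.charAt(i);
            boolean lowerHex = (c >= '0' && c <= '9') || (c >= 'a' && c <= 'f');
            if (!lowerHex) {
                return false;
            }
        }
        return true;
    }
}
```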

5. Registering the configuration classes

LogBackConfig and TraceConfig may not be picked up automatically. If they are not injected, create a META-INF folder under the resources directory, add a spring.factories file to it, and configure it as follows:

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
  me.xxx.config.log.LogBackConfig,\
  me.xxx.config.log.TraceConfig

Part 2: Zipkin Monitoring

Since the sleuth dependency is already imported, and Sleuth ships with Zipkin support, there is no need to import a new tracing package.

1. Start Zipkin

Download the server jar from: https://repo1.maven.org/maven2/io/zipkin/zipkin-server/

Run it from a terminal: java -jar zipkin.jar

Open the UI in a browser: http://127.0.0.1:9411/zipkin/

2. Integrate Zipkin into the service

2.1. Add the dependency

        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-sleuth-zipkin</artifactId>
            <version>3.1.1</version>
        </dependency>

2.2. Properties configuration

# Sampling probability: defaults to 0.1, i.e. 10% of requests are sampled; set to 100% here
spring.sleuth.sampler.probability=1.0
# Report data to Zipkin over HTTP
spring.zipkin.sender.type=web
# Zipkin collector address; do not use localhost, or reporting may fail
spring.zipkin.base-url=http://127.0.0.1:9411/

After calling an endpoint, the trace now shows up in the Zipkin console.

3. Persistence

3.1. Persisting to MySQL

3.1.1. Create the tables
CREATE TABLE IF NOT EXISTS zipkin_spans (
  `trace_id_high` BIGINT NOT NULL DEFAULT 0 COMMENT 'If non zero, this means the trace uses 128 bit traceIds instead of 64 bit',
  `trace_id` BIGINT NOT NULL,
  `id` BIGINT NOT NULL,
  `name` VARCHAR(255) NOT NULL,
  `remote_service_name` VARCHAR(255),
  `parent_id` BIGINT,
  `debug` BIT(1),
  `start_ts` BIGINT COMMENT 'Span.timestamp(): epoch micros used for endTs query and to implement TTL',
  `duration` BIGINT COMMENT 'Span.duration(): micros used for minDuration and maxDuration query',
  PRIMARY KEY (`trace_id_high`, `trace_id`, `id`)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci;

ALTER TABLE zipkin_spans ADD INDEX(`trace_id_high`, `trace_id`) COMMENT 'for getTracesByIds';
ALTER TABLE zipkin_spans ADD INDEX(`name`) COMMENT 'for getTraces and getSpanNames';
ALTER TABLE zipkin_spans ADD INDEX(`remote_service_name`) COMMENT 'for getTraces and getRemoteServiceNames';
ALTER TABLE zipkin_spans ADD INDEX(`start_ts`) COMMENT 'for getTraces ordering and range';

CREATE TABLE IF NOT EXISTS zipkin_annotations (
  `trace_id_high` BIGINT NOT NULL DEFAULT 0 COMMENT 'If non zero, this means the trace uses 128 bit traceIds instead of 64 bit',
  `trace_id` BIGINT NOT NULL COMMENT 'coincides with zipkin_spans.trace_id',
  `span_id` BIGINT NOT NULL COMMENT 'coincides with zipkin_spans.id',
  `a_key` VARCHAR(255) NOT NULL COMMENT 'BinaryAnnotation.key or Annotation.value if type == -1',
  `a_value` BLOB COMMENT 'BinaryAnnotation.value(), which must be smaller than 64KB',
  `a_type` INT NOT NULL COMMENT 'BinaryAnnotation.type() or -1 if Annotation',
  `a_timestamp` BIGINT COMMENT 'Used to implement TTL; Annotation.timestamp or zipkin_spans.timestamp',
  `endpoint_ipv4` INT COMMENT 'Null when Binary/Annotation.endpoint is null',
  `endpoint_ipv6` BINARY(16) COMMENT 'Null when Binary/Annotation.endpoint is null, or no IPv6 address',
  `endpoint_port` SMALLINT COMMENT 'Null when Binary/Annotation.endpoint is null',
  `endpoint_service_name` VARCHAR(255) COMMENT 'Null when Binary/Annotation.endpoint is null'
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci;

ALTER TABLE zipkin_annotations ADD UNIQUE KEY(`trace_id_high`, `trace_id`, `span_id`, `a_key`, `a_timestamp`) COMMENT 'Ignore insert on duplicate';
ALTER TABLE zipkin_annotations ADD INDEX(`trace_id_high`, `trace_id`, `span_id`) COMMENT 'for joining with zipkin_spans';
ALTER TABLE zipkin_annotations ADD INDEX(`trace_id_high`, `trace_id`) COMMENT 'for getTraces/ByIds';
ALTER TABLE zipkin_annotations ADD INDEX(`endpoint_service_name`) COMMENT 'for getTraces and getServiceNames';
ALTER TABLE zipkin_annotations ADD INDEX(`a_type`) COMMENT 'for getTraces and autocomplete values';
ALTER TABLE zipkin_annotations ADD INDEX(`a_key`) COMMENT 'for getTraces and autocomplete values';
ALTER TABLE zipkin_annotations ADD INDEX(`trace_id`, `span_id`, `a_key`) COMMENT 'for dependencies job';

CREATE TABLE IF NOT EXISTS zipkin_dependencies (
  `day` DATE NOT NULL,
  `parent` VARCHAR(255) NOT NULL,
  `child` VARCHAR(255) NOT NULL,
  `call_count` BIGINT,
  `error_count` BIGINT,
  PRIMARY KEY (`day`, `parent`, `child`)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci;
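One practical note for querying these tables by hand: trace_id is a signed 64-bit BIGINT, while logs and the Zipkin UI show trace ids as lower-hex strings, so the hex id must be converted before it can appear in a WHERE clause. A sketch of the conversion; the ZipkinIds helper is invented for this example:

```java
// Converts a hex trace id, as shown in logs or the Zipkin UI, into the
// signed 64-bit value stored in zipkin_spans.trace_id. For 128-bit
// (32-character) ids, the low 64 bits go into trace_id and the high
// 64 bits into trace_id_high.
public class ZipkinIds {
    public static long hexToId(String hex) {
        String low64 = hex.length() == 32 ? hex.substring(16) : hex;
        // parseUnsignedLong maps values above Long.MAX_VALUE onto
        // negative longs, matching MySQL's signed BIGINT storage
        return Long.parseUnsignedLong(low64, 16);
    }
}
```

The resulting value can then be used directly, e.g. SELECT * FROM zipkin_spans WHERE trace_id = <converted value>.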
3.1.2. Startup (choose one)
3.1.2.1. Start with java -jar
java -jar zipkin-server-2.24.3-exec.jar --STORAGE_TYPE=mysql --MYSQL_HOST=127.0.0.1 --MYSQL_TCP_PORT=3306 --MYSQL_DB=zipkin --MYSQL_USER=root --MYSQL_PASS=123456
3.1.2.2. Start with Docker
Option 1: build an image from your own jar

Step 1: write a Dockerfile

# Base image
FROM  openjdk:8-jre
# Author (change as needed)
MAINTAINER cato
# Mount point (change as needed)
VOLUME /opt/software/zipkin/
# Create the directory (keep consistent with the volume above)
RUN mkdir -p /opt/software/zipkin/
# Working directory (keep consistent with the above)
WORKDIR /opt/software/zipkin/

COPY zipkin-server-2.24.3-exec.jar /opt/software/zipkin/

# Start the server; zipkin-server-2.24.3-exec.jar is the jar you uploaded to the host
ENTRYPOINT ["java","-jar","zipkin-server-2.24.3-exec.jar"]

Step 2: build the image (note that the trailing dot after the space is required)

docker build -t zipkin:2.24.3 .

Step 3: run the container (note: inside a container, 127.0.0.1 refers to the container itself, so MYSQL_HOST should point at an address from which the database is actually reachable)

docker run \
--name zipkin -d \
--restart=always \
-p 9411:9411 \
-e MYSQL_USER=root \
-e MYSQL_PASS=123456 \
-e MYSQL_HOST=127.0.0.1 \
-e STORAGE_TYPE=mysql \
-e MYSQL_DB=zipkin \
-e MYSQL_TCP_PORT=3306 \
zipkin:2.24.3
Option 2: pull the official image from Docker Hub

If you are prompted to log in, sign in to the Docker registry first (not covered here).

Start it (the image is downloaded automatically):

docker run \
--name zipkin -d \
--restart=always \
-p 9411:9411 \
-e MYSQL_USER=root \
-e MYSQL_PASS=123456 \
-e MYSQL_HOST=127.0.0.1 \
-e STORAGE_TYPE=mysql \
-e MYSQL_DB=zipkin \
-e MYSQL_TCP_PORT=3306 \
openzipkin/zipkin:2.24.3

3.2. Persisting to Elasticsearch

docker run \
--name zipkin -d \
-p 9411:9411 \
--restart=always \
-e STORAGE_TYPE=elasticsearch \
-e ES_HOSTS=localhost:9200 \
openzipkin/zipkin:2.24.3

3.3. Reporting through RabbitMQ

Note that RabbitMQ acts as a transport rather than a storage backend: clients publish spans to the queue and the Zipkin server consumes them. Combine it with one of the storage options above if you need real persistence.

docker  run \
--name zipkin -d \
--restart=always \
-p 9411:9411 \
-e RABBIT_ADDRESSES=162.14.115.18:5672 \
-e RABBIT_USER=admin \
-e RABBIT_PASSWORD=admin \
openzipkin/zipkin:2.24.3
