Spring Cloud in Practice: Integrating Sleuth + Zipkin + ELK

Preface

  • What is Sleuth

    Sleuth makes it possible to trace a single user request through the entire distributed system (covering data collection, transport, storage, analysis, and visualization). By capturing this tracing data you can build a view of the whole call chain across your microservices, which makes it a key tool for debugging and monitoring them (see the sketch after this list).

  • What is Zipkin

    Zipkin is an open-source distributed tracing system. Each service reports timing data to Zipkin, and Zipkin uses the call relationships to generate a dependency graph in the Zipkin UI.

  • What is ELK

    A log analysis stack made up of Elasticsearch, Logstash, and Kibana.
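
To make this concrete, the sketch below shows how the identifiers Sleuth attaches to each request can be read at runtime. It is a minimal example, assuming Spring Cloud Sleuth 2.x with Brave on the classpath (which auto-configures a brave.Tracer bean); TraceInfoController is a hypothetical class, not part of the projects built later.

import brave.Tracer;
import brave.propagation.TraceContext;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TraceInfoController {

    private final Tracer tracer;

    public TraceInfoController(Tracer tracer) {
        this.tracer = tracer;
    }

    @GetMapping("/trace-info")
    public String traceInfo() {
        // currentSpan() may be null if no trace is active on this thread
        if (tracer.currentSpan() == null) {
            return "no active trace";
        }
        TraceContext context = tracer.currentSpan().context();
        // The traceId is shared by every service in a call chain; the spanId identifies one unit of work
        return "traceId=" + context.traceIdString()
                + ", spanId=" + Long.toHexString(context.spanId());
    }
}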

(1) Building the zipkin-server Service

1.1 Official website

Download the pre-built executable jar from the official Zipkin website.

1.2 Building zipkin-server by hand

Maven dependencies

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>springcloud-demo-dec</artifactId>
        <groupId>com.example</groupId>
        <version>1.0.0-SNAPSHOT</version>
        <relativePath>../../pom.xml</relativePath>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>zipkin-server</artifactId>

    <dependencies>
        <dependency>
            <groupId>io.zipkin.java</groupId>
            <artifactId>zipkin-server</artifactId>
            <version>2.8.4</version>
        </dependency>

        <dependency>
            <groupId>io.zipkin.java</groupId>
            <artifactId>zipkin-autoconfigure-ui</artifactId>
            <version>2.8.4</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <mainClass>com.example.springcloud.ZipkinApplication</mainClass>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>repackage</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

</project>

application.yml

spring:
  application:
    name: zipkin-server
  main:
    allow-bean-definition-overriding: true
server:
  port: 62100
management:
  metrics:
    web:
      server:
        auto-time-requests: false # Without this setting, accessing the web UI throws "There is already an existing meter named 'http_server_requests_seconds'"

Startup class

@EnableZipkinServer
@SpringBootApplication
public class ZipkinApplication {
    public static void main(String[] args) {
        new SpringApplicationBuilder(ZipkinApplication.class)
                .web(WebApplicationType.SERVLET)
                .run(args);
    }
}

Starting and accessing Zipkin

java -jar zipkin-server-${version}.jar

http://localhost:62100/zipkin/

(2) Building the Sleuth Services

2.1 Creating the projects

  • sleuthTrace1
    • SleuthTrace1Application
    • application.yml
    • logback-spring.xml
  • sleuthTrace2
    • Once sleuthTrace1 is ready, copy and paste it, rename it sleuthTrace2, and change every A in it to B (a minimal sketch of the result follows the startup class below)

Maven dependencies

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>springcloud-demo-dec</artifactId>
        <groupId>com.example</groupId>
        <version>1.0.0-SNAPSHOT</version>
        <relativePath>../../pom.xml</relativePath>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <packaging>jar</packaging>
    <artifactId>sleuth-traceA</artifactId>

    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-sleuth</artifactId>
        </dependency>

        <!-- Zipkin -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-zipkin</artifactId>
        </dependency>

</project>

application.yml

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:20000/eureka/

info:
  app:
    description: test
    name: sleuth-traceA

logging:
  file: ${spring.application.name}.log

management:
  endpoint:
    env:
      enabled: false
    health:
      show-details: always
  endpoints:
    web:
      exposure:
        include: '*'

server:
  port: 62000

spring:
  application:
    name: sleuth-traceA
  zipkin:
    base-url: http://localhost:62100 # Address of the Zipkin server

---
# Sleuth sampling rate
spring:
  sleuth:
    sampler:
      probability: 1 # 1 = 100%; to sample 50% of requests, use 0.5
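
The same sampling behaviour can also be set in code instead of YAML. Below is a minimal sketch, assuming Spring Cloud Sleuth 2.x with Brave (SamplerConfig is a hypothetical class name): a Sampler bean that exports every span, which is equivalent to probability: 1.

import brave.sampler.Sampler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SamplerConfig {

    // Sample (and report to Zipkin) every span, i.e. spring.sleuth.sampler.probability=1.0
    @Bean
    public Sampler defaultSampler() {
        return Sampler.ALWAYS_SAMPLE;
    }
}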

logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<!-- This configuration saves log messages of different levels to different files -->
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml" />

    <springProperty scope="context" name="springAppName"
                    source="spring.application.name" />

    <!-- Log output location -->
    <property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}" />

    <!-- Log pattern -->
    <!-- Required for Sleuth output: %clr(${LOG_LEVEL_PATTERN:-%5p}) -->
    <property name="CONSOLE_LOG_PATTERN"
              value="%clr(%d{HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}" />

    <!-- Console output -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
        <!-- Log output encoding -->
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- Log output level -->
    <root level="INFO">
        <appender-ref ref="console" />
    </root>

</configuration>

Startup class

@Slf4j
@RestController
@EnableDiscoveryClient
@SpringBootApplication
public class SleuthTrace1Application {

    @Bean
    @LoadBalanced
    public RestTemplate lb() {
        return new RestTemplate();
    }

    @Autowired
    private RestTemplate restTemplate;

    @GetMapping(value = "/traceA")
    public String traceA() {
        log.info("-------Trace A");
        return restTemplate.getForEntity("http://sleuth-traceB/traceB", String.class).getBody();
    }

    public static void main(String[] args) {
        new SpringApplicationBuilder(SleuthTrace1Application.class)
                .web(WebApplicationType.SERVLET)
                .run(args);
    }

}
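
For reference, here is a minimal sketch of what SleuthTrace2Application could look like after the copy-and-rename described in 2.1 (the body is an assumption; imports are omitted exactly as in the SleuthTrace1Application snippet above, and the endpoint simply logs and returns a string):

@Slf4j
@RestController
@EnableDiscoveryClient
@SpringBootApplication
public class SleuthTrace2Application {

    // Called by sleuth-traceA through the load-balanced RestTemplate as http://sleuth-traceB/traceB
    @GetMapping(value = "/traceB")
    public String traceB() {
        log.info("-------Trace B");
        return "traceB";
    }

    public static void main(String[] args) {
        new SpringApplicationBuilder(SleuthTrace2Application.class)
                .web(WebApplicationType.SERVLET)
                .run(args);
    }

}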

2.2 Viewing the trace in the Zipkin UI

  • EurekaServerApplication :20000/ (create an eureka-server yourself)
  • SleuthTrace1Application :62000/
  • SleuthTrace2Application :62001/
  • GET http://localhost:62001/traceB | check the log
  • GET http://localhost:62000/traceA | check the log; some of the strings printed by the two services will be identical (they share the same trace ID)

Trace information after requesting traceA

[Screenshot: the Zipkin UI trace view for the traceA request]

(3) Integrating Sleuth with ELK

By querying Kibana for the traceId shared by the service-to-service calls, you can retrieve the logs for the entire call chain.

For this demo, ELK is run as a single all-in-one container.

Minor adjustments to the Sleuth projects

Add the following dependency to both sleuth-traceA and sleuth-traceB:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.2</version>
</dependency>

Add the Logstash configuration to the logback-spring.xml of both sleuth-traceA and sleuth-traceB:

<!-- Logstash -->
<!-- Appender that ships JSON-formatted log output to Logstash -->
<appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>192.168.8.246:5044</destination>
    <!-- Log output encoding -->
    <encoder
            class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <timestamp>
                <timeZone>UTC</timeZone>
            </timestamp>
            <pattern>
                <pattern>
                    {
                    "severity": "%level",
                    "service": "${springAppName:-}",
                    "trace": "%X{X-B3-TraceId:-}",
                    "span": "%X{X-B3-SpanId:-}",
                    "exportable": "%X{X-Span-Export:-}",
                    "pid": "${PID:-}",
                    "thread": "%thread",
                    "class": "%logger{40}",
                    "rest": "%message"
                    }
                </pattern>
            </pattern>
        </providers>
    </encoder>
</appender>

<!-- Log output level -->
<root level="INFO">
    <appender-ref ref="console" />
    <appender-ref ref="logstash" />
</root>

Accessing Kibana

http://localhost:5601/

Discover --> select '*'

Request and query

  1. Clear the console output
  2. Start the services
    1. EurekaServerApplication
    2. SleuthTrace2Application
    3. SleuthTrace1Application
  3. Request http://localhost:62000/traceA
  4. Copy the traceId from the console output and search for it on the Kibana page
    1. Exact query on key and value: "trace: 32deeb6a183f1e71"
