Spring Cloud microservice distributed tracing and log collection
-
Dependency versions

- Spring Boot 2.1.3.RELEASE
- Spring Cloud Greenwich.SR1
- RabbitMQ 3.7.10
- Elasticsearch 6.7.0 (download link)
- Logstash 6.7.0 (download link)
- Kibana 6.7.0 (download link)
- zipkin-server-2.9.4-exec.jar (download link)

-
Implementing distributed tracing (Sleuth, Zipkin, RabbitMQ)
2.1 Add dependencies to the microservices (zuul, service_client)

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<!-- To use Zipkin's default (HTTP) reporting instead of RabbitMQ: -->
<!--
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>
-->
<!-- Collect trace data asynchronously through RabbitMQ -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-rabbit</artifactId>
</dependency>
```
2.2 Add configuration to the microservices (bootstrap.yml)

```yaml
spring:
  rabbitmq:
    host: localhost
    username: guest
    password: guest
    port: 5672
  zipkin:
    # When traces are reported asynchronously through RabbitMQ,
    # base-url does not need to be configured
    sender:
      type: rabbit
    # base-url: http://localhost:9411
  sleuth:
    sampler:
      probability: 1.0  # trace sampling rate (0.1–1.0)
```
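The `probability` value controls what fraction of requests gets traced. A minimal sketch of how a probability-based sampling decision works (simplified for illustration; Sleuth's actual sampler implementation differs in its internals):

```java
import java.util.Random;

// Simplified probability sampler: a request is traced when a uniform
// random draw falls below the configured probability.
public class SamplerSketch {
    private final double probability; // 0.0 = trace nothing, 1.0 = trace everything
    private final Random random = new Random();

    public SamplerSketch(double probability) {
        this.probability = probability;
    }

    public boolean isSampled() {
        // nextDouble() is in [0.0, 1.0), so probability 1.0 always samples
        return random.nextDouble() < probability;
    }

    public static void main(String[] args) {
        SamplerSketch always = new SamplerSketch(1.0);
        SamplerSketch never = new SamplerSketch(0.0);
        for (int i = 0; i < 100; i++) {
            if (!always.isSampled()) throw new AssertionError("1.0 must sample every request");
            if (never.isSampled()) throw new AssertionError("0.0 must sample nothing");
        }
        System.out.println("sampler ok");
    }
}
```

With `probability: 1.0` as configured above, every request produces a trace, which is convenient for a demo but usually lowered in production.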
2.3 Start the zuul and service_client microservices
2.4 Run zipkin-server

```shell
java -jar zipkin-server-2.9.4-exec.jar --zipkin.collector.rabbitmq.addresses=localhost
```
2.5 Open the Zipkin UI (http://localhost:9411 by default) to view the traces
-
Writing trace data to Elasticsearch and viewing it in Kibana (Sleuth, Zipkin, RabbitMQ, Elasticsearch, Kibana)
3.1 Stop the zipkin-server process started earlier
3.2 Edit config/elasticsearch.yml, uncomment the lines below, then start Elasticsearch (Windows environment)

```yaml
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: 127.0.0.1
# Set a custom port for HTTP:
http.port: 9200
```
3.3 Edit config/kibana.yml, uncomment the lines below, then start Kibana (Windows environment)

```yaml
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "localhost"
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://localhost:9200"]
```
3.4 Run zipkin-server with Elasticsearch storage

```shell
java -jar zipkin-server-2.9.4-exec.jar --zipkin.collector.rabbitmq.addresses=localhost --STORAGE_TYPE=elasticsearch --ES_HOSTS=http://127.0.0.1:9200
```
3.5 Open the Kibana UI (http://localhost:5601 by default)
3.6 Under Management, create an index pattern for the zipkin index; the trace records are then visible
-
Writing microservice logs to ELK: complete log records can be queried by trace_id and span_id (request flow + info/error log collection)
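The queries in this section key on Sleuth's B3 identifiers (`X-B3-TraceId`, `X-B3-SpanId`), where a 64-bit id is encoded as 16 lower-hex characters. For reference, a sketch of generating an id of that shape (illustration only, not Sleuth's actual generator):

```java
import java.util.Random;

public class B3IdSketch {
    // Encode a random 64-bit value as 16 lower-hex characters,
    // the same shape as an X-B3-TraceId / X-B3-SpanId header value.
    public static String newId(Random random) {
        return String.format("%016x", random.nextLong());
    }

    public static void main(String[] args) {
        System.out.println("X-B3-TraceId: " + newId(new Random()));
    }
}
```

Every log line a service emits during one request carries the same trace_id, which is what makes the cross-service query possible.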
4.1 Integrate Logstash on top of the setup above
4.2 Add logback-spring.xml under the service_client resources directory

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <springProperty scope="context" name="MQHost" source="spring.rabbitmq.host"/>
    <springProperty scope="context" name="MQPort" source="spring.rabbitmq.port"/>
    <springProperty scope="context" name="MQUserName" source="spring.rabbitmq.username"/>
    <springProperty scope="context" name="MQPassword" source="spring.rabbitmq.password"/>
    <springProperty scope="context" name="applicationName" source="spring.application.name"/>
    <appender name="AMQP" class="org.springframework.amqp.rabbit.logback.AmqpAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>
                <![CDATA[{"excute_time":"%d{yyyy-MM-dd HH:mm:ss}","trace_id":"%X{X-B3-TraceId:-}","span_id":"%X{X-B3-SpanId:-}","thread":"%thread","class_name":"%class","line":"%line","level":"%level","msg":"%msg","stack_trace":"%exception{2}"}]]>
            </pattern>
        </layout>
        <host>${MQHost}</host>
        <port>${MQPort}</port>
        <username>${MQUserName}</username>
        <password>${MQPassword}</password>
        <applicationId>service.${applicationName}</applicationId>
        <routingKeyPattern>service.${applicationName}</routingKeyPattern>
        <declareExchange>true</declareExchange>
        <exchangeType>topic</exchangeType>
        <exchangeName>log_logstash</exchangeName>
        <generateId>true</generateId>
        <charset>UTF-8</charset>
        <durable>true</durable>
        <deliveryMode>PERSISTENT</deliveryMode>
    </appender>
    <!-- The AsyncAppender with includeCallerData is needed so that %class and %line
         are populated in the AmqpAppender output -->
    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="AMQP" />
        <includeCallerData>true</includeCallerData>
    </appender>
    <root level="INFO">
        <appender-ref ref="ASYNC"/>
    </root>
</configuration>
```
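With the pattern above, each log event reaches RabbitMQ as one JSON line whose `trace_id` and `span_id` are read from values Sleuth has placed in the logging MDC. A simplified sketch of that rendering step (a plain map stands in for the MDC; the field names match the pattern, the id values are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LogLineSketch {
    // Render one log event in the JSON shape produced by the PatternLayout
    // above (abridged to the tracing-relevant fields).
    public static String render(Map<String, String> mdc, String level, String msg) {
        return String.format(
            "{\"trace_id\":\"%s\",\"span_id\":\"%s\",\"level\":\"%s\",\"msg\":\"%s\"}",
            mdc.getOrDefault("X-B3-TraceId", ""),
            mdc.getOrDefault("X-B3-SpanId", ""),
            level, msg);
    }

    public static void main(String[] args) {
        Map<String, String> mdc = new LinkedHashMap<>();
        mdc.put("X-B3-TraceId", "5f2a9c1d3b7e4a10"); // hypothetical ids
        mdc.put("X-B3-SpanId", "9d4b2e6f1c8a3705");
        System.out.println(render(mdc, "INFO", "order created"));
    }
}
```

Because the ids are part of every rendered line, Logstash can route on them (see the `if [trace_id]` condition in the next step) and Kibana can filter on them later.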
4.3 Create a logstash.conf file in the Logstash bin directory

```conf
input {
  rabbitmq {
    type => "oct-mid-ribbon"
    durable => true
    exchange => "log_logstash"
    exchange_type => "topic"
    key => "service.#"
    host => "127.0.0.1"
    port => 5672
    user => "guest"
    password => "guest"
    queue => "OCT_MID_Log"
    auto_delete => false
    tags => ["service"]
  }
}

output {
  if [trace_id] != "" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "logstash-%{+YYYY.MM.dd}"
    }
  }
}
```
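The binding key `service.#` matches every routing key the logback appender produces (`service.zuul`, `service.service_client`, ...), because `#` in an AMQP topic binding matches zero or more dot-separated words. A sketch of that matching rule translated to a regex (simplified; among other things, real brokers also let `#` match an empty tail):

```java
import java.util.regex.Pattern;

public class TopicMatchSketch {
    // Translate an AMQP topic binding key into a regex:
    //   '.' separates words, '*' matches exactly one word,
    //   '#' matches any remaining sequence of words.
    public static boolean matches(String bindingKey, String routingKey) {
        String regex = bindingKey
            .replace(".", "\\.")    // literal dots first
            .replace("*", "[^.]+")  // one word
            .replace("#", ".*");    // zero or more words
        return Pattern.matches(regex, routingKey);
    }

    public static void main(String[] args) {
        System.out.println(matches("service.#", "service.zuul"));  // true
        System.out.println(matches("service.#", "app.zuul"));      // false
    }
}
```

This is why a single Logstash input can collect the logs of all microservices: each service publishes under `service.<application name>`, and the one binding catches them all.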
4.4 Start Logstash

```shell
logstash.bat -f logstash.conf
```
4.5 In Kibana, add the logstash index pattern; using the trace_id and span_id in the message field, you can reconstruct every log line printed during a complete request
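The `logstash-%{+YYYY.MM.dd}` setting in the output section creates one index per day, so a wildcard pattern such as `logstash-*` is the natural choice in Kibana. A sketch of the index name Logstash derives for a given event date (mirroring the naming convention, not Logstash's own code):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class IndexNameSketch {
    // Mirror the "logstash-%{+YYYY.MM.dd}" daily index naming.
    public static String indexFor(LocalDate date) {
        return "logstash-" + date.format(DateTimeFormatter.ofPattern("yyyy.MM.dd"));
    }

    public static void main(String[] args) {
        System.out.println(indexFor(LocalDate.of(2019, 4, 1))); // logstash-2019.04.01
    }
}
```

Daily indices keep each index small and make it easy to expire old log data by simply deleting the indices of past days.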