[Spring Cloud] Sleuth, Zipkin, RabbitMQ, Elasticsearch, Logstash, and Kibana: distributed tracing and log analysis for microservices

Distributed tracing and log collection for Spring Cloud microservices

  1. Dependency versions
    Spring Boot 2.1.3.RELEASE
    Spring Cloud Greenwich.SR1
    RabbitMQ 3.7.10
    Elasticsearch 6.7.0
    Logstash 6.7.0
    Kibana 6.7.0
    zipkin-server-2.9.4-exec.jar

  2. Implement distributed tracing (Sleuth, Zipkin, RabbitMQ)
    2.1 Add dependencies to the microservices (zuul, service_client)

    		<dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-starter-sleuth</artifactId>
            </dependency>
            <!-- Default configuration: send traces to Zipkin over HTTP -->
            <!--<dependency>-->
                <!--<groupId>org.springframework.cloud</groupId>-->
                <!--<artifactId>spring-cloud-sleuth-zipkin</artifactId>-->
            <!--</dependency>-->
            <!-- Collect trace data asynchronously via RabbitMQ instead -->
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-starter-zipkin</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-stream-binder-rabbit</artifactId>
            </dependency>
    

    2.2 Add configuration to the microservices (bootstrap.yml)

      spring:
        rabbitmq:
          host: localhost
          port: 5672
          username: guest
          password: guest
        # When traces are sent over RabbitMQ, zipkin.base-url is not needed
        zipkin:
          sender:
            type: rabbit
          # base-url: http://localhost:9411
        sleuth:
          sampler:
            probability: 1.0 # sampling rate, 0.0-1.0; 1.0 traces every request
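Conceptually, the sampler probability is a per-request weighted coin flip: with probability 1.0 every request is traced, with 0.1 roughly one in ten. A minimal sketch of that decision (illustrative only; Sleuth's actual sampler in Brave uses a counting algorithm rather than this exact code):

```java
import java.util.concurrent.ThreadLocalRandom;

// Illustrative probability sampler: a per-request weighted coin flip.
// This mirrors the idea behind sleuth.sampler.probability, not Sleuth's
// real implementation.
class ProbabilitySampler {
    private final double probability;

    ProbabilitySampler(double probability) {
        if (probability < 0.0 || probability > 1.0) {
            throw new IllegalArgumentException("probability must be in [0.0, 1.0]");
        }
        this.probability = probability;
    }

    // true = record a trace for this request
    boolean isSampled() {
        return ThreadLocalRandom.current().nextDouble() < probability;
    }
}
```

Setting probability to 1.0 is convenient while developing; in production a lower rate keeps collector and storage load manageable.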
    

    2.3 Start the zuul and service_client microservices
    2.4 Run zipkin-server:

    java -jar zipkin-server-2.9.4-exec.jar --zipkin.collector.rabbitmq.addresses=localhost
    

    2.5 Open the Zipkin UI in a browser (http://localhost:9411 by default)
    (screenshot: Zipkin UI)

  3. Write trace data to Elasticsearch and view it in Kibana (Sleuth, Zipkin, RabbitMQ, Elasticsearch, Kibana)
    3.1 Stop the zipkin-server started earlier
    3.2 Edit config/elasticsearch.yml and uncomment the following lines, then start Elasticsearch (Windows):

    # Set the bind address to a specific IP (IPv4 or IPv6):
    #
    network.host: 127.0.0.1
    #
    # Set a custom port for HTTP:
    #
    http.port: 9200
    

    3.3 Edit config/kibana.yml and uncomment the following lines, then start Kibana (Windows):

    # Kibana is served by a back end server. This setting specifies the port to use.
    server.port: 5601
    # To allow connections from remote users, set this parameter to a non-loopback address.
    server.host: "localhost"
    # The URLs of the Elasticsearch instances to use for all your queries.
    elasticsearch.hosts: ["http://localhost:9200"]
    

    3.4 Run zipkin-server with Elasticsearch storage:

    java -jar zipkin-server-2.9.4-exec.jar --zipkin.collector.rabbitmq.addresses=localhost  --STORAGE_TYPE=elasticsearch --ES_HOSTS=http://127.0.0.1:9200
    

    3.5 Open the Kibana UI (http://localhost:5601 by default)
    3.6 Under Management, add a zipkin* index pattern; the trace records are then visible
    (screenshot: trace records in Kibana)

  4. Write microservice logs to the ELK stack; with trace_id and span_id the complete log record of a request can be queried (request flow + info/error log collection)
    4.1 Building on the setup above, integrate Logstash
    4.2 Add a logback-spring.xml under the service_client resources directory:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
        <include resource="org/springframework/boot/logging/logback/base.xml" />
        <springProperty scope="context" name="MQHost" source="spring.rabbitmq.host"/>
        <springProperty scope="context" name="MQPort" source="spring.rabbitmq.port"/>
        <springProperty scope="context" name="MQUserName" source="spring.rabbitmq.username"/>
        <springProperty scope="context" name="MQPassword" source="spring.rabbitmq.password"/>
        <springProperty scope="context" name="applicationName" source="spring.application.name"/>
    
        <appender name="AMQP" class="org.springframework.amqp.rabbit.logback.AmqpAppender">
            <layout class="ch.qos.logback.classic.PatternLayout">
                <pattern>
                    <![CDATA[{"excute_time":"%d{yyyy-MM-dd HH:mm:ss}","trace_id":"%X{X-B3-TraceId:-}","span_id":"%X{X-B3-SpanId:-}","thread":"%thread","class_name":"%class","line":"%line","level":"%level","msg":"%msg","stack_trace":"%exception{2}"}]]>
                </pattern>
            </layout>
            <host>${MQHost}</host>
            <port>${MQPort}</port>
            <username>${MQUserName}</username>
            <password>${MQPassword}</password>
            <applicationId>service.${applicationName}</applicationId>
            <routingKeyPattern>service.${applicationName}</routingKeyPattern>
            <declareExchange>true</declareExchange>
            <exchangeType>topic</exchangeType>
            <exchangeName>log_logstash</exchangeName>
            <generateId>true</generateId>
            <charset>UTF-8</charset>
            <durable>true</durable>
            <deliveryMode>PERSISTENT</deliveryMode>
        </appender>
    <!-- The AsyncAppender with includeCallerData=true is needed so that
         %class and %line resolve correctly inside the AmqpAppender -->
        <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
            <appender-ref ref="AMQP" />
            <includeCallerData>true</includeCallerData>
        </appender>
    
        <root level="INFO">
            <appender-ref ref="ASYNC"/>
        </root>
    </configuration>
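Each log event is serialized by the pattern above into one JSON object that carries the Sleuth trace context: PatternLayout reads trace_id and span_id from the MDC keys X-B3-TraceId and X-B3-SpanId, which Sleuth populates per request. The sketch below builds the same shape by hand, just to make the field mapping explicit (a hypothetical helper, not the real appender machinery):

```java
// Hypothetical helper showing the JSON shape the logback pattern emits.
// In the real setup, PatternLayout fills trace_id/span_id from the MDC
// (X-B3-TraceId / X-B3-SpanId), which Sleuth populates for each request.
class JsonLogLine {
    static String format(String traceId, String spanId,
                         String thread, String level, String msg) {
        return String.format(
            "{\"trace_id\":\"%s\",\"span_id\":\"%s\",\"thread\":\"%s\","
          + "\"level\":\"%s\",\"msg\":\"%s\"}",
            traceId, spanId, thread, level, msg);
    }
}
```

Because every line of one request shares the same trace_id, these JSON documents can later be correlated in Kibana.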
    

    4.3 Create a logstash.conf file in the Logstash bin directory:

    input {
      rabbitmq { 
    	type => "oct-mid-ribbon"
    	durable => true
    	exchange => "log_logstash"
    	exchange_type => "topic"
    	key => "service.#"
    	host => "127.0.0.1"
    	port => 5672
    	user => "guest"
    	password => "guest"
    	queue => "OCT_MID_Log"
    	auto_delete => false
    	tags => ["service"]
      }
    }
    
    output {
    	if [trace_id] != "" {
    		elasticsearch {
    			hosts => ["http://localhost:9200"]
    			index => "logstash-%{+YYYY.MM.dd}"
    	  }
    	}
    }
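The binding key `service.#` in this input catches every routing key the appender produces (`service.<applicationName>`), because in an AMQP topic exchange `*` matches exactly one dot-separated word and `#` matches zero or more. A small sketch of that matching rule (illustrative only; RabbitMQ performs this matching inside the broker):

```java
// Sketch of AMQP topic-exchange binding-key matching:
//   '*' matches exactly one dot-separated word
//   '#' matches zero or more words
// Illustrative only -- RabbitMQ does this inside the broker.
class TopicMatch {
    static boolean matches(String bindingKey, String routingKey) {
        String regex = bindingKey
                .replace(".", "\\.")         // literal dots
                .replace("*", "[^.]+")       // one word
                .replace("\\.#", "(\\..+)?") // trailing ".#" may match nothing
                .replace("#", ".*");         // remaining '#' matches anything
        return routingKey.matches(regex);
    }
}
```

So logs from every service land in the single OCT_MID_Log queue, no matter how many applications publish with their own routing keys.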
    

    4.4 Start Logstash:

    logstash.bat -f logstash.conf
    

    4.5 Add the logstash-* index pattern in Kibana; using the trace_id and span_id fields in the message, you can reconstruct every log line printed during one complete request
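Conceptually, that Kibana lookup is just a filter over the indexed documents: every record whose trace_id equals the one you are chasing belongs to the same request. A sketch with a hypothetical in-memory record type (real documents live in the logstash-* Elasticsearch index):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical in-memory version of the Kibana query: collect all log
// records that share one trace_id, i.e. all lines from a single request.
class TraceGrouping {
    static class LogRecord {
        final String traceId;
        final String msg;
        LogRecord(String traceId, String msg) {
            this.traceId = traceId;
            this.msg = msg;
        }
    }

    static List<LogRecord> forTrace(List<LogRecord> all, String traceId) {
        List<LogRecord> out = new ArrayList<>();
        for (LogRecord r : all) {
            if (r.traceId.equals(traceId)) {
                out.add(r);
            }
        }
        return out;
    }
}
```

Within one trace, span_id then distinguishes the individual hops (zuul, service_client, and so on).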
