Using SkyWalking in Microservices for Distributed Tracing and Log Viewing (Global traceId)

1. Download and Installation

1.1 Download SkyWalking

Download page: Downloads | Apache SkyWalking

Download both the APM backend and the Java agent.


The wget download links are as follows:

wget https://archive.apache.org/dist/skywalking/java-agent/8.8.0/apache-skywalking-java-agent-8.8.0.tgz

wget https://archive.apache.org/dist/skywalking/8.8.1/apache-skywalking-apm-8.8.1.tar.gz
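A minimal extraction step (assuming /opt as the install root, matching the paths used later in this article):

tar -zxf apache-skywalking-java-agent-8.8.0.tgz -C /opt
tar -zxf apache-skywalking-apm-8.8.1.tar.gz -C /opt
# the APM package unpacks to /opt/apache-skywalking-apm-bin, the agent to /opt/skywalking-agent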

1.2 Download Elasticsearch

The 7.17.0 release is available on the Elastic past-releases page:

https://www.elastic.co/cn/downloads/past-releases/elasticsearch-7-17-0

1.3 Elasticsearch config

Open elasticsearch.yml under the config directory of the installation and add the following settings:

cluster.name: CollectorDBCluster
path.data: /opt/elasticsearch-7.17.0/data
path.logs: /opt/elasticsearch-7.17.0/logs
network.host: 0.0.0.0
http.port: 9200

node.name: node-1
cluster.initial_master_nodes: ["node-1"]
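Because network.host is set to 0.0.0.0, Elasticsearch enforces its production bootstrap checks on startup. A common failure is the vm.max_map_count check; if you hit it, raise the limit first:

sysctl -w vm.max_map_count=262144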

1.4 Start Elasticsearch

# /opt/elasticsearch-7.17.0/bin/elasticsearch
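Elasticsearch refuses to run as root, so start it as a regular user (add -d to run it in the background). To confirm it is up, query the root endpoint on the port configured above:

curl http://localhost:9200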

1.5 SkyWalking config

Edit config/application.yml under the OAP installation directory; the storage section looks like this:

storage:
  selector: ${SW_STORAGE:elasticsearch}
  elasticsearch:
    namespace: ${SW_NAMESPACE:"CollectorDBCluster"}
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:<es-server-ip>:9200}
    protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
    connectTimeout: ${SW_STORAGE_ES_CONNECT_TIMEOUT:500}
    socketTimeout: ${SW_STORAGE_ES_SOCKET_TIMEOUT:30000}
    numHttpClientThread: ${SW_STORAGE_ES_NUM_HTTP_CLIENT_THREAD:0}
#    user: ${SW_ES_USER:""}
#    password: ${SW_ES_PASSWORD:""}
#    trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
#    trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}
    secretsManagementFile: ${SW_ES_SECRETS_MANAGEMENT_FILE:""} # Secrets management file in the properties format includes the username, password, which are managed by 3rd party tool.
    dayStep: ${SW_STORAGE_DAY_STEP:1} # Represent the number of days in the one minute/hour/day index.
    indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1} # Shard number of new indexes
    indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:1} # Replicas number of new indexes
    # Super data set has been defined in the codes, such as trace segments. The following 3 configs can improve ES performance when storing super-size data in ES.
    superDatasetDayStep: ${SW_SUPERDATASET_STORAGE_DAY_STEP:-1} # Represent the number of days in the super size dataset record index, the default value is the same as dayStep when the value is less than 0
    superDatasetIndexShardsFactor: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_SHARDS_FACTOR:5} #  This factor provides more shards for the super data set, shards number = indexShardsNumber * superDatasetIndexShardsFactor. Also, this factor effects Zipkin and Jaeger traces.
    superDatasetIndexReplicasNumber: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_REPLICAS_NUMBER:0} # Represent the replicas number in the super size dataset record index, the default value is 0.
    indexTemplateOrder: ${SW_STORAGE_ES_INDEX_TEMPLATE_ORDER:0} # the order of index template
    bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:5000} # Execute the async bulk record data every ${SW_STORAGE_ES_BULK_ACTIONS} requests
    # flush the bulk every 10 seconds whatever the number of requests
    # INT(flushInterval * 2/3) would be used for index refresh period.
    flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:15}
    concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
    resultWindowMaxSize: ${SW_STORAGE_ES_QUERY_MAX_WINDOW_SIZE:10000}
    metadataQueryMaxSize: ${SW_STORAGE_ES_QUERY_MAX_SIZE:5000}
    segmentQueryMaxSize: ${SW_STORAGE_ES_QUERY_SEGMENT_SIZE:200}
    profileTaskQueryMaxSize: ${SW_STORAGE_ES_QUERY_PROFILE_TASK_SIZE:200}
    oapAnalyzer: ${SW_STORAGE_ES_OAP_ANALYZER:"{\"analyzer\":{\"oap_analyzer\":{\"type\":\"stop\"}}}"} # the oap analyzer.
    oapLogAnalyzer: ${SW_STORAGE_ES_OAP_LOG_ANALYZER:"{\"analyzer\":{\"oap_log_analyzer\":{\"type\":\"standard\"}}}"} # the oap log analyzer. It could be customized by the ES analyzer configuration to support more language log formats, such as Chinese log, Japanese log and etc.
    advanced: ${SW_STORAGE_ES_ADVANCED:""}
  

The key change is:

storage:
  selector: ${SW_STORAGE:elasticsearch}

I am on SkyWalking 8.8. If you are on an older release and use Elasticsearch 7, configure instead:

storage:
  selector: ${SW_STORAGE:elasticsearch7}

Then point clusterNodes at your Elasticsearch server IP and port, and the storage setup is done.

1.6 Start SkyWalking

# /opt/apache-skywalking-apm-bin/bin/startup.sh 
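startup.sh launches both the OAP server and the web UI. By default the OAP accepts agent traffic on gRPC port 11800 (REST on 12800) and the UI listens on 8080; a quick sanity check:

# the web UI should answer
curl -I http://localhost:8080
# the OAP ports should be listening
ss -ltn | grep -E '11800|12800'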

1.7 Deploy the microservices with docker-compose, attaching the agent via -javaagent

Each jar gets its own folder: order-service, gateway, and user-service (matching the build paths in the compose file below).

version: "3.2"

services:
#  nacos:
#    image: nacos/nacos-server
#    environment:
#      MODE: standalone
#    ports:
#      - "9010:8848"
  userservice:
    env_file: .env
    environment:
      - USER_NAME=${COMNAME}
    build: ./user-service
  
  orderservice:
    build: ./order-service
  gateway:
    build: ./gateway
    ports:
      - "9013:9013"

The .env file can supply runtime parameters:

## docker-compose environment variables

## test docker variable binding
COMNAME=abcdefg129001
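To build the three images and bring everything up (assuming docker-compose.yml and .env sit in the same directory):

docker-compose up -d --build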

Each service folder contains the application jar (app.jar), the unpacked agent directory, and the Dockerfile referenced below.

The Dockerfile is as follows:

# Put the following in the Dockerfile, and copy it into each of the three service folders
FROM java:8
COPY ./app.jar /tmp/app.jar
COPY ./agent /tmp/agent
ENTRYPOINT java -javaagent:/tmp/agent/skywalking-agent.jar -Dskywalking.agent.service_name=gateway -Dskywalking.collector.backend_service=<skywalking-server-ip>:11800 -jar /tmp/app.jar

The other services use the same Dockerfile; just change -Dskywalking.agent.service_name accordingly (userservice, orderservice).

1.8 SkyWalking log collection

Add the following dependencies to each Spring Boot service:

    <!-- Print the SkyWalking traceId in the logs -->
    <dependency>
        <groupId>org.apache.skywalking</groupId>
        <artifactId>apm-toolkit-logback-1.x</artifactId>
        <version>8.8.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.skywalking</groupId>
        <artifactId>apm-toolkit-trace</artifactId>
        <version>8.8.0</version>
    </dependency>

The toolkit version must match your SkyWalking agent version (8.8.0 here).

Logback is used for logging here (for other logging frameworks, look up the corresponding SkyWalking toolkit module).

<?xml version="1.0" encoding="utf-8" ?>

<configuration>

    <!-- logback-spring is loaded before application.yml, so ${key} placeholders cannot read values from it directly -->
    <!-- 'source' refers to a key in application.yml; elsewhere the value is referenced as ${log.path} -->
    <!-- defaultValue avoids files named log.path_IS_UNDEFINED when the property is missing -->
    <springProperty scope="context" name="base.path" source="logging.file.path" defaultValue="${user.home}/kenlogs"/>

    <!-- Change app.name to match your application -->
    <springProperty scope="context" name="app.name" source="spring.application.name" defaultValue="applog"/>

    <property name="log.path" value="${base.path}/${app.name}"/>

    <!-- Output pattern: %d date, %thread thread name, %-5level level padded to 5 chars, %msg message, %n newline -->
    <property name="log.pattern" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - [%tid] - %msg%n"/>

    <!-- Console appender -->
    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>${log.pattern}</pattern>
            </layout>
        </encoder>
    </appender>


    <appender name="SKYWALKING" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender">
        <!-- 日志输出编码 -->
        <encoder>
            <!--格式化输出:%d表示日期,%thread表示线程名,%-5level:级别从左显示5个字符宽度%msg:日志消息,%n是换行符-->
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n
            </pattern>
            <charset>UTF-8</charset> <!-- 设置字符集 -->
        </encoder>
    </appender>




    <!-- Rolling file appender; a new log file per day -->
    <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!-- Log file name pattern -->
            <FileNamePattern>${log.path}-%d{yyyy-MM-dd}.%i.log</FileNamePattern>
            <!-- Days of history to keep -->
            <MaxHistory>30</MaxHistory>
            <MaxFileSize>3KB</MaxFileSize>
        </rollingPolicy>

        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <!-- Output pattern: %d date, %thread thread name, %-5level level, %msg message, %n newline -->
                <pattern>${log.pattern}</pattern>
            </layout>
        </encoder>
    </appender>

    <!-- MyBatis logging -->
    <!--    <logger name="java.sql.Connection" level="DEBUG"/>-->
    <!--    <logger name="java.sql.Statement" level="DEBUG"/>-->
    <!--    <logger name="java.sql.PreparedStatement" level="DEBUG"/>-->

    <root level="INFO">
        <appender-ref ref="SKYWALKING"/>
        <appender-ref ref="file"/>
    </root>

    <!-- Per-environment configuration; separate multiple profiles with commas (e.g. dev,sit).
         Commented out here; each block sets the log level and appenders for one profile.
    <springProfile name="dev">
        <root level="INFO">
            <appender-ref ref="stdout"/>
            <appender-ref ref="file"/>
        </root>
    </springProfile>

    <springProfile name="sit">
        <root level="INFO">
            <appender-ref ref="stdout"/>
            <appender-ref ref="file"/>
        </root>
    </springProfile>

    <springProfile name="prod">
        <root level="INFO">
            <appender-ref ref="stdout"/>
            <appender-ref ref="file"/>
        </root>
    </springProfile>
    -->

</configuration>

The most important points:

The SKYWALKING appender's class must be specified exactly as org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender (and the file belongs at src/main/resources/logback-spring.xml so that Spring Boot loads it with springProperty support).

Also, for log upload to take effect, you must edit config/agent.config inside the agent directory referenced by -javaagent. I missed this at first, and no logs ever showed up in SkyWalking:

plugin.toolkit.log.grpc.reporter.server_host=${SW_GRPC_LOG_SERVER_HOST:<skywalking-server-ip>}
plugin.toolkit.log.grpc.reporter.server_port=${SW_GRPC_LOG_SERVER_PORT:11800}
plugin.toolkit.log.grpc.reporter.max_message_size=${SW_GRPC_LOG_MAX_MESSAGE_SIZE:10485760}
plugin.toolkit.log.grpc.reporter.upstream_timeout=${SW_GRPC_LOG_GRPC_UPSTREAM_TIMEOUT:30}
plugin.toolkit.log.transmit_formatted=${SW_PLUGIN_TOOLKIT_LOG_TRANSMIT_FORMATTED:true}
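Alternatively, any agent.config option can be overridden with a JVM system property prefixed with skywalking., so the same settings can be baked into the ENTRYPOINT instead of editing agent.config (a sketch extending the Dockerfile shown earlier):

ENTRYPOINT java -javaagent:/tmp/agent/skywalking-agent.jar \
  -Dskywalking.agent.service_name=userservice \
  -Dskywalking.collector.backend_service=<skywalking-server-ip>:11800 \
  -Dskywalking.plugin.toolkit.log.grpc.reporter.server_host=<skywalking-server-ip> \
  -Dskywalking.plugin.toolkit.log.grpc.reporter.server_port=11800 \
  -jar /tmp/app.jar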

Testing the logging

For simplicity, log directly in a controller (in real business code, the service layer is the more appropriate place):

private final Logger log = LoggerFactory.getLogger(MyController.class);
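A minimal test endpoint sketch (the class and path are hypothetical); anything logged here is shipped to the OAP by the GRPCLogClientAppender configured above and shows up in SkyWalking tagged with the current traceId:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MyController {

    private final Logger log = LoggerFactory.getLogger(MyController.class);

    // Hit this endpoint, then look for the line in the SkyWalking UI's Log view
    @GetMapping("/demo")
    public String demo() {
        log.info("demo endpoint hit");
        return "ok";
    }
}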

1.9 Final result

[Screenshots: the trace view and the matching log entries in the SkyWalking UI, sharing one global traceId]

To get the traceId in code:

String traceId = TraceContext.traceId();
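TraceContext comes from the apm-toolkit-trace dependency added earlier. A small usage sketch, e.g. returning the traceId to the caller so it can be quoted in bug reports (the endpoint is hypothetical):

import org.apache.skywalking.apm.toolkit.trace.TraceContext;

@GetMapping("/trace-id")
public String currentTraceId() {
    // returns the active traceId (a placeholder value when no trace is active)
    String traceId = TraceContext.traceId();
    log.info("current traceId = {}", traceId);
    return traceId;
}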

1.10 Purging historical log and trace data

Retention is controlled by the data TTL settings in the OAP's config/application.yml; the configuration I used keeps one day of data (see the sketch below). For more detailed retention needs, consult the SkyWalking documentation.
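A sketch of the relevant keys under core.default in /opt/apache-skywalking-apm-bin/config/application.yml (both TTLs are in days; the values below keep one day of data). Restart the OAP for the change to take effect:

core:
  default:
    # TTL for record data (trace segments, logs)
    recordDataTTL: ${SW_CORE_RECORD_DATA_TTL:1}
    # TTL for metrics data
    metricsDataTTL: ${SW_CORE_METRICS_DATA_TTL:1}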
