Spring Cloud Sleuth: DataSource-Based Tracing
Current Status
The latest spring-cloud-sleuth (3.1.0-M1) already supports datasource/JDBC-based tracing; see the documentation: sleuth-jdbc-integration.
spring-cloud-sleuth offers two solutions to choose from:
- JDBC-based: p6spy
- DataSource-based: datasource-proxy
Each solution has its pros and cons; this article only discusses the DataSource-based datasource-proxy.
Although the latest spring-cloud-sleuth ships with a datasource tracing integration, as of 2021-09-17 the spring-cloud-sleuth version in Maven Central was still 3.0.4, which does not support datasource tracing. Moreover, many projects in practice are still on spring-cloud-sleuth 2.x or even older. How, then, can datasource tracing be integrated into these older versions?
Based on the source code of the new spring-cloud-sleuth and on how datasource-proxy integrates with Spring Boot (spring-cloud-sleuth itself borrows from spring-boot-data-source-decorator), this article adds datasource tracing in the simplest possible way: whatever database or JDBC driver is in use, as long as the application goes through a Spring DataSource, the integration works.
p6spy vs. datasource-proxy
Why not use p6spy?
Like the MySQL tracing instrumentation (brave-instrumentation-mysql), p6spy depends heavily on the JDBC driver implementation. Although p6spy supports many common database drivers, that coupling remains a source of unease.
Since a Spring DataSource will be used in any case, it is better to build the extension on that abstraction and ignore driver differences entirely. Switching database drivers then has no impact, and with multiple data sources there is no need to worry about one of several configured drivers being unsupported.
Even if Spring's DataSource were not used, some other abstraction would take its place, and the tracing logic would only need to be reimplemented once against that new abstraction.
So the final choice is datasource-proxy.
How It Works
Only a brief summary of how datasource-proxy works is given here; for details, see datasource-proxy.
In short:
- It is based on JDK dynamic proxies.
- It proxies the java.sql.*/javax.sql.* interfaces, adding hooks before and after methods such as getConnection()/createStatement()/executeQuery()/executeUpdate() on DataSource/Connection/Statement.
- The hook points and operations are abstracted into listeners; implementing a listener and registering it when the proxy is built is enough to add logging, tracing, and similar features.
- For the Spring Boot integration, a listener is implemented that creates spans via spring-cloud-sleuth before and after obtaining a connection and executing SQL, which completes the trace.
- After each DataSource bean is initialized (BeanPostProcessor.postProcessAfterInitialization()), it is replaced with a datasource-proxy proxy.
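To make the first two points concrete, here is a minimal, self-contained sketch of intercepting `java.sql.Connection` calls with a JDK dynamic proxy, the same mechanism datasource-proxy builds its listeners on. No driver or datasource-proxy is involved; the no-op target connection and the `StringBuilder` log are purely for demonstration.

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;

public class JdbcProxyDemo {

    // A Connection whose methods all do nothing, so no real driver is needed.
    static Connection noopConnection() {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, args) ->
                        method.getReturnType() == boolean.class ? Boolean.FALSE : null);
    }

    // Wraps a Connection in a proxy that runs hooks before and after every
    // method call; a listener would start/finish spans in these hooks.
    static Connection traced(Connection target, StringBuilder log) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, args) -> {
                    log.append("before:").append(method.getName()).append('\n');
                    try {
                        return method.invoke(target, args);
                    } finally {
                        log.append("after:").append(method.getName()).append('\n');
                    }
                });
    }

    public static void main(String[] args) throws Exception {
        StringBuilder log = new StringBuilder();
        Connection conn = traced(noopConnection(), log);
        conn.close(); // intercepted: both hooks fire around the call
        System.out.print(log);
    }
}
```

Running this prints `before:close` and `after:close`, showing that the proxy observes every call without the caller or the target knowing about it.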
Integration Steps
Only integration with spring-boot/spring-cloud 2.x is discussed here.
- Add the spring-cloud-sleuth and datasource-proxy dependencies:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
    <version>2.2.10.RELEASE</version>
</dependency>
<dependency>
    <groupId>net.ttddyy</groupId>
    <artifactId>datasource-proxy</artifactId>
    <version>1.7</version>
</dependency>
```
- Then define some configuration properties:

```properties
# Whether datasource tracing is enabled (default: true)
spring.sleuth.datasource.enabled=true
# Which phases to trace: connection, query, and result fetching (default: all)
spring.sleuth.datasource.include=CONNECTION,QUERY,FETCH
# Default database service name (localEndpoint.serviceName), default: default-datasource
spring.sleuth.datasource.default-name=default-datasource
# With multiple data sources, a service name can be set per data source;
# the key is the DataSource bean name in the Spring container.
# Unmapped data sources fall back to the default database service name.
spring.sleuth.datasource.name-mapping.orderDataSource=order-mysql
spring.sleuth.datasource.name-mapping.productDataSource=product-oracle
```
- Define the properties class:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.springframework.boot.context.properties.ConfigurationProperties;

import lombok.Data;

@ConfigurationProperties(prefix = "spring.sleuth.datasource")
@Data
public class SleuthDatasourceProperties {

    /**
     * Whether datasource tracing is enabled (default: true).
     */
    private boolean enabled = true;

    /**
     * Which database operations to trace: CONNECTION/QUERY/FETCH (default: all).
     */
    private List<TraceType> include = Arrays.asList(TraceType.CONNECTION, TraceType.QUERY, TraceType.FETCH);

    /**
     * Default span name for data sources.
     */
    private String defaultName = "default-datasource";

    /**
     * Span name mapping for multiple data sources: key = bean name, value = span name.
     */
    private Map<String, String> nameMapping = new HashMap<>(4);

    /**
     * Database span types.
     */
    public enum TraceType {
        /** Obtaining a connection. */
        CONNECTION,
        /** Executing a query. */
        QUERY,
        /** Fetching results. */
        FETCH
    }
}
```
- Write the tracing listener strategy, which defines what happens before and after each instrumented method:
```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

import brave.Span;
import brave.Tracer;

public class TracingListenerStrategy<C, S, R> {

    private static final String SPAN_SQL_QUERY_TAG_NAME = "query-sql";
    private static final String SPAN_ROW_COUNT_TAG_NAME = "row-count";
    private static final String SPAN_CONNECTION_POSTFIX = "/connection";
    private static final String SPAN_QUERY_POSTFIX = "/query";
    private static final String SPAN_FETCH_POSTFIX = "/fetch";
    private static final String JDBC_PREFIX = "jdbc:/";

    private final Map<C, ConnectionInfo> openConnections = new ConcurrentHashMap<>();
    private final Tracer tracer;
    private final List<SleuthDatasourceProperties.TraceType> traceTypes;

    TracingListenerStrategy(Tracer tracer, List<SleuthDatasourceProperties.TraceType> traceTypes) {
        this.tracer = tracer;
        this.traceTypes = traceTypes;
    }

    void beforeGetConnection(C connectionKey, String dataSourceName) {
        SpanWithScope spanWithScope = null;
        if (traceTypes.contains(SleuthDatasourceProperties.TraceType.CONNECTION)) {
            Span connectionSpan = tracer.nextSpan().name(JDBC_PREFIX + dataSourceName + SPAN_CONNECTION_POSTFIX);
            connectionSpan.remoteServiceName(dataSourceName);
            connectionSpan.kind(Span.Kind.CLIENT);
            connectionSpan.start();
            spanWithScope = new SpanWithScope(connectionSpan, tracer.withSpanInScope(connectionSpan));
        }
        ConnectionInfo connectionInfo = new ConnectionInfo(spanWithScope);
        openConnections.put(connectionKey, connectionInfo);
    }

    void afterGetConnection(C connectionKey, Throwable t) {
        if (t != null) {
            ConnectionInfo connectionInfo = openConnections.remove(connectionKey);
            connectionInfo.getSpan().ifPresent(connectionSpan -> {
                connectionSpan.getSpan().error(t);
                connectionSpan.finish();
            });
        }
    }

    void beforeQuery(C connectionKey, S statementKey, String dataSourceName) {
        SpanWithScope spanWithScope = null;
        if (traceTypes.contains(SleuthDatasourceProperties.TraceType.QUERY)) {
            Span statementSpan = tracer.nextSpan().name(JDBC_PREFIX + dataSourceName + SPAN_QUERY_POSTFIX);
            statementSpan.remoteServiceName(dataSourceName);
            statementSpan.kind(Span.Kind.CLIENT);
            statementSpan.start();
            spanWithScope = new SpanWithScope(statementSpan, tracer.withSpanInScope(statementSpan));
        }
        StatementInfo statementInfo = new StatementInfo(spanWithScope);
        ConnectionInfo connectionInfo = openConnections.get(connectionKey);
        if (connectionInfo == null) {
            // Connection may be closed after statement preparation, but before statement execution.
            return;
        }
        connectionInfo.getNestedStatements().put(statementKey, statementInfo);
    }

    void addQueryRowCount(C connectionKey, S statementKey, int rowCount) {
        ConnectionInfo connectionInfo = openConnections.get(connectionKey);
        if (connectionInfo == null) {
            // Connection is already closed
            return;
        }
        StatementInfo statementInfo = connectionInfo.getNestedStatements().get(statementKey);
        statementInfo.getSpan().ifPresent(statementSpan ->
                statementSpan.getSpan().tag(SPAN_ROW_COUNT_TAG_NAME, String.valueOf(rowCount)));
    }

    void afterQuery(C connectionKey, S statementKey, String sql, Throwable t) {
        ConnectionInfo connectionInfo = openConnections.get(connectionKey);
        if (connectionInfo == null) {
            // Connection may be closed after statement preparation, but before statement execution.
            return;
        }
        StatementInfo statementInfo = connectionInfo.getNestedStatements().get(statementKey);
        statementInfo.getSpan().ifPresent(statementSpan -> {
            statementSpan.getSpan().tag(SPAN_SQL_QUERY_TAG_NAME, sql);
            if (t != null) {
                statementSpan.getSpan().error(t);
            }
            statementSpan.finish();
        });
    }

    void beforeResultSetNext(C connectionKey, S statementKey, R resultSetKey, String dataSourceName) {
        if (!traceTypes.contains(SleuthDatasourceProperties.TraceType.FETCH)) {
            return;
        }
        ConnectionInfo connectionInfo = openConnections.get(connectionKey);
        // ConnectionInfo may be null if Connection was closed before ResultSet
        if (connectionInfo == null) {
            return;
        }
        if (connectionInfo.getNestedResultSetSpans().containsKey(resultSetKey)) {
            // ResultSet span is already created
            return;
        }
        Span resultSetSpan = tracer.nextSpan().name(JDBC_PREFIX + dataSourceName + SPAN_FETCH_POSTFIX);
        resultSetSpan.remoteServiceName(dataSourceName);
        resultSetSpan.kind(Span.Kind.CLIENT);
        resultSetSpan.start();
        SpanWithScope spanWithScope = new SpanWithScope(resultSetSpan, tracer.withSpanInScope(resultSetSpan));
        connectionInfo.getNestedResultSetSpans().put(resultSetKey, spanWithScope);

        StatementInfo statementInfo = connectionInfo.getNestedStatements().get(statementKey);
        // StatementInfo may be null when Statement is proxied and the instance returned from ResultSet is
        // different from the instance returned in the query method. In this case, if Statement is closed before
        // ResultSet, the span won't be finished immediately, but when Connection is closed.
        if (statementInfo != null) {
            statementInfo.getNestedResultSetSpans().put(resultSetKey, spanWithScope);
        }
    }

    void afterResultSetClose(C connectionKey, R resultSetKey, int rowCount, Throwable t) {
        ConnectionInfo connectionInfo = openConnections.get(connectionKey);
        // ConnectionInfo may be null if Connection was closed before ResultSet
        if (connectionInfo == null) {
            return;
        }
        SpanWithScope resultSetSpan = connectionInfo.getNestedResultSetSpans().remove(resultSetKey);
        // ResultSet span may be null if Statement or ResultSet were already closed
        if (resultSetSpan == null) {
            return;
        }
        if (rowCount != -1) {
            resultSetSpan.getSpan().tag(SPAN_ROW_COUNT_TAG_NAME, String.valueOf(rowCount));
        }
        if (t != null) {
            resultSetSpan.getSpan().error(t);
        }
        resultSetSpan.finish();
    }

    void afterStatementClose(C connectionKey, S statementKey) {
        ConnectionInfo connectionInfo = openConnections.get(connectionKey);
        // ConnectionInfo may be null if Connection was closed before Statement
        if (connectionInfo == null) {
            return;
        }
        StatementInfo statementInfo = connectionInfo.getNestedStatements().remove(statementKey);
        if (statementInfo != null) {
            statementInfo.getNestedResultSetSpans().forEach((resultSetKey, span) -> {
                connectionInfo.getNestedResultSetSpans().remove(resultSetKey);
                span.finish();
            });
            statementInfo.getNestedResultSetSpans().clear();
        }
    }

    void afterCommit(C connectionKey, Throwable t) {
        ConnectionInfo connectionInfo = openConnections.get(connectionKey);
        if (connectionInfo == null) {
            // Connection is already closed
            return;
        }
        connectionInfo.getSpan().ifPresent(connectionSpan -> {
            if (t != null) {
                connectionSpan.getSpan().error(t);
            }
            connectionSpan.getSpan().annotate("commit");
        });
    }

    void afterRollback(C connectionKey, Throwable t) {
        ConnectionInfo connectionInfo = openConnections.get(connectionKey);
        if (connectionInfo == null) {
            // Connection is already closed
            return;
        }
        connectionInfo.getSpan().ifPresent(connectionSpan -> {
            if (t != null) {
                connectionSpan.getSpan().error(t);
            } else {
                connectionSpan.getSpan().tag("error", "Transaction rolled back");
            }
            connectionSpan.getSpan().annotate("rollback");
        });
    }

    void afterConnectionClose(C connectionKey, Throwable t) {
        ConnectionInfo connectionInfo = openConnections.remove(connectionKey);
        if (connectionInfo == null) {
            // Connection is already closed
            return;
        }
        connectionInfo.getNestedResultSetSpans().values().forEach(SpanWithScope::finish);
        connectionInfo.getNestedStatements().values()
                .forEach(statementInfo -> statementInfo.getSpan().ifPresent(SpanWithScope::finish));
        connectionInfo.getSpan().ifPresent(connectionSpan -> {
            if (t != null) {
                connectionSpan.getSpan().error(t);
            }
            connectionSpan.finish();
        });
    }

    private final class ConnectionInfo {

        private final SpanWithScope span;
        private final Map<S, StatementInfo> nestedStatements = new ConcurrentHashMap<>();
        private final Map<R, SpanWithScope> nestedResultSetSpans = new ConcurrentHashMap<>();

        private ConnectionInfo(SpanWithScope span) {
            this.span = span;
        }

        Optional<SpanWithScope> getSpan() {
            return Optional.ofNullable(span);
        }

        Map<S, StatementInfo> getNestedStatements() {
            return nestedStatements;
        }

        Map<R, SpanWithScope> getNestedResultSetSpans() {
            return nestedResultSetSpans;
        }
    }

    private final class StatementInfo {

        private final SpanWithScope span;
        private final Map<R, SpanWithScope> nestedResultSetSpans = new ConcurrentHashMap<>();

        private StatementInfo(SpanWithScope span) {
            this.span = span;
        }

        Optional<SpanWithScope> getSpan() {
            return Optional.ofNullable(span);
        }

        Map<R, SpanWithScope> getNestedResultSetSpans() {
            return nestedResultSetSpans;
        }
    }

    private static final class SpanWithScope {

        private final Span span;
        private final Tracer.SpanInScope spanInScope;

        private SpanWithScope(Span span, Tracer.SpanInScope spanInScope) {
            this.span = span;
            this.spanInScope = spanInScope;
        }

        Span getSpan() {
            return span;
        }

        void finish() {
            spanInScope.close();
            span.finish();
        }
    }
}
```
SQL execution does not happen in a single call: it goes through several phases (CONNECTION, STATEMENT, FETCH). When the JDK dynamic proxy intercepts an individual method, it cannot tell which database call that method belongs to; however, all phases of the same call share the same connection information. Therefore, in the before-CONNECTION hook the span is stashed in a Map keyed by the connection information, and the after-CONNECTION hook looks up the span for that connection and finishes it. The STATEMENT and FETCH phases work the same way.
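The correlation pattern described above can be sketched independently of the tracing API. The `Span` class below is a stand-in for Brave's span that only records its lifecycle, and the key `"conn-1"` is a made-up connection identifier for the demo:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of span correlation: hooks fired by the dynamic proxy share
// state through a map keyed by the connection identity, because that identity
// is the only thing all phases of one database call have in common.
public class SpanRegistry {

    // Stand-in for a tracing span; only records its lifecycle for the demo.
    static class Span {
        final String name;
        boolean finished;
        Span(String name) { this.name = name; }
        void finish() { finished = true; }
    }

    private final Map<String, Span> openConnections = new ConcurrentHashMap<>();

    // before-CONNECTION hook: start a span and stash it under the connection key.
    Span beforeGetConnection(String connectionKey) {
        Span span = new Span("jdbc:/demo/connection");
        openConnections.put(connectionKey, span);
        return span;
    }

    // after-CLOSE hook: the same key retrieves the span started earlier.
    Span afterConnectionClose(String connectionKey) {
        Span span = openConnections.remove(connectionKey);
        if (span != null) {
            span.finish();
        }
        return span;
    }

    public static void main(String[] args) {
        SpanRegistry registry = new SpanRegistry();
        registry.beforeGetConnection("conn-1");              // proxy intercepts getConnection()
        Span span = registry.afterConnectionClose("conn-1"); // proxy intercepts close()
        System.out.println(span.name + " finished=" + span.finished);
    }
}
```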
- Implement the listener, which maps datasource-proxy callbacks onto the strategy:
```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;
import java.util.stream.Collectors;

import javax.sql.DataSource;

import org.springframework.core.Ordered;

import brave.Tracer;
import net.ttddyy.dsproxy.ExecutionInfo;
import net.ttddyy.dsproxy.QueryInfo;
import net.ttddyy.dsproxy.listener.MethodExecutionContext;
import net.ttddyy.dsproxy.listener.MethodExecutionListener;
import net.ttddyy.dsproxy.listener.QueryExecutionListener;

public class TracingQueryExecutionListener implements QueryExecutionListener, MethodExecutionListener, Ordered {

    private static final String EXECUTE_UPDATE = "executeUpdate";
    private static final String GET_CONNECTION = "getConnection";
    private static final String NEXT = "next";
    private static final String COMMIT = "commit";
    private static final String ROLLBACK = "rollback";
    private static final String CLOSE = "close";

    private final TracingListenerStrategy<String, Statement, ResultSet> strategy;

    TracingQueryExecutionListener(Tracer tracer, List<SleuthDatasourceProperties.TraceType> traceTypes) {
        this.strategy = new TracingListenerStrategy<>(tracer, traceTypes);
    }

    @Override
    public void beforeQuery(ExecutionInfo execInfo, List<QueryInfo> queryInfoList) {
        strategy.beforeQuery(execInfo.getConnectionId(), execInfo.getStatement(), execInfo.getDataSourceName());
    }

    @Override
    public void afterQuery(ExecutionInfo execInfo, List<QueryInfo> queryInfoList) {
        if (EXECUTE_UPDATE.equals(execInfo.getMethod().getName()) && execInfo.getThrowable() == null) {
            strategy.addQueryRowCount(execInfo.getConnectionId(), execInfo.getStatement(),
                    (int) execInfo.getResult());
        }
        String sql = queryInfoList.stream().map(QueryInfo::getQuery).collect(Collectors.joining("\n"));
        strategy.afterQuery(execInfo.getConnectionId(), execInfo.getStatement(), sql, execInfo.getThrowable());
    }

    @Override
    public void beforeMethod(MethodExecutionContext executionContext) {
        Object target = executionContext.getTarget();
        String methodName = executionContext.getMethod().getName();
        String dataSourceName = executionContext.getProxyConfig().getDataSourceName();
        String connectionId = executionContext.getConnectionInfo().getConnectionId();
        if (target instanceof DataSource) {
            if (GET_CONNECTION.equals(methodName)) {
                strategy.beforeGetConnection(connectionId, dataSourceName);
            }
        } else if (target instanceof ResultSet) {
            ResultSet resultSet = (ResultSet) target;
            if (NEXT.equals(methodName)) {
                try {
                    strategy.beforeResultSetNext(connectionId, resultSet.getStatement(), resultSet, dataSourceName);
                } catch (SQLException ignore) {
                    // ignore
                }
            }
        }
    }

    @Override
    public void afterMethod(MethodExecutionContext executionContext) {
        Object target = executionContext.getTarget();
        String methodName = executionContext.getMethod().getName();
        String connectionId = executionContext.getConnectionInfo().getConnectionId();
        Throwable t = executionContext.getThrown();
        if (target instanceof DataSource) {
            if (GET_CONNECTION.equals(methodName)) {
                strategy.afterGetConnection(connectionId, t);
            }
        } else if (target instanceof Connection) {
            if (COMMIT.equals(methodName)) {
                strategy.afterCommit(connectionId, t);
            } else if (ROLLBACK.equals(methodName)) {
                strategy.afterRollback(connectionId, t);
            } else if (CLOSE.equals(methodName)) {
                strategy.afterConnectionClose(connectionId, t);
            }
        } else if (target instanceof Statement && CLOSE.equals(methodName)) {
            strategy.afterStatementClose(connectionId, (Statement) target);
        } else if (target instanceof ResultSet && CLOSE.equals(methodName)) {
            ResultSet resultSet = (ResultSet) target;
            strategy.afterResultSetClose(connectionId, resultSet, -1, t);
        }
    }

    @Override
    public int getOrder() {
        return Ordered.HIGHEST_PRECEDENCE + 10;
    }
}
```
- Add the configuration class:
```java
import java.lang.reflect.Method;

import javax.sql.DataSource;

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.aop.scope.ScopedProxyUtils;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.sleuth.autoconfig.TraceAutoConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.util.ReflectionUtils;

import brave.Tracer;
import net.ttddyy.dsproxy.support.ProxyDataSource;
import net.ttddyy.dsproxy.support.ProxyDataSourceBuilder;

@Configuration
@EnableConfigurationProperties(SleuthDatasourceProperties.class)
@ConditionalOnProperty(name = {"spring.sleuth.enabled", "spring.sleuth.datasource.enabled"},
        havingValue = "true", matchIfMissing = true)
@ConditionalOnBean({DataSource.class, Tracer.class})
@AutoConfigureAfter({DataSourceAutoConfiguration.class, TraceAutoConfiguration.class})
public class DataSourceTraceConfig {

    @Bean
    public TracingQueryExecutionListener tracingQueryExecutionListener(Tracer tracer,
            SleuthDatasourceProperties properties) {
        return new TracingQueryExecutionListener(tracer, properties.getInclude());
    }

    @Bean
    public DatasourceProxyBeanPostProcessor datasourceProxyBeanPostProcessor(TracingQueryExecutionListener listener,
            SleuthDatasourceProperties properties) {
        return new DatasourceProxyBeanPostProcessor(listener, properties);
    }

    static class DatasourceProxyBeanPostProcessor implements BeanPostProcessor {

        private final TracingQueryExecutionListener listener;
        private final SleuthDatasourceProperties properties;

        DatasourceProxyBeanPostProcessor(TracingQueryExecutionListener listener,
                SleuthDatasourceProperties properties) {
            this.listener = listener;
            this.properties = properties;
        }

        @Override
        public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
            if (bean instanceof DataSource && !(bean instanceof ProxyDataSource)
                    && !ScopedProxyUtils.isScopedTarget(beanName)) {
                final ProxyFactory factory = new ProxyFactory(bean);
                factory.setProxyTargetClass(true);
                factory.addAdvice(new ProxyDataSourceInterceptor((DataSource) bean, listener, beanName, properties));
                return factory.getProxy();
            }
            return bean;
        }

        @Override
        public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
            return bean;
        }

        public static class ProxyDataSourceInterceptor implements MethodInterceptor {

            private final DataSource dataSource;

            ProxyDataSourceInterceptor(final DataSource dataSource, final TracingQueryExecutionListener listener,
                    String beanName, SleuthDatasourceProperties properties) {
                this.dataSource = ProxyDataSourceBuilder.create(dataSource)
                        .name(properties.getNameMapping().getOrDefault(beanName, properties.getDefaultName()))
                        .methodListener(listener)
                        .listener(listener)
                        .build();
            }

            @Override
            public Object invoke(final MethodInvocation invocation) throws Throwable {
                final Method proxyMethod = ReflectionUtils.findMethod(this.dataSource.getClass(),
                        invocation.getMethod().getName());
                if (proxyMethod != null) {
                    return proxyMethod.invoke(this.dataSource, invocation.getArguments());
                }
                return invocation.proceed();
            }
        }
    }
}
```
- Collect and visualize the traces with Zipkin; the result looks like this:
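For completeness, reporting spans to Zipkin with spring-cloud-sleuth 2.x additionally requires the spring-cloud-starter-zipkin dependency and a couple of properties. The Zipkin address below is an assumption for a locally running instance:

```properties
# Report spans to a locally running Zipkin server (assumed address)
spring.zipkin.base-url=http://localhost:9411
# Sample every trace while testing; lower this in production
spring.sleuth.sampler.probability=1.0
```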