Steps to customize the SkyWalking OAP server



Based on SkyWalking version 6.

Build preparation:

  1. Clone the project with its submodules (e.g. git clone --recurse-submodules): the protocol module shared with the agent project is pulled into this project as a git submodule.
  2. Eclipse has poor support for os-maven-plugin and needs extra configuration; see https://github.com/trustin/os-maven-plugin

Steps to extend metric storage

The example below receives and stores periodically sampled JDBC connection-pool metrics.

1. Extend the protobuf protocol

After the gRPC client (the agent) collects the new metric, it sends it to the oap-server:

message JVMMetric {
    int64 time = 1;
    CPU cpu = 2;
    repeated Memory memory = 3;
    repeated MemoryPool memoryPool = 4;
    repeated GC gc = 5;
    repeated JdbcConnectPool jdbc = 6; // new: per-pool JDBC metrics
}

message JdbcConnectPool {
    string name = 1; // pool name, unique within the instance
    string peer = 2; // database address
    int64 used = 3;  // connections currently in use
    int64 max = 4;   // pool capacity
}

2. Extend the scope

Define a new business scope: SERVICE_INSTANCE_JVM_JDBC_CONNECT_POOL.
A single service instance may contain several connection pools that need monitoring, which differs from every existing scope (each assumes one monitored object per process, such as cpu or heap).
The scope therefore needs a new catalog: SERVICE_INSTANCE_POOL_CATALOG_NAME.
The same catalog would also suit storing memoryPool data (SkyWalking 6 collects young- and old-generation JVM data but neither stores nor displays it).

@ScopeDeclaration(id = DefaultScopeDefine.SERVICE_INSTANCE_JVM_JDBC_CONNECT_POOL, name = "ServiceInstanceJVMJdbcConnectPool", catalog = DefaultScopeDefine.SERVICE_INSTANCE_POOL_CATALOG_NAME)
@ScopeDefaultColumn.VirtualColumnDefinition(fieldName = "entityId", columnName = "entity_id", isID = true, type = String.class)
public class ServiceInstanceJVMJdbcConnectPool extends Source {
    @Override public int scope() {
        return DefaultScopeDefine.SERVICE_INSTANCE_JVM_JDBC_CONNECT_POOL;
    }

    @Override public String getEntityId() {
        return String.valueOf(serviceInstanceId) + "_" + name;
    }

    @Getter @Setter @ScopeDefaultColumn.DefinedByField(columnName = "service_instance_id") private int serviceInstanceId;
    @Getter @Setter @ScopeDefaultColumn.DefinedByField(columnName = "name") private String name;
    @Getter @Setter private String serviceName;
    @Getter @Setter @ScopeDefaultColumn.DefinedByField(columnName = "service_id") private int serviceId;
    @Getter @Setter private long max;
    @Getter @Setter private long used;
    @Getter @Setter @ScopeDefaultColumn.DefinedByField(columnName = "peer_id") private int peerId;
}
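The getEntityId implementation above is what allows several pools under one instance: each (serviceInstanceId, pool name) pair becomes a distinct entity, i.e. a distinct stored time series. A minimal self-contained sketch of that keying rule (plain Java, no SkyWalking dependencies; the class name is made up for illustration):

```java
// Hypothetical stand-in (not a SkyWalking class) for the scope's identity rule:
// the entity id joins the service instance id and the pool name, so each pool
// on the same instance becomes its own entity with its own time series.
public class EntityIdDemo {
    static String entityId(int serviceInstanceId, String poolName) {
        return serviceInstanceId + "_" + poolName;
    }
}
```

On instance 7, pools orderDb and userDb yield 7_orderDb and 7_userDb: two separate rows per time bucket, which the single-object-per-process scopes cannot express.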

Register an alias for the new scope with the OAL engine in OALLexer.g4:

SRC_SERVICE_INSTANCE_JVM_MEMORY_POOL: 'ServiceInstanceJVMMemoryPool';
SRC_SERVICE_INSTANCE_JVM_GC: 'ServiceInstanceJVMGC';

SRC_SERVICE_INSTANCE_JVM_JDBC_CONNECT_POOL: 'ServiceInstanceJVMJdbcConnectPool';

SRC_DATABASE_ACCESS: 'DatabaseAccess';
SRC_SERVICE_INSTANCE_CLR_CPU: 'ServiceInstanceCLRCPU';

Declare the new scope in OALParser.g4:

source
    : SRC_ALL | SRC_SERVICE | SRC_DATABASE_ACCESS | SRC_SERVICE_INSTANCE | SRC_ENDPOINT |
      SRC_SERVICE_RELATION | SRC_SERVICE_INSTANCE_RELATION | SRC_ENDPOINT_RELATION |
      SRC_SERVICE_INSTANCE_JVM_CPU | SRC_SERVICE_INSTANCE_JVM_MEMORY | SRC_SERVICE_INSTANCE_JVM_MEMORY_POOL | SRC_SERVICE_INSTANCE_JVM_GC |// JVM source of service instance
      SRC_SERVICE_INSTANCE_CLR_CPU | SRC_SERVICE_INSTANCE_CLR_GC | SRC_SERVICE_INSTANCE_CLR_THREAD |
      SRC_ENVOY_INSTANCE_METRIC | SRC_SERVICE_INSTANCE_JVM_JDBC_CONNECT_POOL
    ;

3. Extend the aggregation and storage pipeline

Define a new pipeline in official_analysis.oal.
A pipeline pre-aggregates the scope data and then stores it.
The key on the left-hand side becomes the name of the table the data lands in.
The aggregation function groups values by time_bucket (one minute); different aggregation functions store their results in different database columns:

instance_jvm_jdbc_connect_pool = from(ServiceInstanceJVMJdbcConnectPool.used).doubleAvg();
instance_jvm_jdbc_connect_pool_max = from(ServiceInstanceJVMJdbcConnectPool.max).doubleAvg();
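The two lines above can be read as: for every minute, average the used (respectively max) samples of each entity. A self-contained sketch of that minute bucketing and averaging (plain Java, no SkyWalking classes; the yyyyMMddHHmm bucket format mirrors SkyWalking's minute-level time bucket, here pinned to UTC for determinism, and the class name is illustrative, not an OAP API):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.HashMap;
import java.util.Map;

// Sketch of what doubleAvg() does at minute granularity: group incoming
// samples by a yyyyMMddHHmm time bucket and keep a running sum/count.
public class DoubleAvgSketch {
    private static final DateTimeFormatter MINUTE =
        DateTimeFormatter.ofPattern("yyyyMMddHHmm").withZone(ZoneOffset.UTC);

    // Map an epoch-millis timestamp to its minute bucket, e.g. 201901020304.
    static long minuteBucket(long epochMillis) {
        return Long.parseLong(MINUTE.format(Instant.ofEpochMilli(epochMillis)));
    }

    private final Map<Long, long[]> buckets = new HashMap<>(); // bucket -> {sum, count}

    // Accept one sample (e.g. the pool's `used` value at that instant).
    void in(long epochMillis, long used) {
        long[] acc = buckets.computeIfAbsent(minuteBucket(epochMillis), k -> new long[2]);
        acc[0] += used;
        acc[1]++;
    }

    // The value that would be persisted for this bucket's row.
    double avg(long bucket) {
        long[] acc = buckets.get(bucket);
        return acc == null ? 0 : (double) acc[0] / acc[1];
    }
}
```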

4. Wrap the scope data and hand it to the pipeline

This is handled in the gRPC server-side logic:

    private void sendToJdbcConnectPoolMetricProcess(int serviceId, int serviceInstanceId, long timeBucket,
        List<JdbcConnectPool> jdbcConnectPools) {

        jdbcConnectPools.forEach(jdbcConnectPool -> {
            ServiceInstanceJVMJdbcConnectPool pool = new ServiceInstanceJVMJdbcConnectPool();
            pool.setServiceInstanceId(serviceInstanceId);
            pool.setName(jdbcConnectPool.getName());
            pool.setServiceId(serviceId);
            pool.setServiceName(Const.EMPTY_STRING);
            pool.setTimeBucket(timeBucket);
            pool.setUsed(jdbcConnectPool.getUsed());
            pool.setMax(jdbcConnectPool.getMax());
            // Only resolve a peer id when the agent reported a non-blank peer address.
            String peer = jdbcConnectPool.getPeer();
            if (peer != null && !peer.trim().isEmpty()) {
                pool.setPeerId(this.networkAddressInventoryRegister.getOrCreate(peer, null));
            }
            sourceReceiver.receive(pool);
        });
    }

Steps to extend trace storage

The example below receives and stores SQL execution records captured in real time.

1. Extend the scope

Define a new business type, SqlSpan:

@ScopeDeclaration(id = DefaultScopeDefine.SQL_SPAN, name = "SqlSpan")
public class SqlSpan extends Source {
    @Override
    public int scope() {
        return DefaultScopeDefine.SQL_SPAN;
    }

    @Override
    public String getEntityId() {
        return segmentId + "." + String.valueOf(spanId);
    }

    @Setter @Getter private int spanId;
    @Setter @Getter private String segmentId;
    @Setter @Getter private int updates;
    @Setter @Getter private String statement;
    @Setter @Getter private String parameters;
}

2. Define the real-time storage logic with @Stream

The SqlSpanRecord class both defines the storage format and registers the real-time storage logic.
The scopeId in the @Stream annotation binds the scope data to this storage logic one-to-one:

@Stream(name = SqlSpanRecord.INDEX_NAME, scopeId = DefaultScopeDefine.SQL_SPAN, builder = SqlSpanRecord.Builder.class, processor = RecordStreamProcessor.class)
public class SqlSpanRecord extends Record {
    public static final String INDEX_NAME = "sql_span";
    public static final String SPAN_ID = "span_id";
    public static final String SEGMENT_ID = "segment_id";
    public static final String UPDATES = "updates";
    public static final String STATEMENT = "statement";
    public static final String PARAMETERS = "parameters";

    @Setter @Getter @Column(columnName = SPAN_ID) private int spanId;
    @Setter @Getter @Column(columnName = SEGMENT_ID) private String segmentId;
    @Setter @Getter @Column(columnName = UPDATES) private int updates;
    @Setter @Getter @Column(columnName = STATEMENT) private String statement;
    @Setter @Getter @Column(columnName = PARAMETERS) private String parameters;

    public static class Builder implements StorageBuilder<SqlSpanRecord> {
        // conversion between the record and its storage columns goes here (omitted)
    }
}
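The Builder body is elided above; the StorageBuilder contract in SkyWalking 6 converts the record to and from a column map for writes and reads. Below is a self-contained sketch of that conversion, using local stand-in types so it runs without SkyWalking on the classpath (the method names data2Map/map2Data follow the v6 interface as an assumption; check the interface in your version):

```java
import java.util.HashMap;
import java.util.Map;

// Local stand-in for the StorageBuilder conversion (assumed v6 shape):
// a record maps to a column map on write, and back to a record on read.
public class SqlSpanBuilderSketch {
    // Simplified mirror of SqlSpanRecord's columns, without Lombok.
    static class SqlSpanRecord {
        int spanId;
        String segmentId;
        int updates;
        String statement;
        String parameters;
    }

    // Record -> column map, keyed by the column names declared above.
    static Map<String, Object> data2Map(SqlSpanRecord r) {
        Map<String, Object> map = new HashMap<>();
        map.put("span_id", r.spanId);
        map.put("segment_id", r.segmentId);
        map.put("updates", r.updates);
        map.put("statement", r.statement);
        map.put("parameters", r.parameters);
        return map;
    }

    // Column map -> record, the inverse used when reading rows back.
    static SqlSpanRecord map2Data(Map<String, Object> map) {
        SqlSpanRecord r = new SqlSpanRecord();
        r.spanId = ((Number) map.get("span_id")).intValue();
        r.segmentId = (String) map.get("segment_id");
        r.updates = ((Number) map.get("updates")).intValue();
        r.statement = (String) map.get("statement");
        r.parameters = (String) map.get("parameters");
        return r;
    }
}
```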

3. Add a SourceDispatcher

The SourceDispatcher copies the scope data into the corresponding fields of the storage class; it is registered with the pipeline automatically:

public class SqlSpanDispatcher implements SourceDispatcher<SqlSpan> {
    @Override
    public void dispatch(SqlSpan source) {
        SqlSpanRecord span = new SqlSpanRecord();
        span.setSpanId(source.getSpanId());
        span.setSegmentId(source.getSegmentId());
        span.setUpdates(source.getUpdates());
        span.setStatement(source.getStatement());
        span.setParameters(source.getParameters());
        span.setTimeBucket(source.getTimeBucket());
        RecordStreamProcessor.getInstance().in(span);
    }
}

4. Wrap the scope data and hand it to the pipeline

When the OAP receives trace data, the ExitSpan data it contains is dispatched to classes implementing ExitSpanListener:

public class MultiScopesSpanListener implements EntrySpanListener, ExitSpanListener, GlobalTraceIdsListener {
    private final List<SqlSpan> sqlSpans;

    @Override
    public void parseExit(SpanDecorator spanDecorator, SegmentCoreInfo segmentCoreInfo) {
        if (sourceBuilder.getType().equals(RequestType.DATABASE)) {
            if (isSQL && isValidSQL) {
                SqlSpan span = new SqlSpan();
                span.setSpanId(spanDecorator.getSpanId());
                span.setSegmentId(segmentCoreInfo.getSegmentId());
                span.setUpdates(updates);
                span.setStatement(statement.getStatement());
                span.setParameters(parameters);
                span.setTimeBucket(TimeBucket.getRecordTimeBucket(segmentCoreInfo.getStartTime()));
                sqlSpans.add(span);
            }
        }
    }

    @Override
    public void build() {
        sqlSpans.forEach(sourceReceiver::receive);
    }
}