Preface
Seata is a distributed transaction solution framework. Being distributed by nature, Seata inevitably involves network communication. Seata implements an internal RPC module that the RM, TM, and TC use to communicate when creating, committing, and rolling back transactions.
Project Structure
The Seata rpc module lives in the core project; an overview of its code structure is shown below:
Source Code Analysis
Seata uses Netty as the underlying RPC transport, so let's first analyze how Seata uses the Netty module.
The class diagram of Seata's RPC communication wrappers is as follows:
1. Netty Server Communication Module
1.1 Initializing the Netty Server
The constructor of NettyServerBootstrap initializes Netty's boss and worker event loop groups:
```java
public NettyServerBootstrap(NettyServerConfig nettyServerConfig) {
    this.nettyServerConfig = nettyServerConfig;
    if (NettyServerConfig.enableEpoll()) {
        this.eventLoopGroupBoss = new EpollEventLoopGroup(nettyServerConfig.getBossThreadSize(),
            new NamedThreadFactory(nettyServerConfig.getBossThreadPrefix(), nettyServerConfig.getBossThreadSize()));
        this.eventLoopGroupWorker = new EpollEventLoopGroup(nettyServerConfig.getServerWorkerThreads(),
            new NamedThreadFactory(nettyServerConfig.getWorkerThreadPrefix(),
                nettyServerConfig.getServerWorkerThreads()));
    } else {
        this.eventLoopGroupBoss = new NioEventLoopGroup(nettyServerConfig.getBossThreadSize(),
            new NamedThreadFactory(nettyServerConfig.getBossThreadPrefix(), nettyServerConfig.getBossThreadSize()));
        this.eventLoopGroupWorker = new NioEventLoopGroup(nettyServerConfig.getServerWorkerThreads(),
            new NamedThreadFactory(nettyServerConfig.getWorkerThreadPrefix(),
                nettyServerConfig.getServerWorkerThreads()));
    }
    // init listenPort in constructor so that getListenPort() will always get the exact port
    setListenPort(nettyServerConfig.getDefaultListenPort());
}
```
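Both event loop groups are created with a NamedThreadFactory so that boss and worker threads carry recognizable names in thread dumps, which makes troubleshooting much easier. A minimal sketch of what such a factory does (the class below is illustrative, not Seata's actual NamedThreadFactory):

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative stand-in for Seata's NamedThreadFactory: each thread is named
// "<prefix>_<n>_<total>" so boss/worker threads are easy to spot in a thread dump.
public class PrefixedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final int totalSize;
    private final AtomicInteger counter = new AtomicInteger(0);

    public PrefixedThreadFactory(String prefix, int totalSize) {
        this.prefix = prefix;
        this.totalSize = totalSize;
    }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, prefix + "_" + counter.incrementAndGet() + "_" + totalSize);
        t.setDaemon(true); // event-loop threads should not block JVM shutdown
        return t;
    }
}
```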
The NettyServerBootstrap.start() method starts the Netty server: it configures the options the server needs and installs Seata's business-processing handlers:
```java
@Override
public void start() {
    this.serverBootstrap.group(this.eventLoopGroupBoss, this.eventLoopGroupWorker)
        .channel(NettyServerConfig.SERVER_CHANNEL_CLAZZ)
        .option(ChannelOption.SO_BACKLOG, nettyServerConfig.getSoBackLogSize())
        .option(ChannelOption.SO_REUSEADDR, true)
        .childOption(ChannelOption.SO_KEEPALIVE, true)
        .childOption(ChannelOption.TCP_NODELAY, true)
        .childOption(ChannelOption.SO_SNDBUF, nettyServerConfig.getServerSocketSendBufSize())
        .childOption(ChannelOption.SO_RCVBUF, nettyServerConfig.getServerSocketResvBufSize())
        .childOption(ChannelOption.WRITE_BUFFER_WATER_MARK,
            new WriteBufferWaterMark(nettyServerConfig.getWriteBufferLowWaterMark(),
                nettyServerConfig.getWriteBufferHighWaterMark()))
        .localAddress(new InetSocketAddress(listenPort))
        .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            public void initChannel(SocketChannel ch) {
                ch.pipeline().addLast(new IdleStateHandler(nettyServerConfig.getChannelMaxReadIdleSeconds(), 0, 0))
                    .addLast(new ProtocolV1Decoder())
                    .addLast(new ProtocolV1Encoder());
                if (channelHandlers != null) {
                    addChannelPipelineLast(ch, channelHandlers);
                }
            }
        });
    try {
        ChannelFuture future = this.serverBootstrap.bind(listenPort).sync();
        LOGGER.info("Server started, listen port: {}", listenPort);
        RegistryFactory.getInstance().register(new InetSocketAddress(XID.getIpAddress(), XID.getPort()));
        initialized.set(true);
        future.channel().closeFuture().sync();
    } catch (Exception exx) {
        throw new RuntimeException(exx);
    }
}
```
From the code above, we can see that Seata uses four handlers to handle RPC heartbeats, encoding/decoding, and business requests:
- IdleStateHandler: Netty's built-in idle-connection detection handler
- ProtocolV1Decoder: Seata's protocol decoder
- ProtocolV1Encoder: Seata's protocol encoder
- ServerHandler: the business request handler (installed via channelHandlers)
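Note that the IdleStateHandler above is configured with a read-idle timeout only (getChannelMaxReadIdleSeconds(), with writer-idle and all-idle set to 0), so the server reacts when a client has sent nothing, heartbeats included, within that window. The core bookkeeping behind read-idle detection can be sketched without Netty (this class is a simplified illustration, not IdleStateHandler itself):

```java
// Minimal read-idle tracker mimicking what IdleStateHandler(maxReadIdleSeconds, 0, 0)
// watches: has anything been read from the channel within the allowed window?
public class ReadIdleTracker {
    private final long maxReadIdleMillis;
    private long lastReadMillis;

    public ReadIdleTracker(long maxReadIdleSeconds, long nowMillis) {
        this.maxReadIdleMillis = maxReadIdleSeconds * 1000;
        this.lastReadMillis = nowMillis;
    }

    // Called whenever a message (including a heartbeat) arrives.
    public void onRead(long nowMillis) {
        lastReadMillis = nowMillis;
    }

    // True when the channel has been read-idle too long; a real server would
    // then fire a READER_IDLE event and typically close the connection.
    public boolean isReadIdle(long nowMillis) {
        return nowMillis - lastReadMillis >= maxReadIdleMillis;
    }
}
```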
Netty options used by Seata:

| Option | Default | Description |
|--------|---------|-------------|
| ChannelOption.SO_BACKLOG | 1024 | Maximum number of pending connection requests held in the accept queue. |
| ChannelOption.SO_REUSEADDR | true | Enables address reuse. |
| ChannelOption.SO_KEEPALIVE | true | Enables TCP keep-alive on accepted connections. |
| ChannelOption.TCP_NODELAY | true | Disables Nagle's algorithm to reduce RPC latency. |
| ChannelOption.SO_SNDBUF / ChannelOption.SO_RCVBUF | 153600 | TCP send and receive buffer sizes. |
| ChannelOption.WRITE_BUFFER_WATER_MARK | low: 1048576 (1 MB), high: 67108864 (64 MB) | Netty write-buffer low/high water marks, protecting the server from being overwhelmed. |
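Most of these ChannelOption values map directly onto standard TCP socket options, so their effect can be demonstrated with plain JDK sockets, no Netty required. The values below mirror the table (TCP_NODELAY, SO_KEEPALIVE, SO_REUSEADDR, and the 153600-byte buffers); note that the OS may round the buffer sizes:

```java
import java.net.Socket;
import java.net.SocketException;

public class TcpOptionsDemo {
    // Apply the same TCP options the table describes to a plain JDK socket.
    public static Socket configure(Socket socket) throws SocketException {
        socket.setTcpNoDelay(true);          // disable Nagle's algorithm: lower RPC latency
        socket.setKeepAlive(true);           // keep long-lived RM/TM connections alive
        socket.setReuseAddress(true);        // allow fast rebinding of the address
        socket.setSendBufferSize(153600);    // TCP send buffer, as in the table
        socket.setReceiveBufferSize(153600); // TCP receive buffer, as in the table
        return socket;
    }
}
```

WRITE_BUFFER_WATER_MARK has no plain-socket equivalent: it is a Netty-level guard that flips `Channel.isWritable()` to false once the outbound buffer exceeds the high water mark, so well-behaved writers stop writing until it drains below the low water mark.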
1.2 ProtocolV1Decoder in Detail
Seata's RPC protocol header layout:
| Len | Param | Desc |
|-----|-------|------|
| 2B | Magic Code | 0xdada, the magic number |
| 1B | ProtocolVersion | 1; protocol version, used for incompatible upgrades |
| 4B | FullLength | Total length, used for frame splitting; includes the preceding 3 bytes and this field's own 4 bytes |
| 2B | HeadLength | Header length; includes the preceding 7 bytes, this field itself, and the HeadMap |
| 1B | Message type | Message type: request (one-way/two-way) / response / heartbeat / callback |
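Putting the header layout into code: the fixed fields described so far (magic code, version, full length, head length, message type) can be written and read back with a plain ByteBuffer. This is a sketch of the byte layout from the table, not Seata's actual ProtocolV1Encoder/ProtocolV1Decoder:

```java
import java.nio.ByteBuffer;

// Sketch of the fixed header fields from the table above:
// 2B magic (0xdada) | 1B version | 4B full length | 2B head length | 1B message type
public class ProtocolHeaderSketch {
    public static final short MAGIC = (short) 0xdada;

    public static ByteBuffer encode(byte version, int fullLength, short headLength, byte messageType) {
        ByteBuffer buf = ByteBuffer.allocate(10);
        buf.putShort(MAGIC);      // 2B magic code, used to reject non-Seata traffic
        buf.put(version);         // 1B protocol version
        buf.putInt(fullLength);   // 4B total frame length, used for frame splitting
        buf.putShort(headLength); // 2B header length (fixed part + head map)
        buf.put(messageType);     // 1B message type (request/response/heartbeat/...)
        buf.flip();
        return buf;
    }

    public static boolean hasValidMagic(ByteBuffer buf) {
        return buf.getShort(0) == MAGIC; // a decoder would reject the frame otherwise
    }
}
```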