This analysis is based on the RocketMQ 4.0 source code.
During broker startup, the BrokerStartup entry class calls BrokerController's initialize() method. As initialize() runs, it registers a dedicated processor for each kind of request.
SendMessageProcessor is the processor that actually stores incoming messages; its handling entry point is processRequest.
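The registration step above amounts to building a dispatch table keyed by request code. Here is a minimal, self-contained sketch of that idea; the class and method names below are illustrative, not RocketMQ's real API (the real code registers SendMessageProcessor under RequestCode.SEND_MESSAGE in a similar table).

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the request-code -> processor table that
// BrokerController.initialize() populates. Names are illustrative only.
public class ProcessorTable {
    interface RequestProcessor {
        String processRequest(String request);
    }

    private final Map<Integer, RequestProcessor> table = new HashMap<>();

    // initialize() registers one processor per request code
    public void register(int requestCode, RequestProcessor processor) {
        table.put(requestCode, processor);
    }

    // incoming commands are routed to the processor for their code
    public String dispatch(int requestCode, String request) {
        RequestProcessor p = table.get(requestCode);
        return p == null ? "NO_PROCESSOR" : p.processRequest(request);
    }

    public static void main(String[] args) {
        ProcessorTable t = new ProcessorTable();
        final int SEND_MESSAGE = 10; // assumed stand-in for RequestCode.SEND_MESSAGE
        t.register(SEND_MESSAGE, req -> "stored:" + req);
        System.out.println(t.dispatch(SEND_MESSAGE, "hello"));
    }
}
```

The point of the table is that the network layer stays generic: it only reads the request code from the wire and looks up whoever registered for it.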
Once the start flow kicks off, the broker begins listening for incoming calls:
org.apache.rocketmq.broker.BrokerController#start
↓
org.apache.rocketmq.remoting.netty.NettyRemotingServer#start
↓
// install the Netty NIO listener handlers
ServerBootstrap childHandler =
this.serverBootstrap.group(this.eventLoopGroupBoss, this.eventLoopGroupSelector)
.channel(useEpoll() ? EpollServerSocketChannel.class : NioServerSocketChannel.class)
.option(ChannelOption.SO_BACKLOG, 1024)
.option(ChannelOption.SO_REUSEADDR, true)
.option(ChannelOption.SO_KEEPALIVE, false)
.childOption(ChannelOption.TCP_NODELAY, true)
.childOption(ChannelOption.SO_SNDBUF, nettyServerConfig.getServerSocketSndBufSize())
.childOption(ChannelOption.SO_RCVBUF, nettyServerConfig.getServerSocketRcvBufSize())
.localAddress(new InetSocketAddress(this.nettyServerConfig.getListenPort()))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline()
.addLast(defaultEventExecutorGroup, HANDSHAKE_HANDLER_NAME,
new HandshakeHandler(TlsSystemConfig.tlsMode))
.addLast(defaultEventExecutorGroup,
new NettyEncoder(),
new NettyDecoder(),
new IdleStateHandler(0, 0, nettyServerConfig.getServerChannelMaxIdleTimeSeconds()),
new NettyConnectManageHandler(),
new NettyServerHandler()
);
}
});
NettyServerHandler, declared as class NettyServerHandler extends SimpleChannelInboundHandler&lt;RemotingCommand&gt;, is the inbound handler that receives messages off the wire; it ultimately hands REQUEST_COMMAND-type messages to SendMessageProcessor.
(As an aside: I honestly cannot understand why RocketMQ named this processor SendMessageProcessor. Why "Send"? From the broker's perspective this is a processor that receives messages, not one that sends them. Perhaps the name was chosen from the client's point of view; I don't know.)
The main flow then reaches
private RemotingCommand sendMessage(final ChannelHandlerContext ctx,
final RemotingCommand request,
final SendMessageContext sendMessageContext,
final SendMessageRequestHeader requestHeader) throws RemotingCommandException {
This method checks whether the message is transactional and handles the two cases differently.
Ordinary and transactional messages are both wrapped into a MessageExtBrokerInner object, and both ultimately call DefaultMessageStore's putMessage method to handle the message:
public PutMessageResult putMessage(MessageExtBrokerInner msg)
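The transaction check described above boils down to testing a flag in the message's sysFlag bits and routing prepared (half) transactional messages differently, with both paths converging on a single put call. The sketch below is a hypothetical illustration of that branching; the constant value and names are assumptions, not RocketMQ's real definitions.

```java
// Hypothetical sketch of the branch inside sendMessage(): prepared
// transactional messages take a different path, but both paths end in a put.
// The flag's bit position and the method names are illustrative assumptions.
public class SendBranch {
    static final int TRANSACTION_PREPARED_FLAG = 1 << 2; // assumed bit position

    static String route(int sysFlag) {
        boolean prepared = (sysFlag & TRANSACTION_PREPARED_FLAG) != 0;
        // both branches ultimately wrap the message into MessageExtBrokerInner
        // and invoke the message store's put
        return prepared ? "prepareMessage -> putMessage" : "putMessage";
    }

    public static void main(String[] args) {
        System.out.println(route(0));
        System.out.println(route(TRANSACTION_PREPARED_FLAG));
    }
}
```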
DefaultMessageStore in turn calls CommitLog's
public PutMessageResult putMessage(final MessageExtBrokerInner msg)
method to persist the message. CommitLog maintains a queue of references to the commitlog files; the code takes the last commitlog file and writes the data into it.
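The "always write to the last file, roll to a new one when full" pattern can be sketched as below. This is an illustrative toy, not the real CommitLog/MappedFileQueue API: real segments are fixed-size memory-mapped files, modeled here by small heap ByteBuffers.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not the real CommitLog API) of a queue of fixed-size
// log segments where appends always go to the last segment, and a new
// segment is created when the current one cannot hold the record.
public class SegmentQueue {
    private static final int SEGMENT_SIZE = 16; // real commitlog files default to 1 GB

    private final List<ByteBuffer> segments = new ArrayList<>();

    public void append(byte[] data) {
        ByteBuffer last = segments.isEmpty() ? null : segments.get(segments.size() - 1);
        if (last == null || last.remaining() < data.length) {
            // real code maps a brand-new commitlog file here
            last = ByteBuffer.allocate(SEGMENT_SIZE);
            segments.add(last);
        }
        last.put(data);
    }

    public int segmentCount() {
        return segments.size();
    }

    public static void main(String[] args) {
        SegmentQueue q = new SegmentQueue();
        q.append(new byte[8]);
        q.append(new byte[8]); // fills the first 16-byte segment
        q.append(new byte[8]); // forces a roll to a second segment
        System.out.println(q.segmentCount());
    }
}
```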
The key call,
result = mappedFile.appendMessage(msg, this.appendMessageCallback);
appends the data into the memory mapping of the real log file (MappedFile calls back into CommitLog's inner class DefaultAppendMessageCallback to do the actual serialization) and wraps the outcome in an AppendMessageResult object.
If that succeeds, the flush service we so often hear about is invoked to perform the actual write to disk.
Taking GroupCommitService as an example, the core code traces down to
CommitLog.this.mappedFileQueue.flush(0);
which lives in MappedFileQueue:
public boolean flush(final int flushLeastPages) {
boolean result = true;
MappedFile mappedFile = this.findMappedFileByOffset(this.flushedWhere, false);
if (mappedFile != null) {
long tmpTimeStamp = mappedFile.getStoreTimestamp();
int offset = mappedFile.flush(flushLeastPages);
long where = mappedFile.getFileFromOffset() + offset;
result = where == this.flushedWhere;
this.flushedWhere = where;
if (0 == flushLeastPages) {
this.storeTimestamp = tmpTimeStamp;
}
}
return result;
}
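The offset bookkeeping in flush() above is worth unpacking: each mapped file covers a fixed-size slice of one global commitlog offset space, so the global flushed position (flushedWhere) is the file's base offset (getFileFromOffset()) plus the in-file flush position. A small sketch of that arithmetic, with hypothetical helper names:

```java
// Illustrative arithmetic behind flushedWhere. Helper names are hypothetical;
// only the offset math mirrors what MappedFileQueue.flush() computes.
public class OffsetMath {
    static final long FILE_SIZE = 1024L * 1024 * 1024; // default commitlog file size, 1 GB

    // Base offset of the file that a given global offset falls into.
    static long fileBaseOffset(long globalOffset) {
        return globalOffset - (globalOffset % FILE_SIZE);
    }

    // where = mappedFile.getFileFromOffset() + offset, as in flush() above.
    static long globalFlushedOffset(long fileFromOffset, int inFileFlushPos) {
        return fileFromOffset + inFileFlushPos;
    }

    public static void main(String[] args) {
        long base = fileBaseOffset(FILE_SIZE + 123); // offset inside the second file
        System.out.println(base == FILE_SIZE);
        System.out.println(globalFlushedOffset(base, 123));
    }
}
```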
It then calls MappedFile's flush method:
public int flush(final int flushLeastPages) {
if (this.isAbleToFlush(flushLeastPages)) {
if (this.hold()) {
int value = getReadPosition();
try {
//We only append data to fileChannel or mappedByteBuffer, never both.
if (writeBuffer != null || this.fileChannel.position() != 0) {
this.fileChannel.force(false);
} else {
this.mappedByteBuffer.force();
}
} catch (Throwable e) {
log.error("Error occurred when force data to disk.", e);
}
this.flushedPosition.set(value);
this.release();
} else {
log.warn("in flush, hold failed, flush offset = " + this.flushedPosition.get());
this.flushedPosition.set(getReadPosition());
}
}
return this.getFlushedPosition();
}
Ultimately this reaches FileChannel.force() (or MappedByteBuffer.force() on the pure-mmap path).
force() pushes any data not yet written to disk out of the buffers and onto the disk. With that, the broker's flow of writing a received message to a file is complete.
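Both flush paths the MappedFile code chooses between are plain Java NIO and can be demonstrated in isolation. The following self-contained demo (file name and helper are mine, not RocketMQ's) maps a temp file, writes through the mapping, and forces both the mapping and the channel to disk:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Minimal demo of the two flush calls seen in MappedFile.flush():
// MappedByteBuffer.force() for the mmap path, FileChannel.force(false)
// for the fileChannel path.
public class FlushDemo {
    public static String writeAndFlush(Path path) throws IOException {
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer mapped = ch.map(FileChannel.MapMode.READ_WRITE, 0, 16);
            mapped.put("hello".getBytes(StandardCharsets.US_ASCII));
            mapped.force();  // mmap path: flush dirty mapped pages to disk
            ch.force(false); // channel path: flush file data (false = skip metadata)
        }
        byte[] all = Files.readAllBytes(path);
        return new String(all, 0, 5, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("flushdemo", ".log");
        System.out.println(writeAndFlush(p));
        Files.deleteIfExists(p);
    }
}
```

The `false` argument to FileChannel.force mirrors the real code: it asks the OS to flush the file's data but not necessarily its metadata, which is cheaper and sufficient for the commitlog's durability needs.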