The previous article examined how Netty's reactor thread runs its event loop; this article explores how the corresponding IO events are processed.
The accept event: receiving connections
The entry point is NioEventLoop's processSelectedKey:
if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
    unsafe.read();
}
For NioServerSocketChannel, the corresponding unsafe implementation is NioMessageUnsafe, an inner class of AbstractNioMessageChannel where the read method below is defined.
NioMessageUnsafe.read
@Override
public void read() {
    assert eventLoop().inEventLoop();
    // Get the channel config
    final ChannelConfig config = config();
    // Get the ChannelPipeline
    final ChannelPipeline pipeline = pipeline();
    // Get a reusable RecvByteBufAllocator.Handle
    final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
    // Set the relevant limits, e.g. at most 16 reads per pass
    allocHandle.reset(config);
    boolean closed = false;
    Throwable exception = null;
    try {
        try {
            do {
                // Accept a connection; readBuf is a List<Object> field of this unsafe
                int localRead = doReadMessages(readBuf);
                // 0 means nothing was read
                if (localRead == 0) {
                    break;
                }
                if (localRead < 0) {
                    closed = true;
                    break;
                }
                // Bump the read counter; on the server side this is +1 per accept
                allocHandle.incMessagesRead(localRead);
            } while (allocHandle.continueReading());
        } catch (Throwable t) {
            exception = t;
        }
        // Fire a channelRead event for each accepted channel, one by one
        int size = readBuf.size();
        for (int i = 0; i < size; i ++) {
            readPending = false;
            pipeline.fireChannelRead(readBuf.get(i));
        }
        // Clear the buffer once done
        readBuf.clear();
        // Mark the read as complete
        allocHandle.readComplete();
        // Propagate the readComplete event; with auto-read enabled this arms the
        // next read, i.e. re-registers the channel's read interest with the selector
        pipeline.fireChannelReadComplete();
        // Check whether an exception occurred
        if (exception != null) {
            closed = closeOnReadError(exception);
            pipeline.fireExceptionCaught(exception);
        }
        // Check whether the channel should be closed
        if (closed) {
            inputShutdown = true;
            if (isOpen()) {
                close(voidPromise());
            }
        }
    } finally {
        // Check if there is a readPending which was not processed yet.
        // This could be for two reasons:
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
        //
        // See https://github.com/netty/netty/issues/2254
        if (!readPending && !config.isAutoRead()) {
            removeReadOp();
        }
    }
}
allocHandle.reset(config)
The reset method of DefaultMaxMessagesRecvByteBufAllocator mainly sets the maximum number of messages that may be read in one pass; the default is 16:
@Override
public void reset(ChannelConfig config) {
    this.config = config;
    maxMessagePerRead = maxMessagesPerRead();
    totalMessages = totalBytesRead = 0;
}
How should this maximum be understood? It caps how many messages (here, accepted connections) one channel may consume in a single event-loop pass, so that a busy channel cannot monopolize the reactor thread.
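As a hedged sketch of how the cap can be tuned (FixedRecvByteBufAllocator and ChannelOption.RCVBUF_ALLOCATOR are real Netty 4.1 API; the values and class name below are illustrative), the allocator carrying the limit can be supplied on the bootstrap:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.FixedRecvByteBufAllocator;

public class MaxReadConfig {
    public static void main(String[] args) {
        ServerBootstrap b = new ServerBootstrap();
        // maxMessagesPerRead(16) is the value allocHandle.reset(config)
        // later picks up as the per-pass cap; 16 mirrors the default.
        b.option(ChannelOption.RCVBUF_ALLOCATOR,
                 new FixedRecvByteBufAllocator(1024).maxMessagesPerRead(16));
    }
}
```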
Server side: doReadMessages(readBuf)
Since the server channel registered for the accept event earlier, "reading a message" here means accepting a SocketChannel and adding it to the list:
@Override
protected int doReadMessages(List<Object> buf) throws Exception {
    SocketChannel ch = SocketUtils.accept(javaChannel());
    try {
        if (ch != null) {
            buf.add(new NioSocketChannel(this, ch));
            return 1;
        }
    } catch (Throwable t) {
        ... // exception handling omitted
    }
    return 0;
}
Note that the parent of the new NioSocketChannel is the NioServerSocketChannel that accepted it (the `this` passed to the constructor).
allocHandle.continueReading()
DefaultMaxMessagesRecvByteBufAllocator's continueReading:
@Override
public boolean continueReading(UncheckedBooleanSupplier maybeMoreDataSupplier) {
    return config.isAutoRead() &&
           (!respectMaybeMoreData || maybeMoreDataSupplier.get()) &&
           totalMessages < maxMessagePerRead &&
           totalBytesRead > 0;
}
Another read iteration is allowed only if all of the following hold:
- auto-read is enabled
- there may be more data available to read
- fewer than maxMessagesPerRead (default 16) messages have been read so far
- some data was actually read
Server side: fireChannelRead
Because fireChannelRead walks the pipeline and invokes each ChannelHandler in turn, it runs right after the NIO accept event.
The first handler to process the read event is HeadContext, which then propagates the event down the pipeline.
DefaultChannelPipeline's HeadContext:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ctx.fireChannelRead(msg);
}
The event eventually reaches the channelRead method of ServerBootstrap's inner class ServerBootstrapAcceptor:
@Override
@SuppressWarnings("unchecked")
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // At this point msg is the freshly accepted NioSocketChannel
    final Channel child = (Channel) msg;
    // Add the childHandler configured on the ServerBootstrap
    child.pipeline().addLast(childHandler);
    // Apply the configured child options and attributes
    setChannelOptions(child, childOptions, logger);
    for (Entry<AttributeKey<?>, Object> e: childAttrs) {
        child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
    }
    try {
        // Register the SocketChannel with a reactor thread from the child group
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}
Server-side accept: summary
- Essentially this is a thin wrapper around NIO's accept.
- After a connection is accepted, a read event (fireChannelRead) and a read-complete event (fireChannelReadComplete) are fired; for the server channel, the corresponding selection key is set back to listening for read (accept) interest. A minimal bootstrap tying the whole flow together is sketched below.
The read event
For read events the entry point is likewise NioEventLoop's processSelectedKeysOptimized, but this time k.attachment() is a NioSocketChannel, whose unsafe is an AbstractNioByteChannel$NioByteUnsafe instance.
@Override
public final void read() {
    final ChannelConfig config = config();
    if (shouldBreakReadReady(config)) {
        clearReadPending();
        return;
    }
    final ChannelPipeline pipeline = pipeline();
    final ByteBufAllocator allocator = config.getAllocator();
    final RecvByteBufAllocator.Handle allocHandle = recvBufAllocHandle();
    allocHandle.reset(config);
    // The ByteBuf the data will be read into
    ByteBuf byteBuf = null;
    boolean close = false;
    try {
        do {
            // For IO this is a PooledUnsafeDirectByteBuf by default, which makes
            // exchanging data with the channel more efficient
            byteBuf = allocHandle.allocate(allocator);
            allocHandle.lastBytesRead(doReadBytes(byteBuf));
            if (allocHandle.lastBytesRead() <= 0) {
                // Nothing readable, exit directly
                byteBuf.release();
                byteBuf = null;
                close = allocHandle.lastBytesRead() < 0;
                if (close) {
                    // There is nothing left to read as we received an EOF.
                    readPending = false;
                }
                break;
            }
            // One message has been read from the channel
            allocHandle.incMessagesRead(1);
            readPending = false;
            // Propagate the event
            pipeline.fireChannelRead(byteBuf);
            byteBuf = null;
        } while (allocHandle.continueReading());
        allocHandle.readComplete();
        pipeline.fireChannelReadComplete();
        if (close) {
            closeOnRead(pipeline);
        }
    } catch (Throwable t) {
        handleReadException(pipeline, byteBuf, t, close, allocHandle);
    } finally {
        // Check if there is a readPending which was not processed yet.
        // This could be for two reasons:
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
        //
        // See https://github.com/netty/netty/issues/2254
        if (!readPending && !config.isAutoRead()) {
            removeReadOp();
        }
    }
}
From the read path above: the default ByteBuf is 1024 bytes and a PooledUnsafeDirectByteBuf. Pooling plus direct memory is used because the buffer exchanges data with the channel, and direct memory makes that exchange more efficient.
So a single IO event reads at most 16 * 1024 bytes = 16 KB by default, and each iteration calls fireChannelRead to propagate the data. (Netty's handling of TCP sticky/half packets is not covered at this step.)
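If the 1024-byte starting size does not suit a workload, the receive allocator can be swapped per child channel. A hedged sketch (AdaptiveRecvByteBufAllocator and its three-argument constructor are real Netty API; the 64/1024/65536 min/initial/max values mirror the defaults this article describes):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.AdaptiveRecvByteBufAllocator;
import io.netty.channel.ChannelOption;

public class RecvBufSizing {
    public static void main(String[] args) {
        ServerBootstrap b = new ServerBootstrap();
        // Adaptive allocator: grows or shrinks the next size guess based on
        // how full the previous read's buffer was.
        b.childOption(ChannelOption.RCVBUF_ALLOCATOR,
                      new AdaptiveRecvByteBufAllocator(64, 1024, 65536));
    }
}
```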
Summary
- This article analyzed how the accept and read events are handled.
- NioServerSocketChannel mainly handles accept events and propagates what it reads down the pipeline.
- For read events, propagation starts at HeadContext and moves backward through the pipeline.
Follow the author's WeChat official account: 六点A君. Let's study Netty together.