Now that the server-side channel has been initialized, the next step is listening for connection events. Let's see what the reactor thread's endless loop does and trace the code into the run() method of the
NioEventLoop class:
//the event loop
@Override
protected void run() {
for (;;) {
try {
try {
//hasTasks(): returns true if the taskQueue or tailTasks queue has pending tasks, false if both are empty
//calculateStrategy: with pending tasks it returns the result of selectNow(), otherwise SelectStrategy.SELECT (-1)
switch (selectStrategy.calculateStrategy(selectNowSupplier, hasTasks())) {
case SelectStrategy.CONTINUE:
continue;
case SelectStrategy.BUSY_WAIT:
// fall-through to SELECT since the busy-wait is not supported with NIO
case SelectStrategy.SELECT:
//first poll the IO events of every channel registered with this reactor thread's selector
//wakenUp indicates whether a blocking select should be woken up; Netty resets wakenUp to false before each new loop iteration to mark the start of a new round
select(wakenUp.getAndSet(false));
if (wakenUp.get()) {
selector.wakeup();
}
// fall through
default:
}
} catch (IOException e) {
// If we receive an IOException here its because the Selector is messed up. Let's rebuild
// the selector and retry. https://github.com/netty/netty/issues/8566
rebuildSelector0();
handleLoopException(e);
continue;
}
cancelledKeys = 0;
needsToSelectAgain = false;
final int ioRatio = this.ioRatio;
if (ioRatio == 100) {
try {
processSelectedKeys();
} finally {
// Ensure we always run tasks.
runAllTasks();
}
} else {
final long ioStartTime = System.nanoTime();
try {
//2. process the channels on which network IO events occurred
processSelectedKeys();
} finally {
// Ensure we always run tasks.
final long ioTime = System.nanoTime() - ioStartTime;
//3. process the task queue
runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
}
}
} catch (Throwable t) {
handleLoopException(t);
}
// Always handle shutdown even if the loop processing threw an exception.
try {
if (isShuttingDown()) {
closeAll();
if (confirmShutdown()) {
return;
}
}
} catch (Throwable t) {
handleLoopException(t);
}
}
}
Back when the channel was being initialized, tasks placed in the taskQueue were executed as they arrived. Now that they have all finished, calculateStrategy returns -1 (SelectStrategy.SELECT) and the loop enters
select(wakenUp.getAndSet(false));
This method mainly takes care of running scheduled tasks and of the workaround for the empty-polling (JDK epoll spin) bug; a condensed sketch follows.
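Below is a condensed sketch of the core idea inside that select(...), written as if it were a method of NioEventLoop (so selector, hasTasks() and rebuildSelector() refer to the event loop's own members); names and details are simplified, not the verbatim source. The real threshold is io.netty.selectorAutoRebuildThreshold, 512 by default.
private void selectSketch(long scheduledDeadlineNanos) throws IOException {
    int selectCnt = 0;
    for (;;) {
        // never block past the next scheduled task's deadline
        long timeoutMillis = (scheduledDeadlineNanos - System.nanoTime() + 500000L) / 1000000L;
        if (timeoutMillis <= 0) {
            selector.selectNow();
            return;
        }
        int selectedKeys = selector.select(timeoutMillis);
        selectCnt++;
        if (selectedKeys != 0 || hasTasks()) {
            return; // IO events are ready or tasks arrived: let run() handle them
        }
        if (selectCnt >= 512) {
            // woke up 512 times in a row with nothing to do: most likely the epoll spin bug,
            // so rebuild the selector and re-register every channel on the new one
            rebuildSelector();
            return;
        }
    }
}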
Next, look at **processSelectedKeys();** and trace the code into:
private void processSelectedKeysOptimized() {
for (int i = 0; i < selectedKeys.size; ++i) {
final SelectionKey k = selectedKeys.keys[i];
// null out entry in the array to allow to have it GC'ed once the Channel close
// See https://github.com/netty/netty/issues/2363
selectedKeys.keys[i] = null;
final Object a = k.attachment();
if (a instanceof AbstractNioChannel) {
processSelectedKey(k, (AbstractNioChannel) a);
} else {
@SuppressWarnings("unchecked")
NioTask<SelectableChannel> task = (NioTask<SelectableChannel>) a;
processSelectedKey(k, task);
}
if (needsToSelectAgain) {
// null out entries in the array to allow to have it GC'ed once the Channel close
// See https://github.com/netty/netty/issues/2363
selectedKeys.reset(i + 1);
selectAgain();
i = -1;
}
}
}
This branch runs when select reports events. Looking at the code, at this point only the server-side ServerSocketChannel is registered and its attachment is an AbstractNioChannel, so execution takes **processSelectedKey(k, (AbstractNioChannel) a);** step into it:
private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
// k.isValid() tells whether this key is still valid
if (!k.isValid()) {
final EventLoop eventLoop;
try {
eventLoop = ch.eventLoop();
} catch (Throwable ignored) {
// If the channel implementation throws an exception because there is no event loop, we ignore this
// because we are only trying to determine if ch is registered to this event loop and thus has authority
// to close ch.
return;
}
// Only close ch if ch is still registered to this EventLoop. ch could have deregistered from the event loop
// and thus the SelectionKey could be cancelled as part of the deregistration process, but the channel is
// still healthy and should not be closed.
// See https://github.com/netty/netty/issues/5125
if (eventLoop != this || eventLoop == null) {
return;
}
// close the channel if the key is not valid anymore
unsafe.close(unsafe.voidPromise());
return;
}
try {
//get this key's ready-operation set
int readyOps = k.readyOps();
// System.out.println("readyOps--"+readyOps);
// We first need to call finishConnect() before try to trigger a read(...) or write(...) as otherwise
// the NIO JDK channel implementation may throw a NotYetConnectedException.
if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
// remove OP_CONNECT as otherwise Selector.select(..) will always return without blocking
// See https://github.com/netty/netty/issues/924
int ops = k.interestOps();
ops &= ~SelectionKey.OP_CONNECT;
k.interestOps(ops);
unsafe.finishConnect();
}
// Process OP_WRITE first as we may be able to write some queued buffers and so free memory.
if ((readyOps & SelectionKey.OP_WRITE) != 0) {
// Call forceFlush which will also take care of clear the OP_WRITE once there is nothing left to write
ch.unsafe().forceFlush();
}
// Also check for readOps of 0 to workaround possible JDK bug which may otherwise lead
// to a spin loop
if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
unsafe.read();
}
} catch (CancelledKeyException ignored) {
unsafe.close(unsafe.voidPromise());
}
}
This method looks very much like plain NIO. On the first pass the key is of course ready for the accept event, so unsafe.read() is called; here unsafe is a field of the ServerSocketChannel and the object is a NioMessageUnsafe instance, so follow it down:
@Override
public void read() {
assert eventLoop().inEventLoop();
final ChannelConfig config = config();
final ChannelPipeline pipeline = pipeline();
final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
allocHandle.reset(config);
boolean closed = false;
Throwable exception = null;
try {
try {
do {
int localRead = doReadMessages(readBuf);
if (localRead == 0) {
break;
}
if (localRead < 0) {
closed = true;
break;
}
allocHandle.incMessagesRead(localRead);
} while (allocHandle.continueReading());
} catch (Throwable t) {
exception = t;
}
int size = readBuf.size();
for (int i = 0; i < size; i ++) {
readPending = false;
pipeline.fireChannelRead(readBuf.get(i));
}
readBuf.clear();
allocHandle.readComplete();
pipeline.fireChannelReadComplete();
if (exception != null) {
closed = closeOnReadError(exception);
pipeline.fireExceptionCaught(exception);
}
if (closed) {
inputShutdown = true;
if (isOpen()) {
close(voidPromise());
}
}
} finally {
// Check if there is a readPending which was not processed yet.
// This could be for two reasons:
// * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
// * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
//
// See https://github.com/netty/netty/issues/2254
if (!readPending && !config.isAutoRead()) {
removeReadOp();
}
}
}
int localRead = doReadMessages(readBuf); traces into the following method of NioServerSocketChannel:
@Override
protected int doReadMessages(List<Object> buf) throws Exception {
SocketChannel ch = SocketUtils.accept(javaChannel());
try {
if (ch != null) {
buf.add(new NioSocketChannel(this, ch));
return 1;
}
} catch (Throwable t) {
logger.warn("Failed to create a new channel from an accepted socket.", t);
try {
ch.close();
} catch (Throwable t2) {
logger.warn("Failed to close a socket.", t2);
}
}
return 0;
}
SocketChannel ch = SocketUtils.accept(javaChannel()); accepts the client's SocketChannel.
buf.add(new NioSocketChannel(this, ch)); wraps the raw SocketChannel into Netty's NioSocketChannel and adds it to a list.
Netty's NioSocketChannel also has the read event as its interest op, it has its own pipeline (at this point only the head->tail nodes), and its unsafe field is an instance of
NioByteUnsafe; a simplified model is sketched below.
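A simplified model (an assumption for illustration, not Netty's verbatim source) of what that wrapping gives the accepted channel: it is switched to non-blocking mode, it remembers OP_READ as the interest op it will register later, and a fresh pipeline plus a NioByteUnsafe are created for it.
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

final class WrappedChildChannelSketch {
    final SocketChannel javaChannel;                 // the raw channel returned by accept()
    final int readInterestOp = SelectionKey.OP_READ; // what gets registered with the worker selector later

    WrappedChildChannelSketch(SocketChannel accepted) throws IOException {
        this.javaChannel = accepted;
        accepted.configureBlocking(false);           // AbstractNioChannel's constructor does this too
        // the real NioSocketChannel also creates its DefaultChannelPipeline (head<->tail)
        // and its NioByteUnsafe at this point
    }
}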
Then we leave doReadMessages(readBuf) and continue downwards.
pipeline.fireChannelRead(readBuf.get(i)); should look familiar: during server initialization it was fireChannelActive() that got called, and now the channelRead method is fired along the server
pipeline's handler chain head->nettyTestHendler->ServerBootstrapAcceptor->tail.
Execution reaches head first, which simply calls the next node's channelRead, so it arrives at our custom node, where we can do some work of our own (a sketch of such a handler follows the ServerBootstrapAcceptor code below); then it enters
the channelRead method of ServerBootstrapAcceptor (an inner class of ServerBootstrap), whose code is:
public void channelRead(ChannelHandlerContext ctx, Object msg) {
final Channel child = (Channel) msg;
child.pipeline().addLast(childHandler);
setChannelOptions(child, childOptions, logger);
for (Entry<AttributeKey<?>, Object> e: childAttrs) {
child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
try {
childGroup.register(child).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (!future.isSuccess()) {
forceClose(child, future.cause());
}
}
});
} catch (Throwable t) {
forceClose(child, t);
}
}
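As noted above, our own handler sits before ServerBootstrapAcceptor in the server pipeline, so its channelRead receives the same accepted child channel first. A hypothetical example of that handler (the class name and the logging are assumptions, not part of this walkthrough's actual code):
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class NettyTestHendler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // at this point in the server pipeline, msg is the freshly accepted NioSocketChannel
        if (msg instanceof Channel) {
            System.out.println("accepted child: " + ((Channel) msg).remoteAddress());
        }
        // forward the message; otherwise ServerBootstrapAcceptor never sees the child
        // and it will never be registered with the worker group
        ctx.fireChannelRead(msg);
    }
}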
child.pipeline().addLast(childHandler); adds a node to the child channel's handler chain, and that node is the one we defined ourselves:
.childHandler(new ChannelInitializer() {
@Override
protected void initChannel(NioSocketChannel nioSocketChannel) throws Exception {
nioSocketChannel.pipeline().addLast(new StringDecoder(),new NettyServerHendler());
}
});
Why this one? Let's analyze. When the bootstrap code above first ran, ServerBootstrap's childHandler field was assigned this new ChannelInitializer.
Later, when handlers were added to the server channel's pipeline, the following ran:
p.addLast(new ChannelInitializer<Channel>() {
@Override
public void initChannel(final Channel ch) throws Exception {
//System.out.println(ch==channel); true
final ChannelPipeline pipeline = ch.pipeline();
//System.out.println(pipeline==p); true
//config.handler() = the ChannelInitializer<ServerSocketChannel> we created ourselves
ChannelHandler handler = config.handler();
if (handler != null) {
pipeline.addLast(handler);
}
ch.eventLoop().execute(new Runnable() {
@Override
public void run() {
// System.out.println("executed");
//the bossGroup hands client connections over to the workerGroup
pipeline.addLast(new ServerBootstrapAcceptor(
ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs));
}
});
}
});
When the ServerBootstrapAcceptor node was added to that chain with
pipeline.addLast(new ServerBootstrapAcceptor(
ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs));
currentChildHandler was exactly the handler node we defined, so ServerBootstrapAcceptor's fields were initialized with that value, and currentChildGroup is the workerGroup.
So at this moment the NioSocketChannel's pipeline chain is head->ChannelInitializer->tail.
Next, look at:
childGroup.register(child).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (!future.isSuccess()) {
forceClose(child, future.cause());
}
}
});
This register call is essentially the same as the one made during server initialization.
An EventLoop is taken from the worker EventLoopGroup; it has its own selector (Netty's own, with a replaced selected-key data structure), the NioSocketChannel is registered to that selector, and the EventLoop's endless-loop thread runs tasks and handles IO. While that registration task runs, the ChannelInitializer's initChannel is executed and the initializer removes itself, so the NioSocketChannel's pipeline chain becomes
head->StringDecoder->NettyServerHendler->tail;
and the socket's selection key is then set to be interested in the read event.
Summary: when a new connection arrives, the channelRead method of ServerBootstrapAcceptor in the server's pipeline takes an event loop (with its own selector) out of the worker group, and that selector only cares about read/write. This is the reactor model: the server-side (boss) selector is only responsible for accepting connections and then hands each accepted socket over to the other selectors.
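To tie the summary together, here is a minimal boss/worker bootstrap sketch of that reactor split (the port number is an assumption; StringDecoder and NettyServerHendler are the handlers this walkthrough already uses):
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.string.StringDecoder;

public class ReactorServerSketch {
    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup bossGroup = new NioEventLoopGroup(1);   // its selector only accepts connections
        NioEventLoopGroup workerGroup = new NioEventLoopGroup();  // their selectors handle read/write
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<NioSocketChannel>() {
                        @Override
                        protected void initChannel(NioSocketChannel ch) {
                            // runs once the child is registered with a worker event loop,
                            // then the initializer removes itself from the pipeline
                            ch.pipeline().addLast(new StringDecoder(), new NettyServerHendler());
                        }
                    });
            bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}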
Of the two pictures, the first shows Netty's overall workflow and the second shows the startup flow and the components contained in the important classes.