How Netty Accepts New Connections

The previous section covered the server startup flow, during which a NioServerSocketChannel was created. It holds several important pieces of state:

  • java.nio.channels.ServerSocketChannel: the Java NIO class, registered with the NioEventLoop's java.nio.channels.Selector for the OP_ACCEPT event.

  • NioEventLoop: its run() method does two things

    ① fetch tasks from the task queue and execute them

    ② call Selector#select to wait for IO events

  • The pipeline, a DefaultChannelPipeline: HeadContext<=>LoggingHandler<=>ServerBootstrapAcceptor<=>TailContext, which handles and intercepts inbound and outbound events.
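The registration in the first bullet can be sketched with plain Java NIO. This is a minimal, standalone illustration; the string attachment stands in for the NioServerSocketChannel that Netty actually attaches to the SelectionKey:

```java
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class AcceptRegistration {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        // a channel must be non-blocking before it can be registered with a Selector
        server.configureBlocking(false);
        // register for OP_ACCEPT, as NioServerSocketChannel does during startup;
        // Netty passes the NioServerSocketChannel itself as the third (attachment) argument
        SelectionKey key = server.register(selector, SelectionKey.OP_ACCEPT, "attachment-stand-in");
        System.out.println(key.interestOps() == SelectionKey.OP_ACCEPT); // prints true
        server.close();
        selector.close();
    }
}
```

Later, processSelectedKeys() reads this attachment back with k.attachment().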

run()

From the earlier section on Netty's thread-pool internals we know that NioEventLoop's run() method is responsible for listening for IO events, so that is a good place to set a breakpoint:

public final class NioEventLoop extends SingleThreadEventLoop {
    private Selector selector;
    private Selector unwrappedSelector;

    private final IntSupplier selectNowSupplier = new IntSupplier() {
        @Override
        public int get() throws Exception {
            return selectNow();
        }
    };
    int selectNow() throws IOException {
        return selector.selectNow();
    }

    @Override
    protected void run() {
        // counts consecutive empty spins: no IO to process and no tasks executed
        int selectCnt = 0;
        for (;;) {
            try {
                int strategy;
                try {
                    // hasTasks() checks whether the task queue has pending tasks
                    // if hasTasks() is true, return selectNowSupplier.get(); otherwise return SelectStrategy.SELECT
                    strategy = selectStrategy.calculateStrategy(selectNowSupplier, hasTasks());
                    switch (strategy) {
                    case SelectStrategy.CONTINUE: // -2
                        continue;
                    case SelectStrategy.BUSY_WAIT: // -3
                        // fall-through to SELECT since the busy-wait is not supported with NIO

                    case SelectStrategy.SELECT: // -1
                        // deadline of the next scheduled task, in nanos
                        long curDeadlineNanos = nextScheduledTaskDeadlineNanos();
                        if (curDeadlineNanos == -1L) {
                            curDeadlineNanos = NONE; // nothing on the calendar
                        }
                        nextWakeupNanos.set(curDeadlineNanos);
                        try {
                            // check again whether any tasks have arrived
                            if (!hasTasks()) {
                                // key: call select()
                                strategy = select(curDeadlineNanos);
                            }
                        } finally {
                            // This update is just to help block unnecessary selector wakeups
                            // so use of lazySet is ok (no race condition)
                            nextWakeupNanos.lazySet(AWAKE);
                        }
                        // fall through
                    default:
                    }
                } catch (IOException e) {
                    // If we receive an IOException here its because the Selector is messed up. Let's rebuild
                    // the selector and retry. https://github.com/netty/netty/issues/8566
                    // on exception, rebuild the Selector and re-register its events
                    rebuildSelector0();
                    selectCnt = 0;
                    handleLoopException(e);
                    continue;
                }
                // increment the spin counter
                selectCnt++;
                cancelledKeys = 0;
                needsToSelectAgain = false;
                // controls the share of loop time spent processing IO events, 50% by default:
                // half the time for IO events, half for the tasks in taskQueue
                final int ioRatio = this.ioRatio;
                boolean ranTasks;
                if (ioRatio == 100) { 
                    try {
                        if (strategy > 0) {
                            // process IO events
                            processSelectedKeys();
                        }
                    } finally {
                        // Ensure we always run tasks.
                        // key: run all tasks in taskQueue
                        ranTasks = runAllTasks();
                    }
                } else if (strategy > 0) { // if ioRatio != 100, process IO events first
                    final long ioStartTime = System.nanoTime();
                    try {
                        // process IO events
                        processSelectedKeys();
                    } finally {
                        // measure how long IO processing took
                        // Ensure we always run tasks.
                        final long ioTime = System.nanoTime() - ioStartTime;
                        // key: ioTime * (100 - ioRatio) / ioRatio is the time budget for queued tasks
                        ranTasks = runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
                    }
                } else {
                    // 0 means run only a minimum number of tasks (the deadline is checked every 64 tasks)
                    ranTasks = runAllTasks(0); // This will run the minimum number of tasks
                }
                // tasks were run from the queue, or there were IO events
                if (ranTasks || strategy > 0) {
                    if (selectCnt > MIN_PREMATURE_SELECTOR_RETURNS && logger.isDebugEnabled()) {
                        logger.debug("Selector.select() returned prematurely {} times in a row for Selector {}.",
                                selectCnt - 1, selector);
                    }
                    selectCnt = 0;
                } else if (unexpectedSelectorWakeup(selectCnt)) { // Unexpected wakeup (unusual case)
                    selectCnt = 0;
                    // reaching here means (!ranTasks && strategy <= 0): no tasks ran and no IO event fired,
                    // so unexpectedSelectorWakeup(selectCnt) is executed
                    // if selectCnt reaches a threshold (512 by default), the Selector is rebuilt and the
                    // events re-registered, guarding against the JDK epoll spin bug:
                    // a blocking selector.select() can keep returning 0 immediately (e.g. when the OS
                    // notices a broken socket) without any event to process; with the task queue also
                    // empty, the loop would then spin forever at 100% CPU.
                    // Netty's workaround is to count these empty spins in selectCnt.
                }
            } catch (CancelledKeyException e) {
                // Harmless exception - log anyway
                if (logger.isDebugEnabled()) {
                    logger.debug(CancelledKeyException.class.getSimpleName() + " raised by a Selector {} - JDK bug?",
                            selector, e);
                }
            } catch (Error e) {
                throw (Error) e;
            } catch (Throwable t) {
                handleLoopException(t);
            } finally {
                // Always handle shutdown even if the loop processing threw an exception.
                try {
                    if (isShuttingDown()) {
                        closeAll();
                        if (confirmShutdown()) {
                            return;
                        }
                    }
                } catch (Error e) {
                    throw (Error) e;
                } catch (Throwable t) {
                    handleLoopException(t);
                }
            }
        }
    }
}
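The ioRatio budget in the run() loop above is plain arithmetic; here is a minimal sketch (the helper method name is invented for illustration):

```java
public class IoRatioBudget {
    // mirrors runAllTasks(ioTime * (100 - ioRatio) / ioRatio) in NioEventLoop.run():
    // with the default ioRatio of 50, the task budget equals the time just spent on IO
    static long taskBudgetNanos(long ioTimeNanos, int ioRatio) {
        return ioTimeNanos * (100 - ioRatio) / ioRatio;
    }

    public static void main(String[] args) {
        System.out.println(taskBudgetNanos(1_000_000L, 50)); // 1000000: tasks get as long as IO took
        System.out.println(taskBudgetNanos(1_000_000L, 80)); // 250000: IO-heavy, tasks get a quarter
    }
}
```

Raising ioRatio therefore shrinks the task budget relative to IO time; ioRatio == 100 skips the budget entirely and runs all tasks, as the branch above shows.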

processSelectedKeys()

Next, look at how processSelectedKeys() handles the IO events:

public final class NioEventLoop extends SingleThreadEventLoop {
    private void processSelectedKeys() {
        if (selectedKeys != null) {
            // with Netty's selected-key-set optimization enabled (the default), selectedKeys is non-null
            processSelectedKeysOptimized();
        } else {
            processSelectedKeysPlain(selector.selectedKeys());
        }
    }

    private void processSelectedKeysOptimized() {
        for (int i = 0; i < selectedKeys.size; ++i) {
            final SelectionKey k = selectedKeys.keys[i];
            // null out entry in the array to allow to have it GC'ed once the Channel close
            // See https://github.com/netty/netty/issues/2363
            selectedKeys.keys[i] = null;
            // take the attachment out of the SelectionKey
            // from the server-startup section we know that the native
            // java.nio.channels.ServerSocketChannel was registered with the Selector using the
            // NioServerSocketChannel as its attachment, so what we get back here is Netty's NioServerSocketChannel
            final Object a = k.attachment();

            if (a instanceof AbstractNioChannel) {
                // this is the branch taken here
                processSelectedKey(k, (AbstractNioChannel) a);
            } else {
                @SuppressWarnings("unchecked")
                NioTask<SelectableChannel> task = (NioTask<SelectableChannel>) a;
                processSelectedKey(k, task);
            }

            if (needsToSelectAgain) {
                // null out entries in the array to allow to have it GC'ed once the Channel close
                // See https://github.com/netty/netty/issues/2363
                selectedKeys.reset(i + 1);

                selectAgain();
                i = -1;
            }
        }
    }

    private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
        // the NioUnsafe retrieved here is io.netty.channel.nio.AbstractNioMessageChannel.NioMessageUnsafe
        final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
        if (!k.isValid()) {
            // some validity checks, omitted...
        }
        try {
            // get the set of ready operations
            int readyOps = k.readyOps();
            // CONNECT event
            if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
                // remove OP_CONNECT as otherwise Selector.select(..) will always return without blocking
                // See https://github.com/netty/netty/issues/924
                int ops = k.interestOps();
                ops &= ~SelectionKey.OP_CONNECT;
                k.interestOps(ops);

                unsafe.finishConnect();
            }

            // WRITE event
            if ((readyOps & SelectionKey.OP_WRITE) != 0) {
                // Call forceFlush which will also take care of clear the OP_WRITE once there is nothing left to write
                ch.unsafe().forceFlush();
            }

            // READ or ACCEPT event
            // ACCEPT: a new connection is pending
            // READ: data is readable
            if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
                unsafe.read();
            }
        } catch (CancelledKeyException ignored) {
            unsafe.close(unsafe.voidPromise());
        }
    }
}
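The branch that matters for accepting new connections is the last one. The bitmask test can be tried standalone (the helper method name is made up for illustration):

```java
import java.nio.channels.SelectionKey;

public class ReadyOpsCheck {
    // the same bitmask test processSelectedKey() uses to decide whether to call unsafe.read():
    // both OP_ACCEPT (on the server channel) and OP_READ (on a child channel) land in this branch
    static boolean triggersRead(int readyOps) {
        return (readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0;
    }

    public static void main(String[] args) {
        System.out.println(triggersRead(SelectionKey.OP_ACCEPT)); // true: a new connection
        System.out.println(triggersRead(SelectionKey.OP_READ));   // true: readable data
        System.out.println(triggersRead(SelectionKey.OP_WRITE));  // false: handled by forceFlush instead
    }
}
```

So the same unsafe.read() call handles both cases; which unsafe it dispatches to is what differs, as the next section shows.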

Different events are handled with different logic, and the actual handling is delegated to the Channel's unsafe. For accepting a new connection, the unsafe here is the one held by NioServerSocketChannel, i.e. io.netty.channel.nio.AbstractNioMessageChannel.NioMessageUnsafe:

public abstract class AbstractNioMessageChannel extends AbstractNioChannel {
    private final class NioMessageUnsafe extends AbstractNioUnsafe {
        private final List<Object> readBuf = new ArrayList<Object>();
        @Override
        public void read() {
            assert eventLoop().inEventLoop();
            final ChannelConfig config = config();
            // the pipeline here is: HeadContext<=>LoggingHandler<=>ServerBootstrapAcceptor<=>TailContext
            final ChannelPipeline pipeline = pipeline();
            final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
            allocHandle.reset(config);

            boolean closed = false;
            Throwable exception = null;
            try {
                try {
                    do {
                        // ① read messages
                        int localRead = doReadMessages(readBuf);
                        if (localRead == 0) {
                            break;
                        }
                        if (localRead < 0) {
                            closed = true;
                            break;
                        }

                        allocHandle.incMessagesRead(localRead);
                    } while (continueReading(allocHandle));
                } catch (Throwable t) {
                    exception = t;
                }

                int size = readBuf.size();
                for (int i = 0; i < size; i ++) {
                    readPending = false;
                    // ② fire channelRead()
                    pipeline.fireChannelRead(readBuf.get(i));
                }
                readBuf.clear();
                allocHandle.readComplete();
                pipeline.fireChannelReadComplete();

                if (exception != null) {
                    closed = closeOnReadError(exception);

                    pipeline.fireExceptionCaught(exception);
                }

                if (closed) {
                    inputShutdown = true;
                    if (isOpen()) {
                        close(voidPromise());
                    }
                }
            } finally {
                // Check if there is a readPending which was not processed yet.
                // This could be for two reasons:
                // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
                // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
                //
                // See https://github.com/netty/netty/issues/2254
                if (!readPending && !config.isAutoRead()) {
                    removeReadOp();
                }
            }
        }
    }
}

This does two main things:

  • doReadMessages(readBuf): read the messages

  • pipeline.fireChannelRead(readBuf.get(i)): trigger each ChannelHandler's channelRead() method, i.e. propagate the read result readBuf.get(i) through the ChannelPipeline

First, doReadMessages(readBuf):

public class NioServerSocketChannel extends AbstractNioMessageChannel
                             implements io.netty.channel.socket.ServerSocketChannel {
    @Override
    protected int doReadMessages(List<Object> buf) throws Exception {
        // this returns a java.nio.channels.SocketChannel
        SocketChannel ch = SocketUtils.accept(javaChannel());

        try {
            if (ch != null) {
                // wrap it in a Netty NioSocketChannel, passing in the Java NIO java.nio.channels.SocketChannel
                buf.add(new NioSocketChannel(this, ch));
                return 1;
            }
        } catch (Throwable t) {
            logger.warn("Failed to create a new channel from an accepted socket.", t);

            try {
                ch.close();
            } catch (Throwable t2) {
                logger.warn("Failed to close a socket.", t2);
            }
        }

        return 0;
    }   
}
public final class SocketUtils {
    public static SocketChannel accept(final ServerSocketChannel serverSocketChannel) throws IOException {
        try {
            return AccessController.doPrivileged(new PrivilegedExceptionAction<SocketChannel>() {
                @Override
                public SocketChannel run() throws IOException {
                    // call the native accept() method to obtain a SocketChannel
                    return serverSocketChannel.accept();
                }
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause();
        }
    }
}

As you can see, Netty ultimately calls the native Java ServerSocketChannel's accept() method to obtain a SocketChannel,
and then binds that SocketChannel to a Netty NioSocketChannel.
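Stripped of Netty, the accept step looks like this in plain Java NIO (the class name is invented; an in-process client is connected first so that accept() has something to return):

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class PlainAccept {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        // connect a client so the pending-connection queue is non-empty
        SocketChannel client = SocketChannel.open((InetSocketAddress) server.getLocalAddress());
        // this is the call doReadMessages() ultimately makes via SocketUtils.accept();
        // Netty then wraps the returned SocketChannel in a NioSocketChannel
        SocketChannel accepted = server.accept();
        System.out.println(accepted != null); // prints true
        accepted.close();
        client.close();
        server.close();
    }
}
```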

The creation flow of NioSocketChannel is similar to that of NioServerSocketChannel.

Creating a NioSocketChannel mainly does the following:

  1. The NioSocketChannel holds a native java.nio.channels.SocketChannel

  2. The event to listen for is recorded as the READ event

  3. The Channel is assigned an id

  4. The Channel gets an unsafe: io.netty.channel.nio.AbstractNioByteChannel.NioByteUnsafe

  5. The Channel gets a pipeline (DefaultChannelPipeline); by default this is a doubly linked list: HeadContext<=>TailContext

At this point the NioSocketChannel is created, but its pipeline still contains only the head and tail handlers, so it cannot process messages yet.

Now continue with pipeline.fireChannelRead(readBuf.get(i)).

The pipeline here is the ChannelPipeline of the NioServerSocketChannel;
after server startup its doubly linked list is HeadContext<=>LoggingHandler<=>ServerBootstrapAcceptor<=>TailContext.
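Before reading the real ServerBootstrapAcceptor, the inbound propagation can be modeled with a toy doubly linked pipeline. These are not Netty's classes; everything below is simplified and invented for illustration:

```java
import java.util.function.BiConsumer;

// a toy model of how fireChannelRead walks the
// HeadContext<=>...<=>TailContext linked list from head toward tail
public class ToyPipeline {
    static class Ctx {
        final String name;
        final BiConsumer<Ctx, Object> handler; // invoked as (thisCtx, msg)
        Ctx next;
        Ctx(String name, BiConsumer<Ctx, Object> handler) { this.name = name; this.handler = handler; }
        // hand the message to the next inbound handler
        void fireChannelRead(Object msg) { if (next != null) next.handler.accept(next, msg); }
    }

    public static void main(String[] args) {
        Ctx head = new Ctx("HeadContext", (ctx, msg) -> ctx.fireChannelRead(msg));
        Ctx logging = new Ctx("LoggingHandler", (ctx, msg) -> {
            System.out.println("LoggingHandler saw: " + msg);
            ctx.fireChannelRead(msg); // pass the message further inbound
        });
        Ctx acceptor = new Ctx("ServerBootstrapAcceptor", (ctx, msg) ->
            System.out.println("ServerBootstrapAcceptor registers: " + msg));
        Ctx tail = new Ctx("TailContext", (ctx, msg) -> { });
        head.next = logging; logging.next = acceptor; acceptor.next = tail;

        head.fireChannelRead("NioSocketChannel"); // propagation starts at the head
    }
}
```

Note that the acceptor does not forward the message: in real Netty, ServerBootstrapAcceptor consumes the child channel rather than passing it on to TailContext.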

public class ServerBootstrap extends AbstractBootstrap<ServerBootstrap, ServerChannel> {
    private static class ServerBootstrapAcceptor extends ChannelInboundHandlerAdapter {
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            // msg here is readBuf.get(i) from above, i.e. the NioSocketChannel
            final Channel child = (Channel) msg;
            // add the child ChannelHandler; as before, it is added in the form of a ChannelInitializer
            child.pipeline().addLast(childHandler);

            setChannelOptions(child, childOptions, logger);
            setAttributes(child, childAttrs);

            try {
                // register the NioSocketChannel with one of the EventLoops in the workerGroup
                // once this completes the NioSocketChannel's pipeline is: HeadContext<=>LoggingHandler<=>EchoServerHandler<=>TailContext
                childGroup.register(child).addListener(new ChannelFutureListener() {
                    @Override
                    public void operationComplete(ChannelFuture future) throws Exception {
                        if (!future.isSuccess()) {
                            forceClose(child, future.cause());
                        }
                    }
                });
            } catch (Throwable t) {
                forceClose(child, t);
            }
        }
    }
}

The code here is similar to the logic in initAndRegister() from the previous section, so it won't be repeated.

Once the java.nio.channels.SocketChannel is fully set up, it can start receiving messages.

Summary

  1. Netty's IO event monitoring lives in NioEventLoop's run() method

  2. Under the hood, Netty accepts new connections through the native Java ServerSocketChannel

  3. Netty wraps the accepted SocketChannel in a NioSocketChannel

    NioSocketChannel has the following important fields:

    • ch: java.nio.channels.SocketChannel, the Java NIO class
    • readInterestOp: records the event to listen for: SelectionKey.OP_READ
    • unsafe: io.netty.channel.nio.AbstractNioByteChannel.NioByteUnsafe
    • pipeline: DefaultChannelPipeline, a doubly linked list, by default HeadContext<=>TailContext
  4. pipeline.fireChannelRead(readBuf.get(i)) is called; readBuf.get(i) is the NioSocketChannel

    The pipeline belongs to the NioServerSocketChannel and at this point is HeadContext<=>LoggingHandler<=>ServerBootstrapAcceptor<=>TailContext

    Eventually ServerBootstrapAcceptor's channelRead method is invoked, which mainly does two things:

    ① Adds a ChannelInitializer to the NioSocketChannel's pipeline: this is the childHandler set in step 7 of writing the Netty program. The NioSocketChannel's pipeline is now: HeadContext<=>ChannelInitializer<=>TailContext

    ② Picks a NioEventLoop from the childGroup

    • the NioSocketChannel is now bound to that NioEventLoop

    • an asynchronous task [①] io.netty.channel.AbstractChannel.AbstractUnsafe#register0 is submitted to that NioEventLoop

      What the task does: bind the java.nio.channels.SocketChannel inside the NioSocketChannel to the NioEventLoop's Selector, then run pipeline.invokeHandlerAddedIfNeeded(). This triggers the ChannelInitializer's initChannel method, which adds the handlers set in step 7 of the program to the NioSocketChannel's pipeline and then removes the ChannelInitializer itself from the ChannelPipeline.

      The NioSocketChannel's pipeline is now: HeadContext<=>LoggingHandler<=>EchoServerHandler<=>TailContext
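The summary above, minus Netty, can be sketched as one plain Java NIO accept-then-read loop (the class name is invented; an in-process client supplies the connection and the data):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class MiniAcceptLoop {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT); // server startup: OP_ACCEPT

        // a client connects and writes, so the loop below sees both events
        SocketChannel client = SocketChannel.open((InetSocketAddress) server.getLocalAddress());
        client.write(ByteBuffer.wrap("hi".getBytes(StandardCharsets.US_ASCII)));

        boolean readDone = false;
        while (!readDone) {
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey k = it.next();
                it.remove();
                if (k.isAcceptable()) {
                    // accept, then register the child channel for OP_READ,
                    // mirroring readInterestOp of NioSocketChannel
                    SocketChannel child = ((ServerSocketChannel) k.channel()).accept();
                    child.configureBlocking(false);
                    child.register(selector, SelectionKey.OP_READ);
                } else if (k.isReadable()) {
                    ByteBuffer buf = ByteBuffer.allocate(16);
                    ((SocketChannel) k.channel()).read(buf);
                    k.channel().close();
                    readDone = true;
                }
            }
        }
        System.out.println("accepted and read");
        client.close();
        server.close();
        selector.close();
    }
}
```

What Netty adds on top of this skeleton is the wrapping (NioSocketChannel), the per-channel pipeline, and the handoff of the child channel to a separate worker EventLoop.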
