Event Handling in the Netty Server Channel

Series

Netty Source Code Study - The instantiation flow of the Netty server channel
Netty Source Code Study - The initialization flow of the Netty server channel
Netty Source Code Study - The registration flow of the Netty server channel
Netty Source Code Study - Event handling in the Netty server channel



Preface

The previous chapter walked through the registration of the Netty server channel in detail: an interest set is registered on the selector, the channel is bound to the given port, and the channel finally begins reading network events, at which point it can accept client requests. Accepting a client request starts with creating a client connection, so this chapter analyzes how the client connection event is handled.



I. Handling the client connection event

After a client connection request arrives, the selection pass in the server-side EventLoop picks up the connection event it is interested in.

The different events are dispatched in the io.netty.channel.nio.NioEventLoop#processSelectedKey(java.nio.channels.SelectionKey, io.netty.channel.nio.AbstractNioChannel) method:

private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
   final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
   if (!k.isValid()) {
       final EventLoop eventLoop;
       try {
           eventLoop = ch.eventLoop();
       } catch (Throwable ignored) {
           // If the channel implementation throws an exception because there is no event loop, we ignore this
           // because we are only trying to determine if ch is registered to this event loop and thus has authority
           // to close ch.
           return;
       }
       // Only close ch if ch is still registered to this EventLoop. ch could have deregistered from the event loop
       // and thus the SelectionKey could be cancelled as part of the deregistration process, but the channel is
       // still healthy and should not be closed.
       // See https://github.com/netty/netty/issues/5125
       if (eventLoop != this || eventLoop == null) {
           return;
       }
       // close the channel if the key is not valid anymore
       unsafe.close(unsafe.voidPromise());
       return;
   }

   try {
       int readyOps = k.readyOps();
       // We first need to call finishConnect() before try to trigger a read(...) or write(...) as otherwise
       // the NIO JDK channel implementation may throw a NotYetConnectedException.
       if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
           // remove OP_CONNECT as otherwise Selector.select(..) will always return without blocking
           // See https://github.com/netty/netty/issues/924
           int ops = k.interestOps();
           ops &= ~SelectionKey.OP_CONNECT;
           k.interestOps(ops);

           unsafe.finishConnect();
       }

       // Process OP_WRITE first as we may be able to write some queued buffers and so free memory.
       if ((readyOps & SelectionKey.OP_WRITE) != 0) {
           // Call forceFlush which will also take care of clear the OP_WRITE once there is nothing left to write
           ch.unsafe().forceFlush();
       }

       // Also check for readOps of 0 to workaround possible JDK bug which may otherwise lead
       // to a spin loop
       if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
           unsafe.read();
       }
   } catch (CancelledKeyException ignored) {
       unsafe.close(unsafe.voidPromise());
   }
}
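The dispatch above is plain bitmask arithmetic on readyOps. As a standalone illustration (not Netty code; the class and method names are made up), the same checks can be reproduced against the JDK's SelectionKey constants:

```java
import java.nio.channels.SelectionKey;

public class ReadyOpsDemo {
    // Reproduces the readyOps checks from processSelectedKey on a raw bit set.
    static String classify(int readyOps) {
        StringBuilder sb = new StringBuilder();
        if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
            sb.append("finishConnect ");
        }
        if ((readyOps & SelectionKey.OP_WRITE) != 0) {
            sb.append("forceFlush ");
        }
        // OP_READ and OP_ACCEPT share the same read() path; readyOps == 0 is
        // also routed to read() to work around a possible JDK spin-loop bug.
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            sb.append("read");
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(classify(SelectionKey.OP_ACCEPT)); // server channel: a new connection
        System.out.println(classify(SelectionKey.OP_READ));   // child channel: inbound data
        System.out.println(classify(0));                      // the JDK-bug workaround path
    }
}
```

Both OP_ACCEPT and OP_READ land in the same "read" branch, which is exactly why the polymorphic unsafe discussed below matters.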

In the last branch, both OP_READ and OP_ACCEPT trigger the unsafe's read operation — a use of polymorphism. NioServerSocketChannel is interested in OP_ACCEPT and its unsafe is a NioMessageUnsafe, while NioSocketChannel is interested in OP_READ and its unsafe is a NioByteUnsafe. Here we are dealing with the former, handling a connection event. NioMessageUnsafe is an inner class of AbstractNioMessageChannel; it holds a list named readBuf that stores whatever doReadMessages produces. In this case the messages are client connections, and since several connections can be accepted in one pass, a list is used. If this were UDP communication, the messages received would be DatagramPacket instances instead.
The corresponding io.netty.channel.nio.AbstractNioMessageChannel.NioMessageUnsafe#read method:

private final List<Object> readBuf = new ArrayList<Object>();

@Override
public void read() {
    assert eventLoop().inEventLoop();
    final ChannelConfig config = config();
    final ChannelPipeline pipeline = pipeline();
    final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
    allocHandle.reset(config);

    boolean closed = false;
    Throwable exception = null;
    try {
        try {
            do {
                // 1. read the messages - the key step
                int localRead = doReadMessages(readBuf);
                if (localRead == 0) {
                    break;
                }
                if (localRead < 0) {
                    closed = true;
                    break;
                }

                allocHandle.incMessagesRead(localRead);
            } while (allocHandle.continueReading());
        } catch (Throwable t) {
            exception = t;
        }

        int size = readBuf.size();
        for (int i = 0; i < size; i ++) {
            readPending = false;
            // 2. fire the channelRead event
            pipeline.fireChannelRead(readBuf.get(i));
        }
        readBuf.clear();
        allocHandle.readComplete();
        // 3. fire the channelReadComplete event
        pipeline.fireChannelReadComplete();

        if (exception != null) {
            closed = closeOnReadError(exception);

            pipeline.fireExceptionCaught(exception);
        }

        if (closed) {
            inputShutdown = true;
            if (isOpen()) {
                close(voidPromise());
            }
        }
    } finally {
        // Check if there is a readPending which was not processed yet.
        // This could be for two reasons:
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
        //
        // See https://github.com/netty/netty/issues/2254
        if (!readPending && !config.isAutoRead()) {
            removeReadOp();
        }
    }
}

doReadMessages here is implemented by the subclass NioServerSocketChannel. It obtains a NIO connection via serverSocketChannel.accept(), wraps it in a NioSocketChannel, and adds it to the list. Creating that wrapper involves a chain of constructors.
io.netty.channel.socket.nio.NioServerSocketChannel#doReadMessages

@Override
protected int doReadMessages(List<Object> buf) throws Exception {
    SocketChannel ch = SocketUtils.accept(javaChannel());

    try {
        if (ch != null) {
            buf.add(new NioSocketChannel(this, ch));
            return 1;
        }
    } catch (Throwable t) {
        logger.warn("Failed to create a new channel from an accepted socket.", t);

        try {
            ch.close();
        } catch (Throwable t2) {
            logger.warn("Failed to close a socket.", t2);
        }
    }

    return 0;
}
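SocketUtils.accept(javaChannel()) boils down to ServerSocketChannel.accept() on a non-blocking channel, which returns null when no connection is pending — hence the `if (ch != null)` guard and the `return 0`. A minimal JDK-only sketch of that behavior (no Netty involved; the class name is made up for illustration):

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingAcceptDemo {
    // Returns {connectionPendingBeforeClientConnects, acceptedAfterClientConnects}.
    static boolean[] run() throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
            server.configureBlocking(false);

            // No client yet: accept() returns null instead of blocking.
            boolean pendingBefore = server.accept() != null;

            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
            try (SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1", port))) {
                // Poll until the kernel has completed the handshake.
                SocketChannel accepted = null;
                while (accepted == null) {
                    accepted = server.accept();
                }
                accepted.close();
                return new boolean[]{pendingBefore, true};
            }
        }
    }

    public static void main(String[] args) throws Exception {
        boolean[] r = run();
        System.out.println("pending before connect: " + r[0]);
        System.out.println("accepted after connect: " + r[1]);
    }
}
```

In Netty the polling is unnecessary: the EventLoop only calls doReadMessages after the selector has reported OP_ACCEPT, so accept() normally succeeds on the first try.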

1. Instantiating the client connection object NioSocketChannel

Every client connection gets its own NioSocketChannel instance on the server side, wrapping the underlying NIO connection. The NioServerSocketChannel that accepted it is recorded as the parent.

/**
 * Create a new instance
 *
 * @param parent    the {@link Channel} which created this instance or {@code null} if it was created by the user
 * @param socket    the {@link SocketChannel} which will be used
 */
public NioSocketChannel(Channel parent, SocketChannel socket) {
    super(parent, socket);
    config = new NioSocketChannelConfig(this, socket.socket());
}

This channel is interested in the read event.

/**
 * Create a new instance
 *
 * @param parent            the parent {@link Channel} by which this instance was created. May be {@code null}
 * @param ch                the underlying {@link SelectableChannel} on which it operates
 */
protected AbstractNioByteChannel(Channel parent, SelectableChannel ch) {
    super(parent, ch, SelectionKey.OP_READ);
}

AbstractNioByteChannel also extends AbstractNioChannel, so apart from the interest set, the construction logic is identical to NioServerSocketChannel's: the channel's pipeline and unsafe are created.
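To see OP_READ interest at work without Netty, here is a small JDK sketch using a Pipe: the source end is registered with OP_READ, and only becomes selectable once the sink end has written data (the class name is illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class OpReadDemo {
    // Returns {readyKeysBeforeWrite, readableAfterWrite ? 1 : 0}.
    static int[] run() throws Exception {
        try (Selector selector = Selector.open()) {
            Pipe pipe = Pipe.open();
            pipe.source().configureBlocking(false);
            // Register with OP_READ interest, as AbstractNioByteChannel does for NioSocketChannel.
            SelectionKey key = pipe.source().register(selector, SelectionKey.OP_READ);

            int readyBefore = selector.selectNow(); // nothing written yet -> 0 ready keys

            pipe.sink().write(ByteBuffer.wrap(new byte[]{42}));
            selector.select(); // blocks until the source side becomes readable

            int readableAfter = key.isReadable() ? 1 : 0;
            pipe.sink().close();
            pipe.source().close();
            return new int[]{readyBefore, readableAfter};
        }
    }

    public static void main(String[] args) throws Exception {
        int[] r = run();
        System.out.println("ready keys before write: " + r[0]);
        System.out.println("readable after write: " + (r[1] == 1));
    }
}
```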

2. fireChannelRead

Once the client connection has been accepted and instantiated, and there is no more data pending to read, the channelRead event is fired: the connection instances collected in readBuf above are taken one by one and propagated through the pipeline. Propagation starts at the HeadContext node and walks the inbound handlers in order. HeadContext does nothing special here; it simply passes the event along:

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    ctx.fireChannelRead(msg);
}
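HeadContext just forwards the event to the next inbound context; the pipeline itself is a linked list of handler contexts traversed head-to-tail for inbound events. The traversal can be sketched with toy classes (MiniContext/MiniHandler are hypothetical names, not Netty API):

```java
import java.util.ArrayList;
import java.util.List;

public class MiniPipelineDemo {
    interface MiniHandler {
        void channelRead(MiniContext ctx, Object msg);
    }

    // Wraps one handler and knows its successor, like AbstractChannelHandlerContext.
    static class MiniContext {
        final MiniHandler handler;
        MiniContext next;
        MiniContext(MiniHandler handler) { this.handler = handler; }
        void fireChannelRead(Object msg) {
            if (next != null) {
                next.handler.channelRead(next, msg);
            }
        }
    }

    static List<String> run() {
        List<String> trace = new ArrayList<>();
        // The head does nothing special, it just forwards -- like HeadContext.
        MiniContext head = new MiniContext((ctx, msg) -> { trace.add("head"); ctx.fireChannelRead(msg); });
        MiniContext acceptor = new MiniContext((ctx, msg) -> { trace.add("acceptor:" + msg); ctx.fireChannelRead(msg); });
        MiniContext tail = new MiniContext((ctx, msg) -> trace.add("tail"));
        head.next = acceptor;
        acceptor.next = tail;

        // Entering at the head mimics pipeline.fireChannelRead(readBuf.get(i)).
        head.handler.channelRead(head, "childChannel");
        return trace;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

The "acceptor" handler in the sketch plays the role of ServerBootstrapAcceptor, which is where the event goes next.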

As the event travels down the pipeline it reaches the inbound handler io.netty.bootstrap.ServerBootstrap.ServerBootstrapAcceptor. This handler was created as a task during server channel initialization (see: the initialization flow of the Netty server channel) and added to the server channel's pipeline during registration (see: the registration flow of the server channel). Its channelRead implementation:

public void channelRead(ChannelHandlerContext ctx, Object msg) {
    final Channel child = (Channel) msg;
    // 1. add the child handler
    child.pipeline().addLast(childHandler);
    // 2. set the child channel options
    setChannelOptions(child, childOptions, logger);

    for (Entry<AttributeKey<?>, Object> e: childAttrs) {
        child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
    }

    try {
        // 3. register the child channel
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}

a. Adding the child handler

What is a child handler? When we call childHandler on the ServerBootstrap to set an event handler, we are setting the child handler — usually the business handler we care about most. We rarely care about how the server accepts connections and allocates child channels; it is the child channel that actually talks to the client. So the child handler added to the current child channel here is exactly that (typically anonymous) ChannelInitializer instance.
The logic for adding the handler is much the same as before. Since the child channel has not been registered yet, the handlerAdded event is not fired immediately; instead a pending callback is recorded, just as it was for the server channel at startup.
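That deferral can be sketched with a toy pipeline (ToyPipeline and its method names are made up; the real mechanism lives in DefaultChannelPipeline's pending-callback list):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class DeferredHandlerAddedDemo {
    static class ToyPipeline {
        private boolean registered;
        private final Deque<Runnable> pendingCallbacks = new ArrayDeque<>();
        final List<String> log = new ArrayList<>();

        void addLast(String name) {
            if (registered) {
                log.add("handlerAdded:" + name); // fires immediately once registered
            } else {
                // Channel not registered yet: queue the callback for later.
                pendingCallbacks.add(() -> log.add("handlerAdded:" + name));
            }
        }

        void register() {
            registered = true;
            // Registration drains the queued callbacks, like invokeHandlerAddedIfNeeded().
            while (!pendingCallbacks.isEmpty()) {
                pendingCallbacks.poll().run();
            }
        }
    }

    public static void main(String[] args) {
        ToyPipeline p = new ToyPipeline();
        p.addLast("childInitializer"); // queued, nothing fires yet
        System.out.println(p.log);
        p.register();                  // now handlerAdded fires
        System.out.println(p.log);
    }
}
```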

b. Registering the child channel

The registration picks an EventLoop from the (child) EventLoopGroup to handle it. This is exactly the same as for NioServerSocketChannel earlier: the channel is bound to that EventLoop; if the EventLoop has not created its thread yet, a thread is started and bound, the event loop begins running, and the register0 task in the task queue is executed.

private void register0(ChannelPromise promise) {
   try {
       // check if the channel is still open as it could be closed in the mean time when the register
       // call was outside of the eventLoop
       if (!promise.setUncancellable() || !ensureOpen(promise)) {
           return;
       }
       boolean firstRegistration = neverRegistered;
       doRegister();
       neverRegistered = false;
       registered = true;

       // Ensure we call handlerAdded(...) before we actually notify the promise. This is needed as the
       // user may already fire events through the pipeline in the ChannelFutureListener.
       pipeline.invokeHandlerAddedIfNeeded();

       safeSetSuccess(promise);
       pipeline.fireChannelRegistered();
       // Only fire a channelActive if the channel has never been registered. This prevents firing
       // multiple channel actives if the channel is deregistered and re-registered.
       if (isActive()) {
           if (firstRegistration) {
               pipeline.fireChannelActive();
           } else if (config().isAutoRead()) {
               // This channel was registered before and autoRead() is set. This means we need to begin read
               // again so that we process inbound data.
               //
               // See https://github.com/netty/netty/issues/4805
               beginRead();
           }
       }
   } catch (Throwable t) {
       // Close the channel directly to avoid FD leak.
       closeForcibly();
       closeFuture.setClosed();
       safeSetFailure(promise, t);
   }
}

The thread of the parent channel's EventLoop then returns and continues its loop. The parent channel's fireChannelRead is finished; next the inbound handlers are walked from the head node and the channelReadComplete event is fired one by one. The important work again happens in the head node: after passing the event along, it continues reading, much like the earlier channelActive logic.

@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
    ctx.fireChannelReadComplete();

    readIfIsAutoRead();
}

