Before reading this article, it is best to read the previous post for better context: Netty Source Code Analysis, Part 1: Client Connection Analysis.
This article analyzes the source code behind the typical server-side Netty boilerplate, in order to fully understand Netty's runtime machinery and the essence of each component.
The creation of the NioEventLoopGroups is the same as on the client side and is not traced again here. The bootstrap class differs, however: the client uses Bootstrap, while the server uses ServerBootstrap, so the chained configuration calls differ as well. We therefore start the analysis from that chain. The server-side setup code is shown below.
public static void main(String[] args) {
    // created exactly as on the client side, so not traced again
    NioEventLoopGroup bossGroup = new NioEventLoopGroup(1);
    NioEventLoopGroup workGroup = new NioEventLoopGroup(6);
    // empty constructor; the interesting parts are the chained calls and bind()
    ServerBootstrap bootstrap = new ServerBootstrap();
    // the chained calls, analyzed in detail below
    bootstrap.group(bossGroup, workGroup)
            .channel(NioServerSocketChannel.class)
            .option(ChannelOption.SO_BACKLOG, 128)
            .childOption(ChannelOption.SO_KEEPALIVE, true)
            .childHandler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) throws Exception {
                    ChannelPipeline pipeline = ch.pipeline();
                    pipeline.addLast(new MyMessageDecoder());
                    pipeline.addLast(new MyMessageEncoder());
                    pipeline.addLast(new MyServerHandler());
                }
            });
    try {
        // binds and listens; analyzed in detail below
        ChannelFuture channelFuture = bootstrap.bind(9000).sync();
        channelFuture.channel().closeFuture().sync();
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        bossGroup.shutdownGracefully();
        workGroup.shutdownGracefully();
    }
}
1. The ServerBootstrap call chain
The main difference from the client-side chain is that two groups are created, so the parameters differ somewhat. Broadly, the chain still falls into the following steps.
- group configuration
- channel configuration
- ChannelOption configuration
- handler configuration
(1) group configuration
The analysis starts from bootstrap.group(bossGroup, workGroup).
This invokes io.netty.bootstrap.ServerBootstrap#group(io.netty.channel.EventLoopGroup, io.netty.channel.EventLoopGroup):
public ServerBootstrap group(EventLoopGroup parentGroup, EventLoopGroup childGroup) {
    // parentGroup is the bossGroup; this call is traced below
    super.group(parentGroup);
    // store the workGroup as a field of the ServerBootstrap
    this.childGroup = ObjectUtil.checkNotNull(childGroup, "childGroup");
    return this;
}
This invokes io.netty.bootstrap.AbstractBootstrap#group(EventLoopGroup):
public B group(EventLoopGroup group) {
    // store the bossGroup in the group field of the parent class AbstractBootstrap
    this.group = group;
    return self();
}
@SuppressWarnings("unchecked")
private B self() {
return (B) this;
}
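The generic parameter B together with the self() cast implements a self-typed fluent builder: when a subclass calls an inherited setter, it gets the subclass type back, so the chain can continue with ServerBootstrap-only methods such as childHandler(). A minimal sketch of the pattern, with illustrative class and field names that are not Netty's:

```java
// Minimal sketch of the self-typed builder pattern; names are illustrative, not Netty's.
abstract class AbstractBuilder<B extends AbstractBuilder<B>> {
    String group;

    @SuppressWarnings("unchecked")
    final B self() {
        return (B) this; // safe as long as each subclass passes itself as B
    }

    B group(String g) {
        this.group = g;
        return self(); // returns the subclass type, keeping the chain fluent
    }
}

final class ServerBuilder extends AbstractBuilder<ServerBuilder> {
    String childGroup;

    ServerBuilder group(String parent, String child) {
        super.group(parent);     // the parent group is stored on the base class
        this.childGroup = child; // the child group stays on the subclass
        return this;
    }
}
```

This mirrors how ServerBootstrap#group(parent, child) delegates the parent group to AbstractBootstrap and keeps the child group on itself.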
(2) channel configuration
The Channel type passed in here is NioServerSocketChannel, so its creation differs from the client's NioSocketChannel.
This invokes io.netty.bootstrap.AbstractBootstrap#channel(Class):
public B channel(Class<? extends C> channelClass) {
return channelFactory(new ReflectiveChannelFactory<C>(
ObjectUtil.checkNotNull(channelClass, "channelClass")
));
}
As on the client side, this initializes a ChannelFactory and stores it on the bootstrap.
Because the class object passed in is different, though, the actual channel creation later will proceed differently; we will return to this point.
(3) ChannelOption configuration
ChannelOptions are set via option() and childOption(): option() configures the server channel handled by the bossGroup, while childOption() configures the accepted child channels handled by the workGroup. We analyze the two methods in turn.
- option()
This invokes io.netty.bootstrap.AbstractBootstrap#option(ChannelOption, T):
public <T> B option(ChannelOption<T> option, T value) {
ObjectUtil.checkNotNull(option, "option");
synchronized (options) {
if (value == null) {
options.remove(option);
} else {
// since a value was supplied, options.put() is called; we continue the trace from here
options.put(option, value);
}
}
return self();
}
Because ChannelOption instances serve as map keys, hashing falls through to io.netty.util.AbstractConstant#hashCode():
@Override
public final int hashCode() {
return super.hashCode();
}
What exactly is options here?
public abstract class AbstractBootstrap<B extends AbstractBootstrap<B, C>, C extends Channel> implements Cloneable {
    private final Map<ChannelOption<?>, Object> options = new LinkedHashMap<ChannelOption<?>, Object>();
}
As the code shows, options is a field of AbstractBootstrap: a Map whose keys are ChannelOptions and whose values are Objects.
So what is a ChannelOption?
Part of the ChannelOption class is listed below for analysis.
public class ChannelOption<T> extends AbstractConstant<ChannelOption<T>> {

    /**
     * Returns the {@link ChannelOption} of the specified name.
     */
    @SuppressWarnings("unchecked")
    public static <T> ChannelOption<T> valueOf(String name) {
        return (ChannelOption<T>) pool.valueOf(name);
    }

    public static final ChannelOption<ByteBufAllocator> ALLOCATOR = valueOf("ALLOCATOR");
    public static final ChannelOption<RecvByteBufAllocator> RCVBUF_ALLOCATOR = valueOf("RCVBUF_ALLOCATOR");
    public static final ChannelOption<MessageSizeEstimator> MESSAGE_SIZE_ESTIMATOR = valueOf("MESSAGE_SIZE_ESTIMATOR");
    public static final ChannelOption<Integer> CONNECT_TIMEOUT_MILLIS = valueOf("CONNECT_TIMEOUT_MILLIS");
    public static final ChannelOption<Boolean> SO_BROADCAST = valueOf("SO_BROADCAST");
    public static final ChannelOption<Boolean> SO_KEEPALIVE = valueOf("SO_KEEPALIVE");
    public static final ChannelOption<Integer> SO_SNDBUF = valueOf("SO_SNDBUF");
    public static final ChannelOption<Integer> SO_RCVBUF = valueOf("SO_RCVBUF");
    public static final ChannelOption<Boolean> SO_REUSEADDR = valueOf("SO_REUSEADDR");
    public static final ChannelOption<Integer> SO_LINGER = valueOf("SO_LINGER");
    public static final ChannelOption<Integer> SO_BACKLOG = valueOf("SO_BACKLOG");
    public static final ChannelOption<Integer> SO_TIMEOUT = valueOf("SO_TIMEOUT");
    public static final ChannelOption<Integer> IP_TOS = valueOf("IP_TOS");
    public static final ChannelOption<InetAddress> IP_MULTICAST_ADDR = valueOf("IP_MULTICAST_ADDR");
    public static final ChannelOption<NetworkInterface> IP_MULTICAST_IF = valueOf("IP_MULTICAST_IF");
    public static final ChannelOption<Integer> IP_MULTICAST_TTL = valueOf("IP_MULTICAST_TTL");
    public static final ChannelOption<Boolean> IP_MULTICAST_LOOP_DISABLED = valueOf("IP_MULTICAST_LOOP_DISABLED");
    public static final ChannelOption<Boolean> TCP_NODELAY = valueOf("TCP_NODELAY");
}
The class defines a large number of static constants, each naming one TCP/socket configuration parameter; it is essentially a registry of configuration constants.
The option() method therefore stores the parameter in the options field of AbstractBootstrap (the superclass that also holds the bossGroup).
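The pooling behind valueOf(name) can be sketched in a few lines. Option below is a hypothetical stand-in for ChannelOption, not Netty code: the same name always maps to the same cached instance, which is also why the constants can safely be compared with ==.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Minimal sketch of the constant-pool idea behind ChannelOption.valueOf(name).
// Option is an illustrative stand-in, not Netty's class.
final class Option<T> {
    private static final ConcurrentMap<String, Option<?>> POOL = new ConcurrentHashMap<>();

    final String name;

    private Option(String name) {
        this.name = name;
    }

    @SuppressWarnings("unchecked")
    static <T> Option<T> valueOf(String name) {
        // the same name always yields the same pooled instance
        return (Option<T>) POOL.computeIfAbsent(name, Option::new);
    }
}
```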
- childOption()
This invokes io.netty.bootstrap.ServerBootstrap#childOption(ChannelOption, T):
public <T> ServerBootstrap childOption(ChannelOption<T> childOption, T value) {
ObjectUtil.checkNotNull(childOption, "childOption");
synchronized (childOptions) {
if (value == null) {
childOptions.remove(childOption);
} else {
// store it in the childOptions field of ServerBootstrap
childOptions.put(childOption, value);
}
}
return this;
}
And what is childOptions?
private final Map<ChannelOption<?>, Object> childOptions = new LinkedHashMap<ChannelOption<?>, Object>();
The rest mirrors option() and needs no further elaboration.
(4) handler configuration
Handler configuration mirrors option configuration: there is a handler() method and a childHandler() method.
- handler()
This invokes io.netty.bootstrap.AbstractBootstrap#handler(ChannelHandler):
public B handler(ChannelHandler handler) {
this.handler = ObjectUtil.checkNotNull(handler, "handler");
return self();
}
This assigns the passed-in ChannelHandler to the handler field of AbstractBootstrap (the superclass that also holds the bossGroup).
- childHandler()
This invokes io.netty.bootstrap.ServerBootstrap#childHandler(io.netty.channel.ChannelHandler):
public ServerBootstrap childHandler(ChannelHandler childHandler) {
this.childHandler = ObjectUtil.checkNotNull(childHandler, "childHandler");
return this;
}
This assigns the passed-in handler to the childHandler field of ServerBootstrap.
Why an anonymous ChannelInitializer subclass is created here was already explained in the client-side analysis.
2. Starting the server
Starting the server takes a single line of code, yet that line both applies every parameter configured during the chained calls and actually starts the server.
We begin by tracing ChannelFuture channelFuture = bootstrap.bind(9000).sync(); and split the analysis at the major branch points in the code.
This invokes io.netty.bootstrap.AbstractBootstrap#bind(int); note that the bossGroup lives in this class, as the group field.
// method calls within the same class
public ChannelFuture bind(int inetPort) {
return bind(new InetSocketAddress(inetPort));
}
public ChannelFuture bind(SocketAddress localAddress) {
validate();
return doBind(ObjectUtil.checkNotNull(localAddress, "localAddress"));
}
private ChannelFuture doBind(final SocketAddress localAddress) {
// the familiar initAndRegister() method; traced in detail below
final ChannelFuture regFuture = initAndRegister();
final Channel channel = regFuture.channel();
if (regFuture.isDone()) {
// At this point we know that the registration was complete and successful.
ChannelPromise promise = channel.newPromise();
// the familiar-looking doBind0() method, responsible for the actual bind; traced below
doBind0(regFuture, channel, localAddress, promise);
return promise;
} else {
// Registration future is almost always fulfilled already, but just in case it's not.
final PendingRegistrationPromise promise = new PendingRegistrationPromise(channel);
regFuture.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
Throwable cause = future.cause();
if (cause != null) {
// Registration on the EventLoop failed so fail the ChannelPromise directly to not cause an
// IllegalStateException once we try to access the EventLoop of the Channel.
promise.setFailure(cause);
} else {
// Registration was successful, so set the correct executor to use.
// See https://github.com/netty/netty/issues/2586
promise.registered();
doBind0(regFuture, channel, localAddress, promise);
}
}
});
return promise;
}
}
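The two branches of doBind can be sketched with java.util.concurrent.CompletableFuture standing in for Netty's ChannelFuture: if registration has already completed, bind immediately; otherwise attach a listener and bind (or fail) when registration finishes. BindFlow and its names are illustrative, not Netty API.

```java
import java.util.concurrent.CompletableFuture;

// Sketch of doBind's control flow; CompletableFuture stands in for ChannelFuture.
class BindFlow {
    static String result;

    static void doBind(CompletableFuture<Void> regFuture) {
        if (regFuture.isDone()) {
            // registration already finished: bind right away
            result = "bound-immediately";
        } else {
            // registration still pending: bind from a completion listener
            regFuture.whenComplete((v, cause) ->
                    result = (cause != null) ? "failed" : "bound-later");
        }
    }
}
```

The second branch is why the comment in the Netty source notes that the registration future is "almost always fulfilled already, but just in case it's not".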
(1) Tracing initAndRegister()
This invokes io.netty.bootstrap.AbstractBootstrap#initAndRegister():
final ChannelFuture initAndRegister() {
Channel channel = null;
try {
// channelFactory here produces the NioServerSocketChannel configured earlier via channel()
channel = channelFactory.newChannel();
// initialize the channel's parameters
init(channel);
} catch (Throwable t) {
if (channel != null) {
// channel can be null if newChannel crashed (eg SocketException("too many open files"))
channel.unsafe().closeForcibly();
// as the Channel is not registered yet we need to force the usage of the GlobalEventExecutor
return new DefaultChannelPromise(channel, GlobalEventExecutor.INSTANCE).setFailure(t);
}
// as the Channel is not registered yet we need to force the usage of the GlobalEventExecutor
return new DefaultChannelPromise(new FailedChannel(), GlobalEventExecutor.INSTANCE).setFailure(t);
}
// register the newly created channel; traced in detail below
ChannelFuture regFuture = config().group().register(channel);
if (regFuture.cause() != null) {
if (channel.isRegistered()) {
channel.close();
} else {
channel.unsafe().closeForcibly();
}
}
return regFuture;
}
Here we focus on the creation of the NioServerSocketChannel.
This invokes io.netty.channel.socket.nio.NioServerSocketChannel#NioServerSocketChannel():
public NioServerSocketChannel() {
this(newSocket(DEFAULT_SELECTOR_PROVIDER));
}
private static final SelectorProvider DEFAULT_SELECTOR_PROVIDER = SelectorProvider.provider();
The newSocket(DEFAULT_SELECTOR_PROVIDER) call is identical to what was analyzed for the client's NioSocketChannel.
We therefore focus on the overloaded constructor, NioServerSocketChannel#NioServerSocketChannel(ServerSocketChannel):
public NioServerSocketChannel(ServerSocketChannel channel) {
// the server-side channel's interest op is OP_ACCEPT
super(null, channel, SelectionKey.OP_ACCEPT);
config = new NioServerSocketChannelConfig(this, javaChannel().socket());
}
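What super(null, channel, SelectionKey.OP_ACCEPT) ultimately boils down to can be shown in plain JDK NIO: a non-blocking ServerSocketChannel registered on a Selector with an interest set of OP_ACCEPT. AcceptDemo below is an illustrative sketch, not Netty code.

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

// Plain-JDK sketch of what the server channel's constructor chain sets up.
class AcceptDemo {
    static int interestOps() {
        try (Selector selector = Selector.open();
             ServerSocketChannel ssc = ServerSocketChannel.open()) {
            ssc.configureBlocking(false);                       // done in AbstractNioChannel's constructor
            ssc.bind(new InetSocketAddress("127.0.0.1", 0));    // port 0 = any free port
            SelectionKey key = ssc.register(selector, SelectionKey.OP_ACCEPT);
            return key.interestOps();
        } catch (Exception e) {
            return -1;
        }
    }
}
```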
This invokes io.netty.channel.nio.AbstractNioMessageChannel#AbstractNioMessageChannel(Channel, SelectableChannel, int):
protected AbstractNioMessageChannel(Channel parent, SelectableChannel ch, int readInterestOp) {
super(parent, ch, readInterestOp);
}
protected AbstractNioChannel(Channel parent, SelectableChannel ch, int readInterestOp) {
// this super call is traced next
super(parent);
this.ch = ch;
this.readInterestOp = readInterestOp;
try {
ch.configureBlocking(false);
} catch (IOException e) {
try {
ch.close();
} catch (IOException e2) {
logger.warn(
"Failed to close a partially initialized socket.", e2);
}
throw new ChannelException("Failed to enter non-blocking mode.", e);
}
}
This invokes io.netty.channel.AbstractChannel#AbstractChannel(io.netty.channel.Channel):
protected AbstractChannel(Channel parent) {
this.parent = parent;
id = newId();
// note: the unsafe created here is AbstractNioMessageChannel.NioMessageUnsafe, unlike the client's AbstractNioByteChannel.NioByteUnsafe
unsafe = newUnsafe();
pipeline = newChannelPipeline();
}
newUnsafe() resolves to AbstractNioMessageChannel#newUnsafe():
@Override
protected AbstractNioUnsafe newUnsafe() {
    return new NioMessageUnsafe();
}

private final class NioMessageUnsafe extends AbstractNioUnsafe {

    private final List<Object> readBuf = new ArrayList<Object>();

    @Override
    public void read() {
        assert eventLoop().inEventLoop();
        final ChannelConfig config = config();
        final ChannelPipeline pipeline = pipeline();
        final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
        allocHandle.reset(config);

        boolean closed = false;
        Throwable exception = null;
        try {
            try {
                do {
                    int localRead = doReadMessages(readBuf);
                    if (localRead == 0) {
                        break;
                    }
                    if (localRead < 0) {
                        closed = true;
                        break;
                    }
                    allocHandle.incMessagesRead(localRead);
                } while (allocHandle.continueReading());
            } catch (Throwable t) {
                exception = t;
            }

            int size = readBuf.size();
            for (int i = 0; i < size; i ++) {
                readPending = false;
                pipeline.fireChannelRead(readBuf.get(i));
            }
            readBuf.clear();
            allocHandle.readComplete();
            pipeline.fireChannelReadComplete();

            if (exception != null) {
                closed = closeOnReadError(exception);
                pipeline.fireExceptionCaught(exception);
            }

            if (closed) {
                inputShutdown = true;
                if (isOpen()) {
                    close(voidPromise());
                }
            }
        } finally {
            if (!readPending && !config.isAutoRead()) {
                removeReadOp();
            }
        }
    }
}
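The shape of the read() loop above is: drain up to a per-read limit of messages into readBuf, then deliver each one downstream. A stripped-down sketch of just that two-phase pattern, with ReadLoop and its names being illustrative rather than Netty API:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the accumulate-then-fire pattern in NioMessageUnsafe.read().
class ReadLoop {
    static int read(Iterator<String> acceptedConnections, int maxMessagesPerRead,
                    Consumer<String> fireChannelRead) {
        List<String> readBuf = new ArrayList<>();
        // phase 1: accumulate (doReadMessages + continueReading in Netty)
        while (readBuf.size() < maxMessagesPerRead && acceptedConnections.hasNext()) {
            readBuf.add(acceptedConnections.next());
        }
        // phase 2: fire channelRead once per buffered message
        for (String msg : readBuf) {
            fireChannelRead.accept(msg);
        }
        return readBuf.size();
    }
}
```

On the server channel, each "message" delivered in phase 2 is an accepted connection that will reach the ServerBootstrapAcceptor discussed later.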
Once the channel has been created, it still needs to be initialized; that is, we trace init(channel), which resolves to io.netty.bootstrap.ServerBootstrap#init(Channel). The source is as follows:
@Override
void init(Channel channel) {
    // apply the ChannelOptions configured during the chained calls
    setChannelOptions(channel, newOptionsArray(), logger);
    setAttributes(channel, attrs0().entrySet().toArray(EMPTY_ATTRIBUTE_ARRAY));
    // channel here is the NioServerSocketChannel
    ChannelPipeline p = channel.pipeline();

    // capture the childGroup (the workGroup) and the child handler/options/attrs for the acceptor
    final EventLoopGroup currentChildGroup = childGroup;
    final ChannelHandler currentChildHandler = childHandler;
    final Entry<ChannelOption<?>, Object>[] currentChildOptions;
    synchronized (childOptions) {
        currentChildOptions = childOptions.entrySet().toArray(EMPTY_OPTION_ARRAY);
    }
    final Entry<AttributeKey<?>, Object>[] currentChildAttrs = childAttrs.entrySet().toArray(EMPTY_ATTRIBUTE_ARRAY);

    p.addLast(new ChannelInitializer<Channel>() {
        @Override
        public void initChannel(final Channel ch) {
            // ch here is the NioServerSocketChannel
            final ChannelPipeline pipeline = ch.pipeline();
            // config is the bootstrap's config; config.handler() is the handler for the
            // bossGroup's server channel, set via handler()
            ChannelHandler handler = config.handler();
            // our chain never called handler(), so this branch does not execute
            if (handler != null) {
                pipeline.addLast(handler);
            }

            // ch.eventLoop() is a NioEventLoop from the bossGroup
            ch.eventLoop().execute(new Runnable() {
                @Override
                public void run() {
                    // add the acceptor handler to the server channel's pipeline
                    pipeline.addLast(new ServerBootstrapAcceptor(
                            ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs));
                }
            });
        }
    });
}
- Tracing setChannelOptions, which invokes io.netty.bootstrap.AbstractBootstrap#setChannelOptions:
static void setChannelOptions(
        Channel channel, Map.Entry<ChannelOption<?>, Object>[] options, InternalLogger logger) {
    for (Map.Entry<ChannelOption<?>, Object> e: options) {
        setChannelOption(channel, e.getKey(), e.getValue(), logger);
    }
}

private static void setChannelOption(
        Channel channel, ChannelOption<?> option, Object value, InternalLogger logger) {
    try {
        if (!channel.config().setOption((ChannelOption<Object>) option, value)) {
            logger.warn("Unknown channel option '{}' for channel '{}'", option, channel);
        }
    } catch (Throwable t) {
        logger.warn(
                "Failed to set channel option '{}' with value '{}' for channel '{}'", option, value, channel, t);
    }
}
This invokes io.netty.channel.socket.nio.NioServerSocketChannel.NioServerSocketChannelConfig#setOption():
@Override
public <T> boolean setOption(ChannelOption<T> option, T value) {
    if (PlatformDependent.javaVersion() >= 7 && option instanceof NioChannelOption) {
        return NioChannelOption.setOption(jdkChannel(), (NioChannelOption<T>) option, value);
    }
    return super.setOption(option, value);
}
This invokes io.netty.channel.socket.DefaultServerSocketChannelConfig#setOption():
public <T> boolean setOption(ChannelOption<T> option, T value) {
    validate(option, value);

    if (option == SO_RCVBUF) {
        setReceiveBufferSize((Integer) value);
    } else if (option == SO_REUSEADDR) {
        setReuseAddress((Boolean) value);
    } else if (option == SO_BACKLOG) {
        setBacklog((Integer) value);
    } else {
        return super.setOption(option, value);
    }

    return true;
}

// the actual assignment of the parameter
@Override
public ServerSocketChannelConfig setBacklog(int backlog) {
    checkPositiveOrZero(backlog, "backlog");
    this.backlog = backlog;
    return this;
}
- Having examined what each captured variable means, we now analyze new ServerBootstrapAcceptor(ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs), because this object is the only handler on the pipeline of the bossGroup's NioServerSocketChannel.
This invokes the io.netty.bootstrap.ServerBootstrap.ServerBootstrapAcceptor#ServerBootstrapAcceptor() constructor:
ServerBootstrapAcceptor(
        final Channel channel, EventLoopGroup childGroup, ChannelHandler childHandler,
        Entry<ChannelOption<?>, Object>[] childOptions, Entry<AttributeKey<?>, Object>[] childAttrs) {
    // childGroup is the workGroup
    this.childGroup = childGroup;
    // the handler configured via childHandler()
    this.childHandler = childHandler;
    // the options configured via childOption()
    this.childOptions = childOptions;
    this.childAttrs = childAttrs;
    enableAutoReadTask = new Runnable() {
        @Override
        public void run() {
            channel.config().setAutoRead(true);
        }
    };
}

// the overridden inbound callback
@Override
@SuppressWarnings("unchecked")
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // when the bossGroup's server channel detects an accept event, msg is the new child channel
    final Channel child = (Channel) msg;

    child.pipeline().addLast(childHandler);

    setChannelOptions(child, childOptions, logger);
    setAttributes(child, childAttrs);

    try {
        // register the accepted connection with the childGroup's (workGroup's) selector; traced below
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}
One question remains: when is channelRead invoked? When a client connects to the server, the underlying JDK ServerSocketChannel raises a SelectionKey.OP_ACCEPT event, and NioServerSocketChannel's doReadMessages() method is then called:
@Override
protected int doReadMessages(List<Object> buf) throws Exception {
    SocketChannel ch = SocketUtils.accept(javaChannel());

    try {
        if (ch != null) {
            // wrap the accepted JDK channel in a NioSocketChannel and add it to the buffer
            buf.add(new NioSocketChannel(this, ch));
            return 1;
        }
    } catch (Throwable t) {
        logger.warn("Failed to create a new channel from an accepted socket.", t);

        try {
            ch.close();
        } catch (Throwable t2) {
            logger.warn("Failed to close a socket.", t2);
        }
    }
    return 0;
}
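The null check above exists because a non-blocking accept() returns null when no connection is pending, which this plain-JDK snippet demonstrates (NonBlockingAccept is an illustrative sketch, not Netty code):

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Demonstrates why doReadMessages must handle a null return from accept().
class NonBlockingAccept {
    static boolean acceptReturnsNullWhenIdle() {
        try (ServerSocketChannel ssc = ServerSocketChannel.open()) {
            ssc.configureBlocking(false);
            ssc.bind(new InetSocketAddress("127.0.0.1", 0)); // bound, but nobody has connected
            SocketChannel ch = ssc.accept();                 // non-blocking: returns immediately
            return ch == null;
        } catch (Exception e) {
            return false;
        }
    }
}
```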
(2) Tracing doBind0()
This invokes io.netty.bootstrap.AbstractBootstrap#doBind0():
private static void doBind0(
final ChannelFuture regFuture, final Channel channel,
final SocketAddress localAddress, final ChannelPromise promise) {
// channel here is the NioServerSocketChannel, whose eventLoop belongs to the bossGroup
// we first trace channel.eventLoop().execute()
channel.eventLoop().execute(new Runnable() {
@Override
public void run() {
if (regFuture.isSuccess()) {
// this bind call is traced in detail below
channel.bind(localAddress, promise).addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
} else {
promise.setFailure(regFuture.cause());
}
}
});
}
- Tracing channel.eventLoop().execute()
This invokes io.netty.util.concurrent.SingleThreadEventExecutor#execute(java.lang.Runnable):
@Override
public void execute(Runnable task) {
    ObjectUtil.checkNotNull(task, "task");
    execute(task, !(task instanceof LazyRunnable) && wakesUpForTask(task));
}

private void execute(Runnable task, boolean immediate) {
    boolean inEventLoop = inEventLoop();
    // the task is always queued first
    addTask(task);
    // if the caller is not the event-loop thread itself, make sure the loop's thread is started
    if (!inEventLoop) {
        // start the thread; trace this call
        startThread();
        if (isShutdown()) {
            boolean reject = false;
            try {
                if (removeTask(task)) {
                    reject = true;
                }
            } catch (UnsupportedOperationException e) {
                // The task queue does not support removal so the best thing we can do is to just move on and
                // hope we will be able to pick-up the task before its completely terminated.
                // In worst case we will log on termination.
            }
            if (reject) {
                reject();
            }
        }
    }

    if (!addTaskWakesUp && immediate) {
        wakeup(inEventLoop);
    }
}

private void startThread() {
    if (state == ST_NOT_STARTED) {
        // STATE_UPDATER is an atomic field updater maintained by SingleThreadEventExecutor to track the thread state
        if (STATE_UPDATER.compareAndSet(this, ST_NOT_STARTED, ST_STARTED)) {
            boolean success = false;
            try {
                // trace this call
                doStartThread();
                success = true;
            } finally {
                if (!success) {
                    STATE_UPDATER.compareAndSet(this, ST_STARTED, ST_NOT_STARTED);
                }
            }
        }
    }
}

private void doStartThread() {
    assert thread == null;
    executor.execute(new Runnable() {
        @Override
        public void run() {
            thread = Thread.currentThread();
            if (interrupted) {
                thread.interrupt();
            }

            boolean success = false;
            updateLastExecutionTime();
            try {
                // trace this call: the event loop itself
                SingleThreadEventExecutor.this.run();
                success = true;
            } catch (Throwable t) {
                // ... (error handling and shutdown logic omitted)
            }
        }
    });
}
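The transition from ST_NOT_STARTED to ST_STARTED is guarded by a CAS on a volatile int state field, so only the first caller actually starts the thread. A self-contained sketch of that guard, with Loop and its fields being illustrative names:

```java
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

// Sketch of the CAS-guarded start in SingleThreadEventExecutor.startThread().
class Loop {
    static final int ST_NOT_STARTED = 1;
    static final int ST_STARTED = 2;
    static final AtomicIntegerFieldUpdater<Loop> STATE =
            AtomicIntegerFieldUpdater.newUpdater(Loop.class, "state");

    volatile int state = ST_NOT_STARTED;
    boolean started;

    void startThread() {
        // only the first caller wins the CAS and actually starts the thread
        if (state == ST_NOT_STARTED && STATE.compareAndSet(this, ST_NOT_STARTED, ST_STARTED)) {
            started = true; // stand-in for doStartThread()
        }
    }
}
```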
This invokes io.netty.channel.nio.NioEventLoop#run():
protected void run() {
    int selectCnt = 0;
    // infinite loop
    for (;;) {
        try {
            int strategy;
            try {
                strategy = selectStrategy.calculateStrategy(selectNowSupplier, hasTasks());
                switch (strategy) {
                case SelectStrategy.CONTINUE:
                    continue;

                case SelectStrategy.BUSY_WAIT:
                    // fall-through to SELECT since the busy-wait is not supported with NIO

                case SelectStrategy.SELECT:
                    long curDeadlineNanos = nextScheduledTaskDeadlineNanos();
                    if (curDeadlineNanos == -1L) {
                        curDeadlineNanos = NONE; // nothing on the calendar
                    }
                    nextWakeupNanos.set(curDeadlineNanos);
                    try {
                        if (!hasTasks()) {
                            // block on the selector
                            strategy = select(curDeadlineNanos);
                        }
                    } finally {
                        nextWakeupNanos.lazySet(AWAKE);
                    }
                    // fall through
                default:
                }
            } catch (IOException e) {
                // ... (selector rebuild on IOException omitted)
                continue;
            }

            selectCnt++;
            cancelledKeys = 0;
            needsToSelectAgain = false;
            final int ioRatio = this.ioRatio;
            boolean ranTasks;
            if (ioRatio == 100) {
                try {
                    if (strategy > 0) {
                        // handle the I/O events that were selected
                        processSelectedKeys();
                    }
                } finally {
                    // handle the tasks in the task queue
                    // Ensure we always run tasks.
                    ranTasks = runAllTasks();
                }
            } else if (strategy > 0) {
                final long ioStartTime = System.nanoTime();
                try {
                    processSelectedKeys();
                } finally {
                    // Ensure we always run tasks.
                    final long ioTime = System.nanoTime() - ioStartTime;
                    ranTasks = runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
                }
            } else {
                ranTasks = runAllTasks(0); // This will run the minimum number of tasks
            }

            if (ranTasks || strategy > 0) {
                if (selectCnt > MIN_PREMATURE_SELECTOR_RETURNS && logger.isDebugEnabled()) {
                    logger.debug("Selector.select() returned prematurely {} times in a row for Selector {}.",
                            selectCnt - 1, selector);
                }
                selectCnt = 0;
            } else if (unexpectedSelectorWakeup(selectCnt)) { // Unexpected wakeup (unusual case)
                selectCnt = 0;
            }
        } catch (CancelledKeyException e) {
            // Harmless exception - log anyway
            if (logger.isDebugEnabled()) {
                logger.debug(CancelledKeyException.class.getSimpleName() + " raised by a Selector {} - JDK bug?",
                        selector, e);
            }
        } catch (Throwable t) {
            handleLoopException(t);
        }
        // Always handle shutdown even if the loop processing threw an exception.
        try {
            if (isShuttingDown()) {
                closeAll();
                if (confirmShutdown()) {
                    return;
                }
            }
        } catch (Throwable t) {
            handleLoopException(t);
        }
    }
}
Next we analyze the three phases of the loop: select, processSelectedKeys, and runAllTasks.
- select
// the select phase
private int select(long deadlineNanos) throws IOException {
    if (deadlineNanos == NONE) {
        // no scheduled task: block until an event arrives
        return selector.select();
    }
    // Timeout will only be 0 if deadline is within 5 microsecs
    long timeoutMillis = deadlineToDelayNanos(deadlineNanos + 995000L) / 1000000L;
    return timeoutMillis <= 0 ? selector.selectNow() : selector.select(timeoutMillis);
}
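The 995000L constant implements round-up-to-milliseconds with a 5-microsecond floor: a remaining delay under 5 µs yields a timeout of 0 (so selectNow() is used), while anything at or above 5 µs yields at least 1 ms. A sketch of just that arithmetic, operating directly on the remaining delay in nanoseconds rather than the absolute deadline Netty passes around (SelectTimeout is an illustrative name):

```java
// Sketch of the deadline arithmetic in NioEventLoop.select().
class SelectTimeout {
    static long timeoutMillis(long delayNanos) {
        // adding 995,000 ns rounds up: delays of 5 microseconds or more yield at least 1 ms
        return (delayNanos + 995_000L) / 1_000_000L;
    }
}
```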
- processSelectedKeys
// handle the events detected on the selector
private void processSelectedKeys() {
    if (selectedKeys != null) {
        processSelectedKeysOptimized();
    } else {
        processSelectedKeysPlain(selector.selectedKeys());
    }
}

// the same pattern as hand-written JDK NIO code
private void processSelectedKeysPlain(Set<SelectionKey> selectedKeys) {
    if (selectedKeys.isEmpty()) {
        return;
    }

    Iterator<SelectionKey> i = selectedKeys.iterator();
    for (;;) {
        final SelectionKey k = i.next();
        final Object a = k.attachment();
        i.remove();

        if (a instanceof AbstractNioChannel) {
            processSelectedKey(k, (AbstractNioChannel) a);
        } else {
            @SuppressWarnings("unchecked")
            NioTask<SelectableChannel> task = (NioTask<SelectableChannel>) a;
            processSelectedKey(k, task);
        }

        if (!i.hasNext()) {
            break;
        }

        if (needsToSelectAgain) {
            selectAgain();
            selectedKeys = selector.selectedKeys();

            // Create the iterator again to avoid ConcurrentModificationException
            if (selectedKeys.isEmpty()) {
                break;
            } else {
                i = selectedKeys.iterator();
            }
        }
    }
}
- runAllTasks
This invokes io.netty.util.concurrent.SingleThreadEventExecutor#runAllTasks():
protected boolean runAllTasks() {
    assert inEventLoop();
    boolean fetchedAll;
    boolean ranAtLeastOne = false;

    do {
        fetchedAll = fetchFromScheduledTaskQueue();
        if (runAllTasksFrom(taskQueue)) {
            ranAtLeastOne = true;
        }
    } while (!fetchedAll); // keep on processing until we fetched all scheduled tasks.

    if (ranAtLeastOne) {
        lastExecutionTime = ScheduledFutureTask.nanoTime();
    }
    afterRunningAllTasks();
    return ranAtLeastOne;
}
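The core of runAllTasks() is a drain loop over a task queue that reports whether at least one task ran. A stripped-down sketch of that drain (TaskDrain is an illustrative name; Netty additionally merges in scheduled tasks before draining):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the drain loop at the heart of runAllTasks().
class TaskDrain {
    final Queue<Runnable> taskQueue = new ArrayDeque<>();

    boolean runAllTasks() {
        boolean ranAtLeastOne = false;
        Runnable task;
        // poll and run until the queue is empty
        while ((task = taskQueue.poll()) != null) {
            task.run();
            ranAtLeastOne = true;
        }
        return ranAtLeastOne;
    }
}
```

The bind task submitted by doBind0() is executed by exactly this phase of the event loop, which is how the bind ends up running on the boss NioEventLoop's thread.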
- Tracing channel.bind()
This invokes AbstractChannel#bind(java.net.SocketAddress, io.netty.channel.ChannelPromise):
@Override
public ChannelFuture bind(SocketAddress localAddress, ChannelPromise promise) {
    return pipeline.bind(localAddress, promise);
}
This invokes io.netty.channel.DefaultChannelPipeline#bind(java.net.SocketAddress, io.netty.channel.ChannelPromise):
@Override
public final ChannelFuture bind(SocketAddress localAddress, ChannelPromise promise) {
    return tail.bind(localAddress, promise);
}
This invokes io.netty.channel.AbstractChannelHandlerContext#bind(java.net.SocketAddress, io.netty.channel.ChannelPromise):
@Override
public ChannelFuture bind(final SocketAddress localAddress, final ChannelPromise promise) {
    // search backwards from the tail for the next outbound handler
    final AbstractChannelHandlerContext next = findContextOutbound(MASK_BIND);
    // get the executor of the next handler
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        next.invokeBind(localAddress, promise);
    } else {
        safeExecute(executor, new Runnable() {
            @Override
            public void run() {
                next.invokeBind(localAddress, promise);
            }
        }, promise, null, false);
    }
    return promise;
}

private void invokeBind(SocketAddress localAddress, ChannelPromise promise) {
    if (invokeHandler()) {
        try {
            // trace this call
            ((ChannelOutboundHandler) handler()).bind(this, localAddress, promise);
        } catch (Throwable t) {
            notifyOutboundHandlerException(t, promise);
        }
    } else {
        bind(localAddress, promise);
    }
}

@Override
public void bind(
        ChannelHandlerContext ctx, SocketAddress localAddress, ChannelPromise promise) {
    unsafe.bind(localAddress, promise);
}
This invokes io.netty.channel.AbstractChannel.AbstractUnsafe#bind():
@Override
public final void bind(final SocketAddress localAddress, final ChannelPromise promise) {
    assertEventLoop();

    if (!promise.setUncancellable() || !ensureOpen(promise)) {
        return;
    }

    // See: https://github.com/netty/netty/issues/576
    if (Boolean.TRUE.equals(config().getOption(ChannelOption.SO_BROADCAST)) &&
        localAddress instanceof InetSocketAddress &&
        !((InetSocketAddress) localAddress).getAddress().isAnyLocalAddress() &&
        !PlatformDependent.isWindows() && !PlatformDependent.maybeSuperUser()) {
        // Warn a user about the fact that a non-root user can't receive a
        // broadcast packet on *nix if the socket is bound on non-wildcard address.
        logger.warn(
                "A non-root user can't receive a broadcast packet if the socket " +
                "is not bound to a wildcard address; binding to a non-wildcard " +
                "address (" + localAddress + ") anyway as requested.");
    }

    boolean wasActive = isActive();
    try {
        // trace this call: the actual JDK-level bind
        doBind(localAddress);
    } catch (Throwable t) {
        safeSetFailure(promise, t);
        closeIfClosed();
        return;
    }

    if (!wasActive && isActive()) {
        invokeLater(new Runnable() {
            @Override
            public void run() {
                pipeline.fireChannelActive();
            }
        });
    }

    safeSetSuccess(promise);
}