Netty Source Code Analysis: Binding a Server Port and Accepting Connections
Source notes
The source was forked into a GitHub repository; the version analyzed is 4.1.38.
An example
To make things concrete, I adapted a simple example from the source tree, found under io.netty.example.echo in the example module. The code is as follows:
Server code:
public final class EchoServer {

    static final boolean SSL = System.getProperty("ssl") != null;
    static final int PORT = Integer.parseInt(System.getProperty("port", "8007"));

    public static void main(String[] args) throws Exception {
        // Configure SSL.
        final SslContext sslCtx;
        if (SSL) {
            SelfSignedCertificate ssc = new SelfSignedCertificate();
            sslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()).build();
        } else {
            sslCtx = null;
        }

        // Configure the server.
        // The server side uses two EventLoopGroups, whereas the client has only one.
        // bossGroup gets a single thread and is responsible for accept events.
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        // No thread count configured: defaults to 2 * the number of CPU cores; handles IO events.
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        final EchoServerHandler serverHandler = new EchoServerHandler();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .option(ChannelOption.SO_BACKLOG, 100)
             .handler(new LoggingHandler(LogLevel.INFO))
             // Two ways to set keepalive:
             // .childOption(ChannelOption.SO_KEEPALIVE, true)
             .childOption(NioChannelOption.SO_KEEPALIVE, true)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ChannelPipeline p = ch.pipeline();
                     if (sslCtx != null) {
                         p.addLast(sslCtx.newHandler(ch.alloc()));
                     }
                     //p.addLast(new LoggingHandler(LogLevel.INFO));
                     p.addLast(serverHandler);
                 }
             });

            // Start the server.
            ChannelFuture f = b.bind(PORT).sync();

            // Wait until the server socket is closed.
            f.channel().closeFuture().sync();
        } finally {
            // Shut down all event loops to terminate all threads.
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
ServerHandler code:
public class EchoServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ctx.write(Unpooled.wrappedBuffer("hello".getBytes()));
        ctx.fireChannelRead(msg);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Close the connection when an exception is raised.
        cause.printStackTrace();
        ctx.close();
    }
}
This handler is simple: it writes "hello" back whenever data arrives, so requesting http://127.0.0.1:8007 returns hello.
Server-side bind flow
EchoServer performs two steps:
- configure the ServerBootstrap, i.e. the server-side launcher;
- bind the port and start the server.
Of these, let's focus on the startup process: the source flow of port binding.
The core flow lives in the parent class AbstractBootstrap, where we can see there are only two essential steps: (1) initialize a ServerSocketChannel and register it with an EventLoop from the bossGroup; (2) execute bind on the channel.
private ChannelFuture doBind(final SocketAddress localAddress) {
    // Initialize a ServerSocketChannel and register it with an EventLoop from bossGroup.
    final ChannelFuture regFuture = initAndRegister();
    final Channel channel = regFuture.channel();
    if (regFuture.cause() != null) {
        return regFuture;
    }

    if (regFuture.isDone()) {
        // At this point we know that the registration was complete and successful.
        ChannelPromise promise = channel.newPromise();
        // Bind the port.
        doBind0(regFuture, channel, localAddress, promise);
        return promise;
    } else {
        // Registration future is almost always fulfilled already, but just in case it's not.
        final PendingRegistrationPromise promise = new PendingRegistrationPromise(channel);
        regFuture.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                Throwable cause = future.cause();
                if (cause != null) {
                    // Registration on the EventLoop failed so fail the ChannelPromise directly to not cause an
                    // IllegalStateException once we try to access the EventLoop of the Channel.
                    promise.setFailure(cause);
                } else {
                    // Registration was successful, so set the correct executor to use.
                    // See https://github.com/netty/netty/issues/2586
                    promise.registered();

                    doBind0(regFuture, channel, localAddress, promise);
                }
            }
        });
        return promise;
    }
}
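The branching in doBind(...) is the generic "the future may already be done" pattern: act immediately if registration has completed, otherwise attach a listener. The same shape can be sketched with the JDK's CompletableFuture (a simplified illustration, not Netty's ChannelFuture API):

```java
import java.util.concurrent.CompletableFuture;

// Sketch of doBind's branching: if the registration future is already done,
// bind immediately; otherwise register a callback that binds on completion.
public class FutureBranchDemo {
    static void whenRegistered(CompletableFuture<Void> regFuture, Runnable bind) {
        if (regFuture.isDone() && !regFuture.isCompletedExceptionally()) {
            bind.run();                         // registration already finished
        } else {
            regFuture.whenComplete((v, cause) -> {
                if (cause == null) {
                    bind.run();                 // registration succeeded later
                } else {
                    System.out.println("failed: " + cause.getMessage());
                }
            });
        }
    }

    public static void main(String[] args) {
        // Already-complete future: the bind action runs inline.
        CompletableFuture<Void> done = CompletableFuture.completedFuture(null);
        whenRegistered(done, () -> System.out.println("bind now"));

        // Pending future: the bind action runs when complete(...) is called.
        CompletableFuture<Void> pending = new CompletableFuture<>();
        whenRegistered(pending, () -> System.out.println("bind later"));
        pending.complete(null); // the attached callback fires here
    }
}
```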
Now let's look at each step in depth.
Channel initialization and registration
final ChannelFuture initAndRegister() {
    Channel channel = null;
    try {
        channel = channelFactory.newChannel();
        // Initialize the Channel from the configuration held by ServerBootstrap.
        init(channel);
    } catch (Throwable t) {
        ···
    }

    // Pick an EventLoop from bossGroup and register the channel with it.
    ChannelFuture regFuture = config().group().register(channel);
    // ... handle the registration future ...
    return regFuture;
}
Channel initialization is straightforward: the settings in the config are applied to the channel. Notably, the init() method adds a ServerBootstrapAcceptor to the channel's pipeline; this handler comes into play later, when connection requests are accepted. The code is as follows:
ch.eventLoop().execute(new Runnable() {
    @Override
    public void run() {
        pipeline.addLast(new ServerBootstrapAcceptor(
                ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs));
    }
});
The key part here is the channel registration. Following the source layer by layer down into SingleThreadEventLoop, we can see the real work is delegated to the channel's unsafe object:
public ChannelFuture register(final ChannelPromise promise) {
    ObjectUtil.checkNotNull(promise, "promise");
    // register() registers the channel with the event loop.
    promise.channel().unsafe().register(this, promise);
    return promise;
}
Following the source into AbstractUnsafe, register looks like this:
public final void register(EventLoop eventLoop, final ChannelPromise promise) {
    // Validation code.
    ...

    AbstractChannel.this.eventLoop = eventLoop;

    // Check whether the current thread is the one held by the event loop; here this returns false.
    if (eventLoop.inEventLoop()) {
        register0(promise);
    } else {
        try {
            // EventLoop extends Executor; we are on a different thread, so submit the task to it.
            eventLoop.execute(new Runnable() {
                @Override
                public void run() {
                    register0(promise);
                }
            });
        } catch (Throwable t) {
            // Exception handling.
            ···
        }
    }
}
The core line here is eventLoop.execute(...). Following it into SingleThreadEventExecutor, execution proceeds as follows:
public void execute(Runnable task) {
    if (task == null) {
        throw new NullPointerException("task");
    }

    boolean inEventLoop = inEventLoop();
    // Add the register task to the task queue.
    addTask(task);
    if (!inEventLoop) {
        // Allocate the event loop's thread and start it.
        startThread();
        if (isShutdown()) {
            boolean reject = false;
            try {
                if (removeTask(task)) {
                    reject = true;
                }
            } catch (UnsupportedOperationException e) {
            }
            if (reject) {
                reject();
            }
        }
    }
}
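The inEventLoop() test plus lazy thread start above can be modeled in miniature. The following is a simplified sketch, not Netty's actual implementation: a single-threaded executor that remembers its dedicated thread, queues tasks, and starts the thread lazily on first use, so inEventLoop() is false from outside and true inside a running task.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

// Simplified model of SingleThreadEventExecutor's execute() path:
// add the task to a queue, and if the caller is not the loop's own
// thread, lazily start that thread to drain the queue.
public class TinyEventExecutor {
    private final BlockingQueue<Runnable> taskQueue = new LinkedBlockingQueue<>();
    private volatile Thread thread; // assigned when the worker starts

    public boolean inEventLoop() {
        return Thread.currentThread() == thread;
    }

    public void execute(Runnable task) {
        taskQueue.add(task);   // mirrors addTask(task)
        if (!inEventLoop()) {
            startThread();     // mirrors startThread()
        }
    }

    private synchronized void startThread() {
        if (thread != null) {
            return;            // already started
        }
        thread = new Thread(() -> {
            // Drain tasks forever, like run() in NioEventLoop (minus the selector).
            for (;;) {
                try {
                    taskQueue.take().run();
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        thread.setDaemon(true);
        thread.start();
    }

    public static void main(String[] args) throws Exception {
        TinyEventExecutor loop = new TinyEventExecutor();
        CountDownLatch done = new CountDownLatch(1);
        boolean fromMain = loop.inEventLoop();     // false: main is not the loop thread
        final boolean[] fromTask = new boolean[1];
        loop.execute(() -> {
            fromTask[0] = loop.inEventLoop();      // true: runs on the loop thread
            done.countDown();
        });
        done.await();
        System.out.println("from main: " + fromMain + ", from task: " + fromTask[0]);
    }
}
```

This is why the register(...) code above must branch: a caller outside the loop thread cannot touch channel state directly and instead submits register0 as a task.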
SingleThreadEventExecutor holds a thread field; startThread() assigns the newly started thread to it and then invokes run(). The run() implementation in NioEventLoop is as follows:
protected void run() {
    for (;;) {
        try {
            try {
                switch (selectStrategy.calculateStrategy(selectNowSupplier, hasTasks())) {
                // Indicates the strategy should be retried; does not occur with the default strategy.
                case SelectStrategy.CONTINUE:
                    continue;

                case SelectStrategy.BUSY_WAIT:
                    // falls through to SELECT, since the NIO transport does not support busy-waiting

                // The blocking-select strategy: with the default strategy, SELECT is returned when hasTasks == false.
                case SelectStrategy.SELECT:
                    // Perform a blocking select on the selector; on IO multiplexing see
                    // https://segmentfault.com/a/1190000003063859
                    // wakenUp.getAndSet(false) resets wakenUp to false and returns the previous value.
                    select(wakenUp.getAndSet(false));
                    if (wakenUp.get()) {
                        // Wake up any thread blocked in select() so it returns immediately.
                        selector.wakeup();
                    }
                    // fall through
                default:
                }
            } catch (IOException e) {
                rebuildSelector0();
                handleLoopException(e);
                continue;
            }

            cancelledKeys = 0;
            needsToSelectAgain = false;
            final int ioRatio = this.ioRatio;
            if (ioRatio == 100) {
                try {
                    // Process the ready IO events the channels are interested in.
                    processSelectedKeys();
                } finally {
                    // Ensure we always run tasks.
                    runAllTasks();
                }
            } else {
                final long ioStartTime = System.nanoTime();
                try {
                    processSelectedKeys();
                } finally {
                    // Ensure we always run tasks.
                    final long ioTime = System.nanoTime() - ioStartTime;
                    // Use the time spent in processSelectedKeys() as the baseline to compute
                    // how long runAllTasks(long timeoutNanos) is allowed to run.
                    runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
                }
            }
        } catch (Throwable t) {
            handleLoopException(t);
        }
        // Always handle shutdown even if the loop processing threw an exception.
        try {
            if (isShuttingDown()) {
                closeAll();
                if (confirmShutdown()) {
                    return;
                }
            }
        } catch (Throwable t) {
            handleLoopException(t);
        }
    }
}
run() is an infinite loop: Netty executes the code above repeatedly on the thread held by the current event loop. Within each iteration, the configured ioRatio parameter determines how the time is split between select/IO processing and ordinary tasks.
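The time budget formula at the end of run() can be checked with a little arithmetic: after IO processing took ioTime nanoseconds, ordinary tasks get ioTime * (100 - ioRatio) / ioRatio.

```java
// Worked example of NioEventLoop's ioRatio budget: the time granted to
// runAllTasks(timeoutNanos) relative to the time just spent on IO.
public class IoRatioDemo {
    static long taskBudgetNanos(long ioTimeNanos, int ioRatio) {
        return ioTimeNanos * (100 - ioRatio) / ioRatio;
    }

    public static void main(String[] args) {
        long ioTime = 1_000_000L; // pretend IO processing just took 1 ms
        // The default ioRatio is 50: tasks get exactly as much time as IO used.
        System.out.println(taskBudgetNanos(ioTime, 50));  // 1000000
        // With ioRatio 80, tasks get only a quarter of the IO time.
        System.out.println(taskBudgetNanos(ioTime, 80));  // 250000
    }
}
```

So a higher ioRatio prioritizes IO: the closer it gets to 100, the less time non-IO tasks are allowed per iteration (and at exactly 100 the loop takes the separate branch that runs all tasks without a timeout).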
Binding the channel
The bind step is relatively simple: bind is executed on the pipeline and propagated down the pipeline's handler chain:
private static void doBind0(
        final ChannelFuture regFuture, final Channel channel,
        final SocketAddress localAddress, final ChannelPromise promise) {

    channel.eventLoop().execute(new Runnable() {
        @Override
        public void run() {
            if (regFuture.isSuccess()) {
                channel.bind(localAddress, promise).addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
            } else {
                promise.setFailure(regFuture.cause());
            }
        }
    });
}
Accepting connections
The accept flow is straightforward. As shown above, once startThread() has started the event loop's thread, run() keeps looping on select. The event loop holds a native Java selector, so when a client initiates a connection, the selector picks up the ACCEPT event. The key-processing source in NioEventLoop is as follows:
private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
    final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
    if (!k.isValid()) {
        final EventLoop eventLoop;
        try {
            eventLoop = ch.eventLoop();
        } catch (Throwable ignored) {
            return;
        }
        // Only close ch if it is still registered with this EventLoop.
        if (eventLoop != this || eventLoop == null) {
            return;
        }
        // Close the channel since the key is no longer valid.
        unsafe.close(unsafe.voidPromise());
        return;
    }

    try {
        int readyOps = k.readyOps();
        // We first need to call finishConnect() before try to trigger a read(...) or write(...) as otherwise
        // the NIO JDK channel implementation may throw a NotYetConnectedException.
        if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
            // remove OP_CONNECT as otherwise Selector.select(..) will always return without blocking
            // See https://github.com/netty/netty/issues/924
            int ops = k.interestOps();
            ops &= ~SelectionKey.OP_CONNECT;
            k.interestOps(ops);

            unsafe.finishConnect();
        }

        if ((readyOps & SelectionKey.OP_WRITE) != 0) {
            ch.unsafe().forceFlush();
        }

        // Also check for readOps of 0 to workaround possible JDK bug which may otherwise lead
        // to a spin loop
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            unsafe.read();
        }
    } catch (CancelledKeyException ignored) {
        unsafe.close(unsafe.voidPromise());
    }
}
Note that OP_ACCEPT and the other events are defined as constants that are distinguished from each other by bit shifts, so a single & operation is enough to test interest in an event, and bitwise operations are very cheap.
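The constants come straight from the JDK's java.nio.channels.SelectionKey, and the readiness check is exactly the bitwise test seen in processSelectedKey above:

```java
import java.nio.channels.SelectionKey;

// Each interest op is a distinct bit in an int (values defined by the JDK),
// so one bitwise AND tests whether a set of ops contains a given event.
public class OpsDemo {
    public static void main(String[] args) {
        System.out.println(SelectionKey.OP_READ);    // 1  (1 << 0)
        System.out.println(SelectionKey.OP_WRITE);   // 4  (1 << 2)
        System.out.println(SelectionKey.OP_CONNECT); // 8  (1 << 3)
        System.out.println(SelectionKey.OP_ACCEPT);  // 16 (1 << 4)

        int readyOps = SelectionKey.OP_ACCEPT;       // as delivered to processSelectedKey
        // The same test Netty performs: is the key ready to read or to accept?
        boolean readable = (readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0;
        System.out.println(readable);                // true
    }
}
```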
When a connection arrives, control enters unsafe.read(), which leads into the read() method of AbstractNioMessageChannel. Two things happen there: the pending connections are read (doReadMessages(readBuf)), and the read event is propagated along the pipeline (pipeline.fireChannelRead(readBuf.get(i))). The doReadMessages(readBuf) source is as follows:
//NioServerSocketChannel.class
protected int doReadMessages(List<Object> buf) throws Exception {
    // Accept the connection.
    SocketChannel ch = SocketUtils.accept(javaChannel());

    try {
        if (ch != null) {
            // Wrap the accepted channel in a NioSocketChannel and add it to the buffer list.
            buf.add(new NioSocketChannel(this, ch));
            return 1;
        }
    } catch (Throwable t) {
        logger.warn("Failed to create a new channel from an accepted socket.", t);

        try {
            ch.close();
        } catch (Throwable t2) {
            logger.warn("Failed to close a socket.", t2);
        }
    }

    return 0;
}
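Under the hood, SocketUtils.accept delegates to the JDK's ServerSocketChannel.accept(). Since Netty's channels run in non-blocking mode, accept() returns null when no connection is pending, which is why doReadMessages checks ch != null and returns 0 in that case. A plain-JDK sketch of that behavior (no Netty types involved):

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Non-blocking accept semantics: null when nothing is pending, a
// SocketChannel once a client has connected.
public class AcceptDemo {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0)); // any free port
        server.configureBlocking(false);       // same mode Netty uses

        SocketChannel ch = server.accept();    // nobody has connected yet
        System.out.println(ch == null);        // true: no pending connection

        // Connect a client so a later accept() returns a SocketChannel.
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1", port));

        // On loopback the connection is ready almost immediately; poll briefly.
        SocketChannel accepted;
        do {
            accepted = server.accept();
        } while (accepted == null);
        System.out.println(accepted != null);  // true

        client.close();
        accepted.close();
        server.close();
    }
}
```

In the real event loop there is no polling: the selector only delivers OP_ACCEPT when a connection is pending, so accept() normally succeeds on the first call.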
Next, the read event travels along the pipeline. Recall from the previous section that a ServerBootstrapAcceptor was added to the pipeline when the ServerSocketChannel was initialized; its read-handling logic is:
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // The incoming msg is the NioSocketChannel.
    // Initialize the child channel's properties.
    final Channel child = (Channel) msg;

    child.pipeline().addLast(childHandler);

    setChannelOptions(child, childOptions, logger);

    for (Entry<AttributeKey<?>, Object> e : childAttrs) {
        child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
    }

    try {
        // Register the NioSocketChannel with an EventLoop from childGroup.
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}
This registration proceeds exactly like the registration described in the previous section.
Summary
From the source analysis above, the server-side bind boils down to registering a ServerSocketChannel with an event loop and running select in a newly started thread to handle connection events. Each time a new connection is established, a NioSocketChannel is registered with an event loop from the childGroup, so that event loop's selector can select on the channel. Connection events and IO events are therefore handled on different threads in different groups, which matches the Reactor model exactly:
Through the multi-Reactor threading model, Netty splits "accepting a client's connection request" and "communicating with that client" across two Reactor threads. The mainReactor accepts client connection requests and hands the established connection over to a subReactor thread, which then carries out the communication with the client; this is what makes it possible to handle massive numbers of connections under high concurrency.
References
Netty source code: https://github.com/netty/netty
Analysis of the Reactor model: https://www.cnblogs.com/winner-0715/p/8733787.html