3. Netty Source Code Analysis - Netty Server Startup Flow

The version analyzed is Netty 4.1.x.

Typical Netty server-side code looks like this:

public static void main(String[] args) {
    EventLoopGroup boss = new NioEventLoopGroup();
    EventLoopGroup worker = new NioEventLoopGroup();
    ServerBootstrap bootstrap = new ServerBootstrap();
    bootstrap.group(boss, worker)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel socketChannel) throws Exception {
                     socketChannel.pipeline().addLast("hello world handler", new HelloWorldHandler());
                 }
             });
    try {
        ChannelFuture channelFuture = bootstrap.bind(8080).sync();
        channelFuture.channel().closeFuture().sync();
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        boss.shutdownGracefully();
        worker.shutdownGracefully();
    }
}

The boss and worker EventLoopGroups here are what are commonly called Netty's boss thread group and worker thread group: the boss group handles TCP connection (accept) events, while the worker group handles the I/O of the accepted connections.

NioEventLoopGroup inherits (via MultithreadEventLoopGroup) from MultithreadEventExecutorGroup, and most of NioEventLoopGroup's initialization work is done in that ancestor's constructor:

protected MultithreadEventExecutorGroup(int nThreads, Executor executor,
                                        EventExecutorChooserFactory chooserFactory, Object... args) {
    // nThreads defaults to cpu cores * 2 (see MultithreadEventLoopGroup below)
    if (nThreads <= 0) {
        throw new IllegalArgumentException(String.format("nThreads: %d (expected: > 0)", nThreads));
    }
    // executor is null by default
    if (executor == null) {
        executor = new ThreadPerTaskExecutor(newDefaultThreadFactory());
    }
    children = new EventExecutor[nThreads];
    for (int i = 0; i < nThreads; i ++) {
        boolean success = false;
        try {
            // Create the event loops for this group; newChild is implemented in NioEventLoopGroup and actually creates a NioEventLoop
            children[i] = newChild(executor, args);
            success = true;
        } catch (Exception e) {
            // TODO: Think about if this is a good exception type
            throw new IllegalStateException("failed to create a child event loop", e);
        } finally {
            if (!success) {
                for (int j = 0; j < i; j ++) {
                    children[j].shutdownGracefully();
                }

                for (int j = 0; j < i; j ++) {
                    EventExecutor e = children[j];
                    try {
                        while (!e.isTerminated()) {
                            e.awaitTermination(Integer.MAX_VALUE, TimeUnit.SECONDS);
                        }
                    } catch (InterruptedException interrupted) {
                        // Let the caller handle the interruption.
                        Thread.currentThread().interrupt();
                        break;
                    }
                }
            }
        }
    }
    // The chooser decides which event loop is picked for each subsequent operation; the default strategy is round-robin
    chooser = chooserFactory.newChooser(children);
    final FutureListener<Object> terminationListener = new FutureListener<Object>() {
        @Override
        public void operationComplete(Future<Object> future) throws Exception {
            if (terminatedChildren.incrementAndGet() == children.length) {
                terminationFuture.setSuccess(null);
            }
        }
    };
    for (EventExecutor e: children) {
        e.terminationFuture().addListener(terminationListener);
    }
    Set<EventExecutor> childrenSet = new LinkedHashSet<EventExecutor>(children.length);
    Collections.addAll(childrenSet, children);
    readonlyChildren = Collections.unmodifiableSet(childrenSet);
}

As you can see, NioEventLoopGroup finishes initializing the executors in the group at construction time; no threads are actually started at this point.

MultithreadEventLoopGroup substitutes a default thread count of cpu cores * 2 when the nThreads passed in is 0:

protected MultithreadEventLoopGroup(int nThreads, Executor executor, Object... args) {
    super(nThreads == 0 ? DEFAULT_EVENT_LOOP_THREADS : nThreads, executor, args);
}
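
For reference, DEFAULT_EVENT_LOOP_THREADS is computed in a static initializer of MultithreadEventLoopGroup (the io.netty.eventLoopThreads system property can override it); this is the actual source:

static {
    DEFAULT_EVENT_LOOP_THREADS = Math.max(1, SystemPropertyUtil.getInt(
            "io.netty.eventLoopThreads", NettyRuntime.availableProcessors() * 2));
}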

NioEventLoopGroup implements the newChild method:

protected EventLoop newChild(Executor executor, Object... args) throws Exception {
    EventLoopTaskQueueFactory queueFactory = args.length == 4 ? (EventLoopTaskQueueFactory) args[3] : null;
    return new NioEventLoop(this, executor, (SelectorProvider) args[0],
        ((SelectStrategyFactory) args[1]).newSelectStrategy(), (RejectedExecutionHandler) args[2], queueFactory);
}

This completes the thread-related initialization of NioEventLoopGroup.

bossGroup and workerGroup go through exactly the same process; they just end up handling different I/O events.

Before going further, NioEventLoopGroup deserves a closer look, since all of Netty's subsequent thread scheduling is built on top of it.

[Figure: NioEventLoopGroup class hierarchy]

Its class hierarchy is shown in the figure above. Its constructor is the MultithreadEventExecutorGroup constructor quoted earlier; the logic there is important, as a great deal of initialization happens in it.


To recap, MultithreadEventExecutorGroup's initialization does the following:

  1. Initialize the executor as a ThreadPerTaskExecutor, together with its ThreadFactory. As this shows, if no ThreadFactory is passed in, Netty defaults to DefaultThreadFactory; alternatively, an Executor can be passed to NioEventLoopGroup directly. See the sketch right after this item.
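
ThreadPerTaskExecutor itself is tiny: it simply spins up a new thread for every task submitted to it. This is essentially the Netty implementation:

public final class ThreadPerTaskExecutor implements Executor {
    private final ThreadFactory threadFactory;

    public ThreadPerTaskExecutor(ThreadFactory threadFactory) {
        this.threadFactory = ObjectUtil.checkNotNull(threadFactory, "threadFactory");
    }

    @Override
    public void execute(Runnable command) {
        // one fresh thread per task; NioEventLoop later captures this thread and loops forever
        threadFactory.newThread(command).start();
    }
}
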
  2. Initialize the EventExecutor array, whose size is the thread count we passed in; every EventExecutor is backed by the ThreadPerTaskExecutor above. Each element of the array is instantiated by calling newChild, which is implemented in NioEventLoopGroup:
protected EventLoop newChild(Executor executor, Object... args) throws Exception {
    EventLoopTaskQueueFactory queueFactory = args.length == 4 ? (EventLoopTaskQueueFactory) args[3] : null;
    return new NioEventLoop(this, executor, (SelectorProvider) args[0],
        ((SelectStrategyFactory) args[1]).newSelectStrategy(), (RejectedExecutionHandler) args[2], queueFactory);
}

The concrete EventExecutor implementation here is NioEventLoop. Keep in mind the arguments that NioEventLoopGroup passes up its constructor chain; in NioEventLoopGroup:

public NioEventLoopGroup(int nThreads, ThreadFactory threadFactory,
        final SelectorProvider selectorProvider, final SelectStrategyFactory selectStrategyFactory) {
    super(nThreads, threadFactory, selectorProvider, selectStrategyFactory, RejectedExecutionHandlers.reject());
}

newChild then uses these parameters. selectorProvider is the platform's network I/O implementation: it is read from the system property java.nio.channels.spi.SelectorProvider and, failing that, looked up as a SelectorProvider service via the SPI mechanism; on Linux the default is typically the epoll-based provider. SelectStrategyFactory defaults to DefaultSelectStrategyFactory, and the RejectedExecutionHandler is an internal implementation class:

private static final RejectedExecutionHandler REJECT = new RejectedExecutionHandler() {
    @Override
    public void rejected(Runnable task, SingleThreadEventExecutor executor) {
        throw new RejectedExecutionException();
    }
};

With the default arguments, the EventLoopTaskQueueFactory here is null.

  3. Initialize the EventLoop chooser, implemented as follows:
public EventExecutorChooser newChooser(EventExecutor[] executors) {
    if (isPowerOfTwo(executors.length)) {
        return new PowerOfTwoEventExecutorChooser(executors);
    } else {
        return new GenericEventExecutorChooser(executors);
    }
}

The chooser simply increments a counter and maps it onto the executors array (bit-masking when the length is a power of two, modulo otherwise), returning an EventExecutor, i.e. a NioEventLoop.
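
Concretely, the two next() implementations in DefaultEventExecutorChooserFactory (in the version analyzed) look like this:

// PowerOfTwoEventExecutorChooser: bit-mask instead of modulo when the length is a power of two
@Override
public EventExecutor next() {
    return executors[idx.getAndIncrement() & executors.length - 1];
}

// GenericEventExecutorChooser: plain round-robin via modulo
@Override
public EventExecutor next() {
    return executors[Math.abs(idx.getAndIncrement() % executors.length)];
}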

  4. Add a terminationListener to every EventExecutor.
  5. Build readonlyChildren from children.

Next, bossGroup and workerGroup are bound to the ServerBootstrap; this is where the boss and worker roles are distinguished.

After that come the server-side NIO channel type and other settings. Netty actually instantiates the server NIO channel class via reflection, as explained below. Another important step here is configuring the list of handlers. The real server-side initialization then happens inside the bind call.
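
channel(NioServerSocketChannel.class) does no instantiation yet; it only records a ReflectiveChannelFactory, as the AbstractBootstrap source shows:

public B channel(Class<? extends C> channelClass) {
    return channelFactory(new ReflectiveChannelFactory<C>(
            ObjectUtil.checkNotNull(channelClass, "channelClass")));
}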

bind is handled in ServerBootstrap's parent class AbstractBootstrap:

public ChannelFuture bind(int inetPort) {
    return bind(new InetSocketAddress(inetPort));
}
public ChannelFuture bind(SocketAddress localAddress) {
    validate();
    return doBind(ObjectUtil.checkNotNull(localAddress, "localAddress"));
}
private ChannelFuture doBind(final SocketAddress localAddress) {
    final ChannelFuture regFuture = initAndRegister();
    final Channel channel = regFuture.channel();
    if (regFuture.cause() != null) {
        return regFuture;
    }

    if (regFuture.isDone()) {
        ChannelPromise promise = channel.newPromise();
        doBind0(regFuture, channel, localAddress, promise);
        return promise;
    } else {
        final PendingRegistrationPromise promise = new PendingRegistrationPromise(channel);
        regFuture.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                Throwable cause = future.cause();
                if (cause != null) {
                    promise.setFailure(cause);
                } else {
                    promise.registered();
                    doBind0(regFuture, channel, localAddress, promise);
                }
            }
        });
        return promise;
    }
}

The real bind work starts in doBind, whose first step is initAndRegister: initializing and registering the Channel.

final ChannelFuture initAndRegister() {
    Channel channel = null;
    try {
        channel = channelFactory.newChannel();
        init(channel);
    } catch (Throwable t) {
        if (channel != null) {
            channel.unsafe().closeForcibly();
            return new DefaultChannelPromise(channel, GlobalEventExecutor.INSTANCE).setFailure(t);
        }
        return new DefaultChannelPromise(new FailedChannel(), GlobalEventExecutor.INSTANCE).setFailure(t);
    }

    ChannelFuture regFuture = config().group().register(channel);
    if (regFuture.cause() != null) {
        if (channel.isRegistered()) {
            channel.close();
        } else {
            channel.unsafe().closeForcibly();
        }
    }

    return regFuture;
}

newChannel ultimately calls into ReflectiveChannelFactory:

public T newChannel() {
    try {
        return constructor.newInstance();
    } catch (Throwable t) {
        throw new ChannelException("Unable to create Channel from class " + constructor.getDeclaringClass(), t);
    }
}

This instantiates, via reflection, the NioServerSocketChannel class we passed in at the start. Initialization of NioServerSocketChannel:

[Figure: NioServerSocketChannel class hierarchy]

The class hierarchy of NioServerSocketChannel is shown above. In the default constructor of its ancestor AbstractChannel, the pipeline is initialized:

protected AbstractChannel(Channel parent) {
    this.parent = parent;
    id = newId();
    unsafe = newUnsafe();
    pipeline = newChannelPipeline();
}

protected DefaultChannelPipeline newChannelPipeline() {
    return new DefaultChannelPipeline(this);
}

// NioServerSocketChannel
public NioServerSocketChannel(ServerSocketChannel channel) {
    super(null, channel, SelectionKey.OP_ACCEPT);
    config = new NioServerSocketChannelConfig(this, javaChannel().socket());
}

As you can see, NioServerSocketChannel registers interest in the OP_ACCEPT event, which it will watch for and handle later.

One point worth stressing: although NioServerSocketChannel has "channel" in its name, it does not implement the JDK's networking interfaces. Instead, its class hierarchy holds a JDK ServerSocketChannel (a ServerSocketChannelImpl at runtime), and all subsequent operations delegate to that object to do the actual low-level networking; NioServerSocketChannel is a wrapper around the JDK's network handling.
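
The delegation is visible in how the channel gets opened; from the NioServerSocketChannel source:

// NioServerSocketChannel
private static ServerSocketChannel newSocket(SelectorProvider provider) {
    try {
        // open the underlying JDK channel that NioServerSocketChannel wraps
        return provider.openServerSocketChannel();
    } catch (IOException e) {
        throw new ChannelException("Failed to open a server socket.", e);
    }
}

public NioServerSocketChannel() {
    this(newSocket(DEFAULT_SELECTOR_PROVIDER));
}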

init(channel) has two different implementations, one in ServerBootstrap and one in Bootstrap. In ServerBootstrap it looks like this:

void init(Channel channel) {
    setChannelOptions(channel, options0().entrySet().toArray(EMPTY_OPTION_ARRAY), logger);
    setAttributes(channel, attrs0().entrySet().toArray(EMPTY_ATTRIBUTE_ARRAY));
    ChannelPipeline p = channel.pipeline();
    final EventLoopGroup currentChildGroup = childGroup;
    final ChannelHandler currentChildHandler = childHandler;
    final Entry<ChannelOption<?>, Object>[] currentChildOptions =
            childOptions.entrySet().toArray(EMPTY_OPTION_ARRAY);
    final Entry<AttributeKey<?>, Object>[] currentChildAttrs = childAttrs.entrySet().toArray(EMPTY_ATTRIBUTE_ARRAY);
    p.addLast(new ChannelInitializer<Channel>() {
        @Override
        public void initChannel(final Channel ch) {
            // the implementation is DefaultChannelPipeline
            final ChannelPipeline pipeline = ch.pipeline();
            ChannelHandler handler = config.handler();
            if (handler != null) {
                pipeline.addLast(handler);
            }
            ch.eventLoop().execute(new Runnable() {
                @Override
                public void run() {
                    pipeline.addLast(new ServerBootstrapAcceptor(
                            ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs));
                }
            });
        }
    });
}

This mainly completes the channel's initialization; any options we set before startup are applied to the channel here. Also note that the handler here is not the childHandler we supplied earlier (in Netty, childXXX generally refers to configuration for the accepted client connections). The ServerBootstrapAcceptor added above is what handles newly established client connections, as the abridged source after this paragraph shows.
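
For a sense of what ServerBootstrapAcceptor does, here is its channelRead, lightly abridged from the Netty source; the msg it receives is the freshly accepted child Channel:

public void channelRead(ChannelHandlerContext ctx, Object msg) {
    final Channel child = (Channel) msg;
    // attach the user's childHandler plus child options/attrs to the new connection
    child.pipeline().addLast(childHandler);
    setChannelOptions(child, childOptions, logger);
    setAttributes(child, childAttrs);
    try {
        // hand the accepted channel over to the worker group
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}
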
Note also that a ChannelInitializer is added to the pipeline here. A DefaultChannelPipeline comes with two ChannelHandlerContexts by default:

    protected DefaultChannelPipeline(Channel channel) {
        this.channel = ObjectUtil.checkNotNull(channel, "channel");
        succeededFuture = new SucceededChannelFuture(channel, null);
        voidPromise =  new VoidChannelPromise(channel, true);

        tail = new TailContext(this);
        head = new HeadContext(this);

        head.next = tail;
        tail.prev = head;
    }

addLast inserts a ChannelHandlerContext between head and tail. DefaultChannelPipeline is important enough that a later chapter will be devoted to it.
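
The insertion itself is plain doubly-linked-list surgery; this is DefaultChannelPipeline.addLast0 from the source:

private void addLast0(AbstractChannelHandlerContext newCtx) {
    // splice the new context in just before tail
    AbstractChannelHandlerContext prev = tail.prev;
    newCtx.prev = prev;
    newCtx.next = tail;
    prev.next = newCtx;
    tail.prev = newCtx;
}
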
After the channel has been initialized, it is registered:

 ChannelFuture regFuture = config().group().register(channel);

The group here corresponds to the boss group we passed in at startup. The implementation in MultithreadEventLoopGroup:

public ChannelFuture register(Channel channel) {
    return next().register(channel);
}

next() picks one thread (EventLoop) out of the boss group, and registration then proceeds on it. The implementation in SingleThreadEventLoop:

public ChannelFuture register(Channel channel) {
    return register(new DefaultChannelPromise(channel, this));
}

public ChannelFuture register(final ChannelPromise promise) {
    ObjectUtil.checkNotNull(promise, "promise");
    promise.channel().unsafe().register(this, promise);
    return promise;
}

The code above only wraps a promise for the asynchronous callback and does no real work; the main logic is implemented in AbstractChannel:

public final void register(EventLoop eventLoop, final ChannelPromise promise) {
    ObjectUtil.checkNotNull(eventLoop, "eventLoop");
  ...
    AbstractChannel.this.eventLoop = eventLoop;
    if (eventLoop.inEventLoop()) {
        register0(promise);
    } else {
        try {
            eventLoop.execute(new Runnable() {
                @Override
                public void run() {
                    register0(promise);
                }
            });
    } catch (Throwable t) {
        logger.warn("Force-closing a channel whose registration task was not accepted by an event loop: {}",
                AbstractChannel.this, t);
        closeForcibly();
        closeFuture.setClosed();
        safeSetFailure(promise, t);
    }
    }
}
private void register0(ChannelPromise promise) {
    try {
        if (!promise.setUncancellable() || !ensureOpen(promise)) {
            return;
        }
        boolean firstRegistration = neverRegistered;
        doRegister();
        neverRegistered = false;
        registered = true;
        pipeline.invokeHandlerAddedIfNeeded();
        safeSetSuccess(promise);
        pipeline.fireChannelRegistered();
        if (isActive()) {
            if (firstRegistration) {
                pipeline.fireChannelActive();
            } else if (config().isAutoRead()) {
                beginRead();
            }
        }
    } catch (Throwable t) {
        closeForcibly();
        closeFuture.setClosed();
        safeSetFailure(promise, t);
    }
}

Here the code checks whether we are already on the EventLoop thread; if not, it submits the work via eventLoop.execute. The final registration is implemented in AbstractNioChannel:

protected void doRegister() throws Exception {
    boolean selected = false;
    for (;;) {
        try {
            selectionKey = javaChannel().register(eventLoop().unwrappedSelector(), 0, this);
            return;
        } catch (CancelledKeyException e) {
            if (!selected) {
                eventLoop().selectNow();
                selected = true;
            } else {
                throw e;
            }
        }
    }
}

This registers the channel with the selector (the multiplexer) of the boss EventLoop, completing the channel's initialization and registration.
Note that the registration uses ops=0, meaning "register only, handle nothing yet"; once the channel is fully ready, the interest set can be changed later via SelectionKey.interestOps.
After registration, fireChannelRegistered and fireChannelActive are triggered.
fireChannelActive eventually calls down into:

// DefaultChannelPipeline.HeadContext
public void read(ChannelHandlerContext ctx) {
    unsafe.beginRead();
}

// AbstractNioChannel
protected void doBeginRead() throws Exception {
    final SelectionKey selectionKey = this.selectionKey;
    if (!selectionKey.isValid()) {
        return;
    }
    readPending = true;
    final int interestOps = selectionKey.interestOps();
    if ((interestOps & readInterestOp) == 0) {
        selectionKey.interestOps(interestOps | readInterestOp);
    }
}

As you can see, the currently registered ops are compared with the ops that need to be watched; if the read interest is missing, the key is re-registered with it added. The JDK's SelectionKey defines the following ops:

public static final int OP_READ = 1 << 0;
public static final int OP_WRITE = 1 << 2;
public static final int OP_CONNECT = 1 << 3;
public static final int OP_ACCEPT = 1 << 4;

So when does the server actually start listening? From the code above you can see that every Channel (server or client) is bound to a single, fixed thread (a NioEventLoop) for its entire lifetime, and all of Netty's I/O operations for that channel run on that NioEventLoop. That is why so many code paths contain an eventLoop.inEventLoop() check: if the caller is not on the channel's thread, the work is submitted via eventLoop.execute(new Runnable() {...}). execute checks whether the I/O thread has been started, and starts it if not:

private void execute(Runnable task, boolean immediate) {
    boolean inEventLoop = inEventLoop();
    addTask(task);
    if (!inEventLoop) {
        startThread();
        if (isShutdown()) {
            boolean reject = false;
            try {
                if (removeTask(task)) {
                    reject = true;
                }
            } catch (UnsupportedOperationException e) {
                // the task queue may not support removal; just move on
            }
            if (reject) {
                reject();
            }
        }
    }

    if (!addTaskWakesUp && immediate) {
        wakeup(inEventLoop);
    }
}
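
startThread only starts the backing thread once, guarded by a CAS on the executor's state; from SingleThreadEventExecutor:

private void startThread() {
    if (state == ST_NOT_STARTED) {
        if (STATE_UPDATER.compareAndSet(this, ST_NOT_STARTED, ST_STARTED)) {
            boolean success = false;
            try {
                // submits the event loop's run() to the ThreadPerTaskExecutor created earlier
                doStartThread();
                success = true;
            } finally {
                if (!success) {
                    STATE_UPDATER.compareAndSet(this, ST_STARTED, ST_NOT_STARTED);
                }
            }
        }
    }
}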

which eventually lands in NioEventLoop's run method:

protected void run() {
    int selectCnt = 0;
    for (;;) {
        try {
            int strategy;
            try {
                strategy = selectStrategy.calculateStrategy(selectNowSupplier, hasTasks());
                switch (strategy) {
                case SelectStrategy.CONTINUE:
                    continue;
                case SelectStrategy.BUSY_WAIT:
                case SelectStrategy.SELECT:
                    long curDeadlineNanos = nextScheduledTaskDeadlineNanos();
                    if (curDeadlineNanos == -1L) {
                        curDeadlineNanos = NONE; // nothing on the calendar
                    }
                    nextWakeupNanos.set(curDeadlineNanos);
                    try {
                        if (!hasTasks()) {
                            strategy = select(curDeadlineNanos);
                        }
                    } finally {
                        nextWakeupNanos.lazySet(AWAKE);
                    }
                    // fall through
                default:
                }
            } catch (IOException e) {
                rebuildSelector0();
                selectCnt = 0;
                handleLoopException(e);
                continue;
            }
            selectCnt++;
            cancelledKeys = 0;
            needsToSelectAgain = false;
            final int ioRatio = this.ioRatio;
            boolean ranTasks;
            if (ioRatio == 100) {
                try {
                    if (strategy > 0) {
                        processSelectedKeys();
                    }
                } finally {
                    ranTasks = runAllTasks();
                }
            } else if (strategy > 0) {
                final long ioStartTime = System.nanoTime();
                try {
                    processSelectedKeys();
                } finally {
                    final long ioTime = System.nanoTime() - ioStartTime;
                    ranTasks = runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
                }
            } else {
                ranTasks = runAllTasks(0); // This will run the minimum number of tasks
            }
            if (ranTasks || strategy > 0) {
                if (selectCnt > MIN_PREMATURE_SELECTOR_RETURNS && logger.isDebugEnabled()) {
                    logger.debug("Selector.select() returned prematurely {} times in a row for Selector {}.",
                            selectCnt - 1, selector);
                }
                selectCnt = 0;
            } else if (unexpectedSelectorWakeup(selectCnt)) { // Unexpected wakeup (unusual case)
                selectCnt = 0;
            }
        } catch (CancelledKeyException e) {
            // harmless exception - Netty just logs it at debug level
        } catch (Throwable t) {
            handleLoopException(t);
        }
        try {
            if (isShuttingDown()) {
                closeAll();
                if (confirmShutdown()) {
                    return;
                }
            }
        } catch (Throwable t) {
            handleLoopException(t);
        }
    }
}
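
One last piece: once registration succeeds, doBind0 schedules channel.bind(...) on the event loop; the call travels through the pipeline to HeadContext and ends in NioServerSocketChannel.doBind, which performs the actual JDK-level bind:

// NioServerSocketChannel
protected void doBind(SocketAddress localAddress) throws Exception {
    if (PlatformDependent.javaVersion() >= 7) {
        javaChannel().bind(localAddress, config.getBacklog());
    } else {
        javaChannel().socket().bind(localAddress, config.getBacklog());
    }
}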

With that, the server side is up and running.

To summarize the server startup flow:

  1. User code picks the Channel type plus the boss and worker thread groups (NioEventLoopGroup), wires them into a ServerBootstrap, and binds a port.
  2. When a NioEventLoopGroup is instantiated, it builds an internal EventLoop array sized by the thread count passed in; if none is specified, the default array size (and thus thread count) is cpu cores * 2.
  3. bind() performs the port binding. First the configured Channel type is instantiated, here NioServerSocketChannel; during its construction the pipeline is created, implemented by DefaultChannelPipeline.
  4. The new NioServerSocketChannel is then initialized: the channel's options are applied, and a ServerBootstrapAcceptor (a ChannelInboundHandlerAdapter) is added to the pipeline.
  5. After initialization comes registration: a thread is chosen from the boss group, the NioServerSocketChannel's eventLoop is bound to that thread (the EventLoop), and the underlying Channel is actually registered. On success, pipeline.fireChannelActive is triggered, which switches the monitored network event to OP_ACCEPT.
  6. Registration also starts the EventLoop, which begins listening for client connections.

Finally, note that the accept work here is done by the boss group, which defaults to cpu cores * 2 threads. As later chapters will show, if the server listens on only a single port, one boss thread is enough; the remaining threads simply sit idle.
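
Given that, a common idiom is to size the boss group explicitly:

// one accept thread is plenty for a server listening on a single port
EventLoopGroup boss = new NioEventLoopGroup(1);
EventLoopGroup worker = new NioEventLoopGroup(); // defaults to cpu cores * 2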
