Reading the Netty Source Code

While reading the official Netty documentation, I noticed that it highlights three core components: buffer, channel, and the event model. In what follows we read the Netty source code to understand these three.

Note: all of the analysis below is based on Netty 4.1.34.Final.

Example Program

First, the example program, so that I (and you, the reader) can step through it in a debugger.

Server code

// Server.java

package org.example;

public class Server {

    public static void main(String[] args) throws Exception {
        int port = 8080;
        if (args.length > 0) {
            port = Integer.parseInt(args[0]);
        }

        new DiscardServer(port).run(); // ref-1
    }

}

The DiscardServer object created at ref-1 is shown below.

// DiscardServer.java
package org.example;

import io.netty.bootstrap.ServerBootstrap;

import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class DiscardServer {

    private int port;

    public DiscardServer(int port) {
        this.port = port;
    }

    public void run() throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        public void initChannel(SocketChannel ch) throws Exception {
                            ch.pipeline().addLast(new TimeEncoder(), new TimeServerHandler()); // ref-2
                        }
                    })
                    .option(ChannelOption.SO_BACKLOG, 128)          
                    .childOption(ChannelOption.SO_KEEPALIVE, true);

            // Bind and start to accept incoming connections.
            ChannelFuture f = b.bind(port).sync();

            // Wait until the server socket is closed.
            // In this example, this does not happen, but you can do that to gracefully
            // shut down your server.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }

}

At ref-2, instances of the ChannelHandler implementations TimeEncoder and TimeServerHandler are appended to the tail of the pipeline. The code for these two classes is shown below:

// TimeEncoder.java
package org.example;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToByteEncoder;

public class TimeEncoder extends MessageToByteEncoder<UnixTime> {

    @Override
    protected void encode(ChannelHandlerContext ctx, UnixTime msg, ByteBuf out) {
        out.writeInt((int)msg.value()); // ref-3
    }

}

// TimeServerHandler.java
package org.example;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class TimeServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        ChannelFuture f = ctx.writeAndFlush(new UnixTime());
        f.addListener(ChannelFutureListener.CLOSE);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

The code at ref-3 writes data into the ByteBuf; we will look at this in detail later.
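
Both handlers above reference a UnixTime class whose source is not shown. A minimal version, modeled on the time-protocol example in the Netty user guide (the value is seconds since 1900-01-01, hence the 2208988800L offset), would look like this:

// UnixTime.java (minimal sketch, following the Netty user guide's time example)
package org.example;

import java.util.Date;

public class UnixTime {

    private final long value;

    public UnixTime() {
        // current time, shifted from the Unix epoch (1970) to the NTP epoch (1900)
        this(System.currentTimeMillis() / 1000L + 2208988800L);
    }

    public UnixTime(long value) {
        this.value = value;
    }

    public long value() {
        return value;
    }

    @Override
    public String toString() {
        // shift back to the Unix epoch so java.util.Date can render it
        return new Date((value() - 2208988800L) * 1000L).toString();
    }
}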

Client code

// Client.java
package org.example;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class Client {

    public static void main(String[] args) throws Exception {
        String host = "127.0.0.1";
        int port = 8080;
        EventLoopGroup workerGroup = new NioEventLoopGroup();

        try {
            Bootstrap b = new Bootstrap(); 
            b.group(workerGroup);
            b.channel(NioSocketChannel.class);
            b.option(ChannelOption.SO_KEEPALIVE, true); 
            b.handler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new TimeDecoder(), new TimeClientHandler()); // ref-4
                }
            });

            // Start the client.
            ChannelFuture f = b.connect(host, port).sync(); 

            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
        }
    }

}

At ref-4, instances of the ChannelHandler implementations TimeDecoder and TimeClientHandler are appended to the tail of the pipeline. Their code is shown below:

// TimeDecoder.java
package org.example;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;

import java.util.List;

public class TimeDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 4) {
            return;
        }

        out.add(new UnixTime(in.readUnsignedInt())); // ref-5
    }
}

// TimeClientHandler.java
package org.example;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class TimeClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        UnixTime m = (UnixTime) msg;
        System.out.println(m);
        ctx.close();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

The code at ref-5 reads data from the ByteBuf; we will analyze the details later.

Running the Server and Client

Run Server's main method first, then Client's main method; you will get output like the following:

Sat Aug 24 15:06:11 CST 2024

ByteBuf Analysis

Writing data to a ByteBuf

The code at ref-3 writes an int into the ByteBuf. First, look at the declaration in the abstract class ByteBuf:

// io.netty.buffer.ByteBuf.java
/**
 * Sets the specified 32-bit integer at the current {@code writerIndex}
 * and increases the {@code writerIndex} by {@code 4} in this buffer.
 * If {@code this.writableBytes} is less than {@code 4}, {@link #ensureWritable(int)}
 * will be called in an attempt to expand capacity to accommodate.
 */
public abstract ByteBuf writeInt(int value);

In short, the method sets a 32-bit integer at the current write index (writerIndex) and advances writerIndex by 4. If fewer than 4 bytes are writable, ensureWritable(int) is called in an attempt to expand the capacity so the 32-bit integer fits.
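
A quick, runnable sketch shows the writerIndex bookkeeping in action (it uses the standalone Unpooled allocator for simplicity, which is an assumption; the example server actually ends up on Netty's pooled direct buffers):

// WriteIntDemo.java (illustrative sketch, not part of the example program)
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class WriteIntDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(8);        // heap buffer with initial capacity 8
        System.out.println(buf.writerIndex());   // 0
        buf.writeInt(42);                        // sets 4 bytes at writerIndex
        System.out.println(buf.writerIndex());   // 4
        System.out.println(buf.writableBytes()); // 4 left before expansion is needed
    }
}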

Now let's look at the concrete implementation:

// io.netty.buffer.AbstractByteBuf.java
@Override
public ByteBuf writeInt(int value) {
    ensureWritable0(4);
    _setInt(writerIndex, value);
    writerIndex += 4;
    return this;
}

The implementation performs exactly the steps the declaration promises: ensureWritable0(4) first makes sure there is room for a 32-bit integer, _setInt(writerIndex, value) then performs the write, and finally writerIndex is advanced by 4.

Next, let's trace the write itself:

// io.netty.buffer.PooledUnsafeDirectByteBuf.java
@Override
protected void _setInt(int index, int value) {
    UnsafeByteBufUtil.setInt(addr(index), value); // ref-6
}

Following the call chain further, we find it bottoms out in Java's Unsafe methods:

// io.netty.buffer.UnsafeByteBufUtil.java
static void setInt(long address, int value) {
    if (UNALIGNED) {
        PlatformDependent.putInt(address, BIG_ENDIAN_NATIVE_ORDER ? value : Integer.reverseBytes(value));
    } else {
        PlatformDependent.putByte(address, (byte) (value >>> 24));
        PlatformDependent.putByte(address + 1, (byte) (value >>> 16));
        PlatformDependent.putByte(address + 2, (byte) (value >>> 8));
        PlatformDependent.putByte(address + 3, (byte) value);
    }
}
// io.netty.util.internal.PlatformDependent.java
public static void putInt(long address, int value) {
    PlatformDependent0.putInt(address, value);
}
// io.netty.util.internal.PlatformDependent0.java
static void putInt(long address, int value) {
    UNSAFE.putInt(address, value);
}

Let's see how the Unsafe class describes itself, straight from the JDK documentation:


/**
 * A collection of methods for performing low-level, unsafe operations.
 * Although the class and all methods are public, use of this class is
 * limited because only trusted code can obtain instances of it.
 *
 * <em>Note:</em> It is the responsibility of the caller to make sure
 * arguments are checked before methods of this class are
 * called. While some rudimentary checks are performed on the input,
 * the checks are best effort and when performance is an overriding
 * priority, as when methods of this class are optimized by the
 * runtime compiler, some or all checks (if any) may be elided. Hence,
 * the caller must not rely on the checks and corresponding
 * exceptions!
 *
 * @author John R. Rose
 * @see #getUnsafe
 */

public final class Unsafe {
    ......
}

The very first sentence tells the story: this class provides a collection of methods for performing low-level, unsafe operations. Put simply, it manipulates memory directly.

There is a detail at ref-6: the address is computed by the call addr(index). Let's take a closer look:

// io.netty.buffer.PooledUnsafeDirectByteBuf.java
private long addr(int index) {
    return memoryAddress + index;
}

Computing the address is just the base address plus an offset index, and that index is the writerIndex passed down from the caller. This is the write index doing its job: it records how far data has been written, so the next write starts from that position. For example, after one writeInt the next int lands at memoryAddress + 4.

Reading data from a ByteBuf

With writes covered, let's analyze reads. The code at ref-5, in.readUnsignedInt(), reads from the ByteBuf. First its declaration:

// io.netty.buffer.ByteBuf.java
/**
 * Gets an unsigned 32-bit integer at the current {@code readerIndex}
 * and increases the {@code readerIndex} by {@code 4} in this buffer.
 *
 * @throws IndexOutOfBoundsException
 *         if {@code this.readableBytes} is less than {@code 4}
 */
public abstract long readUnsignedInt();

The method gets an unsigned 32-bit integer at the current readerIndex and then advances readerIndex by 4.
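
In this version, AbstractByteBuf implements readUnsignedInt() by masking the result of readInt() into a long, which is why stepping into the call lands in readInt():

// io.netty.buffer.AbstractByteBuf.java
@Override
public long readUnsignedInt() {
    return readInt() & 0xFFFFFFFFL;
}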

Now the concrete implementation of readInt():

// Stepping in ends up in this method of io.netty.buffer.AbstractByteBuf.java.
@Override
public int readInt() {
    checkReadableBytes0(4);
    int v = _getInt(readerIndex);
    readerIndex += 4;
    return v;
}

Next, the _getInt(readerIndex) call:

// io.netty.buffer.PooledUnsafeDirectByteBuf.java
@Override
protected int _getInt(int index) {
    return UnsafeByteBufUtil.getInt(addr(index));
}

This should look familiar: just like the write path, it first computes an address and then operates on it, relying on Unsafe underneath.

This also makes the role of readerIndex clear: it records how far reading has progressed, so the next read starts from readerIndex.
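
A small sketch (again using Unpooled for simplicity, as an assumption) makes both the unsigned semantics and the advancing readerIndex visible:

// ReadUnsignedIntDemo.java (illustrative sketch, not part of the example program)
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class ReadUnsignedIntDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer();
        buf.writeInt(-1);                          // bytes FF FF FF FF
        System.out.println(buf.readUnsignedInt()); // 4294967295, not -1
        System.out.println(buf.readerIndex());     // 4
    }
}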

ByteBuf Summary

Drawing on the class-level comment of ByteBuf, here is a summary. A ByteBuf is a view over an underlying byte array or a Java NIO buffer. It maintains two indices, the read index (readerIndex) and the write index (writerIndex), which record where reads and writes are positioned.

The layout looks like this:

       +-------------------+------------------+------------------+
       | discardable bytes |  readable bytes  |  writable bytes  |
       |                   |     (CONTENT)    |                  |
       +-------------------+------------------+------------------+
       |                   |                  |                  |
       0      <=      readerIndex   <=   writerIndex    <=    capacity

The two indices divide the backing byte array into three regions: readable bytes is where the actual content lives; writable bytes is undefined space waiting to be filled; and discardable bytes contains data that read operations have already consumed.
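
The region boundaries can be observed directly. Here is a sketch (Unpooled again, as an assumption) that also shows discardReadBytes() reclaiming the discardable region:

// RegionsDemo.java (illustrative sketch, not part of the example program)
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class RegionsDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(16);
        buf.writeInt(1).writeInt(2); // readable bytes: 8, writable bytes: 8
        buf.readInt();               // discardable bytes: 4, readable bytes: 4
        System.out.println(buf.readerIndex() + "/" + buf.writerIndex()); // 4/8
        buf.discardReadBytes();      // shifts the readable bytes back to index 0
        System.out.println(buf.readerIndex() + "/" + buf.writerIndex()); // 0/4
    }
}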

Channel Analysis

Next, the core Channel component. A Channel is a nexus to a socket, or to any component capable of I/O operations such as read, write, connect, and bind.

A Channel provides the user with the following capabilities (a short sketch after this list illustrates them):

  • obtaining the channel's current state;
  • obtaining the channel's configuration parameters;
  • invoking the I/O operations the channel supports;
  • access to the ChannelPipeline, which handles all I/O events and requests related to the channel.
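
Here is a small hypothetical handler (InspectChannelHandler is an illustration of mine, not part of the example program) that touches each of these capabilities:

// InspectChannelHandler.java (illustrative sketch)
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelOption;

public class InspectChannelHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        Channel ch = ctx.channel();
        System.out.println(ch.isActive());                                     // current state
        System.out.println(ch.config().getOption(ChannelOption.SO_KEEPALIVE)); // configuration
        ch.flush();              // an I/O operation supported by the channel
        ctx.fireChannelActive(); // events travel onward through the ChannelPipeline
    }
}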

Since applications don't usually call Channel directly when using Netty, this is as far as we'll take Channel for now.

Event Model Analysis

The ServerBootstrap bind flow

In the example code, the bind call is:

// DiscardServer.java
ChannelFuture f = b.bind(port).sync();

The method actually invoked is:

// io.netty.bootstrap.AbstractBootstrap.java
private ChannelFuture doBind(final SocketAddress localAddress) {
    final ChannelFuture regFuture = initAndRegister(); // ref-7
    final Channel channel = regFuture.channel();
    if (regFuture.cause() != null) {
        return regFuture;
    }

    if (regFuture.isDone()) {
        // At this point we know that the registration was complete and successful.
        ChannelPromise promise = channel.newPromise();
        doBind0(regFuture, channel, localAddress, promise); // ref-14
        return promise;
    } else {
        // ... omitted
        return promise;
    }
}

The doBind0(...) call at ref-14 is what actually binds the Channel to the concrete address and starts listening. The key call to examine is initAndRegister() at ref-7, whose body is:

// io.netty.bootstrap.AbstractBootstrap.java
final ChannelFuture initAndRegister() {
    Channel channel = null;
    try {
        channel = channelFactory.newChannel();
        init(channel); // initialize the Channel; the work done here is fairly simple
    } catch (Throwable t) {
        // ... omitted
    }

    ChannelFuture regFuture = config().group().register(channel); // ref-8
    if (regFuture.cause() != null) {
        if (channel.isRegistered()) {
            channel.close();
        } else {
            channel.unsafe().closeForcibly();
        }
    }

    return regFuture;
}

The code at ref-8 registers the Channel with an EventLoop. Here is the actual code:

// io.netty.channel.SingleThreadEventLoop.java
@Override
public ChannelFuture register(final ChannelPromise promise) {
    ObjectUtil.checkNotNull(promise, "promise");
    promise.channel().unsafe().register(this, promise);
    return promise;
}
// io.netty.channel.AbstractChannel.java
public final void register(EventLoop eventLoop, final ChannelPromise promise) {
    // ... omitted
    AbstractChannel.this.eventLoop = eventLoop;

    if (eventLoop.inEventLoop()) {
        register0(promise);
    } else {
        try {
            eventLoop.execute(new Runnable() {
                @Override
                public void run() {
                    register0(promise);
                }
            });
        } catch (Throwable t) {
            // ... omitted
        }
    }
}

private void register0(ChannelPromise promise) {
    try {
        // check if the channel is still open as it could be closed in the mean time when the register
        // call was outside of the eventLoop
        if (!promise.setUncancellable() || !ensureOpen(promise)) {
            return;
        }
        boolean firstRegistration = neverRegistered;
        doRegister(); // ref-9
        neverRegistered = false;
        registered = true;

        // Ensure we call handlerAdded(...) before we actually notify the promise. This is needed as the
        // user may already fire events through the pipeline in the ChannelFutureListener.
        pipeline.invokeHandlerAddedIfNeeded();

        safeSetSuccess(promise);
        pipeline.fireChannelRegistered();
        // Only fire a channelActive if the channel has never been registered. This prevents firing
        // multiple channel actives if the channel is deregistered and re-registered.
        if (isActive()) {
            if (firstRegistration) {
                pipeline.fireChannelActive();  // triggers the ChannelHandlers' channelActive(...) callbacks
            } else if (config().isAutoRead()) {
                // This channel was registered before and autoRead() is set. This means we need to begin read
                // again so that we process inbound data.
                //
                // See https://github.com/netty/netty/issues/4805
                beginRead();
            }
        }
    } catch (Throwable t) {
        // ... omitted
    }
}

The code at ref-9 registers the Channel with a Selector; under the hood, doRegister() calls straight into the underlying Java NIO API, registering the JDK channel with the event loop's Selector.

That completes the bind flow. The whole flow runs on a NioEventLoop inside the boss NioEventLoopGroup.

Selector event handling

The bind flow registered the address-bound Channel with a Selector; from there, the corresponding event-handling logic takes over:

// io.netty.channel.nio.NioEventLoop.java
protected void run() {
    for (;;) {
        try {
            try {
                switch (selectStrategy.calculateStrategy(selectNowSupplier, hasTasks())) {
                    case SelectStrategy.CONTINUE:
                        continue;

                    case SelectStrategy.BUSY_WAIT:
                        // fall-through to SELECT since the busy-wait is not supported with NIO

                    case SelectStrategy.SELECT:
                        select(wakenUp.getAndSet(false)); // calls the underlying Selector's select() method
                        if (wakenUp.get()) {
                            selector.wakeup();
                        }
                        // fall through
                    default:
                }
            } catch (IOException e) {
                // If we receive an IOException here its because the Selector is messed up. Let's rebuild
                // the selector and retry. https://github.com/netty/netty/issues/8566
                rebuildSelector0();
                handleLoopException(e);
                continue;
            }

            cancelledKeys = 0;
            needsToSelectAgain = false;
            final int ioRatio = this.ioRatio;
            if (ioRatio == 100) {
                try {
                    processSelectedKeys();
                } finally {
                    // Ensure we always run tasks.
                    runAllTasks();
                }
            } else {
                final long ioStartTime = System.nanoTime();
                try {
                    processSelectedKeys(); // ref-10: process the events ready on the Selector
                } finally {
                    // Ensure we always run tasks.
                    final long ioTime = System.nanoTime() - ioStartTime;
                    runAllTasks(ioTime * (100 - ioRatio) / ioRatio); // ref-13
                }
            }
        } catch (Throwable t) {
            handleLoopException(t);
        }
        // Always handle shutdown even if the loop processing threw an exception.
        try {
            if (isShuttingDown()) {
                closeAll();
                if (confirmShutdown()) {
                    return;
                }
            }
        } catch (Throwable t) {
            handleLoopException(t);
        }
    }
}

The processSelectedKeys() call at ref-10 handles the keys that are ready on the Selector, and eventually bottoms out in the following code:

// io.netty.channel.nio.NioEventLoop.java
private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
    final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
    if (!k.isValid()) {
        final EventLoop eventLoop;
        try {
            eventLoop = ch.eventLoop();
        } catch (Throwable ignored) {
            return;
        }
        // Only close ch if ch is still registered to this EventLoop. ch could have deregistered from the event loop
        // and thus the SelectionKey could be cancelled as part of the deregistration process, but the channel is
        // still healthy and should not be closed.
        // See https://github.com/netty/netty/issues/5125
        if (eventLoop != this || eventLoop == null) {
            return;
        }
        // close the channel if the key is not valid anymore
        unsafe.close(unsafe.voidPromise());
        return;
    }

    try {
        int readyOps = k.readyOps();
        // We first need to call finishConnect() before try to trigger a read(...) or write(...) as otherwise
        // the NIO JDK channel implementation may throw a NotYetConnectedException.
        if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
            // remove OP_CONNECT as otherwise Selector.select(..) will always return without blocking
            // See https://github.com/netty/netty/issues/924
            int ops = k.interestOps();
            ops &= ~SelectionKey.OP_CONNECT;
            k.interestOps(ops);

            unsafe.finishConnect();
        }

        // Process OP_WRITE first as we may be able to write some queued buffers and so free memory.
        if ((readyOps & SelectionKey.OP_WRITE) != 0) {
            // Call forceFlush which will also take care of clear the OP_WRITE once there is nothing left to write
            ch.unsafe().forceFlush();
        }

        // Also check for readOps of 0 to workaround possible JDK bug which may otherwise lead
        // to a spin loop
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            unsafe.read(); // ref-15
        }
    } catch (CancelledKeyException ignored) {
        unsafe.close(unsafe.voidPromise());
    }
}

At ref-15 the code calls down into the underlying implementation to read the data, then invokes the ChannelHandlers' channelRead(...) methods to process it.

When the initial Channel is initialized, ServerBootstrap registers a ServerBootstrapAcceptor handler on it. That handler registers every accepted SocketChannel with the worker NioEventLoopGroup. The relevant code:

// io.netty.bootstrap.ServerBootstrap.java
@Override
void init(Channel channel) throws Exception {
    // ... omitted
    p.addLast(new ChannelInitializer<Channel>() {
        @Override
        public void initChannel(final Channel ch) throws Exception {
            final ChannelPipeline pipeline = ch.pipeline();
            ChannelHandler handler = config.handler();
            if (handler != null) {
                pipeline.addLast(handler);
            }

            ch.eventLoop().execute(new Runnable() {
                @Override
                public void run() {
                    pipeline.addLast(new ServerBootstrapAcceptor(
                        ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs));
                }
            });
        }
    });
}
// io.netty.bootstrap.ServerBootstrap.ServerBootstrapAcceptor
@Override
@SuppressWarnings("unchecked")
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    final Channel child = (Channel) msg;

    child.pipeline().addLast(childHandler);

    setChannelOptions(child, childOptions, logger);

    for (Entry<AttributeKey<?>, Object> e: childAttrs) {
        child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
    }

    try {
        childGroup.register(child).addListener(new ChannelFutureListener() { // ref-11
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}

The code at ref-11 re-runs the register0() flow described above, registering the child Channel with a Selector, only this time on a NioEventLoop inside the worker NioEventLoopGroup.

Netty Thread Model

From the analysis above we can conclude that a NioEventLoopGroup in Netty is comparable to a thread pool, and a NioEventLoop to a single thread.

Let's first look at the code for submitting work to a NioEventLoopGroup:

// io.netty.channel.MultithreadEventLoopGroup.java
@Override
public ChannelFuture register(Channel channel) {
    return next().register(channel);
}

Registering a Channel amounts to submitting a task. You can also hand an EventLoop ordinary tasks yourself, as the sketch below shows.
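
Since an EventLoop is also a ScheduledExecutorService, it accepts plain Runnables; a minimal sketch (EventLoopTaskDemo is hypothetical):

// EventLoopTaskDemo.java (illustrative sketch)
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

import java.util.concurrent.TimeUnit;

public class EventLoopTaskDemo {
    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new NioEventLoopGroup(1);
        // next() picks an EventLoop, just as register(channel) does internally
        group.next().execute(() ->
                System.out.println("plain task on " + Thread.currentThread().getName()));
        group.next().schedule(() ->
                System.out.println("scheduled task"), 1, TimeUnit.SECONDS);
        Thread.sleep(1500);
        group.shutdownGracefully().sync();
    }
}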

Next, let's see how an EventLoop executes tasks.

// io.netty.util.concurrent.SingleThreadEventExecutor.java
@Override
public void execute(Runnable task) {
    if (task == null) {
        throw new NullPointerException("task");
    }

    boolean inEventLoop = inEventLoop();
    addTask(task); // ref-12: add the task to the task queue
    if (!inEventLoop) {
        startThread();
        if (isShutdown()) {
            boolean reject = false;
            try {
                if (removeTask(task)) {
                    reject = true;
                }
            } catch (UnsupportedOperationException e) {
                // The task queue does not support removal so the best thing we can do is to just move on and
                // hope we will be able to pick-up the task before its completely terminated.
                // In worst case we will log on termination.
            }
            if (reject) {
                reject();
            }
        }
    }

    if (!addTaskWakesUp && wakesUpForTask(task)) {
        wakeup(inEventLoop);
    }
}

ref-12 adds the task to the task queue; the started event loop thread then pulls tasks from the queue and runs them. The entry point for running tasks is ref-13, which calls the method below. As a worked example: with the default ioRatio of 50, if I/O processing took 2 ms, runAllTasks is given a budget of 2 * (100 - 50) / 50 = 2 ms.

// io.netty.util.concurrent.SingleThreadEventExecutor.java
/**
 * Poll all tasks from the task queue and run them via {@link Runnable#run()} method.  This method stops running
 * the tasks in the task queue and returns if it ran longer than {@code timeoutNanos}.
 */
protected boolean runAllTasks(long timeoutNanos) {
    fetchFromScheduledTaskQueue();
    Runnable task = pollTask();
    if (task == null) {
        afterRunningAllTasks();
        return false;
    }

    final long deadline = ScheduledFutureTask.nanoTime() + timeoutNanos;
    long runTasks = 0;
    long lastExecutionTime;
    for (;;) {
        safeExecute(task); // run the task

        runTasks ++;

        // Check timeout every 64 tasks because nanoTime() is relatively expensive.
        // XXX: Hard-coded value - will make it configurable if it is really a problem.
        if ((runTasks & 0x3F) == 0) {
            lastExecutionTime = ScheduledFutureTask.nanoTime();
            if (lastExecutionTime >= deadline) {
                break;
            }
        }

        task = pollTask();
        if (task == null) {
            lastExecutionTime = ScheduledFutureTask.nanoTime();
            break;
        }
    }

    afterRunningAllTasks();
    this.lastExecutionTime = lastExecutionTime;
    return true;
}

Summary

My own depth is limited, so this is as far as my analysis of the Netty source goes.

A brief summary: Netty is built on top of Java NIO and introduces three important concepts above it: (1) Channel, the conduit that accepts client requests; (2) ByteBuf, a buffer that operates directly on the underlying memory; and (3) the Event Model, chiefly EventLoop's thread-pool-like encapsulation plus the invocation of the various lifecycle callbacks.

