Netty Study Notes (1)


Study Links

Netty study guide (resources and article roundup)
ASCII code reference table (complete edition)
[Hardcore] A month's worth of distilled Netty knowledge
Netty getting-started example: implementing WebSocket with Netty
Practical guide: integrating Netty with Spring Boot for a message push server
Netty's WebSocket support, video tutorial (bilibili)
By the author of Netty in Action: Netty - One Framework to Rule Them All (bilibili)

I. NIO Basics

NIO = non-blocking IO

1. The Three Core Components

1.1 Channel & Buffer

A channel is somewhat like a stream: it is a bidirectional conduit for reading and writing data. Data can be read from a channel into a buffer, and data in a buffer can be written into a channel. The older streams, by contrast, are one-directional (an input stream can only read, an output stream can only write), and a channel is more low-level than a stream (a minimal sketch of the two-way use follows the channel list below).

(diagram: data flows in both directions between a channel and a buffer)

Common Channel implementations:

  • FileChannel (data transfer channel for files)
  • DatagramChannel (data transfer channel for UDP networking)
  • SocketChannel (data transfer channel for TCP; usable by both client and server)
  • ServerSocketChannel (data transfer channel for TCP; server-side only)
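
To make the bidirectionality concrete, here is a minimal sketch (my addition, not from the original notes; the file name demo.txt is a placeholder) that writes to and then reads from the very same FileChannel:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

public class TwoWayChannelDemo {
    public static void main(String[] args) throws IOException {
        // "rw" mode yields a channel that supports both reading and writing
        try (FileChannel channel = new RandomAccessFile("demo.txt", "rw").getChannel()) {
            // buffer -> channel (write)
            channel.write(StandardCharsets.UTF_8.encode("hello"));

            // channel -> buffer (read), through the same channel object
            channel.position(0);
            ByteBuffer buffer = ByteBuffer.allocate(16);
            channel.read(buffer);
            buffer.flip();
            System.out.println(StandardCharsets.UTF_8.decode(buffer)); // hello
        }
    }
}

No single stream object could serve both directions like this.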

A buffer stages data for reading and writing (it is a temporary holding area: data read from a channel lands in the buffer first, and data must be put into a buffer before being written out to a channel). Common buffers:

  • ByteBuffer (abstract class)
    • MappedByteBuffer
    • DirectByteBuffer
    • HeapByteBuffer
  • ShortBuffer
  • IntBuffer
  • LongBuffer
  • FloatBuffer
  • DoubleBuffer
  • CharBuffer

1.2 Selector

The word selector alone does not explain much; its purpose is easiest to understand through the evolution of server designs

Multi-threaded design

Before nio, how did a server handle client connection requests? As the diagram below showed, one thread was dedicated to each socket connection (like a restaurant hiring one dedicated waiter for every customer who walks in: far too costly).

Multi-threaded design (diagram):
socket1 → thread
socket2 → thread
socket3 → thread
Drawbacks of the multi-threaded design
  • high memory usage
  • high thread context-switching cost
  • only suitable when the number of connections is small
Thread-pool design

To stop the thread count from exploding when too many clients connect, a thread pool handles the work instead.

Thread-pool design (diagram): a fixed pool of threads; each thread serves one socket at a time (e.g. socket1 and socket3 share one thread, socket2 and socket4 share another)
Drawbacks of the thread-pool design
  • in blocking mode, a thread can handle only one socket connection at a time (while a thread serves socket1 it cannot also serve socket3; only after socket1 disconnects can it serve socket3)
  • only suitable for short-lived connections
Selector design

A selector works together with one thread to manage multiple channels and to pick up the events occurring on them (events such as acceptable, readable, writable). The channels run in non-blocking mode, so the thread never hangs on any one channel (the difference from the thread-pool version: there, a thread could only serve the next socket after the current one disconnected). This suits scenarios with very many connections but low traffic; the caveat being that when one client sends a large amount of data, the thread stays busy with that client and the other clients' requests are temporarily set aside.

(In the diagram below, the thread is a waiter, each channel is a customer, and the selector is a tool that can observe every customer's needs at once, a monitor watching all customers: as soon as any customer wants something, the selector knows and dispatches the waiter to serve them. There is no longer one waiter per customer as in the multi-threaded version, nor a waiter who can only serve the next customer after finishing the current one as in the thread-pool version.)

Selector design (diagram): one thread drives a selector, which monitors several channels

Calling the selector's select() blocks until a read/write readiness event occurs on some channel; once events occur, select returns and hands them to the thread for processing

2. ByteBuffer

Suppose there is a plain text file data.txt with the contents

1234567890abcd

Use a FileChannel to read the file contents

import lombok.extern.slf4j.Slf4j;

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

@Slf4j
public class ChannelDemo1 {
    public static void main(String[] args) {
        try (RandomAccessFile file = new RandomAccessFile("helloword/data.txt", "rw")) {
            FileChannel channel = file.getChannel();
            ByteBuffer buffer = ByteBuffer.allocate(10);
            do {
                // read from the channel, i.e. write into the buffer
                int len = channel.read(buffer);
                log.debug("bytes read: {}", len);
                if (len == -1) {
                    break;
                }
                // switch the buffer to read mode
                buffer.flip();
                while (buffer.hasRemaining()) {
                    log.debug("{}", (char) buffer.get());
                }
                // switch the buffer back to write mode
                buffer.clear();
            } while (true);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Output

10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - bytes read: 10
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 1
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 2
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 3
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 4
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 5
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 6
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 7
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 8
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 9
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 0
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - bytes read: 4
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - a
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - b
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - c
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - d
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - bytes read: -1

Classroom example

import lombok.extern.slf4j.Slf4j;

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

@Slf4j
public class TestByteBuffer {

    public static void main(String[] args) {

        // a FileChannel can be obtained from:
        // 1. an input/output stream, 2. a RandomAccessFile
        try (FileChannel channel = new FileInputStream("data.txt").getChannel()) {

            // prepare a buffer
            ByteBuffer buffer = ByteBuffer.allocate(10);

            while (true) {

                // read from the channel (i.e. write into the buffer)
                int len = channel.read(buffer);

                log.debug("bytes read {}", len);

                if (len == -1) { // nothing left to read
                    break;
                }

                // print the buffer's contents
                buffer.flip(); // switch to read mode

                while (buffer.hasRemaining()) { // any unread data left?

                    byte b = buffer.get();

                    log.debug("actual byte {}", (char) b);
                }

                buffer.clear(); // switch back to write mode
            }

        } catch (IOException e) {
            e.printStackTrace();
        }
    }

}

Output

23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - bytes read 10
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 1
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 2
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 3
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 4
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 5
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 6
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 7
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 8
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 9
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 0
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - bytes read 5
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte a
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte b
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte c
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 

23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - bytes read -1

(In this run data.txt evidently ended with a \r\n, so the second read returns 5 bytes: a, b, c, CR, LF; the CR and LF are unprintable, hence the two blank-looking entries.)

2.1 The Correct Way to Use ByteBuffer

  1. Write data into the buffer, e.g. by calling channel.read(buffer)
  2. Call flip() to switch to read mode
  3. Read data from the buffer, e.g. by calling buffer.get()
  4. Call clear() or compact() to switch back to write mode
  5. Repeat steps 1-4

2.2 ByteBuffer Structure

ByteBuffer has the following important properties

  • capacity (the capacity: how many bytes the ByteBuffer can hold)
  • position (the read/write pointer: an index of how far reading or writing has progressed)
  • limit (the read/write limit: how many bytes may be read or written)

Initially (position is 0; limit equals capacity)

(diagram: initial state, assets/0021.png)

In write mode, position is the write position and limit equals the capacity; the diagram below showed the state after writing 4 bytes

(diagram: after writing 4 bytes, assets/0018.png)

After flip, position becomes the read position and limit becomes the read limit (limit takes the old position value as the read limit)

(diagram: after flip, assets/0019.png)

State after reading 4 bytes

(diagram: after reading 4 bytes, assets/0020.png)

State after clear

(diagram: after clear, …/raw/img/0021.png)

The compact method moves the unread portion to the front of the buffer and then switches to write mode

(like clear, it switches the ByteBuffer to write mode, except that any still-unread data is first compacted to the front)

(diagram: after compact, assets/0022.png)
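
The state diagrams above were lost when the images broke, so here is a small sketch (my addition) that prints position/limit/capacity after each step; the printed values correspond to the states the diagrams illustrated:

import java.nio.ByteBuffer;

public class IndexDemo {
    static void show(String step, ByteBuffer b) {
        System.out.printf("%-10s position=%d limit=%d capacity=%d%n",
                step, b.position(), b.limit(), b.capacity());
    }

    public static void main(String[] args) {
        ByteBuffer b = ByteBuffer.allocate(10);
        show("allocate", b);  // position=0 limit=10 capacity=10
        b.put(new byte[]{'a', 'b', 'c', 'd'});
        show("put 4", b);     // position=4 limit=10
        b.flip();
        show("flip", b);      // position=0 limit=4 (the old position became the limit)
        b.get();
        b.get();
        show("get 2", b);     // position=2 limit=4
        b.compact();
        show("compact", b);   // position=2 limit=10 (the 2 unread bytes were moved to the front)
        b.clear();
        show("clear", b);     // position=0 limit=10
    }
}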

Debug utility class
  1. Requires the Netty dependency on the classpath
  2. Use it to print a visual snapshot of a buffer's memory layout to the console
import io.netty.util.internal.StringUtil;

import java.nio.ByteBuffer;

import static io.netty.util.internal.MathUtil.isOutOfBounds;
import static io.netty.util.internal.StringUtil.NEWLINE;

public class ByteBufferUtil {
    
    private static final char[] BYTE2CHAR = new char[256];
    private static final char[] HEXDUMP_TABLE = new char[256 * 4];
    private static final String[] HEXPADDING = new String[16];
    private static final String[] HEXDUMP_ROWPREFIXES = new String[65536 >>> 4];
    private static final String[] BYTE2HEX = new String[256];
    private static final String[] BYTEPADDING = new String[16];

    static {
        final char[] DIGITS = "0123456789abcdef".toCharArray();
        for (int i = 0; i < 256; i++) {
            HEXDUMP_TABLE[i << 1] = DIGITS[i >>> 4 & 0x0F];
            HEXDUMP_TABLE[(i << 1) + 1] = DIGITS[i & 0x0F];
        }

        int i;

        // Generate the lookup table for hex dump paddings
        for (i = 0; i < HEXPADDING.length; i++) {
            int padding = HEXPADDING.length - i;
            StringBuilder buf = new StringBuilder(padding * 3);
            for (int j = 0; j < padding; j++) {
                buf.append("   ");
            }
            HEXPADDING[i] = buf.toString();
        }

        // Generate the lookup table for the start-offset header in each row (up to 64KiB).
        for (i = 0; i < HEXDUMP_ROWPREFIXES.length; i++) {
            StringBuilder buf = new StringBuilder(12);
            buf.append(NEWLINE);
            buf.append(Long.toHexString(i << 4 & 0xFFFFFFFFL | 0x100000000L));
            buf.setCharAt(buf.length() - 9, '|');
            buf.append('|');
            HEXDUMP_ROWPREFIXES[i] = buf.toString();
        }

        // Generate the lookup table for byte-to-hex-dump conversion
        for (i = 0; i < BYTE2HEX.length; i++) {
            BYTE2HEX[i] = ' ' + StringUtil.byteToHexStringPadded(i);
        }

        // Generate the lookup table for byte dump paddings
        for (i = 0; i < BYTEPADDING.length; i++) {
            int padding = BYTEPADDING.length - i;
            StringBuilder buf = new StringBuilder(padding);
            for (int j = 0; j < padding; j++) {
                buf.append(' ');
            }
            BYTEPADDING[i] = buf.toString();
        }

        // Generate the lookup table for byte-to-char conversion
        for (i = 0; i < BYTE2CHAR.length; i++) {
            if (i <= 0x1f || i >= 0x7f) {
                BYTE2CHAR[i] = '.';
            } else {
                BYTE2CHAR[i] = (char) i;
            }
        }
    }

    /**
     * Print the buffer's entire contents (0 .. capacity)
     * @param buffer
     */
    public static void debugAll(ByteBuffer buffer) {
        int oldlimit = buffer.limit();
        buffer.limit(buffer.capacity());
        StringBuilder origin = new StringBuilder(256);
        appendPrettyHexDump(origin, buffer, 0, buffer.capacity());
        System.out.println("+--------+-------------------- all ------------------------+----------------+");
        System.out.printf("position: [%d], limit: [%d]\n", buffer.position(), oldlimit);
        System.out.println(origin);
        buffer.limit(oldlimit);
    }

    /**
     * Print the buffer's readable contents (position .. limit)
     * @param buffer
     */
    public static void debugRead(ByteBuffer buffer) {
        StringBuilder builder = new StringBuilder(256);
        appendPrettyHexDump(builder, buffer, buffer.position(), buffer.limit() - buffer.position());
        System.out.println("+--------+-------------------- read -----------------------+----------------+");
        System.out.printf("position: [%d], limit: [%d]\n", buffer.position(), buffer.limit());
        System.out.println(builder);
    }

    private static void appendPrettyHexDump(StringBuilder dump, ByteBuffer buf, int offset, int length) {
        if (isOutOfBounds(offset, length, buf.capacity())) {
            throw new IndexOutOfBoundsException(
                    "expected: " + "0 <= offset(" + offset + ") <= offset + length(" + length
                            + ") <= " + "buf.capacity(" + buf.capacity() + ')');
        }
        if (length == 0) {
            return;
        }
        dump.append(
                "         +-------------------------------------------------+" +
                        NEWLINE + "         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |" +
                        NEWLINE + "+--------+-------------------------------------------------+----------------+");

        final int startIndex = offset;
        final int fullRows = length >>> 4;
        final int remainder = length & 0xF;

        // Dump the rows which have 16 bytes.
        for (int row = 0; row < fullRows; row++) {
            int rowStartIndex = (row << 4) + startIndex;

            // Per-row prefix.
            appendHexDumpRowPrefix(dump, row, rowStartIndex);

            // Hex dump
            int rowEndIndex = rowStartIndex + 16;
            for (int j = rowStartIndex; j < rowEndIndex; j++) {
                dump.append(BYTE2HEX[getUnsignedByte(buf, j)]);
            }
            dump.append(" |");

            // ASCII dump
            for (int j = rowStartIndex; j < rowEndIndex; j++) {
                dump.append(BYTE2CHAR[getUnsignedByte(buf, j)]);
            }
            dump.append('|');
        }

        // Dump the last row which has less than 16 bytes.
        if (remainder != 0) {
            int rowStartIndex = (fullRows << 4) + startIndex;
            appendHexDumpRowPrefix(dump, fullRows, rowStartIndex);

            // Hex dump
            int rowEndIndex = rowStartIndex + remainder;
            for (int j = rowStartIndex; j < rowEndIndex; j++) {
                dump.append(BYTE2HEX[getUnsignedByte(buf, j)]);
            }
            dump.append(HEXPADDING[remainder]);
            dump.append(" |");

            // Ascii dump
            for (int j = rowStartIndex; j < rowEndIndex; j++) {
                dump.append(BYTE2CHAR[getUnsignedByte(buf, j)]);
            }
            dump.append(BYTEPADDING[remainder]);
            dump.append('|');
        }

        dump.append(NEWLINE +
                "+--------+-------------------------------------------------+----------------+");
    }

    private static void appendHexDumpRowPrefix(StringBuilder dump, int row, int rowStartIndex) {
        if (row < HEXDUMP_ROWPREFIXES.length) {
            dump.append(HEXDUMP_ROWPREFIXES[row]);
        } else {
            dump.append(NEWLINE);
            dump.append(Long.toHexString(rowStartIndex & 0xFFFFFFFFL | 0x100000000L));
            dump.setCharAt(dump.length() - 9, '|');
            dump.append('|');
        }
    }

    public static short getUnsignedByte(ByteBuffer buffer, int index) {
        return (short) (buffer.get(index) & 0xFF);
    }
}
Example
import static cn.itcast.nio.c2.ByteBufferUtil.debugAll;

public class TestByteBufferReadWrite {
    
    public static void main(String[] args) {
        
        ByteBuffer buffer = ByteBuffer.allocate(10);
        
        buffer.put((byte) 0x61); // 'a'
        
        // use the debug utility to inspect the buffer's memory layout
        debugAll(buffer);
        
        buffer.put(new byte[]{0x62, 0x63, 0x64}); // b  c  d
        
        debugAll(buffer);
        
        // System.out.println(buffer.get());
        
        // switch to read mode
        buffer.flip();
        
        System.out.println(buffer.get());
        
        debugAll(buffer);
        
        // note: at this point the last byte in this run ('d', 0x64) is not erased by compact
        //       (see the duplicated 64 below); it will simply be overwritten by a later write
        buffer.compact();
        
        debugAll(buffer);
        
        buffer.put(new byte[]{0x65, 0x6f});
        
        debugAll(buffer);
    }
}

/* output */
+--------+-------------------- all ------------------------+----------------+
position: [1], limit: [10]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 61 00 00 00 00 00 00 00 00 00                   |a.........      |
+--------+-------------------------------------------------+----------------+
+--------+-------------------- all ------------------------+----------------+
position: [4], limit: [10]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 61 62 63 64 00 00 00 00 00 00                   |abcd......      |
+--------+-------------------------------------------------+----------------+
97
+--------+-------------------- all ------------------------+----------------+
position: [1], limit: [4]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 61 62 63 64 00 00 00 00 00 00                   |abcd......      |
+--------+-------------------------------------------------+----------------+
+--------+-------------------- all ------------------------+----------------+
position: [3], limit: [10]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 62 63 64 64 00 00 00 00 00 00                   |bcdd......      |
+--------+-------------------------------------------------+----------------+
+--------+-------------------- all ------------------------+----------------+
position: [5], limit: [10]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 62 63 64 65 6f 00 00 00 00 00                   |bcdeo.....      |
+--------+-------------------------------------------------+----------------+

2.3 Common ByteBuffer Methods

Allocating space (allocate)

Use the allocate method to reserve space for a ByteBuffer; the other buffer classes have the same method

ByteBuffer buf = ByteBuffer.allocate(16); // once allocated, the capacity can never change
HeapByteBuffer & DirectByteBuffer
import java.nio.ByteBuffer;

public class TestByteBufferAllocate {
    
    public static void main(String[] args) {
        
        System.out.println(ByteBuffer.allocate(16).getClass());
        System.out.println(ByteBuffer.allocateDirect(16).getClass());
        
        /*
        class java.nio.HeapByteBuffer    - Java heap memory: lower read/write efficiency, affected by GC
        class java.nio.DirectByteBuffer  - direct memory: higher read/write efficiency (one copy fewer), not moved by GC, but slower to allocate
         */
    }
}
Reading and writing data
Writing data into a buffer

There are two ways

  • call the channel's read method
  • call the buffer's own put method
int readBytes = channel.read(buf);

buf.put((byte)127); // each byte written advances buf's position by 1

buffer.put(new byte[]{0x62, 0x63, 0x64, 'e', 'f', 'g'}); // b  c  d e f g 

// 1. note: after put you must call flip to switch to read mode before the data can be read
// 2. once the buffer is full, further puts throw BufferOverflowException
Reading data from a buffer

Likewise, two ways

  • call the channel's write method
  • call the buffer's own get method
int writeBytes = channel.write(buf);

byte b = buf.get();      // read one byte from the buffer; each byte read advances position by 1

buffer.get(new byte[4]); // bulk-read into the given array; the array length must not exceed the buffer's remaining readable bytes, or BufferUnderflowException is thrown

get advances the position read pointer; to read the data again

  • call rewind to reset position to 0
  • or call get(int i) to read the byte at index i, which does not move the read pointer
Example
public class TestByteBufferRead {

    public static void main(String[] args) {

        ByteBuffer buffer = ByteBuffer.allocate(10);

        buffer.put(new byte[]{'a', 'b', 'c', 'd'});

        // flip: switch to read mode
        buffer.flip();

        buffer.get(new byte[4]); // the array length must not exceed the readable byte count, here 4 (limit - position => 4 - 0)
        debugAll(buffer);        // buffer.remaining() can serve directly as the array length (an oversized array would fail)

        // rewind: read from the beginning again
        buffer.rewind(); // position reset to 0, mark reset to -1
        System.out.println((char)buffer.get()); // a

        debugAll(buffer);

        // get(i) does not move the read index
        System.out.println((char) buffer.get(0)); // a

        debugAll(buffer);


    }
}

/* output */
+--------+-------------------- all ------------------------+----------------+
position: [4], limit: [4]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 61 62 63 64 00 00 00 00 00 00                   |abcd......      |
+--------+-------------------------------------------------+----------------+
a
+--------+-------------------- all ------------------------+----------------+
position: [1], limit: [4]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 61 62 63 64 00 00 00 00 00 00                   |abcd......      |
+--------+-------------------------------------------------+----------------+
a
+--------+-------------------- all ------------------------+----------------+
position: [1], limit: [4]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 61 62 63 64 00 00 00 00 00 00                   |abcd......      |
+--------+-------------------------------------------------+----------------+
mark and reset

mark records a position while reading; even after position moves on, calling reset returns to the marked position

Note: both rewind and flip clear the mark (clearing means setting mark to -1)

Example
public class TestByteBufferRead {

    public static void main(String[] args) {

        ByteBuffer buffer = ByteBuffer.allocate(10);

        buffer.put(new byte[]{'a', 'b', 'c', 'd'});

        // switch to read mode
        buffer.flip();

        System.out.println((char)buffer.get()); // a
        System.out.println((char)buffer.get()); // b

        buffer.mark(); // an internal mark is recorded; the marked index here is 2

        System.out.println((char)buffer.get()); // c
        System.out.println((char)buffer.get()); // d

        buffer.reset(); // restore position to the marked index

        System.out.println((char)buffer.get()); // c
        System.out.println((char)buffer.get()); // d


    }
}
Converting between String and ByteBuffer
public class TestByteBufferString {
    public static void main(String[] args) {

        // 1. String to ByteBuffer (flip is still needed to switch to read mode, as position & limit show)
        ByteBuffer buffer1 = ByteBuffer.allocate(16);
        buffer1.put("hello".getBytes());
        debugAll(buffer1);

        // 2. Charset (no flip needed; already in read mode, as position & limit show)
        ByteBuffer buffer2 = StandardCharsets.UTF_8.encode("hello");
        debugAll(buffer2);

        // 3. wrap
        ByteBuffer buffer3 = ByteBuffer.wrap("hello".getBytes());
        debugAll(buffer3);

        // 4. back to a String
        buffer1.flip(); // buffer1 must first be switched to read mode
        String str1 = StandardCharsets.UTF_8.decode(buffer1).toString(); // decode returns a CharBuffer
        System.out.println(str1);

        String str2 = StandardCharsets.UTF_8.decode(buffer2).toString();
        System.out.println(str2);


    }
}

Output

+--------+-------------------- all ------------------------+----------------+
position: [5], limit: [16]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 68 65 6c 6c 6f 00 00 00 00 00 00 00 00 00 00 00 |hello...........|
+--------+-------------------------------------------------+----------------+
+--------+-------------------- all ------------------------+----------------+
position: [0], limit: [5]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 68 65 6c 6c 6f                                  |hello           |
+--------+-------------------------------------------------+----------------+
+--------+-------------------- all ------------------------+----------------+
position: [0], limit: [5]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 68 65 6c 6c 6f                                  |hello           |
+--------+-------------------------------------------------+----------------+
hello
hello

Process finished with exit code 0
Buffer and thread safety

Buffer is not thread-safe (see the sketch below)
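
The notes leave it at that; one common remedy (a minimal sketch of my own, not from the course) is to give every thread its own buffer, e.g. via ThreadLocal, since it is the shared position/limit/mark state that makes concurrent use unsafe:

import java.nio.ByteBuffer;

public class BufferPerThread {
    // each thread lazily gets a private ByteBuffer instance
    private static final ThreadLocal<ByteBuffer> LOCAL =
            ThreadLocal.withInitial(() -> ByteBuffer.allocate(16));

    public static void main(String[] args) {
        Runnable task = () -> {
            ByteBuffer buffer = LOCAL.get(); // never shared across threads
            buffer.put((byte) 'a');
            buffer.flip();
            System.out.println(Thread.currentThread().getName() + " read: " + (char) buffer.get());
            buffer.clear();
        };
        new Thread(task, "t1").start();
        new Thread(task, "t2").start();
    }
}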

2.4 Scattering Reads

Scattering reads: suppose there is a text file 3parts.txt containing

onetwothree

Reading it as follows fills the data into multiple buffers

try (RandomAccessFile file = new RandomAccessFile("helloword/3parts.txt", "rw")) {
    
    FileChannel channel = file.getChannel();
    
    ByteBuffer a = ByteBuffer.allocate(3);
    ByteBuffer b = ByteBuffer.allocate(3);
    ByteBuffer c = ByteBuffer.allocate(5);
    
    channel.read(new ByteBuffer[]{a, b, c});
    
    a.flip();
    b.flip();
    c.flip();
    
    debug(a);
    debug(b);
    debug(c);
    
} catch (IOException e) {
    e.printStackTrace();
}

Result

         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 6f 6e 65                                        |one             |
+--------+-------------------------------------------------+----------------+
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 74 77 6f                                        |two             |
+--------+-------------------------------------------------+----------------+
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 74 68 72 65 65                                  |three           |
+--------+-------------------------------------------------+----------------+

2.5 Gathering Writes

Writing as follows gathers the data of multiple buffers into the channel

try (RandomAccessFile file = new RandomAccessFile("helloword/3parts.txt", "rw")) {
    
    FileChannel channel = file.getChannel();
    
    ByteBuffer d = ByteBuffer.allocate(4);
    ByteBuffer e = ByteBuffer.allocate(4);
    
    channel.position(11);

    d.put(new byte[]{'f', 'o', 'u', 'r'});
    e.put(new byte[]{'f', 'i', 'v', 'e'});
    
    d.flip();
    e.flip();
    
    debug(d);
    debug(e);
    
    channel.write(new ByteBuffer[]{d, e});
    
} catch (IOException e) {
    e.printStackTrace();
}

Output

         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 66 6f 75 72                                     |four            |
+--------+-------------------------------------------------+----------------+
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 66 69 76 65                                     |five            |
+--------+-------------------------------------------------+----------------+

File contents

onetwothreefourfive
Example
public class TestGatheringWrites {
    public static void main(String[] args) {

        ByteBuffer b1 = StandardCharsets.UTF_8.encode("hello");
        ByteBuffer b2 = StandardCharsets.UTF_8.encode("world");
        ByteBuffer b3 = StandardCharsets.UTF_8.encode("你好");

        try (FileChannel channel = new RandomAccessFile("words2.txt", "rw").getChannel()) {

            channel.write(new ByteBuffer[]{b1, b2, b3});

        } catch (IOException e) {
            e.printStackTrace(); // do not swallow the exception silently
        }
    }
}

/* resulting file contents */
helloworld你好

2.6 Exercise

Multiple messages are sent to the server over the network, separated by \n.
For some reason the data was regrouped on receipt; for example, the original 3 messages

  • Hello,world\n
  • I’m zhangsan\n
  • How are you?\n

arrived as the following two byteBuffers (sticky packet: messages glued together for sending; half packet: one message split apart for sending)

  • Hello,world\nI’m zhangsan\nHo
  • w are you?\n

Now write a program that restores the scrambled data to the original \n-delimited messages

public static void main(String[] args) {
    ByteBuffer source = ByteBuffer.allocate(32);
    //                     11            24
    source.put("Hello,world\nI'm zhangsan\nHo".getBytes());
    split(source);

    source.put("w are you?\nhaha!\n".getBytes());
    split(source);
}

private static void split(ByteBuffer source) {
    source.flip();
    int oldLimit = source.limit();
    for (int i = 0; i < oldLimit; i++) {
        if (source.get(i) == '\n') {
            System.out.println(i);
            ByteBuffer target = ByteBuffer.allocate(i + 1 - source.position());
            // 0 ~ limit
            source.limit(i + 1);
            target.put(source); // read from source, write into target
            debugAll(target);
            source.limit(oldLimit);
        }
    }
    source.compact();
}
Message-splitting example
public class TestByteBufferExam {
    public static void main(String[] args) {
         /*
         Multiple messages are sent to the server over the network, separated by \n.
         For some reason the data was regrouped on receipt; for example, the original 3 messages
             Hello,world\n
             I'm zhangsan\n
             How are you?\n
         arrived as the following two byteBuffers (sticky packet, half packet)
             Hello,world\nI'm zhangsan\nHo
             w are you?\n
         Write a program that restores the scrambled data to the original \n-delimited messages
         */
        ByteBuffer source = ByteBuffer.allocate(32);
        source.put("Hello,world\nI'm zhangsan\nHo".getBytes());
        split(source);
        source.put("w are you?\n".getBytes());
        split(source);
    }

    private static void split(ByteBuffer source) {

        // switch to read mode
        source.flip();

        for (int i = 0; i < source.limit(); i++) {

            // found a complete message
            if (source.get(i) == '\n') {

                // compute the message length
                int length = i - source.position() + 1;

                // store this complete message in a new ByteBuffer
                ByteBuffer target = ByteBuffer.allocate(length);

                // read from source, write into target
                for (int j = 0; j < length; j++) {

                    // every byte read from source is written into target
                    byte b = source.get(); // note: each get() advances the ByteBuffer's position by one
                    
                    target.put(b);
                    
                }

                debugAll(target);
            }
        }

        source.compact(); // note: source.clear() must not be used here,
                          // because the unread trailing bytes must not be discarded; compact() moves the unread data to the front of the buffer
    }
}

3. File Programming

3.1 FileChannel

FileChannel working mode

FileChannel only works in blocking mode

(What does that mean? FileChannel cannot be used together with a Selector! Only network-related channels such as SocketChannel can work with a Selector in non-blocking mode.)

Obtaining a FileChannel

A FileChannel cannot be opened directly; it must be obtained from a FileInputStream, a FileOutputStream, or a RandomAccessFile, all of which have a getChannel method (a sketch of all three follows the list)

  • a channel obtained from a FileInputStream can only read
  • a channel obtained from a FileOutputStream can only write
  • a channel obtained from a RandomAccessFile can read and/or write according to the mode the RandomAccessFile was constructed with
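
A minimal sketch of the three acquisition paths (my addition; the file names are placeholders, and data.txt must already exist for the FileInputStream to open):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

public class GetChannelDemo {
    public static void main(String[] args) throws IOException {
        try (FileChannel readOnly  = new FileInputStream("data.txt").getChannel();          // can only read
             FileChannel writeOnly = new FileOutputStream("to.txt").getChannel();           // can only write
             FileChannel readWrite = new RandomAccessFile("data.txt", "rw").getChannel()) { // per the "rw" mode
            // writing to readOnly would throw NonWritableChannelException;
            // reading from writeOnly would throw NonReadableChannelException
        }
    }
}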
Reading

Reading pulls data from the channel into the ByteBuffer; the return value is the number of bytes read, and -1 means the end of the file was reached

int readBytes = channel.read(buffer);
Writing

The correct write pattern is shown below. For a SocketChannel in particular, a single write call cannot guarantee that all of the buffer's data reaches the channel, so you must check whether data remains in the buffer and keep writing while it does.

ByteBuffer buffer = ...;
buffer.put(...); // fill with data
buffer.flip();   // switch to read mode

while(buffer.hasRemaining()) {
    channel.write(buffer);
}

channel.write is called inside the while loop because write cannot guarantee that all of the buffer's content is written to the channel in one call

Closing

A channel must be closed; however, calling close on the FileInputStream, FileOutputStream, or RandomAccessFile it came from closes the channel indirectly (this combines well with a try-with-resources block)

Position

Get the current position

long pos = channel.position();

Set the current position

long newPos = ...;
channel.position(newPos);

When setting the position, note what happens at the end of the file

  • a read at that position returns -1
  • a write appends; but if position is set beyond the end of the file, the next write leaves a hole (0x00 bytes) between the new content and the old end
Size

Use the size method to get a file's size

Forcing writes to disk

For performance, the operating system caches data instead of writing it to disk immediately. Call force(true) to flush the file's content and its metadata (permissions and other attributes) to disk at once. (A combined sketch of position, size, and force follows.)
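
A small sketch (my addition; pos.txt is a placeholder and assumed not to exist yet) exercising position, size, and force together:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

public class PositionSizeForceDemo {
    public static void main(String[] args) throws IOException {
        try (FileChannel channel = new RandomAccessFile("pos.txt", "rw").getChannel()) {
            channel.write(StandardCharsets.UTF_8.encode("abc"));
            System.out.println(channel.size());                       // 3

            // reading at the end of the file returns -1
            channel.position(channel.size());
            System.out.println(channel.read(ByteBuffer.allocate(4))); // -1

            // setting position past the end and writing leaves a hole of 0x00 bytes
            channel.position(channel.size() + 2);
            channel.write(StandardCharsets.UTF_8.encode("xyz"));
            System.out.println(channel.size());                       // 8: "abc", two 0x00, "xyz"

            // flush content and metadata to disk immediately
            channel.force(true);
        }
    }
}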

3.2 Transferring Data Between Two Channels

String FROM = "helloword/data.txt";
String TO = "helloword/to.txt";
long start = System.nanoTime();
try (FileChannel from = new FileInputStream(FROM).getChannel();
     FileChannel to = new FileOutputStream(TO).getChannel();
    ) {
    from.transferTo(0, from.size(), to);
} catch (IOException e) {
    e.printStackTrace();
}
long end = System.nanoTime();
System.out.println("transferTo 用时:" + (end - start) / 1000_000.0);

Output

transferTo took: 8.2011
Transferring files larger than 2 GB
  • transferTo is efficient; under the hood it uses the operating system's zero-copy optimization
  • a single transferTo call cannot transfer more than 2 GB, but it returns the number of bytes actually transferred (the code below exploits this quite neatly)
public class TestFileChannelTransferTo {
    
    public static void main(String[] args) {
        try (
                FileChannel from = new FileInputStream("data.txt").getChannel();
                FileChannel to = new FileOutputStream("to.txt").getChannel();
        ) {

            
            long size = from.size();

            // left = how many bytes remain to transfer
            for (long left = size; left > 0; ) {

                System.out.println("position:" + (size - left) + " left:" + left);

                left = left - from.transferTo((size - left), left, to);

            }

        } catch (IOException e) {

            e.printStackTrace();
        }
    }
}

Actually transferring a very large file:

position:0 left:7769948160
position:2147483647 left:5622464513
position:4294967294 left:3474980866
position:6442450941 left:1327497219

3.3 Path

JDK 7 introduced the Path and Paths classes

  • Path represents a file path
  • Paths is a utility class for obtaining Path instances
Path source = Paths.get("1.txt");     // 相对路径 使用 user.dir 环境变量来定位 1.txt

Path source = Paths.get("d:\\1.txt"); // 绝对路径 代表了  d:\1.txt

Path source = Paths.get("d:/1.txt");  // 绝对路径 同样代表了  d:\1.txt

Path projects = Paths.get("d:\\data", "projects"); // 代表了  d:\data\projects
  • . stands for the current directory
  • .. stands for the parent directory

For example, given the directory structure

d:
	|- data
		|- projects
			|- a
			|- b

Code

Path path = Paths.get("d:\\data\\projects\\a\\..\\b");
System.out.println(path);
System.out.println(path.normalize()); // normalize the path

outputs

d:\data\projects\a\..\b
d:\data\projects\b

3.4 Files

Check whether a file exists
Path path = Paths.get("helloword/data.txt");
System.out.println(Files.exists(path));
Create a single-level directory
Path path = Paths.get("helloword/d1");
Files.createDirectory(path);
  • throws FileAlreadyExistsException if the directory already exists
  • cannot create multiple levels at once; that throws NoSuchFileException
To create multi-level directories, use
Path path = Paths.get("helloword/d1/d2");
Files.createDirectories(path);
Copy a file
Path source = Paths.get("helloword/data.txt");
Path target = Paths.get("helloword/target.txt");

Files.copy(source, target);
  • throws FileAlreadyExistsException if the target file already exists

To overwrite target with source, pass a StandardCopyOption

Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
Move a file
Path source = Paths.get("helloword/data.txt");
Path target = Paths.get("helloword/data.txt");

Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
  • StandardCopyOption.ATOMIC_MOVE makes the file move atomic
Delete a file
Path target = Paths.get("helloword/target.txt");

Files.delete(target);
  • throws NoSuchFileException if the file does not exist
Delete a directory
Path target = Paths.get("helloword/d1");

Files.delete(target);
  • throws DirectoryNotEmptyException if the directory still has contents
Walking a directory tree
Directory walk example
public static void main(String[] args) throws IOException {
    
    Path path = Paths.get("C:\\Program Files\\Java\\jdk1.8.0_91");
    
    // a plain int cannot be used here: variables captured by the anonymous class must be effectively final
    AtomicInteger dirCount = new AtomicInteger();
    AtomicInteger fileCount = new AtomicInteger();
    
    Files.walkFileTree(path, new SimpleFileVisitor<Path>(){
        
        @Override
        public FileVisitResult preVisitDirectory(Path dir, 
                                                 BasicFileAttributes attrs) throws IOException {
            System.out.println(dir);
            dirCount.incrementAndGet();
            return super.preVisitDirectory(dir, attrs);
        }

        @Override
        public FileVisitResult visitFile(Path file, 
                                         BasicFileAttributes attrs) throws IOException {
            System.out.println(file);
            fileCount.incrementAndGet();
            return super.visitFile(file, attrs);
        }
    });
    
    System.out.println(dirCount);  // 133
    System.out.println(fileCount); // 1479
}
Example: counting jar files
Path path = Paths.get("C:\\Program Files\\Java\\jdk1.8.0_91");
AtomicInteger fileCount = new AtomicInteger();
Files.walkFileTree(path, new SimpleFileVisitor<Path>(){
    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) 
        throws IOException {
        if (file.toFile().getName().endsWith(".jar")) {
            fileCount.incrementAndGet();
        }
        return super.visitFile(file, attrs);
    }
});
System.out.println(fileCount); // 724
Delete a multi-level directory
Path path = Paths.get("d:\\a");

Files.walkFileTree(path, new SimpleFileVisitor<Path>(){
    
    @Override
    public FileVisitResult visitFile(Path file, 
                                     BasicFileAttributes attrs) throws IOException {
        
        Files.delete(file);
        
        return super.visitFile(file, attrs);
    }

    @Override
    public FileVisitResult postVisitDirectory(Path dir, 
                                              IOException exc) throws IOException {
        
        Files.delete(dir);
        
        return super.postVisitDirectory(dir, exc);
    }
});

Deletion is a dangerous operation; make sure the directory tree you delete recursively contains nothing important

Copy a multi-level directory
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TestFilesCopy {

    public static void main(String[] args) throws IOException {
        
        long start = System.currentTimeMillis();
        
        String source = "D:\\Snipaste-1.16.2-x64";
        String target = "D:\\Snipaste-1.16.2-x64aaa";

        Files.walk(Paths.get(source)).forEach(path -> {
            
            try {
                
                String targetName = path.toString().replace(source, target);
                
                // a directory
                if (Files.isDirectory(path)) {
                    Files.createDirectory(Paths.get(targetName));
                }
                // a regular file
                else if (Files.isRegularFile(path)) {
                    Files.copy(path, Paths.get(targetName));
                }
                
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        
        long end = System.currentTimeMillis();
        
        System.out.println(end - start);
    }
}

4. Network Programming

4.1 Non-blocking vs Blocking

Blocking
  • In blocking mode, the relevant methods suspend the thread
    • ServerSocketChannel.accept suspends the thread while no connection is established
    • SocketChannel.read suspends the thread while no data is readable
    • blocking means the thread is paused; it uses no CPU while paused, but it sits idle
  • Single-threaded, the blocking methods interfere with each other and almost nothing works; multiple threads are needed
  • But multiple threads bring new problems:
    • a thread costs 320 KB on a 32-bit JVM and 1024 KB on a 64-bit JVM, so too many connections inevitably cause OOM, and too many threads degrade performance through frequent context switches
    • a thread pool can cap the thread count and the context switching, but it treats the symptom, not the cause: if many connections are established but stay inactive for a long time, they tie up every pooled thread; so this suits only short-lived connections, not long-lived ones
Server side

The blocking problem: in blocking mode the server is rather dumb. In the while(true) loop below, 1. accept only proceeds once a new connection request arrives; 2. read only proceeds once a client sends data. Otherwise the thread simply blocks at accept/read and goes no further.

import lombok.extern.slf4j.Slf4j;

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.List;

import static com.zzhua.util.ByteBufferUtil.debugRead;

@Slf4j
public class Server {
    public static void main(String[] args) throws Exception {
        
        // using nio to understand blocking mode, single-threaded
        // 0. ByteBuffer
        ByteBuffer buffer = ByteBuffer.allocate(16);

        // 1. create the server
        ServerSocketChannel ssc = ServerSocketChannel.open();

        // 2. bind the listening port
        ssc.bind(new InetSocketAddress(8080));

        // 3. the connection list
        List<SocketChannel> channels = new ArrayList<>();

        while (true) {

            // 4. accept: establish a connection with a client; the SocketChannel talks to that client
            log.debug("connecting...");

            // accept is a blocking method: the thread stops here until a connection arrives
            SocketChannel sc = ssc.accept();

            log.debug("connected... {}", sc);

            channels.add(sc);

            for (SocketChannel channel : channels) {

                // 5. receive the data the client sent
                log.debug("before read... {}", channel);

                // read is a blocking method: the thread stops here until the channel has readable data
                channel.read(buffer);

                buffer.flip();

                debugRead(buffer);

                buffer.clear();

                log.debug("after read...{}", channel);
            }
        }
    }
}

Client
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

public class Client {
    public static void main(String[] args) throws Exception {
        SocketChannel sc = SocketChannel.open();
        sc.connect(new InetSocketAddress("localhost", 8080));
        System.out.println("waiting...");
    }
}
Non-blocking
  • In non-blocking mode, the relevant methods do not suspend the thread
    • ServerSocketChannel.accept returns null when no connection is established, and execution continues
    • SocketChannel.read returns 0 when no data is readable; the thread need not block and can go run read on another SocketChannel, or accept on the ServerSocketChannel
    • when writing, the thread only waits for the data to be written into the Channel; it need not wait for the Channel to push the data out over the network
  • But in non-blocking mode, the thread keeps spinning even with no connections and no readable data, wasting CPU for nothing
  • during the data copy itself the thread is still effectively blocked (this is what AIO improves on)
Server side

Call ssc.configureBlocking(false) to put the ServerSocketChannel into non-blocking mode, so accept no longer blocks (but if no client is connecting at that moment, accept returns null).

Call sc.configureBlocking(false) to put the SocketChannel into non-blocking mode, so read no longer blocks (but if the client has sent no data, read returns 0).

This solves the blocking problem described above, but now the main thread spins through the while(true) loop nonstop, overworking itself: it keeps looping when there are no connection requests and keeps looping when there is nothing to read, wasting CPU.

import lombok.extern.slf4j.Slf4j;

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.List;

import static com.zzhua.util.ByteBufferUtil.debugRead;

@Slf4j
public class Server {
    public static void main(String[] args) throws Exception {

        // using nio to understand non-blocking mode, single-threaded
        // 0. ByteBuffer
        ByteBuffer buffer = ByteBuffer.allocate(16);

        // 1. create the server
        ServerSocketChannel ssc = ServerSocketChannel.open();

        ssc.configureBlocking(false); // non-blocking mode

        // 2. bind the listening port
        ssc.bind(new InetSocketAddress(8080));

        // 3. the connection list
        List<SocketChannel> channels = new ArrayList<>();

        while (true) {

            // 4. accept: establish a connection with a client; the SocketChannel talks to that client
            SocketChannel sc = ssc.accept(); // non-blocking: the thread keeps running; if no connection was made, sc is null

            // (the main thread loops accepting connections; if a client happens to request
            //   a connection just as this iteration reaches accept, sc is non-null)
            if (sc != null) {

                log.debug("connected... {}", sc);

                sc.configureBlocking(false); // non-blocking mode

                channels.add(sc);
            }

            for (SocketChannel channel : channels) {

                // 5. receive the data the client sent
                int read = channel.read(buffer); // non-blocking: the thread keeps running; if nothing was read, read returns 0

                if (read > 0) {

                    buffer.flip();

                    debugRead(buffer);

                    buffer.clear();

                    log.debug("after read...{}", channel);
                }
            }
        }
    }
}
Client

The client code is unchanged

import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

public class Client {
    public static void main(String[] args) throws Exception {
        SocketChannel sc = SocketChannel.open();
        sc.connect(new InetSocketAddress("localhost", 8080));
        System.out.println("waiting...");
    }
}
Multiplexing

A single thread working with a Selector can monitor read/write events on multiple Channels; this is called multiplexing

  • multiplexing applies to network IO only; ordinary file IO cannot use it
  • without a Selector, non-blocking mode has the thread doing mostly useless work, whereas a Selector guarantees that
    • connections are accepted only when an accept event is ready
    • reads happen only when a read event is ready
    • writes happen only when a write event is ready
      • network transmission capacity limits when a Channel is writable; once it becomes writable, the Selector fires a write event

4.2 Selector

Classroom notes
Understanding the Selector's SelectionKey

Internally, a Selector associates each channel with its own SelectionKey (one-to-one; the SelectionKey also records which events that channel is interested in: accept, connect, read, write).

When select() is called and no channel has an event, the method blocks until some channel's event of interest occurs. When a channel's event of interest fires, its SelectionKey is flagged as having a pending event and is added to a special Set, selectedKeys. The flag is cleared only once the event is processed (or the interest is cancelled). If the event is neither processed nor cancelled, the next call to select() puts the flagged SelectionKey into selectedKeys again, and again, until the event is finally processed (or cancelled).

When a client sends a message, a read event fires on the socketChannel

When a client sends a message over (socketChannel.write("Hi")), a read event fires on the server's socketChannel, and the event must be processed (either channel.read, or cancel the selectionKey). Otherwise the next pass through the loop reaches selector.select(), which does not block; selector.selectedKeys still contains this selectionKey, it again goes unprocessed, and the loop spins endlessly.

When a client closes abnormally, a read event also fires on the socketChannel

When a client closes abnormally, a read event fires on the socketChannel. At that point, calling channel.read(buf) throws an exception: the remote host forcibly closed an existing connection. If you merely catch the exception without cancelling the SelectionKey, the read event counts as unprocessed (read threw instead of completing), so the next iteration's selector.select() again puts the flagged selectionKey into selectedKeys, read throws again, nothing gets processed, and the errors repeat forever. Therefore call selectionKey.cancel() when the error occurs.

When a client closes normally, a read event fires on the socketChannel too

When a client closes normally (by calling socketChannel.close()), a read event fires on the server's socketChannel. Here channel.read(buffer) returns -1 (a return of -1 means a normal client close), and you must then call the selectionKey's cancel() method; otherwise the next pass through the loop finds selector.select() not blocking and the set returned by selector.selectedKeys() still containing this key

Code example
Server
@Slf4j
public class Server {

    public static void main(String[] args) throws IOException {

        // 1. create the selector, which manages multiple channels
        Selector selector = Selector.open();

        ServerSocketChannel ssc = ServerSocketChannel.open();

        // enable non-blocking mode (so ServerSocketChannel#accept no longer blocks)
        ssc.configureBlocking(false);

        // 2. establish the link between selector and channel (registration)
        // the SelectionKey is what later tells us which event happened and on which channel
        // 0 means: interested in no events at all
        SelectionKey sscKey = ssc.register(selector, 0, null);

        // this key only watches for accept events
        sscKey.interestOps(SelectionKey.OP_ACCEPT);

        log.debug("sscKey:{}", sscKey);

        ssc.bind(new InetSocketAddress(8080));

        while (true) {

            // 3. select: with no events pending, the thread blocks; it resumes once events arrive
            // important: select does NOT block while an event remains unprocessed (which makes this while loop spin)!
            //         once an event fires it must be either handled (e.g. channel.accept()) or cancelled (selectionKey.cancel()); it cannot be ignored
            //         a first intuition: when the selector detects an event on some channel, the selectionKey tied to that channel
            //                   lets the selector collect the keys that have events; but if a collected selectionKey's event
            //                   is never dealt with, the key stays flagged, so the selector's select method will not block
            //                   (not blocking meaning: execution keeps falling through);
            //                   note that merely removing the selectionKey from the selectedKeys set is useless (selector.select() still will not block);
            //                        the event pending on the channel itself must also be handled
            selector.select();

            // 4. handle the events; selectedKeys contains every key with a pending event
            // (we take an iterator so elements can be removed from the set while traversing it)
            // ("every key with a pending event" means: any SelectionKey whose registered interest has fired)
            // (registering a channel with a Selector yields a SelectionKey; the Selector watches all the channels it manages,
            //   and when a channel has an event it adds the matching SelectionKey to the selectedKeys set (a Set).
            //   Moreover, selectedKeys never removes these keys on its own: after handling each selectionKey
            //        we must remove it ourselves, or the set obtained on the next pass will still contain it)
            Iterator<SelectionKey> iter = selector.selectedKeys().iterator(); // accept, read

            while (iter.hasNext()) {

                SelectionKey key = iter.next();

                // when handling a key, remove it from selectedKeys, or the next round goes wrong
                // (if it is not removed, the selectedKeys set obtained from the selector on the next pass is the very same set,
                //  so this element will still be in it, even though its event was already handled)
                iter.remove();

                log.debug("key: {}", key);

                // 5. distinguish the event type
                if (key.isAcceptable()) { // accept event

                    // get the channel on which the event fired
                    ServerSocketChannel channel = (ServerSocketChannel) key.channel();

                    // only call accept once a connection event has fired (instead of looping on accept nonstop, as in the non-blocking example)
                    // (handling the connection event marks it as processed on the corresponding selectionKey, but the key is not removed
                    //  from the selectedKeys set automatically: an event, once fired, must never be ignored)
                    // (with the channel in non-blocking mode, accept returns null when no client connection is pending)
                    SocketChannel sc = channel.accept();

                    // the Selector requires the SocketChannel to be in non-blocking mode (so SocketChannel#read no longer blocks)
                    sc.configureBlocking(false);

                    // a Selector can manage many Channels, so register this channel with the selector
                    // (the returned key is how an event of interest on this channel will be reported)
                    SelectionKey scKey = sc.register(selector, 0, null);

                    // watch for read events
                    scKey.interestOps(SelectionKey.OP_READ);

                    log.debug("{}", sc);

                    log.debug("scKey:{}", scKey);

                } else if (key.isReadable()) { // read event



                    try {

                        // get the channel on which the event fired
                        SocketChannel channel = (SocketChannel) key.channel();

                        ByteBuffer buffer = ByteBuffer.allocate(4);

                        // (handle the read event: once an event fires, it must not be ignored)
                        int read = channel.read(buffer); // on a normal disconnect, read returns -1

                        if(read == -1) { // was this a normal client disconnect? (a normal disconnect also triggers a read event)

                            // the client disconnected normally (it called socketChannel.close()),
                            // so the key must be cancelled (truly removed from the selector's keys set).
                            // if the selectionKey were not cancelled here, the selector's select method would not block,
                            // selector.selectedKeys() would fetch this selectionKey again, and the loop would spin forever
                            key.cancel();

                        } else {

                            buffer.flip();

                            // debugAll(buffer);
                            System.out.println(Charset.defaultCharset().decode(buffer));

                        }
                    } catch (IOException e) {

                        e.printStackTrace();

                        // (note: an abnormal client close triggers a read event on the channel,
                        //        but then channel.read(buffer) throws: the remote host forcibly closed an existing connection.
                        //        read did not complete (it threw), so this selectionKey's read event counts as unprocessed;
                        //        without cancelling the selectionKey, the next pass through selector.select would put it
                        //        into the selectedKeys set again, read on the long-dead channel would throw again,
                        //        and the errors would repeat forever)

                        // the client disconnected abnormally, so the key must be cancelled (truly removed from the selector's keys set)
                        key.cancel();
                    }
                }
            }
        }
    }
}
Client
public class Client {
    public static void main(String[] args) throws Exception {
        SocketChannel sc = SocketChannel.open();
        sc.connect(new InetSocketAddress("localhost", 8080));
        System.out.println("waiting...");
    }
}
Selector design (diagram): one thread drives a selector, which monitors several channels

Benefits

  • one thread plus a selector can monitor events on many channels, and the thread acts only when an event occurs, avoiding the useless work of plain non-blocking mode
  • the thread is kept fully utilized
  • fewer threads are needed
  • fewer thread context switches
Creating a selector
Selector selector = Selector.open();
Registering channel events

Also called registering events; the selector only cares about the events that were registered

channel.configureBlocking(false);
SelectionKey key = channel.register(selector, ops); // ops: the events to watch
  • the channel must be in non-blocking mode
  • FileChannel has no non-blocking mode, so it cannot be used with a selector
  • the event types that can be registered:
    • connect - fires on the client once the connection is established
    • accept - fires on the server once a connection is accepted
    • read - fires when data can be read in; data may be temporarily unreadable due to limited receive capacity
    • write - fires when data can be written out; data may be temporarily unwritable due to limited send capacity
Listening for channel events

The following three methods wait for events; the return value is the number of channels that have events

Method 1: block until a registered event occurs

int count = selector.select();

Method 2: block until a registered event occurs, or until the timeout (in ms) elapses

int count = selector.select(long timeout);

Method 3: never block; return immediately either way, and inspect the return value for events yourself

int count = selector.selectNow();
When select does not block
  • when an event occurs
    • a client initiates a connection request, triggering an accept event
    • a client sends data, or closes normally or abnormally; both trigger a read event; also, if the data sent exceeds the buffer's size, multiple read events are triggered
    • a channel becomes writable, triggering a write event
    • when the nio bug occurs on linux
  • when selector.wakeup() is called (a sketch follows)
  • when selector.close() is called
  • when the selector's thread is interrupted
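
Here is a minimal sketch (my addition) of the wakeup() case: a second thread wakes a selector that is blocked in select():

import java.io.IOException;
import java.nio.channels.Selector;

public class WakeupDemo {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        new Thread(() -> {
            try {
                Thread.sleep(1000);
                selector.wakeup(); // makes the blocked select() below return
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }).start();

        System.out.println("blocking in select()...");
        int count = selector.select(); // no channels registered, so only wakeup() can unblock it
        System.out.println("select returned: " + count); // 0
    }
}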

4.3 Handling the accept Event

Client code:

public class Client {
    public static void main(String[] args) {
        try (Socket socket = new Socket("localhost", 8080)) {
            System.out.println(socket);
            socket.getOutputStream().write("world".getBytes());
            System.in.read();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Server code:

@Slf4j
public class ChannelDemo6 {
    public static void main(String[] args) {
        try (ServerSocketChannel channel = ServerSocketChannel.open()) {
            channel.bind(new InetSocketAddress(8080));
            System.out.println(channel);
            Selector selector = Selector.open();
            channel.configureBlocking(false);
            channel.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                int count = selector.select();
//                int count = selector.selectNow();
                log.debug("select count: {}", count);
//                if(count <= 0) {
//                    continue;
//                }

                // fetch all pending events
                Set<SelectionKey> keys = selector.selectedKeys();

                // iterate over the events and handle them one by one
                Iterator<SelectionKey> iter = keys.iterator();
                while (iter.hasNext()) {
                    SelectionKey key = iter.next();
                    // check the event type
                    if (key.isAcceptable()) {
                        ServerSocketChannel c = (ServerSocketChannel) key.channel();
                        // must be handled
                        SocketChannel sc = c.accept();
                        log.debug("{}", sc);
                    }
                    // once handled, the event must be removed
                    iter.remove();
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Can an event be left unhandled?

Once an event fires, it must be either handled or cancelled; doing nothing is not an option, because otherwise the event fires again on the next select. This is because NIO is level-triggered underneath.

4.4 Handling the read Event

@Slf4j
public class ChannelDemo6 {
    public static void main(String[] args) {
        try (ServerSocketChannel channel = ServerSocketChannel.open()) {
            channel.bind(new InetSocketAddress(8080));
            System.out.println(channel);
            Selector selector = Selector.open();
            channel.configureBlocking(false);
            channel.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                int count = selector.select();
//                int count = selector.selectNow();
                log.debug("select count: {}", count);
//                if(count <= 0) {
//                    continue;
//                }

                // fetch all pending events
                Set<SelectionKey> keys = selector.selectedKeys();

                // iterate over the events and handle them one by one
                Iterator<SelectionKey> iter = keys.iterator();
                while (iter.hasNext()) {
                    SelectionKey key = iter.next();
                    // check the event type
                    if (key.isAcceptable()) {
                        ServerSocketChannel c = (ServerSocketChannel) key.channel();
                        // must be handled
                        SocketChannel sc = c.accept();
                        sc.configureBlocking(false);
                        sc.register(selector, SelectionKey.OP_READ);
                        log.debug("连接已建立: {}", sc);
                    } else if (key.isReadable()) {
                        SocketChannel sc = (SocketChannel) key.channel();
                        ByteBuffer buffer = ByteBuffer.allocate(128);
                        int read = sc.read(buffer);
                        if(read == -1) {
                            key.cancel();
                            sc.close();
                        } else {
                            buffer.flip();
                            debug(buffer);
                        }
                    }
                    // once handled, the event must be removed
                    iter.remove();
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Start two clients, change the text each one sends; output:

sun.nio.ch.ServerSocketChannelImpl[/0:0:0:0:0:0:0:0:8080]
21:16:39 [DEBUG] [main] c.i.n.ChannelDemo6 - select count: 1
21:16:39 [DEBUG] [main] c.i.n.ChannelDemo6 - connection established: java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:60367]
21:16:39 [DEBUG] [main] c.i.n.ChannelDemo6 - select count: 1
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 68 65 6c 6c 6f                                  |hello           |
+--------+-------------------------------------------------+----------------+
21:16:59 [DEBUG] [main] c.i.n.ChannelDemo6 - select count: 1
21:16:59 [DEBUG] [main] c.i.n.ChannelDemo6 - connection established: java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:60378]
21:16:59 [DEBUG] [main] c.i.n.ChannelDemo6 - select count: 1
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 77 6f 72 6c 64                                  |world           |
+--------+-------------------------------------------------+----------------+
为何要 iter.remove()

因为 select 在事件发生后,就会将相关的 key 放入 selectedKeys 集合,但不会在处理完后从 selectedKeys 集合中移除,需要我们自己编码删除。例如

  • 第一次触发了 ssckey 上的 accept 事件,没有移除 ssckey
  • 第二次触发了 sckey 上的 read 事件,但这时 selectedKeys 中还残留着上次的 ssckey,再次处理它时,由于并没有新的连接到来,accept() 会返回 null,后续对 null 调用方法就会导致空指针异常
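
用代码示意上面的空指针场景(这是对前面服务端 accept 分支的注释说明,假设漏掉了 iter.remove(),不是新的可运行程序):

// 第二次 select 返回时,残留的 ssckey 仍会进入 accept 分支
if (key.isAcceptable()) {
    ServerSocketChannel c = (ServerSocketChannel) key.channel();
    SocketChannel sc = c.accept(); // 此刻没有新连接,非阻塞模式下 accept() 返回 null
    sc.configureBlocking(false);   // 对 null 调用方法,抛出 NullPointerException
}
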
cancel 的作用

cancel 会取消 channel 在 selector 上的注册,并将 key 从 keys 集合中删除,后续不再监听该 channel 上的事件

不处理边界的问题

以前有同学写过这样的代码,请思考注释中的两个问题。这里以 bio 为例,其实 nio 的道理是一样的

public class Server {
    public static void main(String[] args) throws IOException {
        ServerSocket ss = new ServerSocket(9000);
        while (true) {
            Socket s = ss.accept();
            InputStream in = s.getInputStream();
            // 这里这么写,有没有问题
            byte[] arr = new byte[4];
            while(true) {
                int read = in.read(arr);
                // 这里这么写,有没有问题
                if(read == -1) {
                    break;
                }
                System.out.println(new String(arr, 0, read));
            }
        }
    }
}

客户端

public class Client {
    public static void main(String[] args) throws IOException {
        Socket max = new Socket("localhost", 9000);
        OutputStream out = max.getOutputStream();
        out.write("hello".getBytes());
        out.write("world".getBytes());
        out.write("你好".getBytes());
        max.close();
    }
}

输出

hell
owor
ld�
�好

为什么?因为 TCP 是流式协议,不保留消息边界:服务端每次固定只读 4 个字节,会把字节流任意切开;而“你好”这样的多字节字符(以 UTF-8 为例,每个汉字占 3 个字节)被从中间切断后,残缺的那部分就解码成了 �
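
可以用下面的小例子验证多字节字符被切开后的效果(示意代码,类名 Utf8SplitDemo 为本文假设):

import java.nio.charset.StandardCharsets;

public class Utf8SplitDemo {
    public static void main(String[] args) {
        byte[] bytes = "你好".getBytes(StandardCharsets.UTF_8); // 共 6 个字节,每个汉字 3 字节
        System.out.println(bytes.length); // 6
        // 模拟按 4 字节为一组读取:前 4 字节是"你"的 3 个字节 + "好"的第 1 个字节
        String part1 = new String(bytes, 0, 4, StandardCharsets.UTF_8);
        String part2 = new String(bytes, 4, 2, StandardCharsets.UTF_8);
        System.out.println(part1); // 你�
        System.out.println(part2); // ��
    }
}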

消息边界问题示例
Server

服务端代码如下(注意 ByteBuffer 分配的大小是 4 个字节)。假设客户端连接到服务端之后,通过 socketChannel.write("中国") 发送了 6 个字节过来,就会出现乱码的现象;并且可以注意到,这条消息发过来后,下面 while(true) 的循环执行了 2 次。也就是说,当消息没有一次处理完时,这个循环会再执行一遍:channel 把消息内容读到 byteBuffer 中,channel 中的消息一次没有读完,selector.select() 就不会阻塞,而是进入下一次循环,继续从 channel 中读取剩余的消息,直到所有消息处理完毕。

@Slf4j
public class Server {
    public static void main(String[] args) throws Exception {

        Selector selector = Selector.open();

        ServerSocketChannel serverSocketChannel = ServerSocketChannel.open();
        serverSocketChannel.configureBlocking(false);

        SelectionKey sscSelectionKey = serverSocketChannel.register(selector, 0, null);
        log.info("sscSelectionKey: {}", sscSelectionKey);
        sscSelectionKey.interestOps(SelectionKey.OP_ACCEPT);

        serverSocketChannel.bind(new InetSocketAddress(8080));

        Set<SelectionKey> skSet = selector.selectedKeys();


        while (true) {

            selector.select();

            log.info("select...");

            Set<SelectionKey> selectedKeys = selector.selectedKeys();

            if (skSet == selectedKeys) {
                log.info("是同一个set集合, {}, {}"); // 证明了是同一个集合
            } else {
                log.info("不是同一个set集合");
            }

            log.info("selectedKeys: {}, hash: {}" , selectedKeys, selectedKeys.hashCode());

            Iterator<SelectionKey> iterator = selectedKeys.iterator();

            while (iterator.hasNext()) {

                SelectionKey selectionKey = iterator.next();
                iterator.remove();

                if (selectionKey.isAcceptable()) {

                    ServerSocketChannel ssChannel = (ServerSocketChannel) selectionKey.channel();
                    SocketChannel socketChannel = ssChannel.accept();
                    socketChannel.configureBlocking(false);
                    SelectionKey sk = socketChannel.register(selector, SelectionKey.OP_READ);

                    log.info("注册socketChannel: {}", socketChannel);
                    log.info("注册selectionKey: {}", sk);

                } else if (selectionKey.isReadable()) {

                    try {

                        SocketChannel socketChannel = (SocketChannel) selectionKey.channel();

                        ByteBuffer buf = ByteBuffer.allocate(4);

                        int read = socketChannel.read(buf);

                        if (read == -1) {

                            selectionKey.cancel();

                        } else {
                            
                            buf.flip();

                            System.out.println(StandardCharsets.UTF_8.decode(buf).toString());
                        }

                    } catch (IOException e) {

                        log.error("发生异常: {}", e);
                        selectionKey.cancel();

                    }

                }
            }


        }

    }
}
Client
public class Client {
    public static void main(String[] args) throws Exception {
        SocketChannel sc = SocketChannel.open();
        sc.connect(new InetSocketAddress("localhost", 8080));
        System.out.println("waiting...");
    }
}
测试

当客户端发送消息 sc.write(Charset.defaultCharset().encode("中国")) 时,输出如下,可以看到服务端接收到的消息乱码了

21:06:38 [INFO ] [main] c.z.nio.c4.Server - select...
21:06:38 [INFO ] [main] c.z.nio.c4.Server - 是同一个set集合
21:06:38 [INFO ] [main] c.z.nio.c4.Server - selectedKeys: [sun.nio.ch.SelectionKeyImpl@1e643faf], hash: 509886383
中�
21:06:38 [INFO ] [main] c.z.nio.c4.Server - select...
21:06:38 [INFO ] [main] c.z.nio.c4.Server - 是同一个set集合
21:06:38 [INFO ] [main] c.z.nio.c4.Server - selectedKeys: [sun.nio.ch.SelectionKeyImpl@1e643faf], hash: 509886383
��
处理消息的边界


  • 一种思路是固定消息长度,数据包大小一样,服务器按预定长度读取,缺点是浪费带宽
  • 另一种思路是按分隔符拆分,缺点是效率低
  • TLV 格式,即 Type 类型、Length 长度、Value 数据,类型和长度已知的情况下,就可以方便地获取消息大小,分配合适的 buffer(见本列表后的示意代码),缺点是 buffer 需要提前分配,如果内容过大,则影响 server 吞吐量
    • Http 1.1 是 TLV 格式
    • Http 2.0 是 LTV 格式
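
下面给出 TLV 思路的一个极简编码/解码示意(Type 和 Length 各用一个 int 表示,这只是为演示作的假设,并非某个协议的标准实现):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class TlvDemo {
    // 编码:int 类型 + int 长度 + 内容字节
    static ByteBuffer encode(int type, byte[] value) {
        ByteBuffer buf = ByteBuffer.allocate(4 + 4 + value.length);
        buf.putInt(type);
        buf.putInt(value.length);
        buf.put(value);
        buf.flip();
        return buf;
    }

    // 解码:先读出类型和长度,再按长度分配合适大小的数组读取内容
    static void decode(ByteBuffer buf) {
        int type = buf.getInt();
        int length = buf.getInt();
        byte[] value = new byte[length];
        buf.get(value);
        System.out.println("type=" + type + ", length=" + length
                + ", value=" + new String(value, StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        decode(encode(1, "hello".getBytes(StandardCharsets.UTF_8)));
    }
}
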
消息跨越两次 read 的过程(示意,原文此处为时序图):

  • 客户端1 发送 01234567890abcdef3333\r
  • 服务器第一次 read,存入 ByteBuffer1:01234567890abcdef(已满,但没有读到分隔符)
  • 扩容,并把 ByteBuffer1 中的内容拷贝到更大的 ByteBuffer2
  • 第二次 read 存入 3333\r,ByteBuffer2 中拼出完整消息 01234567890abcdef3333\r
服务器端示例代码
private static void split(ByteBuffer source) {
    
    // 切换为读模式
    source.flip();
    
    for (int i = 0; i < source.limit(); i++) {
        
        // 找到一条完整消息
        if (source.get(i) == '\n') {
            
            int length = i - source.position() + 1 ;
            
            // 把这条完整消息存入新的 ByteBuffer
            
            // 1. 先创建一个指定长度 新的ByteBuffer => target, 用于存放一条完整的消息
            ByteBuffer target = ByteBuffer.allocate(length);
            
            // 2. 从 source 读,向 target 写
            for (int j = 0; j < length; j++) {
                
                // 从source中每读到一个字节(source的position向后移动一位),就把这个字节写到target中
                target.put(source.get());
            }
            
            debugAll(target);
        }
    }
    
    // compact 方法会:把尚未读取的数据拷贝到 buffer 的最前面(相当于整体向前挪动数据),
    //                然后将 position 设置为 remaining()(紧跟在保留数据之后,方便继续写入),将 limit 设置为 capacity()
    source.compact(); // 0123456789abcdef  position 16 limit 16
}

public static void main(String[] args) throws IOException {
    
    // 1. 创建 selector, 管理多个 channel
    Selector selector = Selector.open();
    
    ServerSocketChannel ssc = ServerSocketChannel.open();
    
    ssc.configureBlocking(false);
    
    // 2. 建立 selector 和 channel 的联系(注册)
    // SelectionKey:将来事件发生后,通过它可以知道发生了什么事件,以及是哪个 channel 上的事件
    SelectionKey sscKey = ssc.register(selector, 0, null);
    
    // key 只关注 accept 事件
    sscKey.interestOps(SelectionKey.OP_ACCEPT);
    
    log.debug("sscKey:{}", sscKey);
    
    ssc.bind(new InetSocketAddress(8080));
    
    while (true) {
        
        // 3. select 方法, 没有事件发生,线程阻塞,有事件,线程才会恢复运行
        // select 在事件未处理时,它不会阻塞, 事件发生后要么处理,要么取消,不能置之不理
        selector.select();
        
        log.info("select...");
        
        // 4. 处理事件, selectedKeys 内部包含了所有发生的事件
        Iterator<SelectionKey> iter = selector.selectedKeys().iterator(); // accept, read
        
        while (iter.hasNext()) {
            
            SelectionKey key = iter.next();
            
            // 处理key 时,要从 selectedKeys 集合中删除,否则下次处理就会有问题
            iter.remove();
            
            log.debug("key: {}", key);
            
            // 5. 区分事件类型
            if (key.isAcceptable()) { // 如果是 accept
                
                ServerSocketChannel channel = (ServerSocketChannel) key.channel();
                
                SocketChannel sc = channel.accept();
                
                sc.configureBlocking(false);
                
                
                ByteBuffer buffer = ByteBuffer.allocate(16); // attachment
                
                // 将一个 byteBuffer 作为附件关联到 selectionKey 上
                //(这里会保证socketChannel-selectionKey-附件buffer 这三者之间一一对应的关系)
                SelectionKey scKey = sc.register(selector, 0, buffer);
                
                scKey.interestOps(SelectionKey.OP_READ);
                
                log.debug("{}", sc);
                log.debug("scKey:{}", scKey);
                
            } else if (key.isReadable()) { // 如果是 read
                
                try {
                    
                    SocketChannel channel = (SocketChannel) key.channel(); // 拿到触发事件的channel
                    
                    // 获取 selectionKey 上关联的附件(一开始注册时,所关联的附件)
                    ByteBuffer buffer = (ByteBuffer) key.attachment();
                    
                    int read = channel.read(buffer); // 如果客户端是正常断开,read 方法的返回值是 -1
                    log.info("读取了: {} 个字节", read);
                    
                    if(read == -1) {
                        
                        key.cancel();
                        
                    } else {
                        
                        // 调用上面的split方法
                        split(buffer);
                        
                        // 需要扩容
                        // (这里的条件可以理解为:buffer的容量已经满了,因为buffer的position都等于limit了)
                        if (buffer.position() == buffer.limit()) {
                            
                            log.info("扩容...");
                            
                            // 扩容为原来的2倍
                            ByteBuffer newBuffer = ByteBuffer.allocate(buffer.capacity() * 2);
                            
                            // 切换为读模式(在下面的拷贝操作之前先切换为读模式)
                            buffer.flip();
                            
                            // 将buffer原来的数据 拷贝到 扩容后的新的buffer中
                            // (此方法调用同:while (src.hasRemaining()) dst.put(src.get());)
                            // (此方法调用同:for (int i = 0; i < src.remaining(); i++) put(src.get());)
                            newBuffer.put(buffer); // 0123456789abcdef3333\n
                            
                            key.attach(newBuffer); // 关联一个新的附件(替换掉原来的附件,如果原来的附件存在的话)
                        }
                    }

                } catch (IOException e) {
                    
                    e.printStackTrace();
                    
                    // 因为客户端断开了,因此需要将 key 取消(从 selector 的 keys 集合中真正删除key)
                    key.cancel(); 
                }
            }
        }
    }
}
客户端示例代码
SocketChannel sc = SocketChannel.open();

sc.connect(new InetSocketAddress("localhost", 8080));

SocketAddress address = sc.getLocalAddress();

// sc.write(Charset.defaultCharset().encode("hello\nworld\n"));

// 一次性写2条消息过去
sc.write(Charset.defaultCharset().encode("0123\n456789abcdef"));
sc.write(Charset.defaultCharset().encode("0123456789abcdef3333\n"));

System.in.read();
测试

要仔细看下面的日志输出,才能明白消息处理的过程!

1. channel 中发过来的数据没有一次读完时,下次循环中 selector.select() 方法不会阻塞,会继续从 channel 中读取剩余的数据

2. 需要先明白 split 方法的作用:如果传入的 byteBuffer 中有分隔符(\n),就会读取 \n 前面的数据(包括 \n),然后把剩下的数据往前挪动;如果 split 方法调用完成后 position 仍等于 limit,说明数据根本没有挪动并且容量已经满了,那么就需要扩容

3. 这里的扩容仅仅是简单地扩大为原来的 2 倍,netty 对这方面做了优化,可以动态调整

4. 对于每一个 channel 来说,扩容后的 byteBuffer 应该被同一 channel 的多次读取共享,因此以附件的形式关联到 selectionKey,而 selectionKey 又关联了 channel
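
为帮助理解 split 的行为,下面给出一个脱离网络环境的最小用例(其中的打印替代了原代码里的 debugAll,属于示意代码):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class SplitDemo {
    public static void main(String[] args) {
        ByteBuffer source = ByteBuffer.allocate(32);
        // 第一次只收到半条消息:split 找不到 \n,什么都不输出,compact 后数据留在 buffer 里
        source.put("hello".getBytes(StandardCharsets.UTF_8));
        split(source);
        // 第二次收到剩余部分,拼出完整的一条消息 "helloworld\n"
        source.put("world\n".getBytes(StandardCharsets.UTF_8));
        split(source);
    }

    private static void split(ByteBuffer source) {
        source.flip();
        for (int i = 0; i < source.limit(); i++) {
            if (source.get(i) == '\n') {
                int length = i - source.position() + 1;
                ByteBuffer target = ByteBuffer.allocate(length);
                for (int j = 0; j < length; j++) {
                    target.put(source.get());
                }
                target.flip();
                System.out.println(StandardCharsets.UTF_8.decode(target)); // helloworld
            }
        }
        source.compact();
    }
}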

23:28:53 [DEBUG] [main] c.z.nio.c5.Server - sscKey:sun.nio.ch.SelectionKeyImpl@2c8d66b2
23:28:58 [INFO ] [main] c.z.nio.c5.Server - select...
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - key: sun.nio.ch.SelectionKeyImpl@2c8d66b2
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:65015]
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - scKey:sun.nio.ch.SelectionKeyImpl@7c30a502
23:28:58 [INFO ] [main] c.z.nio.c5.Server - select...
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - key: sun.nio.ch.SelectionKeyImpl@7c30a502
23:28:58 [INFO ] [main] c.z.nio.c5.Server - 读取了: 16 个字节
+--------+-------------------- all ------------------------+----------------+
position: [5], limit: [5]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 30 31 32 33 0a                                  |0123.           |
+--------+-------------------------------------------------+----------------+
23:28:58 [INFO ] [main] c.z.nio.c5.Server - select...
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - key: sun.nio.ch.SelectionKeyImpl@7c30a502
23:28:58 [INFO ] [main] c.z.nio.c5.Server - 读取了: 5 个字节
23:28:58 [INFO ] [main] c.z.nio.c5.Server - 扩容...
23:28:58 [INFO ] [main] c.z.nio.c5.Server - select...
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - key: sun.nio.ch.SelectionKeyImpl@7c30a502
23:28:58 [INFO ] [main] c.z.nio.c5.Server - 读取了: 16 个字节
23:28:58 [INFO ] [main] c.z.nio.c5.Server - 扩容...
23:28:58 [INFO ] [main] c.z.nio.c5.Server - select...
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - key: sun.nio.ch.SelectionKeyImpl@7c30a502
23:28:58 [INFO ] [main] c.z.nio.c5.Server - 读取了: 1 个字节
+--------+-------------------- all ------------------------+----------------+
position: [33], limit: [33]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 34 35 36 37 38 39 61 62 63 64 65 66 30 31 32 33 |456789abcdef0123|
|00000010| 34 35 36 37 38 39 61 62 63 64 65 66 33 33 33 33 |456789abcdef3333|
|00000020| 0a                                              |.               |
+--------+-------------------------------------------------+----------------+
ByteBuffer 大小分配
  • 每个 channel 都需要记录可能被切分的消息,因为 ByteBuffer 不能被多个 channel 共同使用,因此需要为每个 channel 维护一个独立的 ByteBuffer(使用附件的方式解决,见上面示例)
  • ByteBuffer 不能太大,比如一个 ByteBuffer 1Mb 的话,要支持百万连接就要 1Tb 内存,因此需要设计大小可变的 ByteBuffer
    • 一种思路是首先分配一个较小的 buffer,例如 4k,如果发现数据不够,再分配 8k 的 buffer,将 4k buffer 内容拷贝至 8k buffer,优点是消息连续容易处理,缺点是数据拷贝耗费性能,参考实现 http://tutorials.jenkov.com/java-performance/resizable-array.html
    • 另一种思路是用多个数组组成 buffer,一个数组不够,把多出来的内容写入新的数组,与前面的区别是消息存储不连续、解析复杂,优点是避免了拷贝引起的性能损耗(见下方示意)
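
第二种“多个数组组成 buffer”的思路,可以用一个 ByteBuffer 列表来示意(极简示例,仅说明写入时无需拷贝旧数据的思想,未处理消息解析):

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class CompositeBufferDemo {
    private final List<ByteBuffer> buffers = new ArrayList<>();
    private final int chunkSize;

    public CompositeBufferDemo(int chunkSize) {
        this.chunkSize = chunkSize;
        buffers.add(ByteBuffer.allocate(chunkSize));
    }

    // 写入时不拷贝旧数据:当前数组写满了,就追加一个新数组继续写
    public void write(byte[] data) {
        int offset = 0;
        while (offset < data.length) {
            ByteBuffer last = buffers.get(buffers.size() - 1);
            if (!last.hasRemaining()) {
                last = ByteBuffer.allocate(chunkSize);
                buffers.add(last);
            }
            int n = Math.min(last.remaining(), data.length - offset);
            last.put(data, offset, n);
            offset += n;
        }
    }

    public int chunkCount() {
        return buffers.size();
    }

    public static void main(String[] args) {
        CompositeBufferDemo buf = new CompositeBufferDemo(4);
        buf.write("0123456789".getBytes());
        System.out.println(buf.chunkCount()); // 3:10 个字节跨越了 3 个 4 字节的数组
    }
}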

4.5 处理 write 事件

一次无法写完的例子
  • 非阻塞模式下,无法保证把 buffer 中所有数据都写入 channel,因此需要追踪 write 方法的返回值(代表实际写入字节数)
  • 用 selector 监听所有 channel 的可写事件,每个 channel 都需要一个 key 来跟踪 buffer,但这样又会导致占用内存过多,就有两阶段策略
    • 当消息处理器第一次写入消息时,才将 channel 注册到 selector 上
    • selector 检查 channel 上的可写事件,如果所有的数据写完了,就取消 channel 的注册
    • 如果不取消关注,只要 channel 仍然可写,每次 select 都会触发 write 事件,白白浪费 CPU
public class WriteServer {

    public static void main(String[] args) throws IOException {
        
        ServerSocketChannel ssc = ServerSocketChannel.open();
        
        ssc.configureBlocking(false);
        
        ssc.bind(new InetSocketAddress(8080));

        Selector selector = Selector.open();
        
        ssc.register(selector, SelectionKey.OP_ACCEPT);

        while(true) {
            
            selector.select();

            Iterator<SelectionKey> iter = selector.selectedKeys().iterator();
            
            while (iter.hasNext()) {
                
                SelectionKey key = iter.next();
                
                iter.remove();
                
                if (key.isAcceptable()) {
                    
                    SocketChannel sc = ssc.accept();
                    
                    sc.configureBlocking(false);
                    
                    SelectionKey sckey = sc.register(selector, SelectionKey.OP_READ);
                    
                    // 1. 向客户端发送内容
                    StringBuilder sb = new StringBuilder();
                    
                    for (int i = 0; i < 3000000; i++) {
                        sb.append("a");
                    }
                    
                    ByteBuffer buffer = Charset.defaultCharset().encode(sb.toString());
                    
                    int write = sc.write(buffer);
                    
                    // 3. write 表示实际写了多少字节
                    System.out.println("实际写入字节:" + write);
                    
                    // 4. 如果有剩余未写完的字节,才需要关注可写事件
                    if (buffer.hasRemaining()) {
                        
                        // OP_READ 的值是 1,OP_WRITE 的值是 4,两者相加等价于按位或
                        // 在原有关注事件的基础上,多关注 写事件
                        sckey.interestOps(sckey.interestOps() + SelectionKey.OP_WRITE);
                        
                        // 把 buffer 作为附件加入 sckey
                        sckey.attach(buffer);
                    }
                    
                } else if (key.isWritable()) {
                    
                    ByteBuffer buffer = (ByteBuffer) key.attachment();
                    
                    SocketChannel sc = (SocketChannel) key.channel();
                    
                    int write = sc.write(buffer);
                    
                    System.out.println("实际写入字节:" + write);
                    
                    if (!buffer.hasRemaining()) { // 写完了
                        
                        key.interestOps(key.interestOps() - SelectionKey.OP_WRITE);
                        
                        key.attach(null);
                    }
                }
            }
        }
    }
}

客户端

public class WriteClient {
    public static void main(String[] args) throws IOException {
        
        Selector selector = Selector.open();
        
        SocketChannel sc = SocketChannel.open();
        
        sc.configureBlocking(false);
        
        sc.register(selector, SelectionKey.OP_CONNECT | SelectionKey.OP_READ);
        
        sc.connect(new InetSocketAddress("localhost", 8080));
        
        int count = 0;
        
        while (true) {
            
            selector.select();
            
            Iterator<SelectionKey> iter = selector.selectedKeys().iterator();
            
            while (iter.hasNext()) {
                
                SelectionKey key = iter.next();
                
                iter.remove();
                
                if (key.isConnectable()) {
                    
                    System.out.println(sc.finishConnect());
                    
                } else if (key.isReadable()) {
                    
                    ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);
                    
                    count += sc.read(buffer);
                    
                    buffer.clear();
                    
                    System.out.println(count);
                }
            }
        }
    }
}
课堂示例
服务端
@Slf4j
public class WriteServer {

    public static void main(String[] args) throws Exception {

        Selector selector = Selector.open();

        ServerSocketChannel ssc = ServerSocketChannel.open();

        ssc.configureBlocking(false);

        ssc.bind(new InetSocketAddress(8080));

        ssc.register(selector, SelectionKey.OP_ACCEPT);


        while (true) {

            selector.select();

            log.info("select...");

            Set<SelectionKey> selectedKeys = selector.selectedKeys();

            Iterator<SelectionKey> it = selectedKeys.iterator();

            while (it.hasNext()) {

                SelectionKey selectionKey = it.next();

                it.remove();

                if (selectionKey.isAcceptable()) {

                    SocketChannel socketChannel = ssc.accept();

                    socketChannel.configureBlocking(false);

                    StringBuilder sb = new StringBuilder();
                    for (int i = 0; i < 30000000; i++) {
                        sb.append("a");
                    }

                    ByteBuffer buffer = Charset.defaultCharset().encode(sb.toString());

                    // 1. 发送大量数据(socketChannel#write方法不能保证一次就能将buffer中的数据都写给客户端)
                    while (buffer.hasRemaining()) {

                        // 2. 返回值表示实际写入的字节数
                        int writeCount = socketChannel.write(buffer);
                        log.info("写入了: {} 个字节", writeCount);
                    }

                }


            }


        }


    }

}
客户端
@Slf4j
public class WriteClient {
    public static void main(String[] args) throws Exception {

        SocketChannel socketChannel = SocketChannel.open();

        socketChannel.connect(new InetSocketAddress(8080));

        ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);// 1M

        int count = 0;

        while (true) {

            int readCount = socketChannel.read(buffer);

            count = count + readCount;

            log.info("readCount:{}, count: {}", readCount, count);

            buffer.clear();
        }

    }
}
测试
  1. 这样虽然能把大量的数据发送给客户端(客户端正确地接收到了所有数据),但是当前线程在发送数据时无法处理其它客户端的事件,只有等它发送完了,才会进入 while(true) 的下一次循环,处理其它触发的事件。
  2. 并且发送数据时,socketChannel 实际上是先写入操作系统的发送缓冲区,当缓冲区被填满时就暂时写不进去了,这段写不了数据的时间就被白白浪费了(可以看到有好几次都只写入了 0 个字节)
/* 服务端日志输出 */
14:35:06 [INFO ] [main] c.z.n.c.WriteServer - select...
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - 写入了: 3014633 个字节
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - 写入了: 19267437 个字节
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - 写入了: 2621420 个字节
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - 写入了: 1441781 个字节
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - 写入了: 1179639 个字节
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - 写入了: 0 个字节
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - 写入了: 1441781 个字节
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - 写入了: 0 个字节
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - 写入了: 0 个字节
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - 写入了: 0 个字节
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - 写入了: 1033309 个字节

/* 客户端日志输出 */
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 131071
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 262142
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 393213
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 524284
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 655355
...(中间省略大量重复的 readCount:131071 日志)...
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29884188
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:115812, count: 30000000
课堂示例(优化版)
服务端

针对上面课堂示例中存在的问题,下面通过关注可写事件来解决。

@Slf4j
public class WriteServer {

    public static void main(String[] args) throws Exception {

        Selector selector = Selector.open();

        ServerSocketChannel ssc = ServerSocketChannel.open();

        ssc.configureBlocking(false);

        ssc.bind(new InetSocketAddress(8080));

        ssc.register(selector, SelectionKey.OP_ACCEPT);


        while (true) {

            selector.select();

            log.info("select...");

            Set<SelectionKey> selectedKeys = selector.selectedKeys();

            Iterator<SelectionKey> it = selectedKeys.iterator();

            while (it.hasNext()) {

                SelectionKey selectionKey = it.next();

                it.remove();

                if (selectionKey.isAcceptable()) {

                    SocketChannel socketChannel = ssc.accept();

                    socketChannel.configureBlocking(false);

                    // (注意:下面注册可写事件时要用到这个 scKey)
                    SelectionKey scKey = socketChannel.register(selector, SelectionKey.OP_READ);

                    // 1. 发送大量数据
                    StringBuilder sb = new StringBuilder();
                    for (int i = 0; i < 30000000; i++) {
                        sb.append("a");
                    }

                    ByteBuffer buffer = Charset.defaultCharset().encode(sb.toString());

                    // 2. 返回的值代表实际写入的字节数
                    int writeCount = socketChannel.write(buffer);
                    log.info("初次写入字节数量: {}", writeCount);

                    // 3. 判断是否还有剩余内容
                    if (buffer.hasRemaining()) {

                        // 4. 关注可写事件(当发送网络包的缓冲区可以接收内容的时候,就会触发可写事件)
                        // (目的:就是一次写不完的数据,不要一直在这里通过while循环使劲的写,
                        //        而是通过关注可写事件的方式,等触发了可写的时候,再去写剩余的内容)
                        // (不改变原来的关注事件,添加上关注可写事件)
                        // scKey.interestOps(scKey.interestOps() | SelectionKey.OP_WRITE); // 使用位运算亦可
                        scKey.interestOps(scKey.interestOps() + SelectionKey.OP_WRITE);

                        // 5. 把未写完的数据挂到selectionKey的附件上
                        scKey.attach(buffer);
                    }

                } else if (selectionKey.isWritable()) { // 关注了可写事件,当可写事件发生时触发

                    // 拿到附件
                    ByteBuffer buffer = (ByteBuffer) selectionKey.attachment();

                    SocketChannel socketChannel = (SocketChannel) selectionKey.channel();
                    
                    int writeCount = socketChannel.write(buffer);
                    
                    log.info("通过关注可写事件writable 写入: {}", writeCount);

                    // 6. 清理操作
                    if (!buffer.hasRemaining()) {

                        // 需要清除buffer(关联null,覆盖掉原来的buffer)
                        selectionKey.attach(null);

                        // 不需要关注可写事件了
                        selectionKey.interestOps(selectionKey.interestOps() - SelectionKey.OP_WRITE);

                        log.info("全部写完!");
                    }


                }


            }


        }


    }

}
客户端

没有代码变动

@Slf4j
public class WriteClient {
    public static void main(String[] args) throws Exception {

        SocketChannel socketChannel = SocketChannel.open();

        socketChannel.connect(new InetSocketAddress(8080));

        ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);// 1M

        int count = 0;

        while (true) {

            int readCount = socketChannel.read(buffer);

            count = count + readCount;

            log.info("readCount:{}, count: {}", readCount, count);

            buffer.clear();
        }

    }
}

测试
/* 服务端日志输出 */
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - select...
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - 初次写入字节数量: 3014633
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - select...
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - 通过关注可写事件writable 写入: 13369242
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - select...
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - 通过关注可写事件writable 写入: 13616125
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - 全部写完!

/* 客户端日志输出 */
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 131071
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 262142
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 393213
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 524284
...(中间省略大量重复日志:readCount 多为 131071,其间穿插 65483、65588、130966、105 等值)...
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 23592780
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 23658263
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 23723851
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:130966, count: 23854817
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:105, count: 23854922
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 23985993
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24117064
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24248135
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24379206
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24510277
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24641348
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24772419
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24903490
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:130966, count: 25034456
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:105, count: 25034561
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25165632
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25296703
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25427774
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25558845
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25689916
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25820987
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25952058
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26083129
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26214200
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 26279683
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 26345271
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26476342
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26607413
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26738484
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26869555
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27000626
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27131697
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 27197180
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 27262768
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27393839
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27524910
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27655981
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27787052
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27918123
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28049194
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28180265
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 28245748
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 28311336
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28442407
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 28507890
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 28573478
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28704549
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:130966, count: 28835515
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28966586
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29097657
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29228728
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 29294316
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 29359904
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29490975
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 29556458
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29687529
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29818600
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29949671
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:50329, count: 30000000
Why the write interest should be cancelled

As long as the socket send buffer has room when you send data to the channel, the writable event fires over and over. So you should only register interest in the writable event once the socket buffer is too full to accept more data, and cancel the interest again as soon as all the data has been written out.
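To restate the pattern compactly, here is a sketch of the kind of server used for the demo above (the class name WriteInterestSketch, port 8080 and the payload size are arbitrary): write as much as the send buffer accepts, register OP_WRITE only if data is left over, and deregister once the buffer is drained.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class WriteInterestSketch {
    public static void main(String[] args) throws IOException {
        ServerSocketChannel ssc = ServerSocketChannel.open();
        ssc.configureBlocking(false);
        Selector selector = Selector.open();
        ssc.register(selector, SelectionKey.OP_ACCEPT);
        ssc.bind(new InetSocketAddress(8080));
        while (true) {
            selector.select();
            Iterator<SelectionKey> iter = selector.selectedKeys().iterator();
            while (iter.hasNext()) {
                SelectionKey key = iter.next();
                iter.remove();
                if (key.isAcceptable()) {
                    SocketChannel sc = ssc.accept();
                    sc.configureBlocking(false);
                    SelectionKey scKey = sc.register(selector, 0, null);
                    // build a payload far bigger than the socket send buffer
                    StringBuilder sb = new StringBuilder();
                    for (int i = 0; i < 3_000_000; i++) sb.append("a");
                    ByteBuffer buffer = StandardCharsets.UTF_8.encode(sb.toString());
                    sc.write(buffer); // writes only what the send buffer accepts right now
                    if (buffer.hasRemaining()) {
                        // leftover data: start watching OP_WRITE and stash the buffer
                        scKey.interestOps(scKey.interestOps() | SelectionKey.OP_WRITE);
                        scKey.attach(buffer);
                    }
                } else if (key.isWritable()) {
                    SocketChannel sc = (SocketChannel) key.channel();
                    ByteBuffer buffer = (ByteBuffer) key.attachment();
                    sc.write(buffer);
                    if (!buffer.hasRemaining()) {
                        // everything sent: drop the buffer and stop watching OP_WRITE
                        key.attach(null);
                        key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
                    }
                }
            }
        }
    }
}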

4.6 Going further

Optimizing with multiple threads

CPUs today are all multi-core; a design should take care not to let that CPU power go to waste.

The code so far uses a single selector, which does not exploit a multi-core CPU. How can it be improved?

Split the selectors into two groups (see the code below):

  • a single thread paired with one selector, dedicated to handling accept events
  • as many threads as there are CPU cores, each paired with its own selector, taking turns handling read events
public class ChannelDemo7 {
    public static void main(String[] args) throws IOException {
        new BossEventLoop().register();
    }


    @Slf4j
    static class BossEventLoop implements Runnable {
        private Selector boss;
        private WorkerEventLoop[] workers;
        private volatile boolean start = false;
        AtomicInteger index = new AtomicInteger();

        public void register() throws IOException {
            if (!start) {
                ServerSocketChannel ssc = ServerSocketChannel.open();
                ssc.bind(new InetSocketAddress(8080));
                ssc.configureBlocking(false);
                boss = Selector.open();
                SelectionKey ssckey = ssc.register(boss, 0, null);
                ssckey.interestOps(SelectionKey.OP_ACCEPT);
                workers = initEventLoops();
                new Thread(this, "boss").start();
                log.debug("boss start...");
                start = true;
            }
        }

        public WorkerEventLoop[] initEventLoops() {
//        EventLoop[] eventLoops = new EventLoop[Runtime.getRuntime().availableProcessors()];
            WorkerEventLoop[] workerEventLoops = new WorkerEventLoop[2];
            for (int i = 0; i < workerEventLoops.length; i++) {
                workerEventLoops[i] = new WorkerEventLoop(i);
            }
            return workerEventLoops;
        }

        @Override
        public void run() {
            while (true) {
                try {
                    boss.select();
                    Iterator<SelectionKey> iter = boss.selectedKeys().iterator();
                    while (iter.hasNext()) {
                        SelectionKey key = iter.next();
                        iter.remove();
                        if (key.isAcceptable()) {
                            ServerSocketChannel c = (ServerSocketChannel) key.channel();
                            SocketChannel sc = c.accept();
                            sc.configureBlocking(false);
                            log.debug("{} connected", sc.getRemoteAddress());
                            workers[index.getAndIncrement() % workers.length].register(sc);
                        }
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    @Slf4j
    static class WorkerEventLoop implements Runnable {
        private Selector worker;
        private volatile boolean start = false;
        private int index;

        private final ConcurrentLinkedQueue<Runnable> tasks = new ConcurrentLinkedQueue<>();

        public WorkerEventLoop(int index) {
            this.index = index;
        }

        public void register(SocketChannel sc) throws IOException {
            if (!start) {
                worker = Selector.open();
                new Thread(this, "worker-" + index).start();
                start = true;
            }
            // hand the registration to the worker thread via the queue,
            // then wake the selector in case it is blocked in select()
            tasks.add(() -> {
                try {
                    SelectionKey sckey = sc.register(worker, 0, null);
                    sckey.interestOps(SelectionKey.OP_READ);
                    worker.selectNow();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            worker.wakeup();
        }

        @Override
        public void run() {
            while (true) {
                try {
                    worker.select();
                    Runnable task = tasks.poll();
                    if (task != null) {
                        task.run();
                    }
                    Set<SelectionKey> keys = worker.selectedKeys();
                    Iterator<SelectionKey> iter = keys.iterator();
                    while (iter.hasNext()) {
                        SelectionKey key = iter.next();
                        if (key.isReadable()) {
                            SocketChannel sc = (SocketChannel) key.channel();
                            ByteBuffer buffer = ByteBuffer.allocate(128);
                            try {
                                int read = sc.read(buffer);
                                if (read == -1) {
                                    key.cancel();
                                    sc.close();
                                } else {
                                    buffer.flip();
                                    log.debug("{} message:", sc.getRemoteAddress());
                                    debugAll(buffer);
                                }
                            } catch (IOException e) {
                                e.printStackTrace();
                                key.cancel();
                                sc.close();
                            }
                        }
                        iter.remove();
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
How to get the number of CPUs
  • Runtime.getRuntime().availableProcessors() — when running inside a docker container, because the container is not physically isolated, this returns the number of physical CPUs rather than the number the container was allocated
  • this problem was not fixed until JDK 10, via the JVM flag UseContainerSupport, which is enabled by default
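A quick way to check what the JVM sees (the class name CpuCount is made up for illustration; -XX:ActiveProcessorCount is a standard HotSpot flag since JDK 10):

public class CpuCount {
    public static void main(String[] args) {
        // inside a container on JDK 10+ (with -XX:+UseContainerSupport, the default)
        // this reflects the container's CPU quota; it can also be pinned explicitly
        // with -XX:ActiveProcessorCount=<n>
        System.out.println(Runtime.getRuntime().availableProcessors());
    }
}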
Classroom example
Points to note

The ordering of selector.select() and sc.register(selector, ...)

When a selector's select() method runs, the calling thread blocks; there is no question about that. What needs attention is that while the selector is blocked, it also blocks any other thread that calls socketChannel.register(selector, ...) to register a socketChannel with that same selector (the register call blocks). The register call can only complete once the selector's select() stops blocking (for example, because some event fired on a SelectionKey). In other words: while a selector is blocked inside select(), a socketChannel cannot be registered with it.

selector.wakeup

selector.wakeup can wake up the select method. Specifically: if the selector is currently inside a select operation, calling selector.wakeup wakes it up immediately; if the selector has not yet entered select when wakeup is called, then the next call to select will not block and returns right away. It behaves a lot like LockSupport.unpark.
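A tiny demonstration of that "stored permit" behaviour (the class name WakeupSketch is made up):

import java.io.IOException;
import java.nio.channels.Selector;

public class WakeupSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        selector.wakeup();   // stores a wakeup "permit", like LockSupport.unpark
        long t0 = System.currentTimeMillis();
        selector.select();   // consumes the permit and returns immediately
        System.out.println("select returned after " + (System.currentTimeMillis() - t0) + " ms");
        // without the earlier wakeup(), select() here would block forever,
        // since no channel is registered with the selector
    }
}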

Example 1
  • 1 thread + 1 selector act as the boss: it only listens for accept events, accepts client connections, and hands the resulting socketChannel over to a worker
  • 1 thread + 1 selector act as a worker (there can be several): it listens for read events and receives the data clients send
  • a queue is used to solve the communication problem between the boss thread and the worker threads
Server
public class MultiThreadServer {

    public static void main(String[] args) throws Exception {

        Thread.currentThread().setName("boss-thread");

        ServerSocketChannel ssChannel = ServerSocketChannel.open();

        ssChannel.configureBlocking(false);

        Selector boss = Selector.open();

        ssChannel.register(boss, SelectionKey.OP_ACCEPT);

        ssChannel.bind(new InetSocketAddress(8080));

        Worker worker = new Worker("worker-1");

        while (true) {

            boss.select();

            Iterator<SelectionKey> it = boss.selectedKeys().iterator();

            while (it.hasNext()) {

                SelectionKey sk = it.next();
                it.remove();

                if (sk.isAcceptable()) {

                    SocketChannel socketChannel = ssChannel.accept();

                    // note: this must be configured as non-blocking; a selector only
                    //       works with non-blocking mode, otherwise an
                    //       IllegalBlockingModeException is thrown
                    socketChannel.configureBlocking(false);

                    // hand the socketChannel over to the worker
                    worker.register(socketChannel);


                }

            }


        }


    }

    static class Worker implements Runnable {

        private String name;

        private Thread thread;

        private Selector selector;

        private volatile boolean start = false;

        private ConcurrentLinkedQueue<Runnable> queue = new ConcurrentLinkedQueue();

        public Worker(String name) {
            this.name = name;
        }

        public void register(SocketChannel socketChannel) throws Exception {

            if (!start) {
                // initialization
                selector = Selector.open();
                thread = new Thread(this, name);
                thread.start();
                start = true;
            }

            // to solve the problem that a socketChannel cannot be registered while
            // selector.select() is blocking, a queue is introduced for inter-thread communication
            queue.add(() -> {
                try {
                    socketChannel.register(selector, SelectionKey.OP_READ);
                } catch (ClosedChannelException e) {
                    e.printStackTrace();
                }
            });

            // wake up the selector
            // (if the selector is currently inside a select operation, calling
            //  selector.wakeup wakes it up immediately; if the selector has not yet
            //  entered select when wakeup is called, the next select will not block
            //  and returns right away)
            selector.wakeup();

        }

        @Override
        public void run() {

            while (true) {

                try {

                    selector.select();

                    //-------------------------------
                    // note: this block must not be moved above the selector.select() call,
                    //       because there is no way to guarantee it would run after the
                    //       register method has added the task to the queue.
                    //       It has to run after select(): select blocks unless wakeup
                    //       ran first, and if wakeup ran first the queue is guaranteed
                    //       to contain a task; if wakeup has not run yet, select()
                    //       blocks until wakeup is called, by which time the task
                    //       has already been enqueued.
                    Runnable task = queue.poll();
                    if (task != null) {
                        task.run();
                    }
                    //-------------------------------

                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();

                    while (it.hasNext()) {

                        SelectionKey sk = it.next();
                        it.remove();

                        if (sk.isReadable()) {

                            ByteBuffer buffer = ByteBuffer.allocate(16);

                            SocketChannel socketChannel = (SocketChannel) sk.channel();

                            // read returns -1 when the client closes the connection;
                            // cancel the key and close the channel, otherwise select()
                            // keeps reporting the read event in a busy loop
                            int read = socketChannel.read(buffer);
                            if (read == -1) {
                                sk.cancel();
                                socketChannel.close();
                            } else {
                                buffer.flip();
                                debugAll(buffer);
                            }

                        }

                    }

                } catch (IOException e) {
                    e.printStackTrace();
                }

            }

        }
    }

}

Client
public class TestClient {
    public static void main(String[] args) throws IOException {

        SocketChannel socketChannel = SocketChannel.open();

        socketChannel.connect(new InetSocketAddress(8080));

        socketChannel.write(Charset.defaultCharset().encode("0123456789abcdef"));

        System.in.read();

    }
}
Testing

Start the server and several clients; everything works fine.

(The video also presents another approach, but it looked problematic (testing confirmed that it is), so its code is not included here. The queue-based version above works correctly.)

4.7 UDP

  • UDP is connectionless: the client sends data without caring whether the server is running
  • on the server side, the receive method stores the received data into a byte buffer, but if the datagram exceeds the buffer size, the excess data is silently discarded

First start the server

public class UdpServer {
    public static void main(String[] args) {
        try (DatagramChannel channel = DatagramChannel.open()) {
            channel.socket().bind(new InetSocketAddress(9999));
            System.out.println("waiting...");
            ByteBuffer buffer = ByteBuffer.allocate(32);
            channel.receive(buffer);
            buffer.flip();
            debug(buffer);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Output

waiting...

Run the client

public class UdpClient {
    public static void main(String[] args) {
        try (DatagramChannel channel = DatagramChannel.open()) {
            ByteBuffer buffer = StandardCharsets.UTF_8.encode("hello");
            InetSocketAddress address = new InetSocketAddress("localhost", 9999);
            channel.send(buffer, address);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

The server side then outputs

         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 68 65 6c 6c 6f                                  |hello           |
+--------+-------------------------------------------------+----------------+

5. NIO vs BIO

5.1 stream vs channel

  • a stream does not buffer data automatically; a channel makes use of the system-provided send and receive buffers (it is more low-level)
  • a stream only supports blocking APIs; a channel supports both blocking and non-blocking APIs, and a network channel can work with a selector for multiplexing
  • both are full duplex, i.e. reads and writes can happen at the same time

5.2 IO models

Synchronous blocking, synchronous non-blocking, synchronous multiplexing, asynchronous blocking (no such case), asynchronous non-blocking (anything else is pure nonsense! O(∩_∩)O)

  • synchronous: the thread fetches the result itself (one thread)
  • asynchronous: the thread does not fetch the result itself; another thread delivers the result to it (at least two threads)

When channel.read or stream.read is called, execution switches to the operating system kernel to perform the actual data read, and the read consists of two phases:

  • the waiting-for-data phase
  • the copying-data phase

(figure: assets/0033.png)

The 5 IO models

  • blocking IO (when read is called, the user thread blocks; it can do nothing else while reading, until the data is returned)

    (figure: assets/0039.png)

  • non-blocking IO (during the waiting-for-data phase, the user thread can keep polling whether data is readable, so this phase does not block; once data is available, the thread still blocks while the data is copied, and only then does the call return)

    (figure: assets/0035.png)

  • multiplexing (a selector is used to wait for data; when data arrives, a readable event notifies the user thread; the copy phase still blocks)

    (figure: assets/0038.png)

  • signal driven

  • asynchronous IO

    (figure: assets/0037.png)

  • blocking IO vs multiplexing (blocking IO: while doing one thing it cannot do another; for example during a read, accept can only be handled once the read completes, and if channel1 sends more data while accept is in progress, blocking IO cannot handle it right away but must wait until accept finishes before processing channel1's data. Multiplexing: select waits for events; when several events occur, select stops waiting and a while loop processes them all)

    (figure: assets/0034.png)

    (figure: assets/0036.png)

Reference

UNIX Network Programming, Volume I

5.3 Zero copy

The problem with traditional IO

Traditional IO writing a file out through a socket:

File f = new File("helloword/data.txt");
RandomAccessFile file = new RandomAccessFile(f, "r");

byte[] buf = new byte[(int)f.length()];
file.read(buf);

Socket socket = ...;
socket.getOutputStream().write(buf);

Internally the workflow looks like this:

(figure: assets/0024.png)

  1. java itself has no IO read/write capability, so after the read call the program must switch from user mode to kernel mode, invoking the operating system's (kernel's) read capability to load the data into the kernel buffer. During this time the user thread blocks; the operating system performs the file read with DMA (Direct Memory Access), so the CPU is not used either

     DMA can be thought of as a hardware unit that relieves the CPU of file IO work

  2. control switches from kernel mode back to user mode, and the data is copied from the kernel buffer into the user buffer (i.e. byte[] buf); the CPU participates in this copy, and DMA cannot be used

  3. the write call then copies the data from the user buffer (byte[] buf) into the socket buffer; the CPU participates in this copy

  4. next the data must be written to the network card; java again lacks this capability, so another switch from user mode to kernel mode occurs, the operating system's write capability is invoked, and DMA writes the socket buffer's data to the network card without using the CPU

As you can see, there are many intermediate steps: java's IO is not a physical-device-level read/write but a copying of buffers; the real low-level reads and writes are done by the operating system

  • switching between user mode and kernel mode happened 3 times, and these switches are expensive
  • the data was copied 4 times in total
NIO optimization

Via DirectByteBuffer

  • ByteBuffer.allocate(10) returns a HeapByteBuffer, which still uses java heap memory
  • ByteBuffer.allocateDirect(10) returns a DirectByteBuffer, which uses operating system memory
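A quick check of the concrete classes (the class name BufferKind is made up; the printed class names come from the JDK):

import java.nio.ByteBuffer;

public class BufferKind {
    public static void main(String[] args) {
        System.out.println(ByteBuffer.allocate(10).getClass());       // class java.nio.HeapByteBuffer
        System.out.println(ByteBuffer.allocateDirect(10).getClass()); // class java.nio.DirectByteBuffer
    }
}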

(figure: assets/0025.png)

Most steps are the same as before and are not repeated. The one difference: java can use a DirectByteBuffer to map off-heap memory into the jvm so it can be accessed directly

  • this memory is not affected by jvm garbage collection, so its address stays fixed, which helps IO reads and writes
  • the DirectByteBuffer object in java only holds a phantom reference to this memory; reclaiming it takes two steps
    • the DirectByteBuffer object is garbage collected, and its phantom reference is added to a reference queue
    • a dedicated thread watches the reference queue and frees the off-heap memory based on the phantom reference
  • one data copy is eliminated, but the number of user/kernel mode switches is unchanged

Further optimization (underneath this uses the sendFile method available since linux 2.1); in java it corresponds to the transferTo/transferFrom methods that copy data between two channels

(figure: assets/0026.png)

  1. after java calls transferTo, execution switches from the program's user mode to kernel mode, and DMA reads the data into the kernel buffer without using the CPU
  2. the data is transferred from the kernel buffer to the socket buffer; the CPU participates in this copy
  3. finally DMA writes the socket buffer's data to the network card without using the CPU

As you can see

  • only one switch between user mode and kernel mode occurred
  • the data was copied 3 times
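A minimal sketch of the java side (the file name data.txt and the address are assumptions): copying a file to a socket with FileChannel.transferTo, which can use sendFile under the hood.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

public class TransferToSketch {
    public static void main(String[] args) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile("data.txt", "r");
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 8080))) {
            FileChannel fc = file.getChannel();
            long position = 0;
            long size = fc.size();
            // transferTo may transfer fewer bytes than requested, so loop until done
            while (position < size) {
                position += fc.transferTo(position, size - position, socket);
            }
        }
    }
}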

Further optimization (linux 2.4)

(figure: assets/0027.png)

  1. after java calls transferTo, execution switches from the program's user mode to kernel mode, and DMA reads the data into the kernel buffer without using the CPU
  2. only some offset and length information is copied into the socket buffer, at almost no cost
  3. DMA writes the kernel buffer's data to the network card without using the CPU

The whole process involves only one switch between user mode and kernel mode, and the data is copied 2 times. So-called "zero copy" does not mean no copying at all, but that no duplicate data is copied into jvm memory. The advantages of zero copy are

  • fewer switches between user mode and kernel mode
  • the CPU is not used for the data copies, reducing CPU cache false sharing
  • zero copy is well suited to transferring small files

5.4 AIO

AIO addresses the blocking in the data-copy phase

  • synchronous means that during a read or write the thread must wait for the result, effectively sitting idle
  • asynchronous means that during a read or write the thread does not wait for the result; instead the operating system later delivers the result via a callback on another thread

The asynchronous model requires support from the underlying operating system (kernel)

  • Windows implements true asynchronous IO through IOCP
  • Linux introduced asynchronous IO in version 2.6, but underneath it still simulates asynchronous IO with multiplexing, so it has no performance advantage
File AIO

First let's look at AsynchronousFileChannel

@Slf4j
public class AioFileChannel {
    public static void main(String[] args) throws IOException {

        try (AsynchronousFileChannel channel = AsynchronousFileChannel.open(Paths.get("data.txt"), StandardOpenOption.READ)) {


            ByteBuffer buffer = ByteBuffer.allocate(16);

            log.debug("read begin...");

            // param 1: the ByteBuffer to read into
            // param 2: the position in the file to start reading from
            // param 3: the attachment
            // param 4: the callback object, a CompletionHandler
            channel.read(
                    buffer,
                    0,
                    buffer,
                    new CompletionHandler<Integer, ByteBuffer>() {

                        @Override // read succeeded
                        public void completed(Integer result, ByteBuffer attachment) {

                            log.debug("read completed...{}", result);

                            attachment.flip();

                            debugAll(attachment);
                        }

                        @Override // read failed
                        public void failed(Throwable exc, ByteBuffer attachment) {

                            exc.printStackTrace();
                        }
                    }
            );

            log.debug("read end...");
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.in.read();
    }
}

Output

22:00:09 [DEBUG] [main] c.i.n.c.AioFileChannel - read begin...
22:00:09 [DEBUG] [main] c.i.n.c.AioFileChannel - read end...
22:00:09 [DEBUG] [Thread-6] c.i.n.c.AioFileChannel - read completed...15
+--------+-------------------- all ------------------------+----------------+
position: [0], limit: [15]
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 31 32 33 34 35 36 37 38 39 30 61 62 63 0d 0a 00 |1234567890abc...|
+--------+-------------------------------------------------+----------------+

As you can see

  • the thread that responds to the successful file read is a different thread (Thread-6 in the output above)
  • the main thread was not blocked by the IO operation
Daemon threads

By default the threads used by file AIO are daemon threads, so System.in.read() is executed at the end to keep the jvm from exiting while only daemon threads remain
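If relying on System.in.read() feels fragile, one alternative sketch (not from the video; the pool size and file name are arbitrary) is to open the channel with your own non-daemon thread pool:

import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.EnumSet;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AioWithOwnPool {
    public static void main(String[] args) throws Exception {
        // threads from Executors.newFixedThreadPool are non-daemon by default,
        // so the jvm stays alive until the pool is shut down
        ExecutorService pool = Executors.newFixedThreadPool(1);
        AsynchronousFileChannel channel = AsynchronousFileChannel.open(
                Paths.get("data.txt"), EnumSet.of(StandardOpenOption.READ), pool);
        // ... issue reads with CompletionHandler callbacks as above ...
        channel.close();
        pool.shutdown();
    }
}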

Network AIO
public class AioServer {
    public static void main(String[] args) throws IOException {
        AsynchronousServerSocketChannel ssc = AsynchronousServerSocketChannel.open();
        ssc.bind(new InetSocketAddress(8080));
        ssc.accept(null, new AcceptHandler(ssc));
        System.in.read();
    }

    private static void closeChannel(AsynchronousSocketChannel sc) {
        try {
            System.out.printf("[%s] %s close\n", Thread.currentThread().getName(), sc.getRemoteAddress());
            sc.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private static class ReadHandler implements CompletionHandler<Integer, ByteBuffer> {
        private final AsynchronousSocketChannel sc;

        public ReadHandler(AsynchronousSocketChannel sc) {
            this.sc = sc;
        }

        @Override
        public void completed(Integer result, ByteBuffer attachment) {
            try {
                if (result == -1) {
                    closeChannel(sc);
                    return;
                }
                System.out.printf("[%s] %s read\n", Thread.currentThread().getName(), 
                                  sc.getRemoteAddress());
                attachment.flip();
                System.out.println(Charset.defaultCharset().decode(attachment));
                attachment.clear();
                // after handling one read, read must be called again to handle the next read event
                sc.read(attachment, attachment, this);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void failed(Throwable exc, ByteBuffer attachment) {
            closeChannel(sc);
            exc.printStackTrace();
        }
    }

    private static class WriteHandler implements CompletionHandler<Integer, ByteBuffer> {
        private final AsynchronousSocketChannel sc;

        private WriteHandler(AsynchronousSocketChannel sc) {
            this.sc = sc;
        }

        @Override
        public void completed(Integer result, ByteBuffer attachment) {
            // if the buffer passed as the attachment still has content,
            // write again (with this handler) to flush the remainder
            if (attachment.hasRemaining()) {
                sc.write(attachment, attachment, this);
            }
        }

        @Override
        public void failed(Throwable exc, ByteBuffer attachment) {
            exc.printStackTrace();
            closeChannel(sc);
        }
    }

    private static class AcceptHandler implements CompletionHandler<AsynchronousSocketChannel, Object> {
        private final AsynchronousServerSocketChannel ssc;

        public AcceptHandler(AsynchronousServerSocketChannel ssc) {
            this.ssc = ssc;
        }

        @Override
        public void completed(AsynchronousSocketChannel sc, Object attachment) {
            try {
                System.out.printf("[%s] %s connected\n", Thread.currentThread().getName(), sc.getRemoteAddress());
            } catch (IOException e) {
                e.printStackTrace();
            }
            ByteBuffer buffer = ByteBuffer.allocate(16);
            // read events are handled by ReadHandler
            sc.read(buffer, buffer, new ReadHandler(sc));
            // write events are handled by WriteHandler; the buffer being written is
            // passed as the attachment so completed() can flush any remaining bytes
            ByteBuffer writeBuffer = Charset.defaultCharset().encode("server hello!");
            sc.write(writeBuffer, writeBuffer, new WriteHandler(sc));
            // after handling one accept, accept must be called again to handle the next accept event
            ssc.accept(null, this);
        }

        @Override
        public void failed(Throwable exc, Object attachment) {
            exc.printStackTrace();
        }
    }
}