Linux I/O Models: NIO

The C10K Problem

Drawbacks of BIO

accept() blocks
the server spawns a thread per connection (the clone system call), and each of those threads then blocks on read

Because the APIs the kernel exposes to us are blocking, optimizing means switching to new APIs.
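For reference, here is a minimal sketch of the BIO pattern described above (an illustration, not the previous article's exact code; port 9090 is assumed): the main thread blocks in accept, and every connection costs one thread that then blocks in read.

import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketServerBioSketch {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(9090);
        for (;;) {
            Socket client = server.accept();           // blocks until a client connects
            new Thread(() -> {                         // one thread per connection -> one clone syscall each
                try (InputStream in = client.getInputStream()) {
                    byte[] buf = new byte[4096];
                    int n;
                    while ((n = in.read(buf)) != -1) { // blocks until data arrives
                        System.out.println(new String(buf, 0, n));
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}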

NIO

At the kernel level, the socket used by BIO can be switched to non-blocking via a flag.

Run man 2 socket to look at the options of the socket system call.
The following passage shows that a socket can be created non-blocking:

SOCK_NONBLOCK   Set the O_NONBLOCK file status flag on the new open file description.  Using this flag saves extra calls to
                fcntl(2) to achieve the same result.

At the Java level, NIO stands for New I/O; at the Linux kernel level, the counterpart is non-blocking I/O.
The Java NIO server code follows:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.List;

public class SocketServerNIO {

    public static void main(String[] args) throws IOException, InterruptedException {
        List<SocketChannel> clients = new ArrayList<>();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9090));
        server.configureBlocking(false); // key line: maps to SOCK_NONBLOCK in the kernel's socket call

        for (; ; ) {
            Thread.sleep(1000); // loop once per second, purely for demonstration
            // NIO never blocks here, so accept has to sit inside a loop; with no pending
            // connection it returns null (the kernel returns -1; Java wraps that in an object-oriented API).
            SocketChannel client = server.accept();

            // accept calls into the kernel. With no client pending, BIO would stay stuck here,
            // but NIO returns right away: -1 at the kernel level, null at the Java level.
            // When a client does connect, accept returns that client's fd (say, 5) as a client object.
            // NON-BLOCKING simply means the code keeps moving; we just branch on the outcome.
            if (client == null) {
                System.out.println("null.................");
            } else {
                // Key point: the server's listen socket only takes connection requests (after the
                // three-way handshake); accept hands back a second, per-connection socket used for reads and writes.
                client.configureBlocking(false);
                final int port = client.socket().getPort();
                System.out.println("client...port: " + port);
                clients.add(client);
            }

            ByteBuffer buffer = ByteBuffer.allocate(4096); // 4 KB, allocated on the JVM heap
//            ByteBuffer buffer = ByteBuffer.allocateDirect(4096); // 4 KB off the JVM heap (still inside the Java process), saving one data copy

            // Poll every connected client for readable data.
            for (SocketChannel channel : clients) { // serial scan; a real server would hand this to worker threads
                final int read = channel.read(buffer); // returns >0, 0, or -1, and never blocks
                if (read > 0) {
                    buffer.flip();
                    byte[] bytes = new byte[buffer.limit()];
                    buffer.get(bytes);

                    String res = new String(bytes);
                    System.out.println(channel.socket().getPort() + " : " + res); // channel, not the possibly-null client
                    buffer.clear();
                }
            }
        }
    }
}
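A side note on the two buffer options in the code above: allocate backs the buffer with a byte[] on the JVM heap, while allocateDirect places the bytes in native memory outside the JVM heap (still inside the Java process), sparing the JVM a copy into a temporary native buffer on each channel read or write. A minimal sketch of the visible difference:

import java.nio.ByteBuffer;

public class BufferKinds {
    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(4096);         // backed by a byte[] on the JVM heap
        ByteBuffer direct = ByteBuffer.allocateDirect(4096); // native memory outside the JVM heap

        System.out.println("heap:   direct=" + heap.isDirect() + ", hasArray=" + heap.hasArray());
        System.out.println("direct: direct=" + direct.isDirect() + ", hasArray=" + direct.hasArray());
        // prints: heap:   direct=false, hasArray=true
        //         direct: direct=true, hasArray=false
    }
}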

Run SocketServerNIO and watch the effect: something is printed every second, which shows nothing is blocking; at this point no client has connected yet. You can also switch the server back to blocking mode (server.configureBlocking(true)) and compare the output.

[root@optimize-node01 netio]# javac SocketServerNIO.java && strace -ff -o out java SocketServerNIO
null.................
null.................
null.................
null.................
null.................

Run the server in both blocking and non-blocking mode and inspect the out files strace produces; the difference between a blocking and a non-blocking system call is visible there.
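To push some data through, connect from another terminal and type a line (the host is wherever the server runs; 9090 is the port the code binds):

nc 127.0.0.1 9090

In the out.* files the contrast shows up at accept: in blocking mode the traced thread sits inside a single accept call until a client arrives, while in non-blocking mode accept returns immediately with -1 (EAGAIN) on every one-second pass.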

C10K Load-Test Client

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.List;

/**
 * @desc Start this client on several machines, all connecting to the same server.
 *       One client holds 55,000 socket four-tuples, so on the server side the
 *       four-tuple count = number of clients * 55,000.
 *       Along the way the server's kernel parameters will need adjusting.
 * @author itliu
 * @date 2020/7/7
 */
public class C10KClient {
    public static final InetSocketAddress REMOTE = new InetSocketAddress("10.0.0.5", 9090);

    public static void main(String[] args) {
        List<SocketChannel> clients = new ArrayList<>();

        // Bind one local port per connection: ports 10000..64999 give 55,000 four-tuples.
        for (int i = 10000; i < 65000; i++) {
            try {
                SocketChannel client = SocketChannel.open();

                client.bind(new InetSocketAddress("10.0.0.1", i));
                client.connect(REMOTE);
                clients.add(client);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        System.out.println("clients " + clients.size());

        try {
            System.in.read(); // keep the process (and its sockets) alive
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
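The javadoc above mentions adjusting the server's kernel parameters but the article does not list them; which knobs matter depends on the environment. As an illustration only (assumed, not taken from the article), limits that commonly need raising for tens of thousands of concurrent connections:

sysctl -w net.core.somaxconn=4096                 # cap on the listen backlog
sysctl -w net.ipv4.tcp_max_syn_backlog=4096       # queue of half-open (SYN) connections
sysctl -w net.netfilter.nf_conntrack_max=500000   # only if iptables connection tracking is loaded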

The following error appears during the load test.
With one C10K client started, the server errored out after accepting 4091 client connections:

client...port: 14079
client...port: 14080
client...port: 14081
client...port: 14082
client...port: 14083
client...port: 14084
client...port: 14085
client...port: 14086
client...port: 14087
client...port: 14088
client...port: 14089
client...port: 14090
client...port: 14091
Exception in thread "main" java.io.IOException: Too many open files
	at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:419)
	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:247)
	at org.itliu.sysio.nio.SocketServerNIO.main(SocketServerNIO.java:28)

Check how many files the current user may open: open files is 1024, yet we opened 4091 files above. That is because the root user's privileges are too broad for the limit to be enforced; run the server under a different (non-root) user to see it applied.

[root@optimize-node01 netio]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7827
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 7827
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Raise the open files limit to 500000:

[root@optimize-node01 netio]# ulimit -SHn 500000
[root@optimize-node01 netio]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7827
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 500000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 7827
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

/proc/sys/fs/file-max is the kernel-level (system-wide) limit.
ulimit is the user-level limit.
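To check both levels (the printed values vary by machine):

cat /proc/sys/fs/file-max   # kernel-wide cap on open file handles
ulimit -Sn                  # current user's soft open-files limit
ulimit -Hn                  # current user's hard open-files limit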

Start the C10K client again and load-test the server.
Three kinds of server are tested in total:

  • BIO: the server code from the previous article
  • NIO: the server code earlier in this article (scroll up)
  • Single-threaded multiplexing: the server code below
package org.itliu.sysio.multiplexing;

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Set;

/**
 * @desc Single-threaded multiplexing server: one selector drives both accept and read events.
 * @author itliu
 * @date 2020/7/7
 */
public class SocketMultiplexingSingleThreadV1 {
    private ServerSocketChannel server = null;
    private Selector selector = null;
    private int port = 9090;

    public void initServer() {
        try {
            server = ServerSocketChannel.open();
            server.configureBlocking(false);
            server.bind(new InetSocketAddress(port));

            selector = Selector.open();
            server.register(selector, SelectionKey.OP_ACCEPT); // register the server with the selector, interested only in accept events
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void start() {
        initServer();
        System.out.println("服务器启动了...");

        try {
//            for (;;) { // keep receiving accept/read events indefinitely
            final Set<SelectionKey> keys = selector.keys();
            System.out.println("the key size is : " + keys.size());

            while (selector.select(500) > 0) { // note: when select times out with nothing ready, this loop (and the demo) exits; the commented-out for(;;) would keep it alive
                Set<SelectionKey> selectionKeys = selector.selectedKeys(); // the set of keys whose channels have events pending
                Iterator<SelectionKey> iter = selectionKeys.iterator();

                while (iter.hasNext()) {
                    SelectionKey key = iter.next();
                    iter.remove();

                    if (key.isAcceptable()) {
                        acceptHandler(key);
                    } else if (key.isReadable()) {
                        readHandler(key);
                    }
                }
            }
//            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    }

    private void readHandler(SelectionKey key) {
        final SocketChannel client = (SocketChannel) key.channel();
        ByteBuffer buffer = (ByteBuffer) key.attachment();
        buffer.clear();
        int read = 0;
        try {
            for (; ; ) {
                read = client.read(buffer);
                if (read > 0) {
                    buffer.flip();
                    while (buffer.hasRemaining()) {
                        client.write(buffer);
                    }
                    buffer.clear();
                } else if (read == 0) {
                    break; // nothing more to read right now; 'continue' here would busy-spin the single thread
                } else {
                    client.close();
                    break;
                }
            }

        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private void acceptHandler(SelectionKey key) {
        try {
            ServerSocketChannel serverSocketChannel = (ServerSocketChannel) key.channel();
            final SocketChannel client = serverSocketChannel.accept();
            client.configureBlocking(false);

            ByteBuffer buffer = ByteBuffer.allocate(8192);
            client.register(selector, SelectionKey.OP_READ, buffer);
            System.out.println("-------------------------------------------");
            System.out.println("新客户端:" + client.getRemoteAddress());
            System.out.println("-------------------------------------------");
        } catch (IOException e) {
            e.printStackTrace();
        }

    }

    public static void main(String[] args) {
        SocketMultiplexingSingleThreadV1 service = new SocketMultiplexingSingleThreadV1();
        service.start();
    }
}

The multiplexer code here is only for testing; it just rounds out the progression BIO -> NIO -> multiplexer. A detailed walkthrough continues in the next article.
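Since readHandler echoes whatever it reads back to the client, a quick way to exercise it is nc <server-ip> 9090, or a minimal blocking Java client along these lines (localhost and port 9090 are assumptions for a local test):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class EchoClientSketch {
    public static void main(String[] args) throws Exception {
        try (SocketChannel client = SocketChannel.open(new InetSocketAddress("localhost", 9090))) {
            client.write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));

            ByteBuffer buffer = ByteBuffer.allocate(1024);
            client.read(buffer); // blocking read: waits for the server to echo the bytes back
            buffer.flip();
            System.out.println(StandardCharsets.UTF_8.decode(buffer)); // expected output: hello
        }
    }
}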

BIO: blocking; establishing connections is slow.
NIO: non-blocking; connections are established much faster and no thread is cloned per connection, but once many clients are connected, every pass of the loop issues a read system call per client whether or not there is data.
Multiplexer: connections are established quickly, and only channels that actually have events get touched.
