[Tomcat 9 Source Code Analysis] The NIO Connector Implementation

Please credit the source when reposting: http://blog.csdn.net/linxdcn/article/details/73527465


1 Overview

If you are not yet familiar with Tomcat's overall framework, components, and request flow, it is recommended to read the following three overview articles first:

[Tomcat 9 Source Code Analysis] Components and Framework Overview
[Tomcat 9 Source Code Analysis] Lifecycle, Startup, and Shutdown Overview
[Tomcat 9 Source Code Analysis] Request Processing Overview

Older versions of Tomcat offered three Connector run modes: bio, nio, and apr. In Tomcat 9 only nio and apr remain; the bio mode was removed because of its poor efficiency.

  • The nio mode is built on the Java NIO package. It provides non-blocking I/O and delivers much better concurrency than traditional blocking I/O (bio). It is the default mode in Tomcat 9.
  • The apr mode (Apache Portable Runtime) uses the support library of the Apache HTTP Server. Roughly speaking, Tomcat calls the Apache HTTP Server's core native libraries through JNI to handle file reads and network transfers, which greatly improves Tomcat's performance when serving static files.

Let's first revisit how the Connector component works; the sections below analyze each step.




2 Connector Initialization and Startup

public Connector(String protocol) {

    if ("HTTP/1.1".equals(protocol) || protocol == null) {
        if (aprConnector) {
            protocolHandlerClassName = "org.apache.coyote.http11.Http11AprProtocol";
        } else {
            // default class name
            protocolHandlerClassName = "org.apache.coyote.http11.Http11NioProtocol";
        }
    }

    ProtocolHandler p = null;
    try {
        // create the Http11NioProtocol instance via reflection
        Class<?> clazz = Class.forName(protocolHandlerClassName);
        p = (ProtocolHandler) clazz.getConstructor().newInstance();
    } catch (Exception e) {
        // ... omitted
    }
}
  1. The Connector constructor defaults to org.apache.coyote.http11.Http11NioProtocol
  2. A CoyoteAdapter is created and setAdapter() is called (code not shown)
  3. The Http11NioProtocol object is started
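The reflective instantiation above can be sketched in isolation. This is a standalone illustration, not Tomcat code: it instantiates a JDK class (java.util.ArrayList) in place of Http11NioProtocol so the snippet runs without Tomcat on the classpath.

```java
public class ReflectionSketch {
    public static void main(String[] args) throws Exception {
        // same pattern as the Connector constructor: resolve the class by name,
        // then invoke its no-arg constructor reflectively
        String protocolHandlerClassName = "java.util.ArrayList"; // stand-in for Http11NioProtocol
        Class<?> clazz = Class.forName(protocolHandlerClassName);
        Object p = clazz.getConstructor().newInstance();
        System.out.println(p.getClass().getName());
    }
}
```

Because the class name is just configuration (a string), Tomcat can swap protocol handlers (nio, apr) without compile-time dependencies on any particular one.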

3 Http11NioProtocol Startup

// create a new NioEndpoint
public Http11NioProtocol() {
    super(new NioEndpoint());

    // the superclass constructor roughly does:
    // this.endpoint = endpoint;
    // ConnectionHandler<S> cHandler = new ConnectionHandler<>(this);
    // setHandler(cHandler);
    // getEndpoint().setHandler(cHandler);
}

// start the NioEndpoint
public void start() throws Exception {
    endpoint.start();
}

This mainly does three things:

  1. Create a NioEndpoint, which accepts incoming requests
  2. Create a ConnectionHandler, which processes the requests
  3. Start the NioEndpoint

4 NioEndpoint

NioEndpoint contains three components:

  • Acceptor: listens for incoming connections
  • Poller: receives the accepted sockets and watches them for I/O readiness
  • SocketProcessor (Worker): processes the socket, essentially delegating to ConnectionHandler

public class NioEndpoint {

    public void startInternal() throws Exception {
        // the core of NioEndpoint startup: start the Acceptors
        startAcceptorThreads();
    }

    protected final void startAcceptorThreads() {
        int count = getAcceptorThreadCount();
        acceptors = new ArrayList<>(count);

        // create and start the Acceptor threads
        for (int i = 0; i < count; i++) {
            Acceptor<U> acceptor = new Acceptor<>(this);
            String threadName = getName() + "-Acceptor-" + i;
            acceptor.setThreadName(threadName);
            acceptors.add(acceptor);
            Thread t = new Thread(acceptor, threadName);
            t.setPriority(getAcceptorThreadPriority());
            t.setDaemon(getDaemon());
            t.start();
        }
    }
}

Starting the NioEndpoint mainly creates the Acceptor threads, which then listen for new connections.
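The thread-creation pattern in startAcceptorThreads() can be sketched without the rest of the endpoint. The runnable body, thread name, and thread count here are placeholders; only the naming/daemon/priority setup mirrors Tomcat:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class AcceptorThreadsSketch {
    public static void main(String[] args) throws Exception {
        int count = 2; // hypothetical acceptor thread count
        CountDownLatch started = new CountDownLatch(count);
        List<Thread> acceptors = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            // each "acceptor" here just signals the latch; the real one loops on accept()
            Thread t = new Thread(started::countDown, "http-nio-8080-Acceptor-" + i);
            t.setDaemon(true);               // as in startAcceptorThreads()
            t.setPriority(Thread.NORM_PRIORITY);
            acceptors.add(t);
            t.start();
        }
        started.await(); // wait until every acceptor thread has run
        System.out.println(acceptors.size());
    }
}
```

Daemon threads matter here: acceptor threads must not keep the JVM alive once the container shuts down.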

4.1 Acceptor

public class Acceptor<U> implements Runnable {

    @Override
    public void run() {

        // loop until a shutdown command is received
        while (endpoint.isRunning()) {

            try {
                U socket = null;
                try {
                    // 1 accept a new connection. Note: this is a blocking call,
                    // and multiple Acceptor threads can block here at the same time
                    socket = endpoint.serverSocketAccept();
                } catch (Exception ioe) {
                    // ... omitted
                }

                // 2 configure the socket by calling NioEndpoint's setSocketOptions(),
                // which hands the socket to one of the pollers in the poller pool
                if (endpoint.isRunning() && !endpoint.isPaused()) {
                    if (!endpoint.setSocketOptions(socket)) {
                        endpoint.closeSocket(socket);
                    }
                } else {
                    endpoint.destroySocket(socket);
                }
            } catch (Throwable t) {
                // ... omitted
            }
        }
    }
}

public class NioEndpoint {

    protected boolean setSocketOptions(SocketChannel socket) {
        // Process the connection
        try {
            // 3 switch the SocketChannel to non-blocking mode
            socket.configureBlocking(false);
            // 3 apply socket options such as send/receive buffer sizes, keep-alive, etc.
            Socket sock = socket.socket();
            socketProperties.setProperties(sock);

            // NioChannel wraps a SocketChannel; NioEndpoint maintains a pool of NioChannels
            NioChannel channel = nioChannels.pop();
            if (channel == null) {
                // if none is cached, create a new one
            } else {
                channel.setIOChannel(socket);
                channel.reset();
            }

            // 4 pick a poller from the poller pool and hand the NioChannel to it
            getPoller0().register(channel);
        } catch (Throwable t) {
            // ... omitted
        }
        return true;
    }
}

  1. Call NioEndpoint's serverSocketAccept() to accept a new connection. Note that this is a blocking call, and multiple Acceptor threads block here at the same time
  2. Configure the accepted socket by calling NioEndpoint's setSocketOptions()
  3. Switch the SocketChannel to non-blocking mode and apply socket options such as send/receive buffer sizes and keep-alive
  4. Wrap the SocketChannel in a NioChannel and hand it to one of the pollers by calling that poller's register() method
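The accept-then-configure handoff in steps 1-3 can be reproduced with plain JDK NIO. The loopback setup and the single option chosen here (TCP_NODELAY) are illustrative; Tomcat applies its full SocketProperties set:

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class AcceptSketch {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        SocketChannel client = SocketChannel.open(server.getLocalAddress());

        // 1 blocking accept, as in Acceptor.run()
        SocketChannel accepted = server.accept();
        // 3 switch to non-blocking before handing the channel to a Poller
        accepted.configureBlocking(false);
        // 3 apply socket options (one example; Tomcat also sets buffers, keep-alive, ...)
        accepted.socket().setTcpNoDelay(true);

        System.out.println(accepted.isBlocking());
        client.close();
        accepted.close();
        server.close();
    }
}
```

The key design point: accepting stays blocking (cheap, simple), while everything after the handoff is non-blocking and multiplexed by a Selector.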

4.2 Poller

public class Poller implements Runnable {

    public void register(final NioChannel socket) {

        // bind the socket to this poller
        socket.setPoller(this);

        // take an idle KeyAttachment from the cache
        KeyAttachment key = keyCache.poll();
        final KeyAttachment ka = key != null ? key : new KeyAttachment(socket);

        // take a PollerEvent from the poller's event cache and initialize it with the socket
        PollerEvent r = eventCache.pop();

        // declare interest in read events
        ka.interestOps(SelectionKey.OP_READ);
        if (r == null) r = new PollerEvent(socket, ka, OP_REGISTER);
        else r.reset(socket, ka, OP_REGISTER);

        // add it to the event queue
        addEvent(r);
    }
}
}

register() is straightforward: it associates the socket with this poller, registers read as the operation of interest, wraps everything in a PollerEvent, and adds the event to the poller's event queue.
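The producer/consumer handoff between register() and the poller thread boils down to a concurrent queue. A minimal sketch, with a plain Runnable standing in for PollerEvent:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class EventQueueSketch {
    public static void main(String[] args) {
        // Acceptor threads add events; the single Poller thread drains them
        ConcurrentLinkedQueue<Runnable> events = new ConcurrentLinkedQueue<>();

        // register(): enqueue an event (a real PollerEvent registers OP_READ on the selector)
        events.add(() -> System.out.println("register OP_READ"));

        // poller loop: drain and run every pending event, as Poller.events() does
        boolean hasEvents = false;
        Runnable r;
        while ((r = events.poll()) != null) {
            hasEvents = true;
            r.run();
        }
        System.out.println(hasEvents);
    }
}
```

The queue is needed because a Selector's registration methods must be driven from the poller's own thread; other threads only enqueue work for it.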

Poller itself implements Runnable and runs as a thread of its own:

public class Poller implements Runnable {

    // the Selector from Java NIO; each Poller owns its own Selector
    protected Selector selector;
    // the queue of pending events, populated via register()
    protected ConcurrentLinkedQueue<Runnable> events =
            new ConcurrentLinkedQueue<Runnable>();

    @Override
    public void run() {

        while (true) {

            boolean hasEvents = false;

            try {
                if (!close) {
                    hasEvents = events();
                    if (wakeupCounter.getAndSet(-1) > 0) {
                        // 1 wakeupCounter > 0 means events are pending, so poll without blocking
                        keyCount = selector.selectNow();
                    } else {
                        // 1 no events: block in select() until a key is ready or the timeout expires
                        keyCount = selector.select(selectorTimeout);
                    }
                    wakeupCounter.set(0);
                }

            } catch (Throwable x) {
                // ... omitted
            }

            // either we timed out or we woke up, process events first
            if (keyCount == 0) hasEvents = (hasEvents | events());

            Iterator<SelectionKey> iterator =
                keyCount > 0 ? selector.selectedKeys().iterator() : null;

            // 2 iterate over the registered keys that are now ready
            while (iterator != null && iterator.hasNext()) {
                SelectionKey sk = iterator.next();
                NioSocketWrapper attachment = (NioSocketWrapper) sk.attachment();

                if (attachment == null) {
                    iterator.remove();
                } else {
                    iterator.remove();
                    // 3 process the ready key
                    processKey(sk, attachment);
                }
            }
        }
    }

    protected void processKey(SelectionKey sk, NioSocketWrapper attachment) {
        if (sk.isReadable() || sk.isWritable()) {
            // unregister interest in the operations that have just fired
            unreg(sk, attachment, sk.readyOps());
            boolean closeSocket = false;

            // read event
            if (sk.isReadable()) {
                // 3 the actual channel-processing logic
                if (!processSocket(attachment, SocketEvent.OPEN_READ, true)) {
                    closeSocket = true;
                }
            }

            // write event
            if (!closeSocket && sk.isWritable()) {
                if (!processSocket(attachment, SocketEvent.OPEN_WRITE, true)) {
                    closeSocket = true;
                }
            }
        }
    }
}

public class NioEndpoint {

    public boolean processSocket(SocketWrapperBase<S> socketWrapper,
            SocketEvent event, boolean dispatch) {
        try {

            // 4 take an idle SocketProcessor from the cache and bind it to the socketWrapper
            SocketProcessorBase<S> sc = processorCache.pop();
            if (sc == null) {
                sc = createSocketProcessor(socketWrapper, event);
            } else {
                sc.reset(socketWrapper, event);
            }

            // 4 submit the SocketProcessor for execution
            Executor executor = getExecutor();
            if (dispatch && executor != null) {
                executor.execute(sc);
            } else {
                sc.run();
            }
        } catch (Throwable t) {
            // ... omitted
        }
        return true;
    }
}

  1. Call the selector's select() to wait for ready events
  2. Iterate over the registered keys that are now ready on the selector, and process each one
  3. For each ready key, process its channel by calling NioEndpoint's processSocket()
  4. Take an idle SocketProcessor from the cache, bind it to the socketWrapper, and submit it for execution
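The select-then-process loop in steps 1-3 can be demonstrated with a Pipe instead of real sockets. This is a sketch of the JDK Selector mechanics only, not Tomcat code:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorLoopSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        // register read interest, as the PollerEvent does for OP_REGISTER
        pipe.source().register(selector, SelectionKey.OP_READ);

        // make the source readable
        pipe.sink().write(ByteBuffer.wrap(new byte[] { 1 }));

        // 1 block until a key is ready (Tomcat uses selectorTimeout here)
        int keyCount = selector.select(1000);
        System.out.println(keyCount);

        // 2-3 iterate ready keys; clearing the fired interest corresponds to unreg()
        for (SelectionKey sk : selector.selectedKeys()) {
            sk.interestOps(sk.interestOps() & ~sk.readyOps());
        }
        selector.selectedKeys().clear();

        pipe.sink().close();
        pipe.source().close();
        selector.close();
    }
}
```

Clearing the fired interest before dispatching (what unreg() does) prevents the selector from reporting the same readiness again while a worker is still processing the channel.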

4.3 SocketProcessor

protected class SocketProcessor extends SocketProcessorBase<NioChannel> {
    @Override
    protected void doRun() {

        // ... omitted
        NioChannel socket = socketWrapper.getSocket();
        SelectionKey key = socket.getIOChannel().keyFor(socket.getPoller().getSelector());

        if (event == null) {
            // the core step: delegate to ConnectionHandler's process() method
            state = getHandler().process(socketWrapper, SocketEvent.OPEN_READ);
        } else {
            state = getHandler().process(socketWrapper, event);
        }
    }
}

SocketProcessor itself is fairly simple: it essentially delegates the socket to ConnectionHandler's process() method.


5 ConnectionHandler

protected static class ConnectionHandler<S> implements AbstractEndpoint.Handler<S> {
    @Override
    public SocketState process(SocketWrapperBase<S> wrapper, SocketEvent status) {

        S socket = wrapper.getSocket();

        // 1 get the Http11NioProcessor bound to this socket; it parses the HTTP protocol
        Processor processor = connections.get(socket);

        // 2 loop, parsing the socket's content until it has been fully read
        do {
            state = processor.process(wrapper, status);

            // ... a very large block of code omitted
        } while (state == SocketState.UPGRADING);
    }
}

  1. Get the Http11NioProcessor bound to this socket, which parses the HTTP protocol
  2. Loop, parsing the socket's content until it has been fully read
  3. The parsed data is then wrapped into request and response objects and handed to CoyoteAdapter
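Step 1's per-connection processor lookup is essentially a concurrent map keyed by the socket. A sketch under that assumption, with a hypothetical Processor interface standing in for Tomcat's:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConnectionMapSketch {
    // hypothetical stand-in for org.apache.coyote.Processor
    interface Processor {
        String process(String input);
    }

    public static void main(String[] args) {
        Map<Object, Processor> connections = new ConcurrentHashMap<>();
        Object socket = new Object(); // stands in for the socket key

        // reuse the processor bound to this connection; create one on first use
        Processor processor = connections.computeIfAbsent(socket, s -> in -> "parsed: " + in);
        System.out.println(processor.process("GET / HTTP/1.1"));

        // a later event on the same socket gets the same processor back
        System.out.println(processor == connections.get(socket));
    }
}
```

Keeping the processor bound to the connection is what lets HTTP parsing state (for keep-alive requests or partially read data) survive across multiple readiness events.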

6 Summary

The NIO connector is designed and built around the Reactor pattern. Reactor is event-driven and is particularly well suited to handling large volumes of I/O events. Reactor models can be divided into the following categories (for details, see the Netty series article on Netty's threading model):

  • Single-threaded model (all I/O operations are handled on a single NIO thread)
  • Multi-threaded model (a group of NIO threads handles the I/O operations)
  • Main/sub multi-threaded model (the server no longer accepts client connections on a single NIO thread, but on a dedicated NIO thread pool)

Tomcat uses the main/sub multi-threaded model, as shown in the figure below:
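A compressed sketch of the main/sub split: one role accepts connections (main reactor, Tomcat's Acceptor), a selector detects readiness (sub reactor, Tomcat's Poller), and a worker pool would then process the socket (SocketProcessor). Everything runs on loopback inside one method so the snippet is self-contained; it is an illustration of the pattern, not Tomcat's actual threading:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class MainSubReactorSketch {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        Selector subReactor = Selector.open();

        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        client.write(ByteBuffer.wrap("hi".getBytes()));

        // main reactor (Acceptor role): blocking accept
        SocketChannel accepted = server.accept();
        accepted.configureBlocking(false);
        // hand off to the sub reactor (Poller role): register read interest
        accepted.register(subReactor, SelectionKey.OP_READ);

        // sub reactor: wait for readiness; a worker (SocketProcessor role) would read here
        int ready = subReactor.select(1000);
        System.out.println(ready);

        client.close();
        accepted.close();
        server.close();
        subReactor.close();
    }
}
```

The separation means slow clients only cost a selector registration, not a blocked thread; worker threads are occupied only while there is actual data to process.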



