Using the NIO connector as an example, here is how Tomcat processes an HTTP request, all the way down to the servlet.
An HTTP request is, at bottom, carried over a socket connection, so the rough processing path is:
socket.accept() ---> wrap the socket stream ---> hand off to the Worker thread pool --->
processing inside the Container ---> finally handled by the Servlet ---> return the result
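The last hop of that chain is an ordinary servlet. As a point of reference, a minimal one might look like the sketch below; the class name and URL pattern are made up for illustration (javax.servlet imports, i.e. Tomcat 9-style):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// hypothetical servlet; the Wrapper container ends up invoking its service()/doGet()
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain;charset=UTF-8");
        resp.getWriter().write("hello from the end of the Connector -> Container chain");
    }
}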
Tomcat is split into a Connector and a Container; the two are linked through an adapter (Adapter).
The Connector contains the NioEndpoint (Acceptor, PollerEvent, Poller, SocketProcessor, ConnectionHandler), the Http11Processor, and the Adapter.
The responsibilities of each module are as follows:
Acceptor: calls socket.accept(), listens on the port and handles the initial interaction with the client
PollerEvent: an element of the event queue, wrapping the accepted socket
Poller: takes events off the queue and hands them to a SocketProcessor
SocketProcessor: hands the socket event over to a Worker thread for processing
NioEndpoint also keeps a cache of PollerEvent objects:
/**
 * Cache for poller events
 */
private SynchronizedStack<PollerEvent> eventCache;
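To make the division of labor concrete, here is a small self-contained sketch of the Acceptor/PollerEvent/Poller idea in plain Java NIO. This is not Tomcat code; every name in it is invented for illustration:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;

public class MiniEndpoint {
    // queue of freshly accepted channels, playing the role of the PollerEvent queue
    private final ConcurrentLinkedQueue<SocketChannel> events = new ConcurrentLinkedQueue<>();
    private final Selector selector;

    MiniEndpoint() throws IOException {
        selector = Selector.open();
    }

    // "Acceptor": blocks on accept() and hands each new channel to the poller via the queue
    void acceptorLoop(int port) throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(port));
        while (true) {
            SocketChannel ch = server.accept();   // blocking accept
            ch.configureBlocking(false);
            events.offer(ch);                     // roughly: queue a "PollerEvent"
            selector.wakeup();                    // let the poller register the new channel
        }
    }

    // "Poller": registers queued channels with the Selector and dispatches ready keys
    void pollerLoop() throws IOException {
        while (true) {
            SocketChannel ch;
            while ((ch = events.poll()) != null) {
                ch.register(selector, SelectionKey.OP_READ);
            }
            if (selector.select(1000) == 0) {
                continue;
            }
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                // Tomcat would wrap this in a SocketProcessor and submit it to the worker pool;
                // here we just read whatever is available
                ByteBuffer buf = ByteBuffer.allocate(1024);
                ((SocketChannel) key.channel()).read(buf);
            }
        }
    }
}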
Start with NioEndpoint:
start(), the startup method:
public final void start() throws Exception {
    if (bindState == BindState.UNBOUND) {
        bindWithCleanup();
        bindState = BindState.BOUND_ON_START;
    }
    // internal startup
    startInternal();
}
startInternal()
Initialization of the various components:
// cache of PollerEvent objects
if (socketProperties.getEventCache() != 0) {
    eventCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
            socketProperties.getEventCache());
}
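The eventCache above is just an object-recycling stack: pop a previously used PollerEvent when one is available, otherwise allocate, and push it back after use so steady-state traffic barely allocates. A rough sketch of the same pattern using a plain Deque (the MyEvent type here is hypothetical, not Tomcat's PollerEvent):

import java.util.ArrayDeque;
import java.util.Deque;

class EventCacheSketch {
    // simplified stand-in for SynchronizedStack<PollerEvent>
    private final Deque<MyEvent> cache = new ArrayDeque<>();
    private final int limit = 128;               // like socketProperties.getEventCache()

    synchronized MyEvent take(Object socket) {
        MyEvent ev = cache.poll();               // reuse a cached event if possible
        if (ev == null) {
            ev = new MyEvent();                  // otherwise allocate a new one
        }
        ev.reset(socket);
        return ev;
    }

    synchronized void give(MyEvent ev) {
        if (cache.size() < limit) {
            cache.push(ev);                      // recycle for the next connection
        }
    }

    static class MyEvent {
        Object socket;
        void reset(Object socket) { this.socket = socket; }
    }
}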
--------------------------------------------------------------------------------------------------------------------
// worker thread pool
// Create worker collection
if (getExecutor() == null) {
    createExecutor();
}
// cap the number of concurrent connections
initializeConnectionLatch();
// Start poller thread
poller = new Poller();
Thread pollerThread = new Thread(poller, getName() + "-ClientPoller");
pollerThread.setPriority(threadPriority);
pollerThread.setDaemon(true);
pollerThread.start();
// accept thread
startAcceptorThread();
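initializeConnectionLatch() sets up the counter behind maxConnections, so the Acceptor stops accepting new sockets once the limit is reached. Tomcat uses its own LimitLatch class for this; purely as a conceptual stand-in, the idea can be sketched with a Semaphore:

import java.util.concurrent.Semaphore;

// conceptual stand-in for the connection limit; not Tomcat's LimitLatch implementation
class ConnectionLimit {
    private final Semaphore permits;

    ConnectionLimit(int maxConnections) {
        this.permits = new Semaphore(maxConnections);
    }

    // the accept loop would call this before accept(); it blocks once the limit is hit
    void countUpOrAwait() throws InterruptedException {
        permits.acquire();
    }

    // called when a connection is closed
    void countDown() {
        permits.release();
    }
}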
Poller logic:
private Selector selector;
private final SynchronizedQueue<PollerEvent> events = new SynchronizedQueue<>();
The run() method:
Iterator<SelectionKey> iterator =
        keyCount > 0 ? selector.selectedKeys().iterator() : null;
// Walk through the collection of ready keys and dispatch
// any active event.
while (iterator != null && iterator.hasNext()) {
    SelectionKey sk = iterator.next();
    NioSocketWrapper socketWrapper = (NioSocketWrapper) sk.attachment();
    // Attachment may be null if another thread has called
    // cancelledKey()
    if (socketWrapper == null) {
        iterator.remove();
    } else {
        iterator.remove();
        // process the socketWrapper
        processKey(sk, socketWrapper);
    }
}
public boolean processSocket(SocketWrapperBase<S> socketWrapper,
        SocketEvent event, boolean dispatch) {
    try {
        ....
        SocketProcessorBase<S> sc = null;
        if (processorCache != null) {
            sc = processorCache.pop();
        }
        // create a SocketProcessor (or reset a cached one)
        if (sc == null) {
            sc = createSocketProcessor(socketWrapper, event);
        } else {
            sc.reset(socketWrapper, event);
        }
        Executor executor = getExecutor();
        if (dispatch && executor != null) {
            // hand off to the worker thread pool
            executor.execute(sc);
        }
createSocketProcessor() instantiates a SocketProcessor:
@Override
protected SocketProcessorBase<NioChannel> createSocketProcessor(
        SocketWrapperBase<NioChannel> socketWrapper, SocketEvent event) {
    return new SocketProcessor(socketWrapper, event);
}
SocketProcessor's doRun():
// handled by the ConnectionHandler
state = getHandler().process(socketWrapper, event);
ConnectionHandler's process() method:
SocketState state = SocketState.CLOSED;
do {
    state = processor.process(wrapper, status);
The processor here is an AbstractProcessorLight; its process() method:
if (state == SocketState.OPEN) {
    // There may be pipe-lined data to read. If the data isn't
    // processed now, execution will exit this loop and call
    // release() which will recycle the processor (and input
    // buffer) deleting any pipe-lined data. To avoid this,
    // process it now.
    state = service(socketWrapper);
}
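The "pipe-lined data" in that comment is a client sending a second request on the same connection before reading the first response. Assuming a server is listening on localhost:8080, the effect is easy to reproduce by writing two requests back to back on one socket:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class PipelineDemo {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("localhost", 8080)) {
            String req = "GET / HTTP/1.1\r\nHost: localhost\r\n\r\n";
            OutputStream out = s.getOutputStream();
            // two requests written in one go; the second one is the pipe-lined data
            out.write((req + req).getBytes(StandardCharsets.US_ASCII));
            out.flush();
            InputStream in = s.getInputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);   // both responses come back on the same connection
            }
        }
    }
}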
Http11Processor's service():
getAdapter().service(request, response);
For async dispatches, the processor instead calls the adapter's asyncDispatch():
rp.setStage(org.apache.coyote.Constants.STAGE_SERVICE);
if (!getAdapter().asyncDispatch(request, response, status)) {
    setErrorState(ErrorState.CLOSE_NOW, null);
}
getAdapter() returns the CoyoteAdapter, which adapts between the Connector and the Container:
connector.getService().getContainer().getPipeline().getFirst().invoke(request, response);
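Conceptually, CoyoteAdapter.service() wraps the low-level coyote request/response into the container-level org.apache.catalina.connector.Request/Response pair and then drives the pipeline shown above. A heavily simplified sketch of that shape (error handling, URI mapping, async support and recycling all omitted; not the real CoyoteAdapter code):

// rough shape only, for orientation
public void service(org.apache.coyote.Request req, org.apache.coyote.Response res)
        throws Exception {
    // wrap the coyote objects into the container-level ones
    Request request = connector.createRequest();
    request.setCoyoteRequest(req);
    Response response = connector.createResponse();
    response.setCoyoteResponse(res);
    request.setResponse(response);
    response.setRequest(request);

    // hand the pair to the first valve of the Engine's pipeline
    connector.getService().getContainer().getPipeline().getFirst().invoke(request, response);
}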
Inside the Container the request flows through Engine --> Host --> Context --> Wrapper --> Servlet, and is finally handled by the Servlet.
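Each of those containers (Engine, Host, Context, Wrapper) owns a pipeline of valves, and getPipeline().getFirst().invoke(...) simply walks that chain until the Wrapper's basic valve calls the servlet. As a sketch, a minimal custom valve (Tomcat 9-style imports assumed; it would be registered with a <Valve className="..."/> element in server.xml or context.xml) could look like:

import java.io.IOException;
import javax.servlet.ServletException;
import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

// hypothetical valve that times each request as it passes through the pipeline
public class TimingValve extends ValveBase {
    @Override
    public void invoke(Request request, Response response) throws IOException, ServletException {
        long start = System.nanoTime();
        // pass the request down the chain; the Wrapper's basic valve eventually calls the servlet
        getNext().invoke(request, response);
        long micros = (System.nanoTime() - start) / 1000;
        System.out.println(request.getRequestURI() + " took " + micros + " us"); // a real valve would use the container's logger
    }
}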
Tomcat's worker thread pool (created in AbstractEndpoint) defaults to a minimum of 10 spare threads and a maximum of 200:
public void createExecutor() {
    internalExecutor = true;
    TaskQueue taskqueue = new TaskQueue();
    TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-", daemon, getThreadPriority());
    executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(),
            60, TimeUnit.SECONDS, taskqueue, tf);
    taskqueue.setParent((ThreadPoolExecutor) executor);
}
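Note that this is Tomcat's own org.apache.tomcat.util.threads.ThreadPoolExecutor paired with TaskQueue, not the stock JDK pool: a plain JDK executor only grows beyond its core size once the queue is full, while Tomcat's TaskQueue makes the pool keep creating threads up to maxThreads before it lets tasks queue up. A small JDK-only demo of the difference, using the same default numbers (10 core / 200 max / 60s idle):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolGrowthDemo {
    public static void main(String[] args) throws InterruptedException {
        // same numbers as Tomcat's defaults, but with the stock JDK queuing rule
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 200, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        for (int i = 0; i < 50; i++) {
            pool.execute(() -> {
                try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
            });
        }
        Thread.sleep(500);
        // prints 10: the JDK pool queues work instead of growing past corePoolSize;
        // Tomcat's pool would keep adding threads (up to maxThreads = 200) first
        System.out.println("threads in pool: " + pool.getPoolSize());
        pool.shutdownNow();
    }
}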