I've been writing some things with Netty lately and got curious just how big the performance gap between Tomcat and Netty really is. After all, Tomcat also supports NIO and both are built on the Reactor pattern; since 8.0 Tomcat even supports asynchronous servlets (though I haven't managed to put them to use yet), so it shouldn't be far behind Netty. Time to go back and read the NIO part of the Tomcat source.
The previous post covered Tomcat's initialization and start code, so I'll pick up where that left off. Last time I looked at the JIO init method; this time I'll start directly from NIO's init — the initialization part is much the same for both.
AbstractEndpoint.start():
public final void start() throws Exception {
//if not yet bound, bind first
if (bindState == BindState.UNBOUND) {
bind();
bindState = BindState.BOUND_ON_START;
}
startInternal();
}
start() here is a template method: bind() and startInternal() are supplied by the concrete endpoint. Two things are worth noting in NIO's bind(). First, the acceptor thread count is forced to 1 — unlike Netty, which supports single-threaded, multi-threaded, and master-worker Reactor setups, Tomcat gives you exactly one acceptor thread. Second, Tomcat caps the number of connections with its own LimitLatch, an AQS-based latch that uses an atomic count to decide whether the maximum connection count has been reached — which makes me wonder why it doesn't just use the concurrent package's Semaphore.
public void bind() throws Exception {
serverSock = ServerSocketChannel.open();
socketProperties.setProperties(serverSock.socket());
InetSocketAddress addr = (getAddress() != null ? new InetSocketAddress(getAddress(), getPort()) : new InetSocketAddress(getPort()));
serverSock.socket().bind(addr, getBacklog());
serverSock.configureBlocking(true); //mimic APR behavior
serverSock.socket().setSoTimeout(getSocketProperties().getSoTimeout());
// Initialize thread count defaults for acceptor, poller
if (acceptorThreadCount == 0) {
// FIXME: Doesn't seem to work that well with multiple accept threads
acceptorThreadCount = 1;
}
if (pollerThreadCount <= 0) {
//minimum one poller thread
pollerThreadCount = 1;
}
stopLatch = new CountDownLatch(pollerThreadCount);
// Initialize SSL if needed
if (isSSLEnabled()) {
SSLUtil sslUtil = handler.getSslImplementation().getSSLUtil(this);
sslContext = sslUtil.createSSLContext();
sslContext.init(wrap(sslUtil.getKeyManagers()),
sslUtil.getTrustManagers(), null);
SSLSessionContext sessionContext =
sslContext.getServerSessionContext();
if (sessionContext != null) {
sslUtil.configureSessionContext(sessionContext);
}
// Determine which cipher suites and protocols to enable
enabledCiphers = sslUtil.getEnableableCiphers(sslContext);
enabledProtocols = sslUtil.getEnableableProtocols(sslContext);
}
if (oomParachute > 0) reclaimParachute(true);
selectorPool.open();
}
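The LimitLatch mentioned above is easier to understand with a stripped-down sketch. The class below (MiniLimitLatch is my name, not Tomcat's) keeps only the core idea: an AQS shared-mode latch where threads "count up" until a limit is hit and then park until someone counts down:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Minimal sketch of Tomcat's LimitLatch idea: an AQS-based shared latch.
// Acquirers increment an atomic count; once the count would exceed the
// limit, the increment is rolled back and the thread parks in the AQS
// queue until a release makes room.
class MiniLimitLatch {
    private final AtomicLong count = new AtomicLong(0);
    private volatile long limit;

    private final class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected int tryAcquireShared(int ignored) {
            long newCount = count.incrementAndGet();
            if (newCount > limit) {
                count.decrementAndGet(); // over the limit: roll back and wait
                return -1;
            }
            return 1;
        }

        @Override
        protected boolean tryReleaseShared(int ignored) {
            count.decrementAndGet();
            return true; // always wake a queued waiter so it can re-check
        }
    }

    private final Sync sync = new Sync();

    MiniLimitLatch(long limit) { this.limit = limit; }

    // Blocks until a slot below the limit is available.
    void countUpOrAwait() throws InterruptedException {
        sync.acquireSharedInterruptibly(1);
    }

    // Frees a slot and signals one queued waiter.
    long countDown() {
        sync.releaseShared(1);
        return count.get();
    }

    long getCount() { return count.get(); }
}
```

As for why not Semaphore: the usual explanation (my reading, not from this post's source) is that maxConnections is reconfigurable at runtime, and a hand-rolled AQS latch lets Tomcat change the limit and read the current count directly, which a fixed-permit Semaphore doesn't support cleanly.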
Moving on to NioEndpoint's startInternal() method: it first creates the worker thread pool — Tomcat uses a custom TaskQueue (a subclass of LinkedBlockingQueue that holds a reference back to its ThreadPoolExecutor) as the workers' waiting queue. It then sets the maximum connection count, again with LimitLatch. Finally, note the order: the poller threads are started before the acceptor threads.
public void startInternal() throws Exception {
if (!running) {
running = true;
paused = false;
// Create worker collection
if (getExecutor() == null) {
createExecutor();
}
//set the maximum number of connections
initializeConnectionLatch();
// start the consumer (poller) threads
pollers = new Poller[getPollerThreadCount()];
for (int i = 0; i < pollers.length; i++) {
pollers[i] = new Poller();
Thread pollerThread = new Thread(pollers[i], getName() + "-ClientPoller-" + i);
pollerThread.setPriority(threadPriority);
pollerThread.setDaemon(true);
pollerThread.start();
}
//start the producer (acceptor) threads that take incoming connections
startAcceptorThreads();
}
}
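The acceptor-to-poller handoff that startInternal() wires up can be sketched in plain java.nio (all names below are mine, and this is a toy, not Tomcat's implementation). The key move is the same as Tomcat's PollerEvent queue: the acceptor never touches the Selector directly — it queues the accepted channel and calls wakeup(), and the poller thread registers it on its next loop iteration, which avoids the classic deadlock of registering a channel while another thread is blocked in select():

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;

// Toy single-acceptor / single-poller reactor that echoes input back.
class MiniReactor {
    private final ServerSocketChannel server;
    private final Selector selector;
    private final ConcurrentLinkedQueue<SocketChannel> events = new ConcurrentLinkedQueue<>();
    private volatile boolean running = true;

    MiniReactor() throws IOException {
        server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(true); // acceptor blocks, like Tomcat's NIO acceptor
        selector = Selector.open();
    }

    int port() { return server.socket().getLocalPort(); }

    void start() {
        Thread acceptor = new Thread(() -> {
            while (running) {
                try {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    events.offer(ch);  // like Poller.register(): queue an event...
                    selector.wakeup(); // ...and kick the poller out of select()
                } catch (IOException e) { return; }
            }
        }, "mini-acceptor");
        Thread poller = new Thread(this::pollLoop, "mini-poller");
        acceptor.setDaemon(true);
        poller.setDaemon(true);
        poller.start();   // poller first, then acceptor — same order as startInternal()
        acceptor.start();
    }

    private void pollLoop() {
        ByteBuffer buf = ByteBuffer.allocate(1024);
        while (running) {
            try {
                SocketChannel ch;
                while ((ch = events.poll()) != null) {
                    ch.register(selector, SelectionKey.OP_READ); // done on the poller thread
                }
                selector.select(500);
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isReadable()) {
                        SocketChannel c = (SocketChannel) key.channel();
                        buf.clear();
                        int n = c.read(buf);
                        if (n < 0) { key.cancel(); c.close(); continue; }
                        buf.flip();
                        while (buf.hasRemaining()) c.write(buf); // echo back
                    }
                }
            } catch (IOException e) { return; }
        }
    }

    void stop() throws IOException {
        running = false;
        selector.wakeup();
        server.close();
    }
}
```

In the real NioEndpoint the poller does not process the socket itself — it hands the ready socket to the worker pool created by createExecutor() — but the registration-via-queue mechanism is the same.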
public void createExecutor() {
internalExecutor = true;
TaskQueue taskqueue = new TaskQueue();
//create a thread factory with the given name, daemon flag, and priority
TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-", daemon, getThreadPriority());
executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), 60, TimeUnit.SECONDS, taskqueue, tf);
taskqueue.setParent((ThreadPoolExecutor) executor);
}
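The interesting part of TaskQueue is why it exists at all. A stock JDK ThreadPoolExecutor only grows past corePoolSize after the queue is full, which is a poor fit for an I/O-bound server. The sketch below (MiniTaskQueue is my name; the logic approximates Tomcat's, which uses a submittedCount on its own ThreadPoolExecutor subclass, with getActiveCount() standing in here) shows the trick: offer() returns false while the pool can still grow, forcing the executor to spawn a thread instead of queueing:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;

// Queue that inverts the JDK ThreadPoolExecutor growth policy: prefer
// creating threads up to maxThreads over queueing the task.
class MiniTaskQueue extends LinkedBlockingQueue<Runnable> {
    private transient volatile ThreadPoolExecutor parent;

    void setParent(ThreadPoolExecutor tp) { this.parent = tp; }

    @Override
    public boolean offer(Runnable r) {
        if (parent == null) return super.offer(r);
        // All current workers look busy but the pool can still grow:
        // refuse to queue, so the executor adds a worker instead.
        if (parent.getPoolSize() < parent.getMaximumPoolSize()
                && parent.getActiveCount() >= parent.getPoolSize()) {
            return false;
        }
        return super.offer(r);
    }
}
```

One caveat this sketch skips: once the pool is at maxThreads and offer() has returned false, the JDK executor rejects the task, so real Tomcat catches RejectedExecutionException and force-inserts the task into the queue.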