I've finally set up my own blog, so for the first post let me write up what I've learned about Netty recently~
- First look at Netty
Anyone who has used Dubbo should be familiar with Netty, since Dubbo is built on Netty 3 under the hood. So what exactly is Netty?
Baidu Baike puts it this way: "Netty is an open-source Java framework from JBoss. It provides an asynchronous, event-driven network application framework and tools for the rapid development of high-performance, high-reliability network servers and clients." The way I see it, Netty is a high-performance network communication framework that builds on and optimizes NIO; it supports old blocking I/O as well.
Let's start with the JDK's NIO. A plain NIO server is single-threaded non-blocking I/O, multiplexed underneath by the operating system's readiness selection (the Reactor pattern over select/epoll). That single-threaded nature means it can't make efficient use of the CPU. So let me try spinning up a thread per SelectionKey:
public void listenSelector() throws IOException {
while (true){
int n = this.selector.select();
Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
while (keys.hasNext()){
final SelectionKey key = keys.next();
keys.remove();
// handler(key);
new Thread(new Runnable() {
@Override
public void run() {
try {
handler(key);
} catch (IOException e) {
e.printStackTrace();
}
}
}).start();
}
}
}
public void handler(SelectionKey key) throws IOException {
if(key.isAcceptable()){
System.out.println("New client connection...");
ServerSocketChannel server = (ServerSocketChannel) key.channel();
SocketChannel channel = server.accept();
channel.configureBlocking(false);
channel.register(this.selector,SelectionKey.OP_READ);
}else if(key.isReadable()){
SocketChannel channel = (SocketChannel) key.channel();
ByteBuffer buffer = ByteBuffer.allocate(1024);
int len = channel.read(buffer);
if(len > 0){
String msg = new String(buffer.array(),"GBK").trim();
System.out.println("Received message: " + msg);
}else {
System.out.println("Client closed");
key.cancel();
}
}
}
Run this with a client and you'll quickly hit a NullPointerException. Why?
The reason is subtle. In listenSelector(), keys.remove() does take the key out of the selectedKeys set, but the OP_ACCEPT event itself has not been consumed yet, because the accept() now happens asynchronously in another thread. So the next selector.select() returns immediately with the same key, a second thread reaches SocketChannel channel = server.accept(), and since the first thread has already accepted that connection, accept() returns null. The following line, channel.configureBlocking(false), then blows up on the null channel.
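One way to make the thread-per-key idea safe (a sketch of my own, not what Netty does; `SafeDispatch` and `dispatch` are hypothetical names): clear the key's interest ops before handing it off, so select() cannot return the same unconsumed event to a second thread, then re-arm once the handler finishes.

```java
import java.nio.channels.SelectionKey;

class SafeDispatch {
    // Clear the key's interest ops before dispatching, so the selector
    // cannot return the same not-yet-consumed event to another thread.
    static void dispatch(final SelectionKey key, final Runnable handler) {
        final int ops = key.interestOps();
        key.interestOps(0);                  // stop selecting this event for now
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    handler.run();
                } finally {
                    key.interestOps(ops);    // re-arm once handling is done
                    key.selector().wakeup(); // let a blocked select() notice
                }
            }
        }).start();
    }
}
```

With this, the selector thread never sees the same pending event twice, at the cost of an interest-ops update (and a wakeup) per dispatch.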
So how do we get multi-threaded non-blocking I/O done right? Leaving AIO aside for now, let's see how Netty does it.
- Netty modules
I studied version 3.10.3. Its package layout looks like this:
bootstrap: the startup classes;
buffer: wraps and optimizes NIO's ByteBuffer; ChannelBuffer lives here;
channel: the connection channels between server and client;
container: glue for embedding Netty in other containers, Spring for example;
handler: handler classes, protocol encoders/decoders and the like;
logging: logging, of course;
util: utilities.
- Netty architecture
Here's Netty's architecture diagram, translated.
The core consists of: an Extensible Event Model; a Universal Communication API, corresponding to the channel package; and a Zero-Copy-Capable Rich Byte Buffer. (When data is transferred, protocol frames need to be combined and split. Netty merges multiple buffers into one logical buffer, via CompositeChannelBuffer in Netty 3, which Netty 4 calls CompositeByteBuf, avoiding copies between the individual buffers.)
On top of the core sit the Transport Services and the Protocol Support for various application-layer protocols.
- Thread model
Netty's efficiency comes from the master-slave multi-threaded Reactor model underneath.
It maintains two thread pools: the main Reactor (acceptor) pool, NioServerBoss in the source, and the sub Reactor I/O thread pool, NioWorker in the source.
Simply put: when a client makes a connection request, the main reactor accepts it and forwards it to a sub reactor. The sub reactors run in an independent thread pool; each worker thread has its own Selector and maintains its own task queues. Requests forwarded by the boss land on the task queue, the worker drains the queue, and read/write events get handed to the corresponding handlers.
In fact, looking at the run() method in AbstractNioSelector, it's easy to see that both the boss and the worker do the same four things:
1: reset the wakeup flag: wakenUp.set(false);
2: select on the Selector;
3: drain the task queue: processTaskQueue();
4: do their own processing: process(selector).
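The four steps above can be sketched as a stripped-down loop (`MiniNioLoop` is my own class; the actual Selector is replaced by comments so the control flow stands alone, and the real AbstractNioSelector additionally handles wakeup races, timeouts, and shutdown):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// A stripped-down sketch of the AbstractNioSelector loop body.
class MiniNioLoop {
    final AtomicBoolean wakenUp = new AtomicBoolean();
    final Queue<Runnable> taskQueue = new ConcurrentLinkedQueue<Runnable>();
    int processed;

    // Called from another thread (the boss, in Netty's case) to hand over work.
    void register(Runnable task) {
        taskQueue.add(task);
        if (wakenUp.compareAndSet(false, true)) {
            // in the real code: selector.wakeup();
        }
    }

    // One iteration of the for(;;) loop in AbstractNioSelector.run().
    void runOnce() {
        wakenUp.set(false);          // 1: reset the wakeup flag
        // 2: selector.select() would block here until events or wakeup()
        processTaskQueue();          // 3: drain tasks queued by the boss
        process();                   // 4: subclass-specific I/O handling
    }

    void processTaskQueue() {
        Runnable task;
        while ((task = taskQueue.poll()) != null) {
            task.run();
        }
    }

    void process() {
        processed++;                 // stand-in for handling selected keys
    }
}
```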
- Source code
OK, finally, the source code. Here's a minimal server:
/**
* Created by 蚂蚁的宝藏 on 2018/1/22.
*/
public class Server {
public static void main(String[] args){
ServerBootstrap bootstrap = new ServerBootstrap();
ExecutorService boss = Executors.newCachedThreadPool();
ExecutorService worker = Executors.newCachedThreadPool();
bootstrap.setFactory(new NioServerSocketChannelFactory(boss,worker));
bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
@Override
public ChannelPipeline getPipeline() throws Exception {
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("decoder",new StringDecoder());
pipeline.addLast("encoder",new StringEncoder());
pipeline.addLast("myHandler",new MyServerMessageHandler());
return pipeline;
}
});
bootstrap.bind(new InetSocketAddress(7777));
System.out.println("Server started");
}
}
Let's step into new NioServerSocketChannelFactory(boss, worker):
public NioServerSocketChannelFactory(
Executor bossExecutor, Executor workerExecutor) {
this(bossExecutor, workerExecutor, getMaxThreads(workerExecutor));
}
Now look at getMaxThreads(workerExecutor):
private static int getMaxThreads(Executor executor) {
if (executor instanceof ThreadPoolExecutor) {
final int maxThreads = ((ThreadPoolExecutor) executor).getMaximumPoolSize();
return Math.min(maxThreads, SelectorUtil.DEFAULT_IO_THREADS);
}
return SelectorUtil.DEFAULT_IO_THREADS;
}
static final int DEFAULT_IO_THREADS = Runtime.getRuntime().availableProcessors() * 2;
For my 4-core machine, DEFAULT_IO_THREADS is 8. A cached thread pool's maximum size is Integer.MAX_VALUE, certainly bigger than 8, so this method just caps the size of the worker pool: you don't get more than 8 I/O worker threads.
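To see the cap in isolation, here's the same logic extracted into a standalone sketch (`WorkerCountCap` is my own name; getMaxThreads itself is private in the factory):

```java
import java.util.concurrent.Executor;
import java.util.concurrent.ThreadPoolExecutor;

// Mirrors NioServerSocketChannelFactory.getMaxThreads: cap the worker count
// at DEFAULT_IO_THREADS (2 * CPU cores) regardless of the pool's own maximum.
class WorkerCountCap {
    static final int DEFAULT_IO_THREADS = Runtime.getRuntime().availableProcessors() * 2;

    static int maxThreads(Executor executor) {
        if (executor instanceof ThreadPoolExecutor) {
            int max = ((ThreadPoolExecutor) executor).getMaximumPoolSize();
            return Math.min(max, DEFAULT_IO_THREADS);
        }
        // Can't inspect the pool, so fall back to the default.
        return DEFAULT_IO_THREADS;
    }
}
```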
public NioServerSocketChannelFactory(
Executor bossExecutor, Executor workerExecutor,
int workerCount) {
this(bossExecutor, 1, workerExecutor, workerCount);
}
Moving on, you can see the boss pool's size defaults to 1.
public NioServerSocketChannelFactory(
Executor bossExecutor, int bossCount, Executor workerExecutor,
int workerCount) {
this(bossExecutor, bossCount, new NioWorkerPool(workerExecutor, workerCount));
}
Next, the worker pool gets initialized first:
public NioWorkerPool(Executor workerExecutor, int workerCount, ThreadNameDeterminer determiner) {
super(workerExecutor, workerCount, false);
this.determiner = determiner;
init();
}
protected void init() {
if (!initialized.compareAndSet(false, true)) {
throw new IllegalStateException("initialized already");
}
for (int i = 0; i < workers.length; i++) {
workers[i] = newWorker(workerExecutor);
}
waitForWorkerThreads();
}
What does init() do? It creates the 8 workers. So what does newWorker(workerExecutor) do?
Keep clicking through and you reach the top of the hierarchy, AbstractNioSelector:
AbstractNioSelector(Executor executor, ThreadNameDeterminer determiner) {
this.executor = executor;
openSelector(determiner);
}
What does openSelector() do?
private void openSelector(ThreadNameDeterminer determiner) {
try {
selector = SelectorUtil.open();
} catch (Throwable t) {
throw new ChannelException("Failed to create a selector.", t);
}
// Start the worker thread with the new Selector.
boolean success = false;
try {
DeadLockProofWorker.start(executor, newThreadRenamingRunnable(id, determiner));
success = true;
} finally {
if (!success) {
// Release the Selector if the execution fails.
try {
selector.close();
} catch (Throwable t) {
logger.warn("Failed to close a selector.", t);
}
selector = null;
// The method will return to the caller at this point.
}
}
assert selector != null && selector.isOpen();
}
See? Each worker has its own Selector. The important line is DeadLockProofWorker.start(executor, newThreadRenamingRunnable(id, determiner)):
public static void start(final Executor parent, final Runnable runnable) {
if (parent == null) {
throw new NullPointerException("parent");
}
if (runnable == null) {
throw new NullPointerException("runnable");
}
parent.execute(new Runnable() {
public void run() {
PARENT.set(parent);
try {
runnable.run();
} finally {
PARENT.remove();
}
}
});
}
So the worker pool starts 8 threads, each calling NioWorker's run() method. But NioWorker doesn't define run(), so we look up the hierarchy and find it in AbstractNioSelector. The method is long, so here are just the important parts:
public void run() {
Selector selector = this.selector;
for (;;) {
wakenUp.set(false);
...
if (wakenUp.get()) {
wakenupFromLoop = true;
selector.wakeup();
} else {
wakenupFromLoop = false;
}
processTaskQueue();
...
process(selector);
...
}
}
This method is the key: grab the selector, reset the flag, drain the task queue, do its own processing.
If wakenUp.get() stays false, the thread simply blocks in select() instead of moving on to the task queue and its own work.
So when does a worker thread wake up? It's the boss that wakes it, as we'll see in the boss code shortly.
In fact NioWorker and NioServerBoss are both subclasses of AbstractNioSelector, so both run this same loop. The difference lies in which tasks land in processTaskQueue() and in what process(selector) does. So, how does a worker drain its task queue?
private void processTaskQueue() {
for (;;) {
final Runnable task = taskQueue.poll();
if (task == null) {
break;
}
task.run();
try {
cleanUpCancelledKeys();
} catch (IOException e) {
// Ignore
}
}
}
Pull a task off the queue and run it; straightforward enough. For a worker, that task is NioWorker's own inner registration task; here's what its run() does:
public void run() {
SocketAddress localAddress = channel.getLocalAddress();
SocketAddress remoteAddress = channel.getRemoteAddress();
if (localAddress == null || remoteAddress == null) {
if (future != null) {
future.setFailure(new ClosedChannelException());
}
close(channel, succeededFuture(channel));
return;
}
try {
if (server) {
channel.channel.configureBlocking(false);
}
channel.channel.register(
selector, channel.getInternalInterestOps(), channel);
if (future != null) {
channel.setConnected();
future.setSuccess();
}
if (server || !((NioClientSocketChannel) channel).boundManually) {
fireChannelBound(channel, localAddress);
}
fireChannelConnected(channel, remoteAddress);
} catch (IOException e) {
if (future != null) {
future.setFailure(e);
}
close(channel, succeededFuture(channel));
if (!(e instanceof ClosedChannelException)) {
throw new ChannelException(
"Failed to register a socket to the selector.", e);
}
}
}
See channel.channel.configureBlocking(false) and channel.channel.register(selector, channel.getInternalInterestOps(), channel)? Familiar, right? That's exactly the plain NIO handling: configure the accepted channel as non-blocking and register it with this worker's selector. fireChannelBound(channel, localAddress) and fireChannelConnected(channel, remoteAddress) then fire the corresponding upstream events.
Now look at process(Selector selector):
protected void process(Selector selector) throws IOException {
Set<SelectionKey> selectedKeys = selector.selectedKeys();
if (selectedKeys.isEmpty()) {
return;
}
for (Iterator<SelectionKey> i = selectedKeys.iterator(); i.hasNext();) {
SelectionKey k = i.next();
i.remove();
try {
int readyOps = k.readyOps();
if ((readyOps & SelectionKey.OP_READ) != 0 || readyOps == 0) {
if (!read(k)) {
// Connection already closed - no need to handle write.
continue;
}
}
if ((readyOps & SelectionKey.OP_WRITE) != 0) {
writeFromSelectorLoop(k);
}
} catch (CancelledKeyException e) {
close(k);
}
if (cleanUpCancelledKeys()) {
break; // break the loop to avoid ConcurrentModificationException
}
}
}
Familiar again: iterate the selected keys and handle the read/write events.
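The readyOps conditions in process() are plain bitmask tests; a small sketch (`ReadyOps` is my own name) makes them easy to verify, including the readyOps == 0 case, which looks like a guard against select() occasionally returning a key with no ready operations:

```java
import java.nio.channels.SelectionKey;

// The same bitmask checks as in process(): a key is handled as readable when
// OP_READ is set, or when readyOps is 0 (defensive case); writable when
// OP_WRITE is set.
class ReadyOps {
    static boolean wantsRead(int readyOps) {
        return (readyOps & SelectionKey.OP_READ) != 0 || readyOps == 0;
    }

    static boolean wantsWrite(int readyOps) {
        return (readyOps & SelectionKey.OP_WRITE) != 0;
    }
}
```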
Good, next up is NioServerBoss.
Likewise, follow the super calls in its constructor to the top and you arrive at AbstractNioSelector again; this one class does a great deal of Netty's work.
AbstractNioSelector(Executor executor, ThreadNameDeterminer determiner) {
this.executor = executor;
openSelector(determiner);
}
Same deal: it holds a Selector and starts a thread to execute NioServerBoss's run(); there isn't one, so AbstractNioSelector's run() executes... OK, let's go straight to what the boss's task does:
public void run() {
boolean bound = false;
boolean registered = false;
try {
channel.socket.socket().bind(localAddress, channel.getConfig().getBacklog());
bound = true;
future.setSuccess();
fireChannelBound(channel, channel.getLocalAddress());
channel.socket.register(selector, SelectionKey.OP_ACCEPT, channel);
registered = true;
} catch (Throwable t) {
future.setFailure(t);
fireExceptionCaught(channel, t);
} finally {
if (!registered && bound) {
close(channel, future);
}
}
}
OK, nothing surprising here: bind the server socket and register it for OP_ACCEPT, the client-connection event.
Now for the boss's own processing:
for (Iterator<SelectionKey> i = selectedKeys.iterator(); i.hasNext();) {
SelectionKey k = i.next();
i.remove();
NioServerSocketChannel channel = (NioServerSocketChannel) k.attachment();
try {
// accept connections in a for loop until no new connection is ready
for (;;) {
SocketChannel acceptedSocket = channel.socket.accept();
if (acceptedSocket == null) {
break;
}
registerAcceptedChannel(channel, acceptedSocket, thread);
}
}
It polls the keys and passes each accepted client connection to registerAcceptedChannel(channel, acceptedSocket, thread); the name says it all: register the accepted channel. How does it work?
First it picks a worker via NioWorker worker = parent.workerPool.nextWorker(), an atomically incremented int taken modulo the worker array length:
public E nextWorker() {
return (E) workers[Math.abs(workerIndex.getAndIncrement() % workers.length)];
}
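A standalone sketch of the same selection logic (`RoundRobin` is my own name) shows the behavior, including that Math.abs keeps the index in range even after the counter overflows into negative territory:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Mirrors nextWorker(): a round-robin pick driven by an atomic counter.
// The modulo runs first, so Math.abs only ever sees small negative values
// once the counter wraps past Integer.MAX_VALUE.
class RoundRobin {
    final String[] workers;
    final AtomicInteger workerIndex = new AtomicInteger();

    RoundRobin(String[] workers) {
        this.workers = workers;
    }

    String nextWorker() {
        return workers[Math.abs(workerIndex.getAndIncrement() % workers.length)];
    }
}
```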
Then:
worker.register(new NioAcceptedSocketChannel(
        parent.getFactory(), pipeline, parent, sink,
        acceptedSocket, worker, currentThread), null);
It news up a NioAcceptedSocketChannel, which carries the pipeline, the server channel, the accepted socket, and the current thread, and hands it to the worker:
public void register(Channel channel, ChannelFuture future) {
Runnable task = createRegisterTask(channel, future);
registerTask(task);
}
protected final void registerTask(Runnable task) {
taskQueue.add(task);
Selector selector = this.selector;
if (selector != null) {
if (wakenUp.compareAndSet(false, true)) {
selector.wakeup();
}
} else {
if (taskQueue.remove(task)) {
throw new RejectedExecutionException("Worker has already been shutdown");
}
}
}
registerTask() queues the task, flips the wakeup flag to true, and wakes the worker's selector!!
So this registration is how the boss drives the worker's behavior.
With that, we've covered both the worker and the boss.
See? After a simple walk through the source, Netty's master-slave multi-threaded Reactor model is much clearer, ^_^.