tars Source Code Analysis, Part 2

Prerequisites

This part walks through the source of tars's network layer and protocol conversion. To follow it comfortably you need some Java networking background, mainly NIO and the Reactor pattern.

For NIO, this series of articles is a good reference:

http://www.iteye.com/magazines/132-Java-NIO

A quick recap of NIO: in BIO mode a socket read blocks, so the whole thread is stuck waiting on it. In NIO mode, an event mechanism (that is the surface view; underneath it relies on OS facilities, epoll on Linux and IOCP on Windows) lets a single thread manage n sockets. The thread keeps polling all of its sockets for events, and when a socket has data ready, a notification is raised and the thread takes that socket and processes it (a loose explanation, not strictly accurate).
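The one-thread-for-many-sockets idea can be demonstrated directly with the JDK's Selector API. A minimal sketch (the loopback address and ephemeral port are arbitrary demo choices, unrelated to tars):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NioSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // a client connects; the selector will report OP_ACCEPT on the server channel
        SocketChannel client = SocketChannel.open(server.socket().getLocalSocketAddress());

        selector.select(1000); // one thread waits on all registered channels at once
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isAcceptable()) {
                SocketChannel accepted = ((ServerSocketChannel) key.channel()).accept();
                System.out.println("accepted=" + (accepted != null));
                accepted.close();
            }
        }
        client.close();
        server.close();
        selector.close();
    }
}
```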
A few diagrams are central to the Reactor pattern; they represent its main variants:

A single thread acts as the Reactor, and the business logic also runs in that thread.

A single thread acts as the Reactor; business logic runs in a thread pool.

The Reactor is split in two: one thread handles connections, the other handles reads; business logic runs in a thread pool.

Once these three diagrams make sense, you have essentially mastered the Reactor pattern.
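The second variant (one reactor thread, business logic in a thread pool) is the one closest to what tars does. A schematic sketch, with a plain queue standing in for the Selector; all names here are hypothetical and for illustration only:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ReactorSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(2);
        LinkedBlockingQueue<String> events = new LinkedBlockingQueue<>();
        events.add("read:hello"); // stands in for a socket event reported by the Selector

        // the "reactor" thread: dispatches events, hands business logic to the pool
        Thread reactor = new Thread(() -> {
            String event = events.poll();
            if (event != null) {
                workers.submit(() -> System.out.println("processed " + event));
            }
        });
        reactor.start();
        reactor.join();

        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```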
The server-side network layer

In the previous part we glossed over the network layer entirely; from the upper layer's point of view it is just this piece of code:

ServantAdapter.bind
public void bind(AppService appService) throws IOException {
    this.skeleton = (ServantHomeSkeleton) appService;
    ServerConfig serverCfg = ConfigurationManager.getInstance().getServerConfig();

    boolean keepAlive = true;
    Codec codec = createCodec(serverCfg);
    Processor processor = createProcessor(serverCfg);
    Executor threadPool = ServantThreadPoolManager.get(servantAdapterConfig);

    Endpoint endpoint = this.servantAdapterConfig.getEndpoint();
    if (endpoint.type().equals("tcp")) {
        this.selectorManager = new SelectorManager(Utils.getSelectorPoolSize(), new ServantProtocolFactory(codec), threadPool, processor, keepAlive, "server-tcp-reactor", false);
        this.selectorManager.setTcpNoDelay(serverCfg.isTcpNoDelay());
        this.selectorManager.start();

        System.out.println("[SERVER] server starting at " + endpoint + "...");
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.socket().bind(new InetSocketAddress(endpoint.host(), endpoint.port()), 1024);
        serverChannel.configureBlocking(false);
        selectorManager.getReactor(0).registerChannel(serverChannel, SelectionKey.OP_ACCEPT);
        System.out.println("[SERVER] server started at " + endpoint + "...");
    } else if (endpoint.type().equals("udp")) {
        this.selectorManager = new SelectorManager(1, new ServantProtocolFactory(codec), threadPool, processor, false, "server-udp-reactor", true);
        this.selectorManager.start();

        System.out.println("[SERVER] server starting at " + endpoint + "...");
        DatagramChannel serverChannel = DatagramChannel.open();
        DatagramSocket socket = serverChannel.socket();
        socket.bind(new InetSocketAddress(endpoint.host(), endpoint.port()));
        serverChannel.configureBlocking(false);
        this.selectorManager.getReactor(0).registerChannel(serverChannel, SelectionKey.OP_READ);
        System.out.println("[SERVER] servant started at " + endpoint + "...");
    }
}
First, the SelectorManager initialization.

Getting the selector pool size: the usual rule of thumb is CPU count + 1 threads, but here there is special handling once the machine has more than 8 cores. My guess is that on machines with more than 8 cores, they did not want to occupy every core?
public class Utils {

    public static int getSelectorPoolSize() {
        int processors = Runtime.getRuntime().availableProcessors();
        return processors > 8 ? 4 + (processors * 5 / 8) : processors + 1;
    }
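Plugging a few core counts into this formula shows the switch above 8 cores. The helper below restates the formula with the core count as a parameter, purely for illustration:

```java
public class PoolSizeDemo {
    // mirrors Utils.getSelectorPoolSize, but parameterized on the core count
    static int poolSize(int processors) {
        return processors > 8 ? 4 + (processors * 5 / 8) : processors + 1;
    }

    public static void main(String[] args) {
        System.out.println(poolSize(4));  // 5  (processors + 1)
        System.out.println(poolSize(8));  // 9  (processors + 1; 8 is not > 8)
        System.out.println(poolSize(16)); // 14 (4 + 16 * 5 / 8)
        System.out.println(poolSize(32)); // 24 (4 + 32 * 5 / 8)
    }
}
```

So a 32-core box gets 24 reactor threads rather than 33, which is consistent with the guess above.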
Now the member variables of SelectorManager:
public final class SelectorManager {

    private final AtomicLong sets = new AtomicLong(0);

    // the reactors of the Reactor pattern described above
    private final Reactor[] reactorSet;

    // the codec factory; simply put, the codec itself, TarsCodec by default
    private ProtocolFactory protocolFactory = null;

    // the thread pool that runs the business logic
    private Executor threadPool = null;

    // the request <-> response handling logic, TarsServantProcessor by default
    private Processor processor = null;

    // the number of selectors, which is essentially the number of reactors
    private final int selectorPoolSize;

    private volatile boolean started;

    private boolean keepAlive;

    private boolean isTcpNoDelay = false;
The SelectorManager constructor mainly assigns fields and builds the Reactors:
public SelectorManager(int selectorPoolSize, ProtocolFactory protocolFactory, Executor threadPool,
                       Processor processor, boolean keepAlive, String reactorNamePrefix, boolean udpMode) throws IOException {
    if (udpMode) selectorPoolSize = 1;

    this.selectorPoolSize = selectorPoolSize;
    this.protocolFactory = protocolFactory;
    this.threadPool = threadPool;
    this.processor = processor;
    this.keepAlive = keepAlive;

    reactorSet = new Reactor[selectorPoolSize];
    for (int i = 0; i < reactorSet.length; i++) {
        reactorSet[i] = new Reactor(this, reactorNamePrefix + "-" + protocolFactory.getClass().getSimpleName().toLowerCase() + "-" + String.valueOf(i), udpMode);
    }
}
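The `sets` counter in the member list hints that reactors are handed out round-robin. A sketch of that idiom; the modulo selection and the names below are my assumption for illustration, not tars's verified code:

```java
import java.util.concurrent.atomic.AtomicLong;

public class RoundRobinDemo {
    private final AtomicLong sets = new AtomicLong(0);
    private final String[] reactorSet = {"reactor-0", "reactor-1", "reactor-2"};

    // assumption: pick the next reactor by incrementing the counter mod pool size
    String nextReactor() {
        return reactorSet[(int) (sets.incrementAndGet() % reactorSet.length)];
    }

    public static void main(String[] args) {
        RoundRobinDemo d = new RoundRobinDemo();
        System.out.println(d.nextReactor()); // reactor-1
        System.out.println(d.nextReactor()); // reactor-2
        System.out.println(d.nextReactor()); // reactor-0
    }
}
```

This spreads connections evenly across reactor threads without any locking beyond the atomic increment.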
tars wraps the entire network layer in a standalone net package.

Let's look at Reactor: note that Reactor is a Thread, and it holds one of NIO's key components, the Selector.
public final class Reactor extends Thread {

    protected volatile Selector selector = null;

    private volatile boolean crashed = false;

    private final Queue<Object[]> register = new LinkedBlockingQueue<Object[]>();

    private final Queue<Session> unregister = new LinkedBlockingQueue<Session>();

    private Acceptor acceptor = null;

    public Reactor(SelectorManager selectorManager, String name) throws IOException {
        this(selectorManager, name, false);
    }

    public Reactor(SelectorManager selectorManager, String name, boolean udpMode) throws IOException {
        super(name);
        if (udpMode) {
            this.acceptor = new UDPAcceptor(selectorManager);
        } else {
            this.acceptor = new TCPAcceptor(selectorManager);
        }
        this.selector = Selector.open();
    }
The Acceptor carries the logic for handling network events:
public abstract class Acceptor {

    protected SelectorManager selectorManager = null;

    public Acceptor(SelectorManager selectorManager) {
        this.selectorManager = selectorManager;
    }

    // handle OP_CONNECT events
    public abstract void handleConnectEvent(SelectionKey key) throws IOException;

    // handle OP_ACCEPT events
    public abstract void handleAcceptEvent(SelectionKey key) throws IOException;

    // handle OP_READ events
    public abstract void handleReadEvent(SelectionKey key) throws IOException;

    // handle OP_WRITE events
    public abstract void handleWriteEvent(SelectionKey key) throws IOException;
}
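As background for reading TCPAcceptor later, this is what a typical NIO accept handler does: accept the connection, switch it to non-blocking, and register it for reads. The implementation below only reuses the handleAcceptEvent signature from the abstract class above; it is an illustrative sketch, not tars's actual code:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class AcceptSketch {
    // hypothetical minimal accept handler: accept, go non-blocking, watch for reads
    static void handleAcceptEvent(SelectionKey key) throws IOException {
        ServerSocketChannel server = (ServerSocketChannel) key.channel();
        SocketChannel client = server.accept();
        client.configureBlocking(false);
        client.register(key.selector(), SelectionKey.OP_READ);
    }

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        SocketChannel peer = SocketChannel.open(server.socket().getLocalSocketAddress());
        selector.select(1000);
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isAcceptable()) handleAcceptEvent(key);
        }
        // the server key plus the newly registered client key
        System.out.println("registered keys=" + selector.keys().size());
        peer.close();
        server.close();
        selector.close();
    }
}
```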
We will look at TCPAcceptor's concrete handling logic shortly; for now, back to the bind code above:
this.selectorManager.start();
This starts the Reactor threads (Thread.start(), which ends up in each reactor's run method). The run loop is standard NIO handling; loosely, it keeps looping over the registered sockets waiting for events to be raised, and processes each event as it appears.
public synchronized void start() {
    if (this.started) {
        return;
    }
    this.started = true;

    for (Reactor reactor : this.reactorSet) {
        reac