Apache Thrift Official Site Notes, Part 3: Servers in Detail

3 Servers in Detail

Thrift's network service models can be classified as single-threaded, multi-threaded, or event-driven; viewed from another angle, they split into blocking and non-blocking service models:

  • Blocking service models: TSimpleServer, TThreadPoolServer.
  • Non-blocking service models: TNonblockingServer, THsHaServer, and TThreadedSelectorServer.

The class hierarchy of TServer:

3.1 TServer

(figure: TServer class hierarchy)

3.1.1 Basic Introduction

TServer defines a static inner class Args, which extends the abstract class AbstractServerArgs. AbstractServerArgs follows the builder pattern and supplies TServer with various factories:

Factory field            Factory type        Purpose
processorFactory         TProcessorFactory   Processing-layer factory; creates the concrete TProcessor
inputTransportFactory    TTransportFactory   Transport-layer input factory; creates the concrete input TTransport
outputTransportFactory   TTransportFactory   Transport-layer output factory; creates the concrete output TTransport
inputProtocolFactory     TProtocolFactory    Protocol-layer input factory; creates the concrete input TProtocol
outputProtocolFactory    TProtocolFactory    Protocol-layer output factory; creates the concrete output TProtocol
3.1.2 Core Code
  • Builder pattern overview: the builder pattern constructs a complex object step by step out of several simpler ones. It is a creational design pattern and offers a clean way to create objects.
  • What it solves: in software systems, the construction of a "complex object" is sometimes composed of sub-objects assembled by a certain algorithm; the parts change frequently as requirements evolve, while the algorithm that assembles them stays relatively stable.
  • Examples: 1. At KFC, the burger, cola, fries, and chicken wings are fixed items, but their combinations change constantly, producing the various "combo meals". 2. StringBuilder in Java.
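
In TServer's own API this pattern shows up as a fluent chain on Args. A quick sketch of how the builder defined below is typically used (serverTransport and processor are placeholders for real objects):

// Illustrative fragment: each setter returns `this`, so the whole
// configuration reads as one chain. serverTransport (a TServerTransport)
// and processor (a generated TProcessor) are assumed to exist already.
TServer.Args serverArgs = new TServer.Args(serverTransport)
    .processor(processor)
    .protocolFactory(new TBinaryProtocol.Factory());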
public static class Args extends AbstractServerArgs<Args> {
    public Args(TServerTransport transport) {
      super(transport);
    }
  }

  public static abstract class AbstractServerArgs<T extends AbstractServerArgs<T>> {
    final TServerTransport serverTransport;
    TProcessorFactory processorFactory;
    TTransportFactory inputTransportFactory = new TTransportFactory();
    TTransportFactory outputTransportFactory = new TTransportFactory();
    TProtocolFactory inputProtocolFactory = new TBinaryProtocol.Factory();
    TProtocolFactory outputProtocolFactory = new TBinaryProtocol.Factory();

    public AbstractServerArgs(TServerTransport transport) {
      serverTransport = transport;
    }

    public T processorFactory(TProcessorFactory factory) {
      this.processorFactory = factory;
      return (T) this;
    }

    public T processor(TProcessor processor) {
      this.processorFactory = new TProcessorFactory(processor);
      return (T) this;
    }

    public T transportFactory(TTransportFactory factory) {
      this.inputTransportFactory = factory;
      this.outputTransportFactory = factory;
      return (T) this;
    }

    public T inputTransportFactory(TTransportFactory factory) {
      this.inputTransportFactory = factory;
      return (T) this;
    }

    public T outputTransportFactory(TTransportFactory factory) {
      this.outputTransportFactory = factory;
      return (T) this;
    }

    public T protocolFactory(TProtocolFactory factory) {
      this.inputProtocolFactory = factory;
      this.outputProtocolFactory = factory;
      return (T) this;
    }

    public T inputProtocolFactory(TProtocolFactory factory) {
      this.inputProtocolFactory = factory;
      return (T) this;
    }

    public T outputProtocolFactory(TProtocolFactory factory) {
      this.outputProtocolFactory = factory;
      return (T) this;
    }
  }

  /**
   * Core processor factory
   */
  protected TProcessorFactory processorFactory_;

  /**
   * Server transport
   */
  protected TServerTransport serverTransport_;

  /**
   * Input transport factory
   */
  protected TTransportFactory inputTransportFactory_;

  /**
   * Output transport factory
   */
  protected TTransportFactory outputTransportFactory_;

  /**
   * Input protocol factory
   */
  protected TProtocolFactory inputProtocolFactory_;

  /**
   * Output protocol factory
   */
  protected TProtocolFactory outputProtocolFactory_;

  private volatile boolean isServing;

  protected TServerEventHandler eventHandler_;

  // Flag for stopping the server
  // Please see THRIFT-1795 for the usage of this flag
  protected volatile boolean stopped_ = false;

  protected TServer(AbstractServerArgs args) {
    processorFactory_ = args.processorFactory;
    serverTransport_ = args.serverTransport;
    inputTransportFactory_ = args.inputTransportFactory;
    outputTransportFactory_ = args.outputTransportFactory;
    inputProtocolFactory_ = args.inputProtocolFactory;
    outputProtocolFactory_ = args.outputProtocolFactory;
  }

Example:

package com.example.thrift.server;
import com.example.thrift.PersonServiceImpl;
import com.example.thrift.thrift.personservice;
import org.apache.thrift.TProcessorFactory;
import org.apache.thrift.protocol.TCompactProtocol;
import org.apache.thrift.server.THsHaServer;
import org.apache.thrift.server.TServer;
import org.apache.thrift.transport.TNonblockingServerSocket;
import org.apache.thrift.transport.TTransportException;
import org.apache.thrift.transport.layered.TFastFramedTransport;

/**
 * @Author shu
 * @Description Server side
 **/
public class ThriftServer {
    public static void main(String[] args) throws TTransportException {
        // Open the non-blocking server socket
        TNonblockingServerSocket serverSocket = new TNonblockingServerSocket(8803);
        // Configure the half-sync/half-async (HsHa) server
        THsHaServer.Args arg = new THsHaServer.Args(serverSocket).maxWorkerThreads(4).minWorkerThreads(2);
        // Processor wrapping the service implementation
        personservice.Processor<PersonServiceImpl> processor = new personservice.Processor<>(new PersonServiceImpl());
        // Protocol factory
        arg.protocolFactory(new TCompactProtocol.Factory());
        // Transport factory
        arg.transportFactory(new TFastFramedTransport.Factory());
        // Processor factory
        arg.processorFactory(new TProcessorFactory(processor));
        // Start serving
        TServer tServer = new THsHaServer(arg);
        System.out.println("Running HsHa Server");
        tServer.serve();
    }
}

  • TServer has three key methods: serve(), stop(), and isServing(). serve() starts the service, stop() shuts it down, and isServing() reports whether the server is currently serving.
  • Different TServer implementations start differently, so serve() is declared abstract. Not every server needs a graceful shutdown, so stop() is not abstract; its default implementation is empty.
/**
   * The run method fires up the server and gets things going.
   */
  public abstract void serve();

  /**
   * Stop the server. This is optional on a per-implementation basis. Not
   * all servers are required to be cleanly stoppable.
   */
  public void stop() {}

  public boolean isServing() {
    return isServing;
  }

  protected void setServing(boolean serving) {
    isServing = serving;
  }

  public void setServerEventHandler(TServerEventHandler eventHandler) {
    eventHandler_ = eventHandler;
  }

  public TServerEventHandler getEventHandler() {
    return eventHandler_;
  }

  public boolean getShouldStop() {
    return this.stopped_;
  }

  public void setShouldStop(boolean shouldStop) {
    this.stopped_ = shouldStop;
  }
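
A small sketch of these lifecycle methods in action; the shutdown hook here is illustrative, not something Thrift requires:

// Illustrative fragment: serverTransport and processor as in the example above.
TServer server = new TSimpleServer(new TServer.Args(serverTransport).processor(processor));
// stop() makes serve() return on implementations that support clean shutdown.
Runtime.getRuntime().addShutdownHook(new Thread(server::stop));
server.serve();                                          // blocks until stopped
System.out.println("serving = " + server.isServing());  // false once serve() returns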

As the code above shows, TServer is an abstract class; in practice what matters are its concrete implementations.

3.2 TSimpleServer

  • A simple single-threaded server intended mainly for testing; it is rarely used in practice because a single thread is too limiting.
  • TSimpleServer works with the simplest blocking IO. The implementation is concise and easy to follow, but it can only accept and process one socket connection at a time, so it is inefficient.
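
A minimal usage sketch, assuming the generated personservice and the PersonServiceImpl handler from the earlier example:

import org.apache.thrift.server.TServer;
import org.apache.thrift.server.TSimpleServer;
import org.apache.thrift.transport.TServerSocket;
import org.apache.thrift.transport.TTransportException;

public class SimpleServerDemo {
    public static void main(String[] args) throws TTransportException {
        // Blocking server socket; the port is illustrative
        TServerSocket serverSocket = new TServerSocket(9090);
        TServer server = new TSimpleServer(new TServer.Args(serverSocket)
                .processor(new personservice.Processor<>(new PersonServiceImpl())));
        server.serve(); // blocks, handling one connection at a time
    }
}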


Source code analysis:

 public void serve() {
    try {
      // Start listening on the server transport
      serverTransport_.listen();
    } catch (TTransportException ttx) {
      LOGGER.error("Error occurred during listening.", ttx);
      return;
    }

    // Run the preServe event
    if (eventHandler_ != null) {
      eventHandler_.preServe();
    }

    setServing(true);

    // Loop until stopped, blocking the current thread while waiting for clients
    while (!stopped_) {
      TTransport client = null;
      TProcessor processor = null;
      TTransport inputTransport = null;
      TTransport outputTransport = null;
      TProtocol inputProtocol = null;
      TProtocol outputProtocol = null;
      ServerContext connectionContext = null;
      try {
        // Accept a client connection (blocking accept, as in classic socket IO)
        client = serverTransport_.accept();
        if (client != null) {
          // Processor
          processor = processorFactory_.getProcessor(client);
          // Input transport
          inputTransport = inputTransportFactory_.getTransport(client);
          // Output transport
          outputTransport = outputTransportFactory_.getTransport(client);
          // Input protocol
          inputProtocol = inputProtocolFactory_.getProtocol(inputTransport);
          // Output protocol
          outputProtocol = outputProtocolFactory_.getProtocol(outputTransport);
          if (eventHandler_ != null) {
            // Create a per-connection context
            connectionContext = eventHandler_.createContext(inputProtocol, outputProtocol);
          }
          while (true) {
            if (eventHandler_ != null) {
              // Called when the client is about to invoke the processor
              eventHandler_.processContext(connectionContext, inputTransport, outputTransport);
            }
            // Process one request/response round trip
            processor.process(inputProtocol, outputProtocol);
          }
        }
      } catch (TTransportException ttx) {
        // Client died, just move on
        LOGGER.debug("Client Transportation Exception", ttx);
      } catch (TException tx) {
        if (!stopped_) {
          LOGGER.error("Thrift error occurred during processing of message.", tx);
        }
      } catch (Exception x) {
        if (!stopped_) {
          LOGGER.error("Error occurred during processing of message.", x);
        }
      }

      if (eventHandler_ != null) {
        eventHandler_.deleteContext(connectionContext, inputProtocol, outputProtocol);
      }

      if (inputTransport != null) {
        inputTransport.close();
      }

      if (outputTransport != null) {
        outputTransport.close();
      }

    }
    setServing(false);
  }
  • Calls the server transport's listen() method to start listening for connections.
  • Accepts client connections in a blocking manner; each incoming connection gets its own TTransport.
  • Creates the processor, input/output transport, and input/output protocol objects for the client.
  • Fires TServerEventHandler callbacks around request handling; the actual RPC dispatch happens in processor.process().
  • Performs the IO, and finally closes the transports.
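
The eventHandler_ hooks seen above can be supplied through setServerEventHandler(). A minimal logging sketch (the class and its messages are illustrative; the interface itself is Thrift's):

import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.server.ServerContext;
import org.apache.thrift.server.TServerEventHandler;
import org.apache.thrift.transport.TTransport;

public class LoggingServerEventHandler implements TServerEventHandler {
    @Override
    public void preServe() {
        System.out.println("server about to start serving");
    }

    @Override
    public ServerContext createContext(TProtocol input, TProtocol output) {
        System.out.println("client connected");
        return null; // no per-connection state in this sketch
    }

    @Override
    public void deleteContext(ServerContext ctx, TProtocol input, TProtocol output) {
        System.out.println("client disconnected");
    }

    @Override
    public void processContext(ServerContext ctx, TTransport in, TTransport out) {
        // called before each request is processed
    }
}
// usage: server.setServerEventHandler(new LoggingServerEventHandler());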

3.3 TNonblockingServer

  • A non-blocking TServer implementation. In terms of invocations, this provides fairness across all connected clients.
  • The server is essentially single-threaded. If you want a bounded thread pool plus invocation fairness, see THsHaServer.
  • To use this server you must use TFramedTransport at the outermost transport layer; otherwise the server cannot determine when a whole method call has been read off the wire. Clients must also use TFramedTransport.
  • All sockets are registered with a selector, and a single thread polls them all in a selector loop.
  • At the end of each selector iteration it handles every socket that is ready: reading from sockets with incoming data, writing to sockets with pending output, and, for the listening socket, accepting the new connection and registering it with the selector.
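
Because the outermost transport must be framed, a matching client has to wrap its socket in TFramedTransport as well. A minimal client sketch (host and port are illustrative; the layered package path matches recent libthrift releases, older ones have TFramedTransport directly under org.apache.thrift.transport):

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;
import org.apache.thrift.transport.layered.TFramedTransport;

public class NonblockingClientDemo {
    public static void main(String[] args) throws Exception {
        // Frame every message so the non-blocking server can find message boundaries
        TTransport transport = new TFramedTransport(new TSocket("localhost", 8803));
        transport.open();
        // The protocol must match the server's protocol factory
        personservice.Client client = new personservice.Client(new TBinaryProtocol(transport));
        // ... invoke RPC methods on client here ...
        transport.close();
    }
}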


Source code analysis:

 /**
   * Begin accepting connections and processing invocations.
   */
  public void serve() {
    // start any IO threads
    if (!startThreads()) {
      return;
    }

    // start listening for client connections, or exit
    if (!startListening()) {
      return;
    }

    setServing(true);

    // this will block while we serve, until shutdown
    waitForShutdown();

    setServing(false);

    // do a little cleanup: stop listening
    stopListening();
  }
  /**
   * Start the selector thread to deal with accepts and client messages.
   *
   * @return true if everything went ok, false if we couldn't start for some
   * reason.
   */
  @Override
  protected boolean startThreads() {
    // start the selector
    try {
      selectAcceptThread_ = new SelectAcceptThread((TNonblockingServerTransport)serverTransport_);
      // start the thread
      selectAcceptThread_.start();
      return true;
    } catch (IOException e) {
      LOGGER.error("Failed to start selector thread!", e);
      return false;
    }
  }


	/**
     * Set up the thread that will handle the non-blocking accepts, reads, and
     * writes.
     */
    public SelectAcceptThread(final TNonblockingServerTransport serverTransport)
    throws IOException {
      this.serverTransport = serverTransport;
      // register this transport's channel with the selector
      serverTransport.registerSelector(selector);
    }


	/**
     * The work loop. Handles both selecting (all IO operations) and managing
     * the selection preferences of all existing connections.
     */
    public void run() {
      try {
        if (eventHandler_ != null) {
          eventHandler_.preServe();
        }

        while (!stopped_) {
          // dispatch whatever IO events are ready
          select();
          processInterestChanges();
        }
        for (SelectionKey selectionKey : selector.keys()) {
          cleanupSelectionKey(selectionKey);
        }
      } catch (Throwable t) {
        LOGGER.error("run() exiting due to uncaught error", t);
      } finally {
        try {
          selector.close();
        } catch (IOException e) {
          LOGGER.error("Got an IOException while closing selector!", e);
        }
        stopped_ = true;
      }
    }

	/**
     * Select and process the ready IO events.
     */
    private void select() {
      try {
        // wait for io events.
        selector.select();

        // process the io events we received
        Iterator<SelectionKey> selectedKeys = selector.selectedKeys().iterator();
        while (!stopped_ && selectedKeys.hasNext()) {
          SelectionKey key = selectedKeys.next();
          selectedKeys.remove();

          // skip if not valid
          if (!key.isValid()) {
            cleanupSelectionKey(key);
            continue;
          }

          // if the key is marked Accept, then it has to be the server
          // transport.
          if (key.isAcceptable()) {
            // accept event
            handleAccept();
          } else if (key.isReadable()) {
            // deal with reads
            handleRead(key);
          } else if (key.isWritable()) {
            // deal with writes
            handleWrite(key);
          } else {
            LOGGER.warn("Unexpected state in select! " + key.interestOps());
          }
        }
      } catch (IOException e) {
        LOGGER.warn("Got an IOException while selecting!", e);
      }
    }


	/**
     * Accept a new connection and register it for reads.
     */
    private void handleAccept() throws IOException {
      SelectionKey clientKey = null;
      TNonblockingTransport client = null;
      try {
        // accept the connection
        client = serverTransport.accept();
        clientKey = client.registerSelector(selector, SelectionKey.OP_READ);

        // create a frame buffer for this connection and attach it to the key
        FrameBuffer frameBuffer = createFrameBuffer(client, clientKey, SelectAcceptThread.this);
        clientKey.attach(frameBuffer);
      } catch (TTransportException tte) {
        // something went wrong accepting.
        LOGGER.warn("Exception trying to accept!", tte);
        if (clientKey != null) cleanupSelectionKey(clientKey);
        if (client != null) client.close();
      }
    }
  } // SelectAcceptThread
}



  • Drawback: as the code above shows, a single thread drives the accept, read, and write events; if any one event takes a long time to finish, overall efficiency suffers.
  • Advantage: TNonblockingServer uses non-blocking IO to monitor and handle accept/read/write events, tracking the state changes of many sockets at the same time.

3.4 THsHaServer (commonly used)


  • An extension of TNonblockingServer into a half-sync/half-async server. Like TNonblockingServer, it relies on the use of TFramedTransport.
  • Given TNonblockingServer's drawback, THsHaServer extends it by introducing a thread pool that raises the concurrency of request processing. THsHaServer follows the half-sync/half-async (Half-Sync/Half-Async) pattern: the async half handles the IO events (accept/read/write), while the sync half runs the business handler's synchronous processing of each rpc.

Workflow:

  • The thread pool here is used mainly for the work triggered by read events: once a full frame has been read, the invocation is handed off to the pool.

Source code analysis:

  /**
   * Create the server with the specified Args configuration
   */
  public THsHaServer(Args args) {
    super(args);
    // create the invoker thread pool, unless one was supplied in args
    invoker = args.executorService == null ? createInvokerPool(args) : args.executorService;
    this.args = args;
  }


 /**
   * Helper to create an invoker pool
   */
  protected static ExecutorService createInvokerPool(Args options) {
    int minWorkerThreads = options.minWorkerThreads;
    int maxWorkerThreads = options.maxWorkerThreads;
    int stopTimeoutVal = options.stopTimeoutVal;
    TimeUnit stopTimeoutUnit = options.stopTimeoutUnit;

    LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();
    ExecutorService invoker = new ThreadPoolExecutor(minWorkerThreads,
      maxWorkerThreads, stopTimeoutVal, stopTimeoutUnit, queue);

    return invoker;
  }
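
Because the constructor prefers args.executorService when one is provided, the default pool (with its unbounded LinkedBlockingQueue) can be swapped out. A sketch, assuming the executorService(...) setter that recent libthrift versions expose on THsHaServer.Args:

// Illustrative fragment (java.util.concurrent imports assumed): a bounded
// queue makes saturation reject work instead of growing the queue forever.
ExecutorService invoker = new ThreadPoolExecutor(
        2, 4, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(100));
THsHaServer.Args hshaArgs = new THsHaServer.Args(serverSocket) // TNonblockingServerSocket
        .executorService(invoker);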
  • The remaining methods are the same as in TNonblockingServer; the difference lies in how read events are handled:
    /**
     * Do the work required to read from a readable client. If the frame is
     * fully read, then invoke the method call.
     */
    protected void handleRead(SelectionKey key) {
      FrameBuffer buffer = (FrameBuffer) key.attachment();
      if (!buffer.read()) {
        cleanupSelectionKey(key);
        return;
      }

      // if the buffer's frame read is complete, invoke the method.
      if (buffer.isFrameFullyRead()) {
        if (!requestInvoke(buffer)) {
          cleanupSelectionKey(key);
        }
      }
    }

  // The hook that subclasses override to decide how the invocation runs:
  protected abstract boolean requestInvoke(FrameBuffer frameBuffer);


  /**
   * We override the standard invoke method here to queue the invocation for
   * invoker service instead of immediately invoking. The thread pool takes care
   * of the rest.
   */
  @Override
  protected boolean requestInvoke(FrameBuffer frameBuffer) {
    try {
      Runnable invocation = getRunnable(frameBuffer);
      // hand the invocation off to the invoker pool created in the constructor
      invoker.execute(invocation);
      return true;
    } catch (RejectedExecutionException rx) {
      LOGGER.warn("ExecutorService rejected execution!", rx);
      return false;
    }
  }
  • Compared with TNonblockingServer, THsHaServer hands the business processing to a thread pool once the data has been read, so the main thread returns immediately to the next loop iteration and throughput improves considerably.
  • The main thread must still perform the listening, accepting, data reading, and data writing for every socket itself. When the number of concurrent requests is large and a lot of data is being sent, new connection requests on the listening socket may not be accepted in time.

3.5 TThreadPoolServer

  • TThreadPoolServer works with blocking sockets: the main thread blocks listening for new sockets, and the actual processing of each connection is delegated to a thread pool.
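
A minimal usage sketch (blocking TServerSocket; personservice and PersonServiceImpl as in the earlier example):

import org.apache.thrift.server.TServer;
import org.apache.thrift.server.TThreadPoolServer;
import org.apache.thrift.transport.TServerSocket;
import org.apache.thrift.transport.TTransportException;

public class ThreadPoolServerDemo {
    public static void main(String[] args) throws TTransportException {
        // Blocking server socket; the port is illustrative
        TServerSocket serverSocket = new TServerSocket(9090);
        TThreadPoolServer.Args poolArgs = new TThreadPoolServer.Args(serverSocket)
                .minWorkerThreads(4)   // lower bound of the worker pool
                .maxWorkerThreads(32)  // upper bound of the worker pool
                .processor(new personservice.Processor<>(new PersonServiceImpl()));
        TServer server = new TThreadPoolServer(poolArgs);
        server.serve();
    }
}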


Source code analysis:

/**
 * Entry point: start serving
 **/
public void serve() {
    if (!preServe()) {
      return;
    }

    execute();

    executorService_.shutdownNow();

    if (!waitForShutdown()) {
      LOGGER.error("Shutdown is not done after " + stopTimeoutVal + stopTimeoutUnit);
    }

    setServing(false);
  }


protected void execute() {
    while (!stopped_) {
      try {
        // accept a client connection (blocking)
        TTransport client = serverTransport_.accept();
        try {
          // hand the connection to the worker pool
          executorService_.execute(new WorkerProcess(client));
        } catch (RejectedExecutionException ree) {
          if (!stopped_) {
            LOGGER.warn("ThreadPool is saturated with incoming requests. Closing latest connection.");
          }
          client.close();
        }
      } catch (TTransportException ttx) {
        if (!stopped_) {
          LOGGER.warn("Transport error occurred during acceptance of message", ttx);
        }
      }
    }
  }
  • The listening thread (accept thread) is separated from the worker threads that handle client connections; data reading and business processing both happen in the pool, so new connections can still be accepted promptly under heavy concurrency.
  • The thread-pool model suits servers that can predict the maximum number of concurrent clients in advance: every request is then handled promptly by a pool thread, and performance is very good.
  • Processing capacity is limited by the pool: as the execute() code above shows, once concurrent connections exceed what the pool can take on, new connections are rejected and closed.

3.6 TThreadedSelectorServer (commonly used)

  • Before reading this section, it helps to understand the Reactor pattern: https://www.yuque.com/docs/share/e69e6452-6cd2-4205-94a0-787b8c19619f?# ("Reactor pattern")
  • TThreadedSelectorServer is an extension of THsHaServer that moves the read/write IO events out of the main thread into dedicated selector threads, while keeping a worker thread pool; it is again a half-sync/half-async service model.
  • It is a half-sync/half-async server with a dedicated pool of threads for non-blocking I/O.
  • Accepts are handled on a single thread, and a configurable number of non-blocking selector threads manage the reading and writing of client connections. A synchronous worker pool handles the processing of requests. When the bottleneck is the CPU spent on I/O in a single selector thread, this performs better than TNonblockingServer/THsHaServer on multi-core machines.
  • In addition, because accept handling is decoupled from reads/writes and invocation, the server is better able to apply back-pressure to new connections (e.g., by stopping accepts when busy). Like TNonblockingServer, it relies on the use of TFramedTransport.
  • TThreadedSelectorServer is the most advanced threaded service model Thrift currently provides. Internally it consists of the following parts (a usage sketch follows the list):
  1. One AcceptThread, dedicated to handling new connections on the listening socket.
  2. Several SelectorThread objects, dedicated to the network IO of established sockets; all network reads and writes are performed by these threads.
  3. One SelectorThreadLoadBalancer, which decides, whenever AcceptThread receives a new connection request, which SelectorThread that connection is assigned to.
  4. One ExecutorService worker pool: when a SelectorThread detects a call request on a socket, it reads the request data and hands it to the pool, which executes the call. This is where the synchronous handler callback for each rpc request runs.
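
Putting the pieces together, a minimal usage sketch (thread counts and port are illustrative):

import org.apache.thrift.protocol.TCompactProtocol;
import org.apache.thrift.server.TServer;
import org.apache.thrift.server.TThreadedSelectorServer;
import org.apache.thrift.transport.TNonblockingServerSocket;
import org.apache.thrift.transport.TTransportException;
import org.apache.thrift.transport.layered.TFastFramedTransport;

public class ThreadedSelectorServerDemo {
    public static void main(String[] args) throws TTransportException {
        TNonblockingServerSocket serverSocket = new TNonblockingServerSocket(8803);
        TThreadedSelectorServer.Args selectorArgs = new TThreadedSelectorServer.Args(serverSocket)
                .selectorThreads(2)  // SelectorThreads doing read/write IO
                .workerThreads(8)    // ExecutorService running the handler callbacks
                .processor(new personservice.Processor<>(new PersonServiceImpl()))
                .protocolFactory(new TCompactProtocol.Factory())
                .transportFactory(new TFastFramedTransport.Factory());
        TServer server = new TThreadedSelectorServer(selectorArgs);
        server.serve();
    }
}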


AcceptThread source code analysis

	/**
   * Start the accept and selector threads running to deal with clients.
   *
   * @return true if everything went ok, false if we couldn't start for some
   *         reason.
   */
  // As before, serving begins by calling this method
  @Override
  protected boolean startThreads() {
    try {
      for (int i = 0; i < args.selectorThreads; ++i) {
        selectorThreads.add(new SelectorThread(args.acceptQueueSizePerThread));
      }
      // this thread handles accept events exclusively
      acceptThread = new AcceptThread((TNonblockingServerTransport) serverTransport_,
        createSelectorThreadLoadBalancer(selectorThreads));
      for (SelectorThread thread : selectorThreads) {
        thread.start();
      }
      acceptThread.start();
      return true;
    } catch (IOException e) {
      LOGGER.error("Failed to start threads!", e);
      return false;
    }
  }

 /**
     * The work loop. Selects on the server transport and accepts. If there was
     * a server transport that had blocking accepts, and returned on blocking
     * client transports, that should be used instead
     */
    // AcceptThread's run() method
    public void run() {
      try {
        if (eventHandler_ != null) {
          eventHandler_.preServe();
        }
        while (!stopped_) {
          // poll for accept events
          select();
        }
      } catch (Throwable t) {
        LOGGER.error("run() on AcceptThread exiting due to uncaught error", t);
      } finally {
        try {
          acceptSelector.close();
        } catch (IOException e) {
          LOGGER.error("Got an IOException while closing accept selector!", e);
        }
        // This will wake up the selector threads
        TThreadedSelectorServer.this.stop();
      }
    }
 /**
     * Select and process IO events appropriately: If there are connections to
     * be accepted, accept them.
     */
    private void select() {
      try {
        // wait for connect events.
        acceptSelector.select();

        // process the io events we received
        Iterator<SelectionKey> selectedKeys = acceptSelector.selectedKeys().iterator();
        while (!stopped_ && selectedKeys.hasNext()) {
          SelectionKey key = selectedKeys.next();
          selectedKeys.remove();
          // skip if not valid
          if (!key.isValid()) {
            continue;
          }
          // unlike the earlier select(), this selector only handles accept events
          if (key.isAcceptable()) {
            // hand the new connection off to a SelectorThread
            handleAccept();
          } else {
            LOGGER.warn("Unexpected state in select! " + key.interestOps());
          }
        }
      } catch (IOException e) {
        LOGGER.warn("Got an IOException while selecting!", e);
      }
    }
 /**
     * Accept a new connection.
     */
    private void handleAccept() {
      final TNonblockingTransport client = doAccept();
      if (client != null) {
        // Pass this connection to a selector thread
        final SelectorThread targetThread = threadChooser.nextThread();

        if (args.acceptPolicy == Args.AcceptPolicy.FAST_ACCEPT || invoker == null) {
          doAddAccept(targetThread, client);
        } else {
          // FAIR_ACCEPT
          try {
            invoker.submit(new Runnable() {
              public void run() {
                // hand off to the chosen SelectorThread
                doAddAccept(targetThread, client);
              }
            });
          } catch (RejectedExecutionException rx) {
            LOGGER.warn("ExecutorService rejected accept registration!", rx);
            // close immediately
            client.close();
          }
        }
      }
    }

    private void doAddAccept(SelectorThread thread, TNonblockingTransport client) {
      if (!thread.addAcceptedConnection(client)) {
        client.close();
      }
    }
 
 /**
     * Hands off an accepted connection to be handled by this thread. This
     * method will block if the queue for new connections is at capacity.
     *
     * @param accepted
     *          The connection that has been accepted.
     * @return true if the connection has been successfully added.
     */
    public boolean addAcceptedConnection(TNonblockingTransport accepted) {
      try {
        // enqueue the connection; the selector thread will pick it up
        acceptedQueue.put(accepted);
      } catch (InterruptedException e) {
        LOGGER.warn("Interrupted while adding accepted connection!", e);
        return false;
      }
      selector.wakeup();
      return true;
    }
        
        
  • AcceptThread extends Thread and has three important fields: the non-blocking server transport (TNonblockingServerTransport), the NIO selector (acceptSelector), and the selector-thread load balancer (threadChooser).
  • Looking at AcceptThread's run() method: once the accept thread starts, it keeps calling select() in a loop.
  • In select(), the acceptSelector waits for IO events and checks each SelectionKey for an accept event. If it is one, handleAccept() receives the new connection; read/write events are not handled here but are left to the SelectorThreads.
  • In handleAccept(), doAccept() first obtains the connection's transport, then the selector-thread load balancer picks a SelectorThread to take over the subsequent IO.
  • Following doAddAccept() further, it unsurprisingly just calls the chosen SelectorThread's addAcceptedConnection() method, passing the non-blocking transport to the selector thread for further read/write handling.

SelectorThread source code analysis

	/**
     * The work loop. Handles selecting (read/write IO), dispatching, and
     * managing the selection preferences of all existing connections.
     */
	// SelectorThread's run() method
    public void run() {
      try {
        while (!stopped_) {
          select();
          processAcceptedConnections();
          processInterestChanges();
        }
        for (SelectionKey selectionKey : selector.keys()) {
          cleanupSelectionKey(selectionKey);
        }
      } catch (Throwable t) {
        LOGGER.error("run() on SelectorThread exiting due to uncaught error", t);
      } finally {
        try {
          selector.close();
        } catch (IOException e) {
          LOGGER.error("Got an IOException while closing selector!", e);
        }
        // This will wake up the accept thread and the other selector threads
        TThreadedSelectorServer.this.stop();
      }
    }
 /**
     * Select and process IO events appropriately: If there are existing
     * connections with data waiting to be read, read it, buffering until a
     * whole frame has been read. If there are any pending responses, buffer
     * them until their target client is available, and then send the data.
     */
    private void select() {
      try {

        doSelect();
          
        // process the io events we received
        Iterator<SelectionKey> selectedKeys = selector.selectedKeys().iterator();
        while (!stopped_ && selectedKeys.hasNext()) {
          SelectionKey key = selectedKeys.next();
          selectedKeys.remove();

          // skip if not valid
          if (!key.isValid()) {
            cleanupSelectionKey(key);
            continue;
          }

          if (key.isReadable()) {
            // deal with reads
            handleRead(key);
          } else if (key.isWritable()) {
            // deal with writes
            handleWrite(key);
          } else {
            LOGGER.warn("Unexpected state in select! " + key.interestOps());
          }
        }
      } catch (IOException e) {
        LOGGER.warn("Got an IOException while selecting!", e);
      }
    }
 /**
     * Do the work required to read from a readable client. If the frame is
     * fully read, then invoke the method call.
     */
    protected void handleRead(SelectionKey key) {
      FrameBuffer buffer = (FrameBuffer) key.attachment();
      if (!buffer.read()) {
        cleanupSelectionKey(key);
        return;
      }

      // if the buffer's frame read is complete, invoke the method.
      if (buffer.isFrameFullyRead()) {
        if (!requestInvoke(buffer)) {
          cleanupSelectionKey(key);
        }
      }
    }



 /**
   * We override the standard invoke method here to queue the invocation for
   * invoker service instead of immediately invoking. If there is no thread
   * pool, handle the invocation inline on this thread
   */
  @Override
  protected boolean requestInvoke(FrameBuffer frameBuffer) {
    Runnable invocation = getRunnable(frameBuffer);
    if (invoker != null) {
      try {
        invoker.execute(invocation);
        return true;
      } catch (RejectedExecutionException rx) {
        LOGGER.warn("ExecutorService rejected execution!", rx);
        return false;
      }
    } else {
      // Invoke on the caller's thread
      invocation.run();
      return true;
    }
  }
  • processAcceptedConnections() tries to take an accepted connection's transport from the SelectorThread's blocking queue acceptedQueue. On success it calls registerAccepted(); otherwise it moves on to the next loop iteration.

  • registerAccepted() registers the connection's underlying channel with the NIO selector and obtains a SelectionKey.

  • It then creates a FrameBuffer object and attaches it to that SelectionKey, to act as the intermediate read/write buffer for the data transfer.
