From Thrift to I/O Multiplexing

Thrift

What is Thrift?

Thrift is a lightweight, cross-language RPC framework supporting C++, Java, Python, PHP, Ruby, and more. Its code-generation engine turns a Thrift IDL file into RPC server/client template code for each supported language, eliminating the boilerplate of interface encoding/decoding, message transport, and server threading models: the server side only needs to implement the interface, and the client invokes the remote service through the generated service stub.

Thrift Architecture

From bottom to top, the Thrift stack consists of the transport layer, the protocol layer, the processor layer, and the server layer.

  • Transport layer: reads and writes bytes on the network and defines the network transport protocol.
  • Protocol layer: defines the data serialization format and handles serialization and deserialization.
  • Processor layer: generated from the IDL; it wraps the underlying transport and serialization and delegates to the user-implemented handler.
  • Server layer: provides the network I/O service model.
[Figure: the Thrift stack, from bottom to top: transport, protocol, processor, server]
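
The layering is visible directly in client code: a protocol object wraps a transport, and the generated service stub wraps the protocol. Below is a minimal sketch of that stacking; HelloService is the IDL-generated stub used throughout this article, and the sketch assumes a matching server is listening on port 9090.

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class LayerStackDemo {
    public static void main(String[] args) throws Exception {
        // Transport layer: where bytes are read and written (blocking TCP here)
        TTransport transport = new TSocket("127.0.0.1", 9090);
        transport.open();
        // Protocol layer: how data is serialized, stacked on top of the transport
        TProtocol protocol = new TBinaryProtocol(transport);
        // Processor/service layers sit behind the generated stub,
        // which reads and writes through the protocol
        HelloService.Client client = new HelloService.Client(protocol);
        System.out.println(client.sayHello("world"));
        transport.close();
    }
}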

Which protocols does Thrift offer?

Thrift lets the user choose the protocol used to exchange data between the client and the server; a sketch comparing two of them follows the list.

  • TBinaryProtocol: binary encoding; Thrift's default protocol.
  • TCompactProtocol: a more compact binary encoding.
  • TJSONProtocol: JSON encoding.
  • TDebugProtocol: human-readable text, convenient for debugging.
  • TSimpleJSONProtocol: write-only JSON, suited to parsing by scripting languages.
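
The practical difference between the protocols is how compactly they encode the same data, and both ends must use the same one. The sketch below serializes the generated sayHello_args struct (from the walkthrough at the end of this article) with TBinaryProtocol and TCompactProtocol into in-memory buffers and prints the resulting sizes; TMemoryBuffer is Thrift's in-memory transport.

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TCompactProtocol;
import org.apache.thrift.transport.TMemoryBuffer;

public class ProtocolSizeDemo {
    public static void main(String[] args) throws Exception {
        // The same payload, serialized with two different protocols
        HelloService.sayHello_args payload = new HelloService.sayHello_args().setName("XuDT");

        TMemoryBuffer binaryBuf = new TMemoryBuffer(64);
        payload.write(new TBinaryProtocol(binaryBuf));

        TMemoryBuffer compactBuf = new TMemoryBuffer(64);
        payload.write(new TCompactProtocol(compactBuf));

        System.out.println("binary  bytes: " + binaryBuf.length());
        System.out.println("compact bytes: " + compactBuf.length());
    }
}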

Which transports does Thrift offer?

  • TSocket: blocking I/O; used on the client side.
  • TServerSocket: blocking server transport; listens for connections and hands each one out as a TSocket.
  • TNonblockingSocket: non-blocking I/O; used to build asynchronous clients.
  • TMemoryInputTransport: wraps a byte array (byte[]) as an input stream.
  • TFramedTransport: transmits data in frames of a known size (NIO-style); required by the non-blocking servers.

Which servers does Thrift offer?

[Figure: TServer class hierarchy]

TServer defines a static inner class Args, which extends the abstract class AbstractServerArgs. AbstractServerArgs follows the builder pattern and supplies TServer with its various factories.

[Figure: TServer.Args and AbstractServerArgs]

Core methods of TServer (a lifecycle sketch follows the list):

  • serve(): starts the server. serve() is abstract; each implementation defines its own startup logic.
  • stop(): shuts the server down.
  • isServing(): reports whether the server is serving.
  • setServing(boolean serving): sets the serving state.
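
Because serve() blocks the calling thread until stop() is invoked, a common pattern is to run it on a dedicated thread and stop the server from elsewhere, for example a JVM shutdown hook. A minimal sketch (the thread name and the shutdown hook are illustrative):

import org.apache.thrift.server.TServer;

public final class ServerLifecycle {
    // serve() blocks until stop() is called, so run it on its own thread
    public static void start(TServer server) {
        new Thread(server::serve, "thrift-server").start();
        // Shut the server down gracefully when the JVM exits
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            if (server.isServing()) {
                server.stop();
            }
        }));
    }
}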

TSimpleServer

  1. Characteristics

Single-threaded, blocking I/O.

  2. Design

A single main thread listens for connections and handles reads, writes, and request processing; it can accept and serve only one socket connection at a time.

[Figure: TSimpleServer single-threaded model]
  3. Usage

Client:

public class HelloClient {
    private static final Logger LOGGER = Logger.getLogger(HelloClient.class.getName());

    public static void main(String[] args) {
        TTransport transport = null;
        try {
            // Transport layer: blocking I/O
            transport = new TSocket("127.0.0.1", 9090);
            transport.open();
            // Protocol layer: binary encoding
            TProtocol protocol = new TBinaryProtocol(transport);
            // Synchronous client
            HelloService.Client client = new HelloService.Client(protocol);
            String name = "XuDT";
            LOGGER.info("HelloClient request parameter [name]=" + name);
            // Invoke the remote method
            String result = client.sayHello(name);
            LOGGER.info("Server returned: " + result);
        } catch (TException e) {
            e.printStackTrace();
        } finally {
            if (transport != null) {
                transport.close();
            }
        }
    }
}

Server:

public class SimpleServer {
    private static final Logger LOGGER = Logger.getLogger(SimpleServer.class.getName());

    public static void main(String[] args) {
        try {
            // Listen on port 9090
            TServerSocket serverTransport = new TServerSocket(9090);
            // Binary protocol for data transfer
            TBinaryProtocol.Factory proFactory = new TBinaryProtocol.Factory();
            // Bind the processor to the HelloService implementation
            TProcessor processor = new HelloService.Processor(new HelloServiceImpl());
            TSimpleServer.Args serverArgs = new TSimpleServer.Args(serverTransport);
            serverArgs.processor(processor);
            serverArgs.protocolFactory(proFactory);
            // Use TSimpleServer
            TServer server = new TSimpleServer(serverArgs);
            LOGGER.info("Start SimpleServer on port 9090...");
            // Start serving
            server.serve();
        } catch (TTransportException e) {
            e.printStackTrace();
        }
    }
}

Processor is an inner class of the generated HelloService. Calling new HelloService.Processor(new HelloServiceImpl()) builds a processMap whose keys are method names and whose values are the corresponding invocation objects; later, TBaseProcessor.process() looks up the handler via processMap.get(methodName).

  4. Source analysis

TSimpleServer extends TServer and implements its serve() and stop() methods.

public class TSimpleServer extends TServer {

  private static final Logger LOGGER = LoggerFactory.getLogger(TSimpleServer.class.getName());

  public TSimpleServer(AbstractServerArgs args) {
    super(args);
  }

  /**
   * Start the service.
   */
  public void serve() {
    try {
      // Start listening
      serverTransport_.listen();
    } catch (TTransportException ttx) {
      LOGGER.error("Error occurred during listening.", ttx);
      return;
    }

    // Run the preServe event
    if (eventHandler_ != null) {
      eventHandler_.preServe();
    }

    // Mark the server as serving
    setServing(true);

    // Loop, waiting for requests
    while (!stopped_) {
      TTransport client = null;
      TProcessor processor = null;
      TTransport inputTransport = null;
      TTransport outputTransport = null;
      TProtocol inputProtocol = null;
      TProtocol outputProtocol = null;
      ServerContext connectionContext = null;
      try {
        // Accept a connection
        client = serverTransport_.accept();
        if (client != null) {
          // Processor from the TProcessorFactory
          processor = processorFactory_.getProcessor(client);
          // Input transport for this client
          inputTransport = inputTransportFactory_.getTransport(client);
          // Output transport for this client
          outputTransport = outputTransportFactory_.getTransport(client);
          // Input protocol for this client
          inputProtocol = inputProtocolFactory_.getProtocol(inputTransport);
          // Output protocol for this client
          outputProtocol = outputProtocolFactory_.getProtocol(outputTransport);
          if (eventHandler_ != null) {
            connectionContext = eventHandler_.createContext(inputProtocol, outputProtocol);
          }
          // Handle requests on this connection
          while (true) {
            if (eventHandler_ != null) {
              eventHandler_.processContext(connectionContext, inputTransport, outputTransport);
            }
            // Process the business request
            processor.process(inputProtocol, outputProtocol);
          }
        }
      } catch (TTransportException ttx) {
        // Client died, just move on
      } catch (TException tx) {
        if (!stopped_) {
          LOGGER.error("Thrift error occurred during processing of message.", tx);
        }
      } catch (Exception x) {
        if (!stopped_) {
          LOGGER.error("Error occurred during processing of message.", x);
        }
      }

      if (eventHandler_ != null) {
        // Delete the connection context
        eventHandler_.deleteContext(connectionContext, inputProtocol, outputProtocol);
      }
      // Close the input transport
      if (inputTransport != null) {
        inputTransport.close();
      }
      // Close the output transport
      if (outputTransport != null) {
        outputTransport.close();
      }
    }

    // Mark the server as stopped
    setServing(false);
  }

  /**
   * Stop the service.
   */
  public void stop() {
    stopped_ = true;
    serverTransport_.interrupt();
  }
}

TBaseProcessor.process(): reads the request message and dispatches to the matching process function.

  public void process(TProtocol in, TProtocol out) throws TException {
    // Read the request header: method name, sequence id, arguments, etc.
    TMessage msg = in.readMessageBegin();
    // Look up the process function by method name
    ProcessFunction fn = processMap.get(msg.name);
    // Unknown method: report an exception back to the caller
    if (fn == null) {
      TProtocolUtil.skip(in, TType.STRUCT);
      in.readMessageEnd();
      TApplicationException x = new TApplicationException(TApplicationException.UNKNOWN_METHOD,
          "Invalid method name: '" + msg.name + "'");
      out.writeMessageBegin(new TMessage(msg.name, TMessageType.EXCEPTION, msg.seqid));
      x.write(out);
      out.writeMessageEnd();
      out.getTransport().flush();
    } else {
      // Handle the request
      fn.process(msg.seqid, in, out, iface);
    }
  }

ProcessFunction is an abstract class whose subclasses are also generated from the IDL, one per IDL function; they act as delegating handlers.

ProcessFunction.process(): invokes the service implementation to handle the request and returns the result.

  public final void process(int seqid, TProtocol iprot, TProtocol oprot, I iface) throws TException {
    // Obtain an empty argument holder
    T args = getEmptyArgsInstance();
    try {
      // Populate args from the input protocol
      args.read(iprot);
    } catch (TProtocolException e) {
      // exception handling
    }
    iprot.readMessageEnd();
    TSerializable result = null;
    byte msgType = TMessageType.REPLY;
    try {
      // Invoke the service implementation with args
      result = getResult(iface, args);
    } catch (TTransportException ex) {
      // exception handling
    }
    if (!isOneway()) {
      // Write the result to the output protocol
      oprot.writeMessageBegin(new TMessage(getMethodName(), msgType, seqid));
      result.write(oprot);
      oprot.writeMessageEnd();
      oprot.getTransport().flush();
    }
  }
  5. Sequence diagram
[Figure: TSimpleServer sequence diagram]
  6. Limitations

It can handle only one socket connection at a time, so throughput is poor.

TSimpleServer is inefficient; what can be done to improve it?

TThreadPoolServer

  1. Characteristics

Blocking I/O. The main thread blocks while accepting socket connections; the actual request processing is delegated to a thread pool.

  2. Design

The main thread blocks waiting for new connections. Each accepted connection is wrapped in a WorkerProcess object and submitted to the thread pool; WorkerProcess.run() processes the request and returns the result to the client.

By default the pool has a minimum of 5 threads and a maximum of Integer.MAX_VALUE; both bounds can be tuned, as the sketch below shows.
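
The bounds are changed through the builder-style Args before constructing the server, or a pre-built ExecutorService can be handed over. A sketch with illustrative values (10 and 100):

import org.apache.thrift.server.TThreadPoolServer;
import org.apache.thrift.transport.TServerTransport;

public class PoolSizing {
    public static TThreadPoolServer.Args sizedArgs(TServerTransport serverTransport) {
        return new TThreadPoolServer.Args(serverTransport)
                .minWorkerThreads(10)    // default is 5
                .maxWorkerThreads(100);  // default is Integer.MAX_VALUE
        // Alternatively, supply a pre-built pool:
        // new TThreadPoolServer.Args(serverTransport)
        //         .executorService(Executors.newFixedThreadPool(50));
    }
}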

[Figure: TThreadPoolServer threading model]
  3. Usage

The client is the same as in the TSimpleServer example.

Server:

public class ThreadPoolServer {
    private static final Logger LOGGER = Logger.getLogger(ThreadPoolServer.class.getName());

    public static void main(String[] args) {
        try {
            // Listen on port 9090
            TServerSocket serverTransport = new TServerSocket(9090);
            // Binary protocol for data transfer
            TBinaryProtocol.Factory proFactory = new TBinaryProtocol.Factory();
            // Bind the processor to the HelloService implementation
            TProcessor processor = new HelloService.Processor(new HelloServiceImpl());
            TThreadPoolServer.Args serverArgs = new TThreadPoolServer.Args(serverTransport);
            serverArgs.processor(processor);
            serverArgs.protocolFactory(proFactory);
            // Use TThreadPoolServer
            TServer server = new TThreadPoolServer(serverArgs);
            LOGGER.info("Start ThreadPoolServer on port 9090...");
            // Start serving
            server.serve();
        } catch (TTransportException e) {
            e.printStackTrace();
        }
    }
}
  4. Source analysis

TThreadPoolServer extends TServer and implements its serve() and stop() methods.

public class TThreadPoolServer extends TServer {

  private static final Logger LOGGER = LoggerFactory.getLogger(TThreadPoolServer.class.getName());

  // Thread pool parameters
  public static class Args extends AbstractServerArgs {
    public int minWorkerThreads = 5;
    public int maxWorkerThreads = Integer.MAX_VALUE;
    public ExecutorService executorService;
    public int stopTimeoutVal = 60;
    public TimeUnit stopTimeoutUnit = TimeUnit.SECONDS;
    public int requestTimeout = 20;
    public TimeUnit requestTimeoutUnit = TimeUnit.SECONDS;
    public int beBackoffSlotLength = 100;
    public TimeUnit beBackoffSlotLengthUnit = TimeUnit.MILLISECONDS;

    public Args(TServerTransport transport) {
      super(transport);
    }

    public Args minWorkerThreads(int n) {
      minWorkerThreads = n;
      return this;
    }

    public Args maxWorkerThreads(int n) {
      maxWorkerThreads = n;
      return this;
    }

    public Args stopTimeoutVal(int n) {
      stopTimeoutVal = n;
      return this;
    }

    public Args stopTimeoutUnit(TimeUnit tu) {
      stopTimeoutUnit = tu;
      return this;
    }

    public Args requestTimeout(int n) {
      requestTimeout = n;
      return this;
    }

    public Args requestTimeoutUnit(TimeUnit tu) {
      requestTimeoutUnit = tu;
      return this;
    }

    // Binary exponential backoff slot length
    public Args beBackoffSlotLength(int n) {
      beBackoffSlotLength = n;
      return this;
    }

    // Binary exponential backoff slot time unit
    public Args beBackoffSlotLengthUnit(TimeUnit tu) {
      beBackoffSlotLengthUnit = tu;
      return this;
    }

    public Args executorService(ExecutorService executorService) {
      this.executorService = executorService;
      return this;
    }
  }

  // Executor service for handling client connections
  private ExecutorService executorService_;

  private final TimeUnit stopTimeoutUnit;
  private final long stopTimeoutVal;
  private final TimeUnit requestTimeoutUnit;
  private final long requestTimeout;
  private final long beBackoffSlotInMillis;
  private Random random = new Random(System.currentTimeMillis());

  // The constructor instantiates a thread pool
  public TThreadPoolServer(Args args) {
    super(args);
    stopTimeoutUnit = args.stopTimeoutUnit;
    stopTimeoutVal = args.stopTimeoutVal;
    requestTimeoutUnit = args.requestTimeoutUnit;
    requestTimeout = args.requestTimeout;
    beBackoffSlotInMillis = args.beBackoffSlotLengthUnit.toMillis(args.beBackoffSlotLength);
    // Use the pool passed in via Args, or create the default one
    executorService_ = args.executorService != null ?
        args.executorService : createDefaultExecutorService(args);
  }

  // Create the default thread pool
  private static ExecutorService createDefaultExecutorService(Args args) {
    // Work queue of the pool
    SynchronousQueue executorQueue = new SynchronousQueue();
    return new ThreadPoolExecutor(args.minWorkerThreads,
                                  args.maxWorkerThreads,
                                  args.stopTimeoutVal,
                                  args.stopTimeoutUnit,
                                  executorQueue);
  }

  protected ExecutorService getExecutorService() {
    return executorService_;
  }

  // Start listening
  protected boolean preServe() {
    try {
      // Listen on the server port
      serverTransport_.listen();
    } catch (TTransportException ttx) {
      LOGGER.error("Error occurred during listening.", ttx);
      return false;
    }
    // Run the preServe event
    if (eventHandler_ != null) {
      eventHandler_.preServe();
    }
    stopped_ = false;
    // Mark the server as serving
    setServing(true);
    return true;
  }

  // Start the service
  public void serve() {
    if (!preServe()) {
      return;
    }
    // Handle requests
    execute();
    // Shut the pool down once the server stops
    waitForShutdown();
    // Mark the server as stopped
    setServing(false);
  }

  // Handle requests
  protected void execute() {
    int failureCount = 0;
    // Loop, waiting for requests
    while (!stopped_) {
      try {
        // Accept a connection
        TTransport client = serverTransport_.accept();
        // Wrap the connection in a WorkerProcess and hand it to the pool
        WorkerProcess wp = new WorkerProcess(client);
        // Number of submission retries so far
        int retryCount = 0;
        // Remaining retry time
        long remainTimeInMillis = requestTimeoutUnit.toMillis(requestTimeout);
        while (true) {
          try {
            // Submit to the thread pool
            executorService_.execute(wp);
            break;
          } catch (Throwable t) {
            // Retry on rejection
            if (t instanceof RejectedExecutionException) {
              retryCount++;
              try {
                if (remainTimeInMillis > 0) {
                  // do a truncated 20 binary exponential backoff sleep
                  long sleepTimeInMillis = ((long) (random.nextDouble() *
                      (1L << Math.min(retryCount, 20)))) * beBackoffSlotInMillis;
                  sleepTimeInMillis = Math.min(sleepTimeInMillis, remainTimeInMillis);
                  TimeUnit.MILLISECONDS.sleep(sleepTimeInMillis);
                  remainTimeInMillis = remainTimeInMillis - sleepTimeInMillis;
                } else {
                  client.close();
                  wp = null;
                  LOGGER.warn("Task has been rejected by ExecutorService " + retryCount
                      + " times till timedout, reason: " + t);
                  break;
                }
              } catch (InterruptedException e) {
                LOGGER.warn("Interrupted while waiting to place client on executor queue.");
                Thread.currentThread().interrupt();
                break;
              }
            } else if (t instanceof Error) {
              LOGGER.error("ExecutorService threw error: " + t, t);
              throw (Error) t;
            } else {
              // for other possible runtime errors from ExecutorService, should also not kill serve
              LOGGER.warn("ExecutorService threw error: " + t, t);
              break;
            }
          }
        }
      } catch (TTransportException ttx) {
        if (!stopped_) {
          ++failureCount;
          LOGGER.warn("Transport error occurred during acceptance of message.", ttx);
        }
      }
    }
  }

  // Shut the pool down once the server stops
  protected void waitForShutdown() {
    // Stop accepting new tasks; let already-submitted tasks finish
    executorService_.shutdown();
    long timeoutMS = stopTimeoutUnit.toMillis(stopTimeoutVal);
    long now = System.currentTimeMillis();
    while (timeoutMS >= 0) {
      try {
        // Block until all tasks finish after shutdown, the timeout elapses,
        // or the current thread is interrupted
        executorService_.awaitTermination(timeoutMS, TimeUnit.MILLISECONDS);
        break;
      } catch (InterruptedException ix) {
        long newnow = System.currentTimeMillis();
        timeoutMS -= (newnow - now);
        now = newnow;
      }
    }
  }

  public void stop() {
    stopped_ = true;
    serverTransport_.interrupt();
  }

  // WorkerProcess implements Runnable; run() does the actual request handling
  private class WorkerProcess implements Runnable {

    private TTransport client_;

    private WorkerProcess(TTransport client) {
      client_ = client;
    }

    // The request handling is essentially TSimpleServer's processing loop,
    // extracted into run()
    public void run() {
      TProcessor processor = null;
      TTransport inputTransport = null;
      TTransport outputTransport = null;
      TProtocol inputProtocol = null;
      TProtocol outputProtocol = null;
      TServerEventHandler eventHandler = null;
      ServerContext connectionContext = null;
      try {
        processor = processorFactory_.getProcessor(client_);
        inputTransport = inputTransportFactory_.getTransport(client_);
        outputTransport = outputTransportFactory_.getTransport(client_);
        inputProtocol = inputProtocolFactory_.getProtocol(inputTransport);
        outputProtocol = outputProtocolFactory_.getProtocol(outputTransport);
        eventHandler = getEventHandler();
        if (eventHandler != null) {
          connectionContext = eventHandler.createContext(inputProtocol, outputProtocol);
        }
        while (true) {
          if (eventHandler != null) {
            eventHandler.processContext(connectionContext, inputTransport, outputTransport);
          }
          if (stopped_) {
            break;
          }
          processor.process(inputProtocol, outputProtocol);
        }
      } catch (Exception x) {
        if (!isIgnorableException(x)) {
          LOGGER.error((x instanceof TException ? "Thrift " : "")
              + "Error occurred during processing of message.", x);
        }
      } finally {
        if (eventHandler != null) {
          eventHandler.deleteContext(connectionContext, inputProtocol, outputProtocol);
        }
        if (inputTransport != null) {
          inputTransport.close();
        }
        if (outputTransport != null) {
          outputTransport.close();
        }
        if (client_.isOpen()) {
          client_.close();
        }
      }
    }

    ... ...
  }
}
  5. Advantages

Building on TSimpleServer, TThreadPoolServer introduces a thread pool and splits connection listening from request processing: the main thread only accepts connections while pool workers handle the requests, so new connections can still be accepted as concurrency grows. It suits servers that can predict the maximum number of concurrent clients in advance.

  6. Limitations

TThreadPoolServer still accepts client connections in a blocking way, and its processing capacity is capped by the thread pool: when concurrent requests exceed the pool's thread count, new requests block and wait.

TThreadPoolServer improved on TSimpleServer's processing capacity; how can we optimize further?

TNonblockingServer

  1. Characteristics

Single-threaded, non-blocking I/O in the NIO style, built on Channel and Selector.

  2. Design

TNonblockingServer starts a single SelectAcceptThread to monitor and service multiple sockets. By registering every socket with a Selector, one thread can watch them all: each Selector.select() pass returns the sockets that are ready. A readable socket triggers the callback that runs the request and returns the result to the client; a writable socket has its pending data written; a connection request is accepted and the new socket registered with the Selector. The underlying pattern is sketched in plain java.nio after the figure.

[Figure: TNonblockingServer selector model]
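
Stripped of Thrift specifics, SelectAcceptThread is the standard java.nio selector loop: register the listening channel and every accepted connection on one Selector, then dispatch whatever select() reports as ready. A minimal echo-style sketch of the pattern (plain NIO, not Thrift code):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorLoopSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9090));
        server.configureBlocking(false);
        // Register the listening socket for accept events
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                       // block until some channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {            // new connection: register it for reads
                    SocketChannel client = ((ServerSocketChannel) key.channel()).accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {       // readable: echo the bytes back
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (client.read(buf) < 0) {      // client closed the connection
                        key.cancel();
                        client.close();
                    } else {
                        buf.flip();
                        client.write(buf);
                    }
                }
            }
        }
    }
}
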
  3. Usage

Client:

public class HelloClient {
    private static final Logger LOGGER = Logger.getLogger(HelloClient.class.getName());

    public static void main(String[] args) {
        TTransport transport = null;
        try {
            // Transport layer: the non-blocking server requires a framed transport
            transport = new TFramedTransport.Factory().getTransport(new TSocket("127.0.0.1", 9090));
            transport.open();
            // Protocol layer: binary encoding
            TProtocol protocol = new TBinaryProtocol(transport);
            // Synchronous client
            HelloService.Client client = new HelloService.Client(protocol);
            String name = "XuDT";
            LOGGER.info("HelloClient request parameter [name]=" + name);
            // Invoke the remote method
            String result = client.sayHello(name);
            LOGGER.info("Server returned: " + result);
        } catch (TException e) {
            e.printStackTrace();
        } finally {
            if (transport != null) {
                transport.close();
            }
        }
    }
}

Server:

public class NonblockingServer {
    private static final Logger LOGGER = Logger.getLogger(NonblockingServer.class.getName());

    public static void main(String[] args) {
        try {
            // Listen on port 9090
            TNonblockingServerSocket serverTransport = new TNonblockingServerSocket(9090);
            // Binary protocol for data transfer
            TBinaryProtocol.Factory proFactory = new TBinaryProtocol.Factory();
            // Bind the processor to the HelloService implementation
            TProcessor processor = new HelloService.Processor(new HelloServiceImpl());
            TNonblockingServer.Args serverArgs = new TNonblockingServer.Args(serverTransport);
            serverArgs.processor(processor);
            serverArgs.protocolFactory(proFactory);
            serverArgs.transportFactory(new TFramedTransport.Factory());
            // Use TNonblockingServer
            TServer server = new TNonblockingServer(serverArgs);
            LOGGER.info("Start NonblockingServer on port 9090...");
            // Start serving
            server.serve();
        } catch (TTransportException e) {
            e.printStackTrace();
        }
    }
}

TNonblockingServer's transport layer can only use TFramedTransport; a sketch of the framed layout follows.
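
The reason is that a non-blocking server must know when a complete request has arrived without blocking on a partial read, and TFramedTransport provides exactly that by prefixing every message with its length. A sketch of the framed layout, assuming the standard 4-byte big-endian length prefix:

import java.nio.ByteBuffer;

public class FrameLayout {
    // Encode one framed message: a 4-byte big-endian length followed by the payload
    static ByteBuffer frame(byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
        buf.putInt(payload.length);   // frame header: payload size
        buf.put(payload);             // frame body: the serialized Thrift message
        buf.flip();
        return buf;
    }
}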

  4. Source analysis

TNonblockingServer extends AbstractNonblockingServer, which in turn extends TServer. AbstractNonblockingServer:

public abstract class AbstractNonblockingServer extends TServer {

  protected final Logger LOGGER = LoggerFactory.getLogger(getClass().getName());

  public static abstract class AbstractNonblockingServerArgs<T extends AbstractNonblockingServerArgs<T>>
      extends AbstractServerArgs<T> {
    public long maxReadBufferBytes = 256 * 1024 * 1024;

    public AbstractNonblockingServerArgs(TNonblockingServerTransport transport) {
      super(transport);
      transportFactory(new TFramedTransport.Factory());
    }
  }

  final long MAX_READ_BUFFER_BYTES;
  final AtomicLong readBufferBytesAllocated = new AtomicLong(0);

  public AbstractNonblockingServer(AbstractNonblockingServerArgs args) {
    super(args);
    MAX_READ_BUFFER_BYTES = args.maxReadBufferBytes;
  }

  /**
   * Start the service.
   */
  public void serve() {
    // Start the SelectAcceptThread
    if (!startThreads()) {
      return;
    }
    // Start listening
    if (!startListening()) {
      return;
    }
    // Mark the server as serving
    setServing(true);
    // Block, waiting for requests and handling them
    waitForShutdown();
    // Mark the server as stopped
    setServing(false);
    // Stop listening and close the server socket
    stopListening();
  }

  protected abstract boolean startThreads();

  protected abstract void waitForShutdown();

  protected boolean startListening() {
    try {
      serverTransport_.listen();
      return true;
    } catch (TTransportException ttx) {
      LOGGER.error("Failed to start listening on server socket!", ttx);
      return false;
    }
  }

  protected void stopListening() {
    serverTransport_.close();
  }

  protected abstract boolean requestInvoke(FrameBuffer frameBuffer);

  /**
   * AbstractSelectThread extends Thread.
   */
  protected abstract class AbstractSelectThread extends Thread {

    protected Selector selector;
    protected final Set<FrameBuffer> selectInterestChanges = new HashSet<FrameBuffer>();

    public AbstractSelectThread() throws IOException {
      this.selector = SelectorProvider.provider().openSelector();
    }

    public void wakeupSelector() {
      selector.wakeup();
    }

    public void requestSelectInterestChange(FrameBuffer frameBuffer) {
      synchronized (selectInterestChanges) {
        selectInterestChanges.add(frameBuffer);
      }
      selector.wakeup();
    }

    protected void processInterestChanges() {
      synchronized (selectInterestChanges) {
        for (FrameBuffer fb : selectInterestChanges) {
          fb.changeSelectInterests();
        }
        selectInterestChanges.clear();
      }
    }

    // Called from the select loop to read data from a client
    protected void handleRead(SelectionKey key) {
      FrameBuffer buffer = (FrameBuffer) key.attachment();
      // If reading fails, clean up this selection key
      if (!buffer.read()) {
        cleanupSelectionKey(key);
        return;
      }
      // A complete frame has been read
      if (buffer.isFrameFullyRead()) {
        // Trigger the callback (invokes the matching server method)
        if (!requestInvoke(buffer)) {
          // Clean up this selection key
          cleanupSelectionKey(key);
        }
      }
    }

    // Write data back to the client
    protected void handleWrite(SelectionKey key) {
      FrameBuffer buffer = (FrameBuffer) key.attachment();
      if (!buffer.write()) {
        cleanupSelectionKey(key);
      }
    }

    // Clean up a selection key
    protected void cleanupSelectionKey(SelectionKey key) {
      FrameBuffer buffer = (FrameBuffer) key.attachment();
      if (buffer != null) {
        buffer.close();
      }
      key.cancel();
    }
  }

  ... ...
}

TNonblockingServer:

public class TNonblockingServer extends AbstractNonblockingServer {

  public static class Args extends AbstractNonblockingServerArgs<Args> {
    public Args(TNonblockingServerTransport transport) {
      super(transport);
    }
  }

  private SelectAcceptThread selectAcceptThread_;

  public TNonblockingServer(AbstractNonblockingServerArgs args) {
    super(args);
  }

  /**
   * Overrides AbstractNonblockingServer.startThreads(): starts the thread
   * that handles client requests.
   */
  @Override
  protected boolean startThreads() {
    try {
      // Instantiate selectAcceptThread_
      selectAcceptThread_ = new SelectAcceptThread((TNonblockingServerTransport) serverTransport_);
      // Start the thread
      selectAcceptThread_.start();
      return true;
    } catch (IOException e) {
      LOGGER.error("Failed to start selector thread!", e);
      return false;
    }
  }

  @Override
  protected void waitForShutdown() {
    joinSelector();
  }

  /**
   * Wait for the selector thread to exit.
   */
  protected void joinSelector() {
    try {
      // The main thread blocks until selectAcceptThread_ returns
      selectAcceptThread_.join();
    } catch (InterruptedException e) {
      LOGGER.debug("Interrupted while waiting for accept thread", e);
      Thread.currentThread().interrupt();
    }
  }

  /**
   * Stop the service.
   */
  @Override
  public void stop() {
    stopped_ = true;
    if (selectAcceptThread_ != null) {
      selectAcceptThread_.wakeupSelector();
    }
  }

  /**
   * Callback fired when a client is readable. The actual work is defined in
   * AbstractNonblockingServer's inner class FrameBuffer.invoke(), which uses
   * processorFactory_.getProcessor(inTrans_).process(inProt_, outProt_) to
   * read the request, invoke the service method, and write the result back
   * to the client.
   */
  @Override
  protected boolean requestInvoke(FrameBuffer frameBuffer) {
    frameBuffer.invoke();
    return true;
  }

  public boolean isStopped() {
    return selectAcceptThread_.isStopped();
  }

  /**
   * SelectAcceptThread extends AbstractSelectThread, which extends Thread;
   * AbstractSelectThread is an inner class of AbstractNonblockingServer.
   */
  protected class SelectAcceptThread extends AbstractSelectThread {

    // The server transport
    private final TNonblockingServerTransport serverTransport;

    public SelectAcceptThread(final TNonblockingServerTransport serverTransport)
        throws IOException {
      this.serverTransport = serverTransport;
      // Register the server transport with the Selector (defined in
      // AbstractSelectThread) so that one thread can monitor many channels
      serverTransport.registerSelector(selector);
    }

    public boolean isStopped() {
      return stopped_;
    }

    /**
     * Handle requests.
     */
    public void run() {
      try {
        if (eventHandler_ != null) {
          eventHandler_.preServe();
        }
        // Loop, waiting for requests
        while (!stopped_) {
          // Block until some registered socket is ready, then handle it
          select();
          processInterestChanges();
        }
        // On shutdown, clean up every selection key registered with the Selector
        for (SelectionKey selectionKey : selector.keys()) {
          cleanupSelectionKey(selectionKey);
        }
      } catch (Throwable t) {
        LOGGER.error("run() exiting due to uncaught error", t);
      } finally {
        try {
          selector.close();
        } catch (IOException e) {
          LOGGER.error("Got an IOException while closing selector!", e);
        }
        stopped_ = true;
      }
    }

    /**
     * Block until sockets registered with the Selector become ready.
     */
    private void select() {
      try {
        // Block; each select() pass returns the ready sockets
        selector.select();
        // Iterate over the ready sockets
        Iterator<SelectionKey> selectedKeys = selector.selectedKeys().iterator();
        while (!stopped_ && selectedKeys.hasNext()) {
          SelectionKey key = selectedKeys.next();
          selectedKeys.remove();
          // skip if not valid
          if (!key.isValid()) {
            cleanupSelectionKey(key);
            continue;
          }
          // Connection request -> handleAccept()
          // Readable           -> handleRead(key)
          // Writable           -> handleWrite(key)
          if (key.isAcceptable()) {
            handleAccept();
          } else if (key.isReadable()) {
            handleRead(key);
          } else if (key.isWritable()) {
            handleWrite(key);
          } else {
            LOGGER.warn("Unexpected state in select! " + key.interestOps());
          }
        }
      } catch (IOException e) {
        LOGGER.warn("Got an IOException while selecting!", e);
      }
    }

    protected FrameBuffer createFrameBuffer(final TNonblockingTransport trans,
        final SelectionKey selectionKey,
        final AbstractSelectThread selectThread) {
      return processorFactory_.isAsyncProcessor() ?
          new AsyncFrameBuffer(trans, selectionKey, selectThread) :
          new FrameBuffer(trans, selectionKey, selectThread);
    }

    /**
     * Handle a client connection request.
     */
    private void handleAccept() throws IOException {
      SelectionKey clientKey = null;
      TNonblockingTransport client = null;
      try {
        // Accept the connection
        client = (TNonblockingTransport) serverTransport.accept();
        // Register the new connection with the Selector so its subsequent
        // requests are picked up
        clientKey = client.registerSelector(selector, SelectionKey.OP_READ);
        // Instantiate a FrameBuffer
        FrameBuffer frameBuffer = createFrameBuffer(client, clientKey, SelectAcceptThread.this);
        // Attach the frameBuffer to the key; handleRead(key) and handleWrite(key)
        // read and write through this frameBuffer
        clientKey.attach(frameBuffer);
      } catch (TTransportException tte) {
        // something went wrong accepting.
        LOGGER.warn("Exception trying to accept!", tte);
        if (clientKey != null) cleanupSelectionKey(clientKey);
        if (client != null) client.close();
      }
    }
  }
}
  5. Advantages

Through I/O multiplexing, a single thread can monitor many sockets.

  6. Limitations

TNonblockingServer still reads, writes, and processes requests on that single thread; complex or slow business logic blocks the whole server, so efficiency is limited.

TThreadPoolServer gained efficiency by separating the listening thread from the worker threads; can the same idea be used to optimize TNonblockingServer?

THsHaServer

  1. Characteristics
  2. Design
  3. Usage
  4. Source analysis
  5. Advantages
  6. Limitations
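
The author left this section as an outline. For orientation: THsHaServer (half-sync/half-async) keeps TNonblockingServer's single selector thread for I/O but hands the invocation of processor.process() to a worker pool. A hedged usage sketch mirroring the TNonblockingServer example above (API names per Thrift 0.13; the pool sizes are illustrative):

import org.apache.thrift.TProcessor;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.server.THsHaServer;
import org.apache.thrift.server.TServer;
import org.apache.thrift.transport.TNonblockingServerSocket;

public class HsHaServerDemo {
    public static void main(String[] args) throws Exception {
        TNonblockingServerSocket serverTransport = new TNonblockingServerSocket(9090);
        TProcessor processor = new HelloService.Processor(new HelloServiceImpl());
        // Same selector thread as TNonblockingServer, plus a worker pool
        // that runs the business logic; 5/20 are illustrative sizes
        THsHaServer.Args serverArgs = new THsHaServer.Args(serverTransport)
                .minWorkerThreads(5)
                .maxWorkerThreads(20);
        serverArgs.processor(processor);
        serverArgs.protocolFactory(new TBinaryProtocol.Factory());
        // The non-blocking server args already default to TFramedTransport
        TServer server = new THsHaServer(serverArgs);
        server.serve();
    }
}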

TThreadedSelectorServer

  1. Characteristics
  2. Design
  3. Usage
  4. Source analysis
  5. Advantages
  6. Limitations

Thrift Usage Example

  1. Create the service definition file (HelloService.thrift): the HelloService service contains a single sayHello method.
namespace java com.xudt.thrift.service

service HelloService {
  string sayHello(1:string name)
}
  2. Download the Thrift IDL compiler from http://thrift.apache.org/download.
  3. Put the thrift executable and HelloService.thrift in the same folder, open a command prompt there, and run thrift-0.13.0.exe -gen java HelloService.thrift to generate the service interface file HelloService.java.
  4. Create a thrift-demo Maven project as the parent module, then create four child modules:
  • thrift-demo-interface: holds the HelloService.java code generated from HelloService.thrift.
  • thrift-demo-service: implements the service interface (a minimal sketch follows this list).
  • thrift-demo-server: the server.
  • thrift-demo-client: the client.
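
The article never shows the implementation class used by the server examples; a minimal HelloServiceImpl consistent with them might look like this (a sketch):

import org.apache.thrift.TException;

public class HelloServiceImpl implements HelloService.Iface {
    // Business logic invoked by ProcessFunction.getResult() on the server side
    @Override
    public String sayHello(String name) throws TException {
        return "Hello, " + name + "!";
    }
}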

The generated HelloService.java contains:

  • Iface: the synchronous server interface
  • AsyncIface: the asynchronous server interface
  • Client: the synchronous client
  • AsyncClient: the asynchronous client
@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)", date = "2020-07-04")
public class HelloService {

  /**
   * Synchronous server interface
   */
  public interface Iface {
    public String sayHello(String name) throws org.apache.thrift.TException;
  }

  /**
   * Asynchronous server interface
   */
  public interface AsyncIface {
    public void sayHello(String name, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
  }

  /**
   * Synchronous client
   */
  public static class Client extends org.apache.thrift.TServiceClient implements Iface {

    public static class Factory implements org.apache.thrift.TServiceClientFactory<Client> {
      public Factory() {}
      public Client getClient(org.apache.thrift.protocol.TProtocol prot) {
        return new Client(prot);
      }
      public Client getClient(org.apache.thrift.protocol.TProtocol iprot, org.apache.thrift.protocol.TProtocol oprot) {
        return new Client(iprot, oprot);
      }
    }

    public Client(org.apache.thrift.protocol.TProtocol prot) {
      super(prot, prot);
    }

    public Client(org.apache.thrift.protocol.TProtocol iprot, org.apache.thrift.protocol.TProtocol oprot) {
      super(iprot, oprot);
    }

    // Invoke the sayHello method and receive the result
    public String sayHello(String name) throws org.apache.thrift.TException {
      send_sayHello(name);
      return recv_sayHello();
    }

    // Send the invocation request
    public void send_sayHello(String name) throws org.apache.thrift.TException {
      sayHello_args args = new sayHello_args();
      args.setName(name);
      sendBase("sayHello", args);
    }

    // Receive the invocation result
    public String recv_sayHello() throws org.apache.thrift.TException {
      sayHello_result result = new sayHello_result();
      receiveBase(result, "sayHello");
      if (result.isSetSuccess()) {
        return result.success;
      }
      throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "sayHello failed: unknown result");
    }
  }

  /**
   * Processor
   */
  public static class Processor<I extends Iface> extends org.apache.thrift.TBaseProcessor<I> implements org.apache.thrift.TProcessor {
    private static final org.slf4j.Logger _LOGGER = org.slf4j.LoggerFactory.getLogger(Processor.class.getName());

    // Initialize the processMap
    public Processor(I iface) {
      super(iface, getProcessMap(new java.util.HashMap<java.lang.String, org.apache.thrift.ProcessFunction<I, ? extends org.apache.thrift.TBase>>()));
    }

    protected Processor(I iface, java.util.Map<java.lang.String, org.apache.thrift.ProcessFunction<I, ? extends org.apache.thrift.TBase>> processMap) {
      super(iface, getProcessMap(processMap));
    }

    private static <I extends Iface> java.util.Map<java.lang.String, org.apache.thrift.ProcessFunction<I, ? extends org.apache.thrift.TBase>> getProcessMap(java.util.Map<java.lang.String, org.apache.thrift.ProcessFunction<I, ? extends org.apache.thrift.TBase>> processMap) {
      processMap.put("sayHello", new sayHello());
      return processMap;
    }

    public static class sayHello<I extends Iface> extends org.apache.thrift.ProcessFunction<I, sayHello_args> {
      public sayHello() {
        super("sayHello");
      }

      public sayHello_args getEmptyArgsInstance() {
        return new sayHello_args();
      }

      protected boolean isOneway() {
        return false;
      }

      @Override
      protected boolean rethrowUnhandledExceptions() {
        return false;
      }

      // Invoke the method (this is where HelloServiceImpl.sayHello() actually runs)
      public sayHello_result getResult(I iface, sayHello_args args) throws org.apache.thrift.TException {
        sayHello_result result = new sayHello_result();
        result.success = iface.sayHello(args.name);
        return result;
      }
    }
  }

  ... ...
}
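
To round out the generated artifacts: AsyncClient pairs with the non-blocking transport, so calls return immediately and the reply is delivered to a callback. A hedged sketch of an asynchronous invocation (TAsyncClientManager and TNonblockingSocket are Thrift's async client classes; error handling is trimmed):

import java.util.concurrent.CountDownLatch;
import org.apache.thrift.async.AsyncMethodCallback;
import org.apache.thrift.async.TAsyncClientManager;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TNonblockingSocket;

public class AsyncHelloClient {
    public static void main(String[] args) throws Exception {
        CountDownLatch done = new CountDownLatch(1);
        TAsyncClientManager manager = new TAsyncClientManager();
        TNonblockingSocket transport = new TNonblockingSocket("127.0.0.1", 9090);
        HelloService.AsyncClient client =
                new HelloService.AsyncClient(new TBinaryProtocol.Factory(), manager, transport);
        // The call returns immediately; the callback fires when the reply arrives
        client.sayHello("XuDT", new AsyncMethodCallback<String>() {
            @Override
            public void onComplete(String response) {
                System.out.println("Server returned: " + response);
                done.countDown();
            }

            @Override
            public void onError(Exception e) {
                e.printStackTrace();
                done.countDown();
            }
        });
        done.await();  // keep main alive until the callback has run
    }
}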

NIO

I/O Multiplexing


Author: XuDT
Source: https://juejin.im/post/5f1ed9835188252e817c9403
