Getting Started with Thrift

Installing Thrift on macOS

Reference: http://thrift.apache.org/docs/install/os_x

Installing Boost

Download Boost directly or install it with Homebrew:

brew install boost
Installing libevent

Download libevent directly or install it with Homebrew:

brew install libevent

Installing Bison

brew install bison

macOS ships with Bison 2.3 by default, which is too old; switch your PATH to the newly installed version:

# add this line to ~/.bash_profile, then reload it
export PATH=/usr/local/Cellar/bison/3.0.4_1/bin:$PATH
source ~/.bash_profile

Downloading and Building Thrift

http://archive.apache.org/dist/thrift

Generate the Makefile

If the Python bindings fail to build, consider specifying the Python path explicitly and excluding the languages you don't need (on my machine, Thrift inexplicably tried to install PHP scripts):

./configure LDFLAGS='-L/usr/local/opt/openssl/lib' CPPFLAGS='-I/usr/local/opt/openssl/include'  --prefix=/usr/local/ --with-boost=/usr/local --with-libevent=/usr/local --with-python=/usr/local/ --without-php
Build and install:
make
make install

Side note: the build may fail with

fatal error: 'openssl/opensslv.h' file not found

Resolve it as follows:

brew install openssl
./configure LDFLAGS='-L/usr/local/opt/openssl/lib' CPPFLAGS='-I/usr/local/opt/openssl/include'

Only after doing all of the above did I discover a much simpler installation method, which I have not personally verified:

brew install thrift

Thrift Development

Thrift Stack Architecture

See: http://www.cnblogs.com/cyfonly/p/6059374.html


The code-framework layer is the client- and server-side code skeleton generated from the Thrift service interface definition file, and the data read/write layer is the generated code that performs serialization and deserialization.
Note in particular that Thrift is not tied to any one language; it supports Python, Java, Node.js, and many others. Different languages can each generate their own interface from the same Thrift definition, so as long as both sides share one Thrift IDL, RPC calls can cross language boundaries. In this respect it is very similar to gRPC, and a clear advantage over Spring Cloud Feign, which only supports Java-to-Java RPC.

Transport Protocols

  • TBinaryProtocol: Thrift's default protocol; uses a binary encoding and essentially sends the raw data directly
  • TCompactProtocol: a compact, dense protocol based on variable-length-quantity zigzag encoding
  • TJSONProtocol: transfers data encoded as JSON (JavaScript Object Notation)
  • TDebugProtocol: often used by developers for testing; renders data as human-readable text
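
The zigzag step behind TCompactProtocol can be illustrated with a minimal plain-Java sketch (independent of the Thrift libraries): zigzag maps signed integers to unsigned ones so that values near zero, positive or negative, fit in very few varint bytes.

```java
public class ZigZag {
    // Map a signed 32-bit int so small magnitudes become small unsigned values:
    // 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ...
    static int encode(int n) {
        return (n << 1) ^ (n >> 31);
    }

    // Inverse of encode.
    static int decode(int z) {
        return (z >>> 1) ^ -(z & 1);
    }

    public static void main(String[] args) {
        int[] samples = {0, -1, 1, -2, 2};
        for (int n : samples) {
            System.out.println(n + " -> " + encode(n)); // prints 0, 1, 2, 3, 4
        }
    }
}
```

The encoded value is then written as a variable-length integer; this sketch shows only the zigzag transform, not the varint framing.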

Server Types

  • TSimpleServer: accepts one connection, serves that connection's requests, and only goes back to accept a new connection after the client closes it. Because it does all of this with blocking I/O in a single thread, it can serve only one client connection at a time; every other client must wait to be accepted.
    Use in development environments only.
  • TNonblockingServer: unlike TSimpleServer's single-threaded blocking I/O, TNonblockingServer uses non-blocking I/O and can keep multiple client connections open, but requests are still processed one client at a time, so throughput does not improve much and other clients can be "starved".
  • TThreadPoolServer: a dedicated thread accepts connections. Once a connection is accepted, it is handed to a worker thread in a ThreadPoolExecutor. The worker thread stays bound to that client connection until it closes, then returns to the pool. The pool's minimum and maximum sizes are configurable; the defaults are 5 (minimum) and Integer.MAX_VALUE (maximum).
    With 10,000 concurrent client connections you would need 10,000 running threads, so use this mode only when system resources allow that many connections. Personally I would not recommend it, since in most cases resources are limited.
  • TThreadedSelectorServer: lets you handle network I/O with multiple threads. It maintains two thread pools, one for network I/O and one for request processing. When network I/O is the bottleneck, TThreadedSelectorServer outperforms THsHaServer.
    The recommended mode.
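
The TThreadPoolServer model described above (one dedicated accept thread, workers bound to a connection until it closes) can be sketched with plain JDK sockets. This is an illustration of the threading model only, not Thrift's actual implementation; class and method names here are made up for the example.

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

// Sketch of the TThreadPoolServer threading model.
public class ThreadPoolEchoServer {
    private final ExecutorService workers = Executors.newFixedThreadPool(8);
    private final ServerSocket listener;

    public ThreadPoolEchoServer(int port) throws IOException {
        listener = new ServerSocket(port);
        Thread acceptThread = new Thread(() -> {
            try {
                while (true) {
                    Socket client = listener.accept();     // dedicated accept loop
                    workers.submit(() -> handle(client));  // hand off to a pool worker
                }
            } catch (IOException e) {
                // listener closed: stop accepting
            }
        });
        acceptThread.setDaemon(true);
        acceptThread.start();
    }

    public int port() { return listener.getLocalPort(); }

    private void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {       // bound to this client until it closes
                out.println("echo: " + line);
            }
        } catch (IOException ignored) {
        }                                                  // worker returns to the pool here
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolEchoServer server = new ThreadPoolEchoServer(0); // 0 = ephemeral port
        try (Socket s = new Socket("localhost", server.port());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println("hello");
            System.out.println(in.readLine()); // prints "echo: hello"
        }
    }
}
```

With a fixed pool of 8 workers, a ninth concurrent client would queue until a worker frees up, which is exactly the resource concern raised above for TThreadPoolServer.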

Hello World

The overall approach:

  • Create the Hello.thrift file
  • Use thrift to generate the Java files
  • Create client and server projects and copy the generated Java files into both
  • Start the client and server projects
Create the thrift file
  1. Create Hello.thrift; note that a namespace can be declared for each target language:
namespace java service.demo
service Hello{
    string helloString(1:string para)
}
  2. Run the thrift command:
thrift -r -gen java Hello.thrift
  3. Create the server project.
    The POM dependencies are as follows; note that the logging dependency must come before thrift:
    <dependencies>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.8.0-beta2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.thrift</groupId>
            <artifactId>libthrift</artifactId>
            <version>0.11.0</version>
        </dependency>
    </dependencies>

Implement Hello.Iface to answer messages sent by the client:

public class HelloServiceImpl implements Hello.Iface {
    @Override
    public String helloString(String name) throws TException {
        return "Hello " + name;
    }
}

Create and start the server, listening on a port:

    public static void main(String[] args) {
        try {
            System.out.println("Server starting....");
            TProcessor tprocessor = new Hello.Processor<Hello.Iface>(new HelloServiceImpl());
            // simple single-threaded server model
            TServerSocket serverTransport = new TServerSocket(9898);
            TServer.Args tArgs = new TServer.Args(serverTransport);
            tArgs.processor(tprocessor);
            tArgs.protocolFactory(new TBinaryProtocol.Factory());
            TServer server = new TSimpleServer(tArgs);
            server.serve();
        } catch (TTransportException e) {
            e.printStackTrace();
        }
    }

4. Create the client project.
POM dependencies:

    <dependencies>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.8.0-beta2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.thrift</groupId>
            <artifactId>libthrift</artifactId>
            <version>0.11.0</version>
        </dependency>
    </dependencies>

Send a message:

    public static void main(String[] args) {
        System.out.println("Client starting....");
        TTransport transport = null;
        try {
            transport = new TSocket("localhost", 9898, 30000);
            // the protocol must match the server's
            TProtocol protocol = new TBinaryProtocol(transport);
            Hello.Client client = new Hello.Client(protocol);
            transport.open();
            String result = client.helloString("world");
            System.out.println(result);
        } catch (TTransportException e) {
            e.printStackTrace();
        } catch (TException e) {
            e.printStackTrace();
        } finally {
            if (null != transport) {
                transport.close();
            }
        }
    }

5. Start the server and client projects and send a message.

Data Types

Thrift IDL supports the following data types:

  • Base types:
    • bool: a boolean value, true or false; maps to Java boolean
    • byte: an 8-bit signed integer; maps to Java byte
    • i16: a 16-bit signed integer; maps to Java short
    • i32: a 32-bit signed integer; maps to Java int
    • i64: a 64-bit signed integer; maps to Java long
    • double: a 64-bit floating-point number; maps to Java double
    • string: text of unspecified encoding or a binary string; maps to Java String
  • Struct type:
    • struct: defines a plain object, similar to a C struct; in Java it becomes a JavaBean
  • Container types:
    • list: maps to Java ArrayList
    • set: maps to Java HashSet
    • map: maps to Java HashMap
  • Exception type:
    • exception: maps to Java Exception
  • Service type:
    • service: the class that implements the service
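
The container mappings above mean that in the generated Java code a thrift `list<i32>` field becomes an ArrayList<Integer>, a `set<string>` a HashSet<String>, and a `map<string, i32>` a HashMap<String, Integer>. A quick plain-Java illustration of those Java-side types (no Thrift dependency; the values are arbitrary examples):

```java
import java.util.*;

public class ContainerMappings {
    public static void main(String[] args) {
        // thrift list<i32> -> Java ArrayList<Integer> (ordered, duplicates kept)
        List<Integer> ids = new ArrayList<>(Arrays.asList(1, 2, 2, 3));

        // thrift set<string> -> Java HashSet<String> (duplicates collapse)
        Set<String> tags = new HashSet<>(Arrays.asList("a", "b", "a"));

        // thrift map<string, i32> -> Java HashMap<String, Integer>
        Map<String, Integer> counts = new HashMap<>();
        counts.put("hello", 1);

        System.out.println(ids.size() + " " + tags.size() + " " + counts.get("hello"));
        // prints "4 2 1"
    }
}
```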

Receiving Messages with TThreadedSelectorServer

Reference: https://blog.csdn.net/u011642663/article/details/56026048
TThreadedSelectorServer is the most advanced server mode Thrift currently provides. Internally it consists of the following parts:

  1. One AcceptThread dedicated to handling new connections on the listening socket;
  2. Several SelectorThread objects dedicated to network I/O on the business sockets; all network reads and writes are performed by these threads;
  3. A SelectorThreadLoadBalancer object which, whenever AcceptThread receives a new connection request, decides which SelectorThread that connection should be assigned to;
  4. An ExecutorService worker thread pool: when a SelectorThread detects an incoming call on a business socket, it reads the request and hands it to a thread in the ExecutorService pool, which carries out the actual invocation.
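
The load balancer in part 3 essentially hands each new connection to the next selector in turn. A minimal round-robin sketch in plain Java (the class and names here are illustrative, not Thrift's actual SelectorThreadLoadBalancer):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative round-robin balancer: each accepted connection is
// assigned to the next target in the list, wrapping around.
public class RoundRobinBalancer<T> {
    private final List<T> targets;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinBalancer(List<T> targets) {
        if (targets.isEmpty()) throw new IllegalArgumentException("no targets");
        this.targets = targets;
    }

    public T nextTarget() {
        // floorMod keeps the index non-negative even after int overflow
        int i = Math.floorMod(next.getAndIncrement(), targets.size());
        return targets.get(i);
    }

    public static void main(String[] args) {
        RoundRobinBalancer<String> lb =
                new RoundRobinBalancer<>(List.of("selector-0", "selector-1"));
        for (int c = 0; c < 4; c++) {
            System.out.println("conn " + c + " -> " + lb.nextTarget());
            // conn 0 -> selector-0, conn 1 -> selector-1, conn 2 -> selector-0, ...
        }
    }
}
```

Thrift's real balancer can also weight selectors by load, but round-robin captures the core idea: the accept thread never does I/O itself, it only routes.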

As described above, TThreadedSelectorServer has a dedicated AcceptThread for new connection requests, so it can respond promptly to large numbers of concurrent connections. It also spreads network I/O across multiple SelectorThread threads, so it reads and writes quickly and copes well with I/O-heavy workloads. TThreadedSelectorServer performs well in most scenarios, so if you really don't know which mode to pick, TThreadedSelectorServer is a safe choice.

Thrift Test Structures

This test uses a three-level header/line/distribution thrift structure; test_thrift_performance.thrift is as follows:

namespace java com.thrift.test.reply


service TestPerformance {
    TestReply SayHello (1:string name )
}

struct HeaderReply {
    1: list<LineReply> field1;
    2: i32 field2 ;
    3: string field3 ;
    4: string field4 ;
    5: i32 field5 ;
    6: string field6 ;
    7: i32 field7 ;
    8: string field8 ;
    9: i32 field9 ;
    10: string field10 ;
    11: i32 field11 ;
    12: string field12 ;
    13: i32 field13 ;
    14: string field14 ;
    15: i32 field15 ;
    16: string field16;
    17: i32 field17 ;
    18: string field18 ;

}

struct LineReply {
    1: list<DistributionReply>  field1 ;
    2: i32 field2 ;
    3: string field3 ;
    4: string field4 ;
    5: i32 field5 ;
    6: string field6 ;
    7: i32 field7 ;
    8: string field8 ;
    9: i32 field9 ;
    10: string field10 ;
    11: i32 field11 ;
    12: string field12 ;
    13: i32 field13 ;
    14: string field14 ;
    15: i32 field15 ;
    16: string field16;
    17: i32 field17 ;
    18: string field18 ;
}

struct DistributionReply {
    1: string field1;
    2: i32 field2 ;
    3: string field3 ;
    4: string field4 ;
    5: i32 field5 ;
    6: string field6 ;
    7: i32 field7 ;
    8: string field8 ;
    9: i32 field9 ;
    10: string field10 ;
    11: i32 field11 ;
    12: string field12 ;
    13: i32 field13 ;
    14: string field14 ;
    15: i32 field15 ;
    16: string field16;
    17: i32 field17 ;
    18: string field18 ;
}

struct TestReply {
    1: HeaderReply field1;
    2: i32 field2;
    3: string field3;
    4: string name;
}

Compiling with the Maven Plugin

Use the Maven plugin to generate code from the thrift file directly; the plugin lives at https://github.com/dtrott/maven-thrift-plugin.git. Be sure to change the thrift executable path, and make sure thrift is already installed on your machine.
Create a thrift folder under the project's source directory, place test_thrift_performance.thrift in it, and add the following configuration:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.thrift.test</groupId>
    <artifactId>thrift-client-test</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <compiler-plugin.version>3.7.0</compiler-plugin.version>
        <thrift.version>0.11.0</thrift.version>
    </properties>
    
    <dependencies>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.8.0-beta2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.thrift</groupId>
            <artifactId>libthrift</artifactId>
            <version>${thrift.version}</version>
        </dependency>

    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>${compiler-plugin.version}</version>
                <configuration>
                    <encoding>${project.build.sourceEncoding}</encoding>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.thrift.tools</groupId>
                <artifactId>maven-thrift-plugin</artifactId>
                <version>0.1.11</version>
                <configuration>
                    <thriftExecutable>/usr/local/bin/thrift</thriftExecutable>
                    <!-- Without this <generator> setting, the build fails with a "java hashcode option"
                         error, because the plugin would run `thrift -r -gen java:hashcode Hello.thrift`
                         and no hashcode option exists, possibly for historical version reasons
                    -->
                    <!--https://github.com/dtrott/maven-thrift-plugin/blob/thrift-maven-plugin-0.1.12/src/main/java/org/apache/thrift/maven/AbstractThriftMojo.java-->
                    <generator>java</generator>
                </configuration>
                <executions>
                    <execution>
                        <id>thrift-sources</id>
                        <phase>generate-sources</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>thrift-test-sources</id>
                        <phase>generate-test-sources</phase>
                        <goals>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

Run the command below, and the corresponding Java code will be generated under generated-sources:

mvn clean install 

Creating the Thrift Client

Create the Thrift client; note that the transport is a TNonblockingSocket:

        System.out.println("Client starting....");
        TNonblockingSocket transport = null;
        try {
            Date beginDate = new Date();
            System.out.println("begin:" + beginDate);
            AsyncMethodCallback asyncMethodCallback;
            TestPerformance.AsyncClient client;
            CountDownLatch countDownLatch = new CountDownLatch(THREAD_NUM);
            TAsyncClientManager clientManager = new TAsyncClientManager();
            TProtocolFactory protocolFactory = new TBinaryProtocol.Factory();
            //TTransportFactory transportFactory = new TFramedTransport.Factory(maxFrameSize);

            transport = new TNonblockingSocket("localhost", 9898,100000);
            client = new TestPerformance.AsyncClient(protocolFactory, clientManager, transport);
            asyncMethodCallback = new TestPerformanceCallback(countDownLatch, transport);
            //client = new TestPerformance.AsyncClient(protocolFactory, clientManager, transport);
            // i is the loop index: this snippet is excerpted from a loop issuing THREAD_NUM requests
            client.SayHello("world" + ":" + i, asyncMethodCallback);

            countDownLatch.await();
            Date endDate = new Date();

            System.out.println("end:" + endDate);

            System.out.println("time:" + (endDate.getTime() - beginDate.getTime()) / 1000 + " s");


        } catch (TTransportException e) {
            e.printStackTrace();
        } catch (TException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            if (null != transport) {
                transport.close();
            }
        }

Because the call is asynchronous, add a callback; of course, you could also declare the callback inline in the code above.


public class TestPerformanceCallback implements AsyncMethodCallback<TestReply> {
    private CountDownLatch countDownLatch;
    private TNonblockingSocket transport;

    public TestPerformanceCallback() {
    }

    public TestPerformanceCallback(CountDownLatch countDownLatch) {
        this.countDownLatch = countDownLatch;
    }

    public TestPerformanceCallback(CountDownLatch countDownLatch, TNonblockingSocket transport) {
        this.countDownLatch = countDownLatch;
        this.transport = transport;
    }

    @Override
    public void onComplete(TestReply response) {
        System.out.println(response.getName());
        countDownLatch.countDown();
    }

    @Override
    public void onError(Exception exception) {
        System.out.println(exception);
        countDownLatch.countDown();
    }
}

Creating the Server

Note that the server is created with createThreadedSelectorServer:

package com.thrift.test;

import com.thrift.test.reply.TestPerformance;
import com.thrift.test.service.impl.TestPerformanceImpl;
import org.apache.thrift.TProcessor;
import org.apache.thrift.TProcessorFactory;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.server.TServer;
import org.apache.thrift.transport.TTransportException;

/**
 * Demo class
 *
 * @author junqiang.xiao@hand-china.com
 * @date 2018/4/27
 */
public class TThreadedSelectorServerTest {
    public static void main(String[] args) {
        int port = 9898;
        System.out.println("Server starting....");
        TProcessor tprocessor = new TestPerformance.Processor<TestPerformance.Iface>(new TestPerformanceImpl());
        TProcessorFactory tProcessorFactory = new TProcessorFactory(tprocessor);

        try {
            TServer server = ThriftServerUtils.createThreadedSelectorServer(tProcessorFactory, new TBinaryProtocol.Factory(), port, 100000,
                    1024 * 1024, 16 * 1024 * 1024, 5,
                    4);
            server.serve();
        } catch (TTransportException e) {
            e.printStackTrace();
        }
    }
}

ThriftServerUtils wraps the various server creation modes:

package com.thrift.test;

import org.apache.thrift.TProcessorFactory;
import org.apache.thrift.protocol.TProtocolFactory;
import org.apache.thrift.server.THsHaServer;
import org.apache.thrift.server.TNonblockingServer;
import org.apache.thrift.server.TThreadPoolServer;
import org.apache.thrift.server.TThreadedSelectorServer;
import org.apache.thrift.transport.*;
import org.apache.thrift.server.TThreadedSelectorServer.Args.AcceptPolicy;
import java.util.concurrent.TimeUnit;

/**
 * Thrift Server class
 *
 * @author junqiang.xiao@hand-china.com
 * @date 2018/4/27
 */
public class ThriftServerUtils {
    public final static int DEFAULT_CLIENT_TIMEOUT_MS = 10000;
    public final static int DEFAULT_MAX_FRAMESIZE = 1024 * 1024;
    public final static int DEFAULT_TOTAL_MAX_READ_BUFFERSIZE = 16 * 1024 * 1024;
    public final static int DEFAULT_NUM_SELECTOR_THREADS = 2;
    public final static int DEFAULT_NUM_WORKER_THREADS;
    static {
        DEFAULT_NUM_WORKER_THREADS = Math.max(4, Runtime.getRuntime().availableProcessors());
    }

    /**
     * Creates a {@link TThreadPoolServer} server.
     *
     * <ul>
     * <li>1 dedicated thread for accepting connections</li>
     * <li>Once a connection is accepted, it gets scheduled to be processed by a
     * worker thread. The worker thread is tied to the specific client
     * connection until it's closed. Once the connection is closed, the worker
     * thread goes back to the thread pool.</li>
     * </ul>
     *
     * @param processorFactory
     * @param protocolFactory
     * @param port
     *            port number on which the Thrift server will listen
     * @param clientTimeoutMillisecs
     * @param maxFrameSize
     *            max size (in bytes) of a transport frame, supply {@code <=0}
     *            value to let the method choose a default {@code maxFrameSize}
     *            value (which is 1Mb)
     * @param maxWorkerThreads
     *            max number of worker threads, supply {@code <=0} value to let
     *            the method choose a default {@code maxWorkerThreads} value
     *            (which is
     *            {@code Math.max(4, Runtime.getRuntime().availableProcessors())}
     *            )
     * @return
     * @throws TTransportException
     */
    public static TThreadPoolServer createThreadPoolServer(TProcessorFactory processorFactory,
                                                           TProtocolFactory protocolFactory, int port, int clientTimeoutMillisecs,
                                                           int maxFrameSize, int maxWorkerThreads) throws TTransportException {
        if (clientTimeoutMillisecs <= 0) {
            clientTimeoutMillisecs = DEFAULT_CLIENT_TIMEOUT_MS;
        }
        if (maxFrameSize <= 0) {
            maxFrameSize = DEFAULT_MAX_FRAMESIZE;
        }
        if (maxWorkerThreads <= 0) {
            maxWorkerThreads = DEFAULT_NUM_WORKER_THREADS;
        }

        TServerTransport transport = new TServerSocket(port, clientTimeoutMillisecs);
        TTransportFactory transportFactory = new TFramedTransport.Factory(maxFrameSize);
        TThreadPoolServer.Args args = new TThreadPoolServer.Args(transport)
                .processorFactory(processorFactory).protocolFactory(protocolFactory)
                .transportFactory(transportFactory).minWorkerThreads(1)
                .maxWorkerThreads(maxWorkerThreads);
        TThreadPoolServer server = new TThreadPoolServer(args);
        return server;
    }

    /**
     * Creates a {@link TNonblockingServer} server.
     *
     * <p>
     * Non-blocking Thrift server that uses {@code java.nio.channels.Selector}
     * to handle multiple clients. However, messages are processed by the same
     * thread that calls {@code select()}.
     * </p>
     *
     * @param processorFactory
     * @param protocolFactory
     * @param port
     *            port number on which the Thrift server will listen
     * @param clientTimeoutMillisecs
     * @param maxFrameSize
     *            max size (in bytes) of a transport frame, supply {@code <=0}
     *            value to let the method choose a default {@code maxFrameSize}
     *            value (which is 1Mb)
     * @param maxReadBufferSize
     *            max size (in bytes) of read buffer, supply {@code <=0} value
     *            to let the method choose a default {@code maxReadBufferSize}
     *            value (which is 16Mb)
     * @return
     * @throws TTransportException
     */
    public static TNonblockingServer createNonBlockingServer(TProcessorFactory processorFactory,
                                                             TProtocolFactory protocolFactory, int port, int clientTimeoutMillisecs,
                                                             int maxFrameSize, long maxReadBufferSize) throws TTransportException {
        if (clientTimeoutMillisecs <= 0) {
            clientTimeoutMillisecs = DEFAULT_CLIENT_TIMEOUT_MS;
        }
        if (maxFrameSize <= 0) {
            maxFrameSize = DEFAULT_MAX_FRAMESIZE;
        }
        if (maxReadBufferSize <= 0) {
            maxReadBufferSize = DEFAULT_TOTAL_MAX_READ_BUFFERSIZE;
        }

        TNonblockingServerTransport transport = new TNonblockingServerSocket(port,
                clientTimeoutMillisecs);
        TTransportFactory transportFactory = new TFramedTransport.Factory(maxFrameSize);
        TNonblockingServer.Args args = new TNonblockingServer.Args(transport)
                .processorFactory(processorFactory).protocolFactory(protocolFactory)
                .transportFactory(transportFactory);
        args.maxReadBufferBytes = maxReadBufferSize;
        TNonblockingServer server = new TNonblockingServer(args);
        return server;
    }

    /**
     * Creates a {@link THsHaServer} server.
     *
     * <p>
     * Similar to {@link TNonblockingServer} but a separate pool of worker
     * threads is used to handle message processing.
     * </p>
     *
     * @param processorFactory
     * @param protocolFactory
     * @param port
     *            port number on which the Thrift server will listen
     * @param clientTimeoutMillisecs
     * @param maxFrameSize
     *            max size (in bytes) of a transport frame, supply {@code <=0}
     *            value to let the method choose a default {@code maxFrameSize}
     *            value (which is 1Mb)
     * @param maxReadBufferSize
     *            max size (in bytes) of read buffer, supply {@code <=0} value
     *            to let the method choose a default {@code maxReadBufferSize}
     *            value (which is 16Mb)
     * @param numWorkerThreads
     *            number of worker threads, supply {@code <=0} value to let the
     *            method choose a default {@code numWorkerThreads} value (which
     *            is
     *            {@code Math.max(4, Runtime.getRuntime().availableProcessors())}
     *            )
     * @return
     * @throws TTransportException
     */
    public static THsHaServer createHaHsServer(TProcessorFactory processorFactory,
                                               TProtocolFactory protocolFactory, int port, int clientTimeoutMillisecs,
                                               int maxFrameSize, long maxReadBufferSize, int numWorkerThreads)
            throws TTransportException {
        if (clientTimeoutMillisecs <= 0) {
            clientTimeoutMillisecs = DEFAULT_CLIENT_TIMEOUT_MS;
        }
        if (maxFrameSize <= 0) {
            maxFrameSize = DEFAULT_MAX_FRAMESIZE;
        }
        if (maxReadBufferSize <= 0) {
            maxReadBufferSize = DEFAULT_TOTAL_MAX_READ_BUFFERSIZE;
        }
        if (numWorkerThreads <= 0) {
            numWorkerThreads = DEFAULT_NUM_WORKER_THREADS;
        }

        TNonblockingServerTransport transport = new TNonblockingServerSocket(port,
                clientTimeoutMillisecs);
        TTransportFactory transportFactory = new TFramedTransport.Factory(maxFrameSize);
        THsHaServer.Args args = new THsHaServer.Args(transport).processorFactory(processorFactory)
                .protocolFactory(protocolFactory).transportFactory(transportFactory)
                .workerThreads(numWorkerThreads).stopTimeoutVal(60)
                .stopTimeoutUnit(TimeUnit.SECONDS);
        args.maxReadBufferBytes = maxReadBufferSize;
        THsHaServer server = new THsHaServer(args);
        return server;
    }

    /**
     * Creates a {@link TThreadedSelectorServer} server.
     *
     * <p>
     * Similar to {@link THsHaServer} but it uses 2 thread pools: one for
     * handling network I/O (e.g. accepting client connections), and one for
     * handling messages, as in {@link THsHaServer}.
     * </p>
     *
     * @param processorFactory
     * @param protocolFactory
     * @param port
     *            port number on which the Thrift server will listen
     * @param clientTimeoutMillisecs
     * @param maxFrameSize
     *            max size (in bytes) of a transport frame, supply {@code <=0}
     *            value to let the method choose a default {@code maxFrameSize}
     *            value (which is 1Mb)
     * @param maxReadBufferSize
     *            max size (in bytes) of read buffer, supply {@code <=0} value
     *            to let the method choose a default {@code maxReadBufferSize}
     *            value (which is 16Mb)
     * @param numSelectorThreads
     *            number of selector threads, supply {@code <=0} value to let
     *            the method choose a default {@code numSelectorThreads} value
     *            (which is {@code 2} )
     * @param numWorkerThreads
     *            number of worker threads, supply {@code <=0} value to let the
     *            method choose a default {@code numWorkerThreads} value (which
     *            is
     *            {@code Math.max(4, Runtime.getRuntime().availableProcessors())}
     *            )
     * @return
     * @throws TTransportException
     */
    public static TThreadedSelectorServer createThreadedSelectorServer(
            TProcessorFactory processorFactory, TProtocolFactory protocolFactory, int port,
            int clientTimeoutMillisecs, int maxFrameSize, long maxReadBufferSize,
            int numSelectorThreads, int numWorkerThreads) throws TTransportException {
        if (clientTimeoutMillisecs <= 0) {
            clientTimeoutMillisecs = DEFAULT_CLIENT_TIMEOUT_MS;
        }
        if (maxFrameSize <= 0) {
            maxFrameSize = DEFAULT_MAX_FRAMESIZE;
        }
        if (maxReadBufferSize <= 0) {
            maxReadBufferSize = DEFAULT_TOTAL_MAX_READ_BUFFERSIZE;
        }
        if (numSelectorThreads <= 0) {
            numSelectorThreads = DEFAULT_NUM_SELECTOR_THREADS;
        }
        if (numWorkerThreads <= 0) {
            numWorkerThreads = DEFAULT_NUM_WORKER_THREADS;
        }

        TNonblockingServerTransport transport = new TNonblockingServerSocket(port,
                clientTimeoutMillisecs);
        TTransportFactory transportFactory = new TFramedTransport.Factory(maxFrameSize);
        TThreadedSelectorServer.Args args = new TThreadedSelectorServer.Args(transport)
                .processorFactory(processorFactory).protocolFactory(protocolFactory)
                .transportFactory(transportFactory).workerThreads(numWorkerThreads)
                .acceptPolicy(AcceptPolicy.FAIR_ACCEPT).acceptQueueSizePerThread(100000)
                .selectorThreads(numSelectorThreads);
        args.maxReadBufferBytes = maxReadBufferSize;
        TThreadedSelectorServer server = new TThreadedSelectorServer(args);
        return server;
    }
}

Start the server and client and observe the results

For this run, the example was modified so that five threads send requests concurrently. (The elapsed time prints as 0 s because the millisecond difference is integer-divided by 1000.)

Client starting....
begin:Tue May 08 13:40:27 CST 2018
world:2
world:3
world:1
world:0
world:4
end:Tue May 08 13:40:28 CST 2018
time:0 s