TThreadedServer vs. TNonblockingServer

Reposted from: https://github.com/


Introduction

Which Thrift RPC server should MapKeeper use, TThreadedServer or TNonblockingServer? This benchmark compares 2 Thrift C++ RPC servers using StubServer. The focus of this benchmark is to test these 2 servers on a multi-core server with a limited number (<1000) of concurrent client connections.

TThreadedServer

TThreadedServer spawns a new thread for each client connection, and each thread remains alive until the client connection is closed. This means that if there are 1000 concurrent client connections, TThreadedServer needs to run 1000 threads simultaneously.
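The thread-per-connection model can be sketched with nothing but the Python standard library. This is a toy echo server, not Thrift; the names `serve` and `handle_client` are mine:

```python
import socket
import threading

def handle_client(conn: socket.socket) -> None:
    """Serve one client on its own dedicated thread until it disconnects."""
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:          # client closed the connection
                break
            conn.sendall(data)    # echo the request back

def serve(host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    """Accept loop in the TThreadedServer style: one new thread per connection."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()

    def accept_loop() -> None:
        while True:
            conn, _addr = srv.accept()
            # The thread lives exactly as long as the client connection does,
            # so 1000 concurrent clients means 1000 live threads.
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv
```

The key property is visible in `accept_loop`: thread count scales linearly with connection count, which is exactly why this model struggles past a few thousand clients.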

TNonblockingServer

TNonblockingServer has one thread dedicated for network I/O. The same thread can also process requests, or you can create a separate pool of worker threads for request processing. The server can handle many concurrent connections with a small number of threads since it doesn’t need to spawn a new thread for each connection.
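The single-I/O-thread model can likewise be sketched with the standard-library `selectors` module (again a toy echo server, not Thrift; function names are mine, and request processing is done in-line rather than handed to a worker pool):

```python
import selectors
import socket

# One selector multiplexes every connection on a single I/O thread
# (the TNonblockingServer style).
sel = selectors.DefaultSelector()

def accept(srv: socket.socket) -> None:
    """Register a newly accepted, non-blocking connection with the selector."""
    conn, _addr = srv.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn: socket.socket) -> None:
    """Read a request and echo it; a real server might hand this to a worker pool."""
    data = conn.recv(4096)
    if data:
        conn.sendall(data)
    else:
        sel.unregister(conn)
        conn.close()

def serve(host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()
    srv.setblocking(False)
    sel.register(srv, selectors.EVENT_READ, accept)
    return srv

def run_once(timeout=None) -> None:
    """One iteration of the event loop: dispatch ready sockets to their callbacks."""
    for key, _mask in sel.select(timeout):
        key.data(key.fileobj)
```

Because one thread services every connection, the thread count stays constant no matter how many clients connect; the flip side, as the benchmark below suggests, is that this one thread can become the bottleneck.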

TThreadPoolServer (not benchmarked here)

TThreadPoolServer is similar to TThreadedServer; each client connection gets its own dedicated server thread. It’s different from TThreadedServer in 2 ways:

  1. When a client closes its connection, the server thread returns to the thread pool for reuse.
  2. There is a limit on the number of threads; the pool won’t grow beyond that limit.

Clients hang if no thread is available in the pool, which makes TThreadPoolServer much more difficult to use than the other 2 servers.
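The pooled variant differs from the thread-per-connection sketch only in who supplies the thread. A minimal stdlib sketch (not Thrift; `pool_size` and the function names are mine) shows why clients hang when the pool is exhausted:

```python
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

def handle_client(conn: socket.socket) -> None:
    """Serve one client until it disconnects, then free the worker for reuse."""
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)

def serve(pool_size: int = 4, host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    """Accept loop in the TThreadPoolServer style: a fixed-size worker pool."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()
    pool = ThreadPoolExecutor(max_workers=pool_size)

    def accept_loop() -> None:
        while True:
            conn, _addr = srv.accept()
            # If all pool_size workers are busy with open connections, this
            # submission just queues -- the new client is accepted but never
            # served, i.e. it hangs.
            pool.submit(handle_client, conn)

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv
```

Since each worker is held for the full lifetime of a connection, the pool limit is effectively a cap on concurrent clients, not on concurrent requests.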

Configurations

Hardware

  • 2 x Xeon E5620 2.40GHz (HT enabled, 8 cores, 16 threads)

Operating System

  • RHEL Server 5.4, Linux 2.6.18-164.2.1.el5 x86_64, 64-bit

Software

  • Thrift 0.6.1
  • TNonblockingServer thread pool size: 32 threads
  • Client and server run on the same box.

YCSB Workload

  • Number of client threads: 300
  • Number of requests: 10 million
  • Request size: ~60 bytes
  • Response size: ~30 bytes

Results

In this benchmark, TThreadedServer performs much better than TNonblockingServer. CPU is maxed out with TThreadedServer, while TNonblockingServer only uses about 20% of the CPU time. My guess is that the single I/O thread is the bottleneck, so the worker threads are starved for work.

Conclusion

TThreadedServer seems like a better fit for MapKeeper since I’m not planning to support thousands of concurrent connections (yet). TNonblockingServer might be a better choice when you face the C10K problem, but you need to make sure the I/O thread doesn’t become the bottleneck. It would be an interesting project to add a new type of Thrift server with a single accept() thread and multiple worker threads handling network I/O and request processing. There is already an open JIRA for this feature in Java. Is anybody interested in working on a similar feature in C++?
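The proposed design (one accept() thread, multiple threads sharing network I/O and processing) can be sketched in stdlib Python. This is my own toy illustration of the idea, not the JIRA feature or any Thrift API; each I/O worker runs its own event loop, and the accept thread deals connections out round-robin via a per-worker queue:

```python
import queue
import selectors
import socket
import threading

def io_loop(inbox: "queue.Queue[socket.socket]") -> None:
    """One of several I/O workers; each multiplexes its share of connections."""
    sel = selectors.DefaultSelector()
    while True:
        # Adopt any connections the accept thread has handed to this worker.
        try:
            while True:
                conn = inbox.get_nowait()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
        except queue.Empty:
            pass
        for key, _mask in sel.select(timeout=0.05):
            conn = key.fileobj
            data = conn.recv(4096)
            if data:
                conn.sendall(data)      # echo as a stand-in for request processing
            else:
                sel.unregister(conn)
                conn.close()

def serve(n_workers: int = 4, host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    """Single accept() thread distributing connections round-robin to workers."""
    inboxes = [queue.Queue() for _ in range(n_workers)]
    for q in inboxes:
        threading.Thread(target=io_loop, args=(q,), daemon=True).start()
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()

    def accept_loop() -> None:
        i = 0
        while True:
            conn, _addr = srv.accept()
            inboxes[i % n_workers].put(conn)
            i += 1

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv
```

With N event loops instead of one, I/O work spreads across N cores, which is precisely the bottleneck the benchmark above points at.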

Below is a simple implementation, including client- and server-side code. Note that this is only a bare-bones example; a real production deployment would need to address many more issues such as error handling, security, and performance.

Client code:

```python
import sys
from PyQt5 import QtWidgets
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol
from ImageTransfer import ImageTransfer

class MyWindow(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()
        # Open a file dialog to pick the image file to send
        filePath, _ = QtWidgets.QFileDialog.getOpenFileName(
            self, "Open Image File", "", "Images (*.png *.xpm *.jpg *.bmp)")
        if not filePath:
            sys.exit()
        # Create the RPC client
        transport = TSocket.TSocket("localhost", 9090)
        transport = TTransport.TBufferedTransport(transport)
        protocol = TBinaryProtocol.TBinaryProtocol(transport)
        self.client = ImageTransfer.Client(protocol)
        try:
            # Connect to the RPC server
            transport.open()
            # Read the image file
            with open(filePath, "rb") as f:
                imageData = f.read()
            # Send the file name and image data to the server
            self.client.uploadImage(filePath.split("/")[-1], imageData)
            # Close the connection
            transport.close()
            QtWidgets.QMessageBox.information(self, "Info", "Image sent successfully.")
        except TTransport.TTransportException as e:
            QtWidgets.QMessageBox.critical(self, "Error", "Failed to send image: " + str(e))
            sys.exit()

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = MyWindow()
    sys.exit(app.exec_())
```

Server code:

```python
import sys
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol
from thrift.server import TServer
from ImageTransfer import ImageTransfer

class ImageTransferHandler(ImageTransfer.Iface):
    def uploadImage(self, fileName, imageData):
        # Save the uploaded image to disk
        with open(fileName, "wb") as f:
            f.write(imageData)
        print("Image saved to file", fileName)

if __name__ == "__main__":
    try:
        # Create the RPC server
        handler = ImageTransferHandler()
        processor = ImageTransfer.Processor(handler)
        transport = TSocket.TServerSocket(port=9090)
        tfactory = TTransport.TBufferedTransportFactory()
        pfactory = TBinaryProtocol.TBinaryProtocolFactory()
        server = TServer.TSimpleServer(processor, transport, tfactory, pfactory)
        print("Server started")
        # Run the server
        server.serve()
    except Exception as e:
        print("Error:", e)
        sys.exit(1)
```

Note that this example uses Thrift's TSimpleServer, a simple single-threaded server suited to testing and demos. In production you should use a more capable server such as TThreadedServer or TThreadPoolServer.