Event Handling Patterns
*Pattern-Oriented Software Architecture, Volume 2: Patterns for Concurrent and Networked Objects* summarizes four basic patterns for event handling in widespread use today: the Reactor pattern, the Proactor pattern, the Asynchronous Completion Token pattern, and the Acceptor-Connector pattern.
- Reactor pattern: this pattern structures an event-driven application so that service requests arriving from one or more clients can be demultiplexed and dispatched. It inverts the application's flow of control, following the Hollywood Principle ("Don't call us, we'll call you"): when an event becomes ready, the framework invokes the callback the application registered for it, so the application only has to implement concrete event handlers that cooperate with the demultiplexing and dispatching machinery. Although the pattern is relatively intuitive, it has performance limits: it cannot efficiently serve very large numbers of clients or long-running client requests, because all event handlers are serialized in the demultiplexing layer. Many reactor variants exist to improve on this.
- Proactor pattern: lets an event-driven application efficiently demultiplex and dispatch service requests triggered by the completion of asynchronous operations, which in suitable situations yields a concurrency advantage. In this pattern the application, represented by its clients and completion handlers, is the proactive party. Unlike the reactor pattern, which passively waits for indication events and reacts to them, clients and completion handlers in the proactor pattern proactively initiate one or more asynchronous operation requests on an asynchronous operation processor, driving the application's internal control and data flow. When an asynchronous operation completes, the asynchronous operation processor cooperates with a designated proactor component to demultiplex the resulting completion event to the associated completion handler and dispatch that handler's callback; after processing a completion event, a completion handler can proactively issue the next asynchronous operation request. The limitation is that asynchronous operations require operating-system support; where the OS lacks it, they must be emulated by other means such as threads.
- Asynchronous Completion Token pattern: lets an application efficiently demultiplex and process the responses of asynchronous operations it invokes on services, improving the efficiency of asynchronous processing; it is mainly an optimization of the demultiplexing step of the proactor pattern.
- Acceptor-Connector pattern: often used together with the reactor pattern, it decouples connection establishment and service initialization between peer services in a networked system from the processing performed afterwards, allowing applications to configure their connection topology independently of the services they provide.
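The Asynchronous Completion Token idea above can be sketched in a few lines. In this minimal, hypothetical example the "token" carried with each asynchronous request is simply the completion callback itself, so when the completion event comes back no lookup is needed to find the right handler; all names here are illustrative, not from any real framework.

```python
# Minimal sketch of the Asynchronous Completion Token pattern.
# The token attached to each request is the completion callback itself,
# so the initiator dispatches completions without any lookup table.
import threading
import queue

completion_queue = queue.Queue()  # completion events flow back through here


def async_service(data, token):
    """Hypothetical service: does its work on a worker thread, then
    posts (token, result) onto the completion queue."""
    def work():
        completion_queue.put((token, data.upper()))
    threading.Thread(target=work).start()


def run_once():
    # Demultiplexing is trivial: just invoke the token that came back.
    token, result = completion_queue.get()
    token(result)


results = []
async_service("ping", results.append)  # the token routes the completion to `results`
run_once()
print(results)  # -> ['PING']
```

The point is that the initiator chooses the token when it issues the request, so the completion path costs O(1) regardless of how many operations are in flight.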
This article focuses on the most widely used of these, the reactor pattern, which demultiplexes events and handles each kind of event through a different callback.
The Reactor Pattern
Most high-performance server implementations that address the C10k problem are built on the reactor pattern, using I/O multiplexing to process requests. Handling a network request mainly involves operations such as connect, accept, read, and write: accepting connections, receiving request data, and sending back the results. Let us first look at the most basic implementation of the pattern.
Single-Threaded Reactor
A sequence diagram (not reproduced here) would simply show that in the reactor pattern, the execution of every action is driven by a read event or a write event.
The server code in single-threaded mode is relatively simple, as shown below:
import selectors
import socket

selector = selectors.DefaultSelector()


def application():
    # Placeholder business logic: every request gets the same body.
    return "test response"


class RequestHandler(object):
    def __init__(self, stream, address, server):
        self.application = application
        self.stream = stream
        self.stream.setblocking(False)
        self.address = address
        self.server = server
        self._recv_buff = ""
        self._write_buff = b""
        self.state = selectors.EVENT_READ
        # Register this connection for read events on the shared selector.
        selector.register(self.stream, selectors.EVENT_READ, self._handle_event)

    def parse_request(self):
        try:
            response = self.application()
            resp = "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: {0}\r\n\r\n{1}".format(len(response), response)
        except Exception:
            response = "error"
            resp = "HTTP/1.1 500 Internal Server Error\r\nContent-Type: text/plain\r\nContent-Length: {0}\r\n\r\n{1}".format(len(response), response)
        self._write_buff += resp.encode(encoding="utf-8")

    def _handle_event(self, fd, mask):
        if mask & selectors.EVENT_READ:
            self._handle_read()
        elif mask & selectors.EVENT_WRITE:
            self._handle_write()
        # Re-register for whichever events still have pending data.
        state = 0
        if self._recv_buff:
            state |= selectors.EVENT_READ
        if self._write_buff:
            state |= selectors.EVENT_WRITE
        if state != 0 and state != self.state:
            self.state = state
            self.modify_state(state)

    def _handle_read(self):
        data = self.stream.recv(1024)
        if data:
            self._recv_buff += data.decode("utf-8")
            self.parse_request()
        else:
            # An empty read means the peer closed the connection.
            self._handle_close()

    def modify_state(self, state):
        selector.modify(self.stream, state, self._handle_event)

    def _handle_write(self):
        while self._write_buff:
            try:
                length = self.stream.send(self._write_buff)
                self._write_buff = self._write_buff[length:]
            except Exception as e:
                print("write error {0}".format(e))
                break  # stop instead of spinning when the socket is not writable

    def _handle_close(self):
        print("handle close")
        selector.unregister(self.stream)
        try:
            self.stream.close()
        except Exception:
            pass


class Server(object):
    address_family = socket.AF_INET
    socket_type = socket.SOCK_STREAM
    request_queue_size = 5

    def __init__(self, server_bind, handle_class=RequestHandler):
        self.__shutdown_request = False
        self.allow_reuse_address = True
        self.socket = None
        self.handle_class = handle_class
        self.server_address = server_bind
        self.socket = socket.socket(self.address_family,
                                    self.socket_type)
        self.server_bind()

    def server_bind(self):
        if self.allow_reuse_address:
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socket.bind(self.server_address)
        self.server_address = self.socket.getsockname()
        self.socket.listen(self.request_queue_size)

    def serve_forever(self, poll_interval=0.5):
        # The listening socket is itself just another event source.
        selector.register(self.socket, selectors.EVENT_READ, self._handle_request_noblock)
        while True:
            ready = selector.select(poll_interval)
            if self.__shutdown_request:
                break
            for key, mask in ready:
                callback = key.data
                callback(key.fileobj, mask)

    def _handle_request_noblock(self, fd, mask):
        try:
            conn, address = self.socket.accept()
        except Exception:
            return
        try:
            self.handle_class(conn, address, self)
        except Exception as e:
            print(" handle_class Error {0}".format(e))


def main():
    server = Server(("127.0.0.1", 5555))
    server.serve_forever()


if __name__ == '__main__':
    main()
This code is a simple single-threaded implementation of the reactor pattern. After running the script, visiting http://127.0.0.1:5555 from a terminal or a browser returns the following:
curl 127.0.0.1:5555
test response
The returned line is the content produced by the application function in the script. Since the script is only meant to illustrate the principle, it does not parse the data according to the HTTP standard; it simply returns fixed data. The structure also shows that all responses are executed serially, with every event handled inside the single event-driven loop. Let us benchmark it to see the performance.
wrk -t4 -c1024 -d90s -T5 --latency http://127.0.0.1:5555
Running 2m test @ http://127.0.0.1:5555
4 threads and 1024 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 59.51ms 14.10ms 225.26ms 75.76%
Req/Sec 3.88k 448.64 5.62k 72.11%
Latency Distribution
50% 63.10ms
75% 66.94ms
90% 73.14ms
99% 83.63ms
1388445 requests in 1.50m, 103.28MB read
Socket errors: connect 0, read 1887, write 35, timeout 0
Requests/sec: 15421.53
Transfer/sec: 1.15MB
Multiple Worker Threads with a Single Event Loop
In the single-threaded reactor, one thread both drives the events and executes the business logic during dispatch. Here we instead use multiple threads to wait for incoming requests, while keeping a single event loop for event driving.
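The queue-based handoff this variant relies on can be shown in isolation: one producer (the accept loop) puts work items on a thread-safe queue, and a fixed pool of consumer threads takes items off and processes them. This is a minimal sketch with illustrative names; the `None` sentinel shutdown is an assumption, not part of the server code that follows.

```python
# Standalone sketch of a queue-based worker pool: a producer enqueues
# work items and N worker threads consume and process them.
import queue
from threading import Thread

work_queue = queue.Queue()
results = queue.Queue()


def worker():
    while True:
        item = work_queue.get()
        if item is None:        # sentinel value: shut this worker down
            break
        results.put(item * 2)   # stand-in for handling a connection


threads = [Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()

for n in range(5):
    work_queue.put(n)           # producer side, e.g. the accept loop
for _ in threads:
    work_queue.put(None)        # one sentinel per worker
for t in threads:
    t.join()

out = []
while not results.empty():
    out.append(results.get())
print(sorted(out))  # -> [0, 2, 4, 6, 8]
```

`queue.Queue` handles all the locking internally, which is why the server below can hand accepted connections from the event loop to workers without extra synchronization.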
import selectors
import socket
import queue
from threading import Thread

selector = selectors.DefaultSelector()


def application():
    return "test response"


class RequestHandler(object):
    def __init__(self, stream, address, server):
        self.application = application
        self.stream = stream
        self.stream.setblocking(False)
        self.address = address
        self.server = server
        self._recv_buff = ""
        self._write_buff = b""
        self.state = selectors.EVENT_READ
        selector.register(self.stream, selectors.EVENT_READ, self._handle_event)

    def parse_request(self):
        try:
            response = self.application()
            resp = "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: {0}\r\n\r\n{1}".format(len(response), response)
        except Exception:
            response = "error"
            resp = "HTTP/1.1 500 Internal Server Error\r\nContent-Type: text/plain\r\nContent-Length: {0}\r\n\r\n{1}".format(len(response), response)
        self._write_buff += resp.encode(encoding="utf-8")

    def _handle_event(self, fd, mask):
        if mask & selectors.EVENT_READ:
            self._handle_read()
        elif mask & selectors.EVENT_WRITE:
            self._handle_write()
        state = 0
        if self._recv_buff:
            state |= selectors.EVENT_READ
        if self._write_buff:
            state |= selectors.EVENT_WRITE
        if state != 0 and state != self.state:
            self.state = state
            self.modify_state(state)

    def _handle_read(self):
        data = self.stream.recv(1024)
        if data:
            self._recv_buff += data.decode("utf-8")
            self.parse_request()
        else:
            self._handle_close()

    def modify_state(self, state):
        selector.modify(self.stream, state, self._handle_event)

    def _handle_write(self):
        while self._write_buff:
            try:
                length = self.stream.send(self._write_buff)
                self._write_buff = self._write_buff[length:]
            except Exception as e:
                print("write error {0}".format(e))
                break  # stop instead of spinning when the socket is not writable

    def _handle_close(self):
        print("handle close")
        selector.unregister(self.stream)
        try:
            self.stream.close()
        except Exception:
            pass


class Server(object):
    address_family = socket.AF_INET
    socket_type = socket.SOCK_STREAM
    request_queue_size = 5

    def __init__(self, server_bind, handle_class=RequestHandler):
        self.__shutdown_request = False
        self.allow_reuse_address = True
        self.socket = None
        self.handle_class = handle_class
        self.server_address = server_bind
        self.socket = socket.socket(self.address_family,
                                    self.socket_type)
        self.server_bind()
        self.work_queue = queue.Queue()
        self.start_worker()

    def start_worker(self):
        # Start a fixed pool of worker threads that take accepted connections
        # off the queue and register them on the (shared) event loop.
        for i in range(10):
            t = Thread(target=self.spawn_worker, args=(i, ))
            t.start()

    def spawn_worker(self, num):
        while not self.__shutdown_request:
            try:
                conn, address = self.work_queue.get()
            except Exception as e:
                print("spawn_worker get {0}".format(e))
                return
            print("worker thread num : {0}".format(num))
            try:
                self.handle_class(conn, address, self)
            except Exception as e:
                print(" handle_class Error {0}".format(e))

    def server_bind(self):
        if self.allow_reuse_address:
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socket.bind(self.server_address)
        self.server_address = self.socket.getsockname()
        self.socket.listen(self.request_queue_size)

    def serve_forever(self, poll_interval=0.5):
        selector.register(self.socket, selectors.EVENT_READ, self._handle_request_noblock)
        while True:
            ready = selector.select(poll_interval)
            if self.__shutdown_request:
                break
            for key, mask in ready:
                callback = key.data
                callback(key.fileobj, mask)

    def _handle_request_noblock(self, fd, mask):
        try:
            conn, address = self.socket.accept()
        except Exception:
            return
        # Hand the accepted connection to a worker thread via the queue.
        self.work_queue.put((conn, address))


def main():
    server = Server(("127.0.0.1", 5555))
    server.serve_forever()


if __name__ == '__main__':
    main()
Adding a thread pool addresses concurrent handling of client connections, but because of Python's GIL the thread-pool approach may not improve performance much. Moreover, the thread-safe queue added to the design introduces extra contention overhead under multithreading. The benchmark results of the modified version are shown below; compared with the single-threaded version, the multithreaded solution is in fact slightly slower.
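The GIL point can be made concrete with a small experiment: a CPU-bound function gains nothing from threads, because only one thread executes Python bytecode at a time. The timings printed are illustrative and will vary by machine; on CPython the threaded run is typically no faster than the serial one.

```python
# Illustrative check that threads do not speed up CPU-bound Python code
# under the GIL: only one thread executes bytecode at any moment.
import time
from threading import Thread


def count(n):
    # Pure-Python busy loop, so the GIL is held the whole time.
    while n > 0:
        n -= 1


N = 2_000_000

start = time.perf_counter()
count(N)
serial = time.perf_counter() - start

start = time.perf_counter()
threads = [Thread(target=count, args=(N // 2,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print("serial {:.3f}s threaded {:.3f}s".format(serial, threaded))
```

This is also why thread pools in Python servers help mainly with blocking I/O, not with CPU-bound request handling.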
wrk -t4 -c1024 -d90s -T5 --latency http://127.0.0.1:5555
Running 2m test @ http://127.0.0.1:5555
4 threads and 1024 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 58.89ms 17.19ms 242.36ms 74.64%
Req/Sec 3.77k 446.73 5.53k 70.19%
Latency Distribution
50% 64.36ms
75% 68.54ms
90% 74.92ms
99% 85.79ms
1349814 requests in 1.50m, 100.41MB read
Socket errors: connect 0, read 1970, write 60, timeout 0
Requests/sec: 14987.46
Transfer/sec: 1.11MB
Multiple Event Loops with Worker Threads
In this variant, several additional event loops are created. The main event loop only accepts new connections; all subsequent interaction on each accepted connection is driven by one of the sub event loops, which should make event driving more efficient than a single loop.
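The code below assigns each new connection to a sub-loop with `random.randint`. A thread-safe round-robin chooser, sketched here with purely hypothetical names, is one alternative that spreads connections more evenly across loops:

```python
# Sketch of a thread-safe round-robin chooser for sub event loops,
# an alternative to picking a sub-selector at random.
from itertools import count
from threading import Lock


class RoundRobin(object):
    def __init__(self, items):
        self._items = list(items)
        self._counter = count()
        self._lock = Lock()

    def next(self):
        # The lock makes the pick safe when many worker threads call next().
        with self._lock:
            i = next(self._counter)
        return self._items[i % len(self._items)]


loops = ["loop-a", "loop-b", "loop-c"]  # stand-ins for sub-selectors
rr = RoundRobin(loops)
picks = [rr.next() for _ in range(6)]
print(picks)  # -> ['loop-a', 'loop-b', 'loop-c', 'loop-a', 'loop-b', 'loop-c']
```

Random assignment is simpler but can leave some loops idle while others are overloaded; round-robin guarantees an even spread at the cost of a shared counter.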
import selectors
import socket
import queue
from threading import Thread, Lock
import random

selector = selectors.DefaultSelector()


def application():
    return "test response"


class RequestHandler(object):
    def __init__(self, stream, address, server, sel):
        self.application = application
        self.stream = stream
        self.stream.setblocking(False)
        self.address = address
        self.server = server
        self._recv_buff = ""
        self._write_buff = b""
        self.state = selectors.EVENT_READ
        # Each handler registers on the sub-selector it was assigned to.
        self.sel = sel
        self.sel.register(self.stream, selectors.EVENT_READ, self._handle_event)

    def parse_request(self):
        try:
            response = self.application()
            resp = "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: {0}\r\n\r\n{1}".format(len(response), response)
        except Exception:
            response = "error"
            resp = "HTTP/1.1 500 Internal Server Error\r\nContent-Type: text/plain\r\nContent-Length: {0}\r\n\r\n{1}".format(len(response), response)
        self._write_buff += resp.encode(encoding="utf-8")

    def _handle_event(self, fd, mask):
        if mask & selectors.EVENT_READ:
            self._handle_read()
        elif mask & selectors.EVENT_WRITE:
            self._handle_write()
        state = 0
        if self._recv_buff:
            state |= selectors.EVENT_READ
        if self._write_buff:
            state |= selectors.EVENT_WRITE
        if state != 0 and state != self.state:
            self.state = state
            self.modify_state(state)

    def _handle_read(self):
        data = self.stream.recv(1024)
        if data:
            self._recv_buff += data.decode("utf-8")
            self.parse_request()
        else:
            self._handle_close()

    def modify_state(self, state):
        self.sel.modify(self.stream, state, self._handle_event)

    def _handle_write(self):
        while self._write_buff:
            try:
                length = self.stream.send(self._write_buff)
                self._write_buff = self._write_buff[length:]
            except Exception as e:
                print("write error {0}".format(e))
                break  # stop instead of spinning when the socket is not writable

    def _handle_close(self):
        print("handle close")
        self.sel.unregister(self.stream)
        try:
            self.stream.close()
        except Exception:
            pass


class Server(object):
    address_family = socket.AF_INET
    socket_type = socket.SOCK_STREAM
    request_queue_size = 5

    def __init__(self, server_bind, handle_class=RequestHandler):
        self.__shutdown_request = False
        self.allow_reuse_address = True
        self.socket = None
        self.handle_class = handle_class
        self.server_address = server_bind
        self.socket = socket.socket(self.address_family,
                                    self.socket_type)
        self.server_bind()
        self.work_queue = queue.Queue()
        # Create the sub event loops before starting the workers that use them.
        self.sels = []
        self.lock = Lock()
        self.start_sels()
        self.start_worker()

    def start_sels(self):
        for i in range(5):
            t = Thread(target=self.sub_forever)
            t.start()

    def start_worker(self):
        for i in range(10):
            t = Thread(target=self.spawn_worker, args=(i, ))
            t.start()

    def spawn_worker(self, num):
        while not self.__shutdown_request:
            try:
                conn, address = self.work_queue.get()
            except Exception as e:
                print("spawn_worker get {0}".format(e))
                return
            print("worker thread num : {0}".format(num))
            # Pick a sub-selector at random; guard the list with the lock since
            # the sub-loop threads append to it while starting up.
            with self.lock:
                rand_index_sel = random.randint(0, len(self.sels) - 1)
                sel = self.sels[rand_index_sel]
            print("random sels index : {0}".format(rand_index_sel))
            try:
                self.handle_class(conn, address, self, sel)
            except Exception as e:
                print(" handle_class Error {0}".format(e))

    def server_bind(self):
        if self.allow_reuse_address:
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socket.bind(self.server_address)
        self.server_address = self.socket.getsockname()
        self.socket.listen(self.request_queue_size)

    def sub_forever(self, poll_interval=0.5):
        # Each sub-thread owns and runs an independent selector loop.
        selector_sub = selectors.DefaultSelector()
        print("start sub_selector_sub")
        with self.lock:
            self.sels.append(selector_sub)
            print("current sels ", self.sels)
        while True:
            ready = selector_sub.select(poll_interval)
            if self.__shutdown_request:
                break
            for key, mask in ready:
                print("sub ready : {0}".format(key))
                callback = key.data
                callback(key.fileobj, mask)

    def serve_forever(self, poll_interval=0.5):
        # The main loop only accepts new connections.
        selector.register(self.socket, selectors.EVENT_READ, self._handle_request_noblock)
        while True:
            ready = selector.select(poll_interval)
            if self.__shutdown_request:
                break
            for key, mask in ready:
                print("main selector events : {0}".format(key))
                callback = key.data
                callback(key.fileobj, mask)

    def _handle_request_noblock(self, fd, mask):
        try:
            conn, address = self.socket.accept()
        except Exception:
            return
        self.work_queue.put((conn, address))


def main():
    server = Server(("127.0.0.1", 5555))
    server.serve_forever()


if __name__ == '__main__':
    main()
In this variant, several sub-threads are added, each of which initializes and runs its own event loop; the sub event loops are independent of one another, which is intended to improve event-dispatching responsiveness.
wrk -t4 -c1024 -d90s -T5 --latency http://127.0.0.1:5555
Running 2m test @ http://127.0.0.1:5555
4 threads and 1024 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 181.72ms 57.96ms 471.36ms 72.76%
Req/Sec 1.19k 280.01 2.27k 71.15%
Latency Distribution
50% 194.47ms
75% 216.44ms
90% 239.99ms
99% 303.04ms
425025 requests in 1.50m, 31.62MB read
Socket errors: connect 0, read 2101, write 221, timeout 0
Requests/sec: 4719.33
Transfer/sec: 359.48KB
Judging from the benchmark, this crude rewrite clearly performs much worse. Using this pattern well would still require many optimizations, and because several extra Python threads now run concurrently, scheduling overhead has only increased. The response performance of this variant could be optimized further when time permits.
Summary
This article surveyed some common examples of the reactor pattern. The different variants respond differently, and so their performance characteristics differ as well. The examples here only illustrate the principles; concrete optimizations were not a focus, and the sample code may contain mistakes. The single-threaded reactor is currently the most widely used variant; Redis's event loop, for example, adopts it. Given the limits of my knowledge, corrections are welcome.