```
pawel@pawel-VPCEH390X ~/p/l/benchmarker> ./bench.py
<_gatheringfuture>
Task was destroyed but it is pending!
task: wait_for= cb=[gather.._done_callback(0)() at /usr/local/lib/python3.5/asyncio/tasks.py:602]>
Task was destroyed but it is pending!
task: wait_for= cb=[gather.._done_callback(1)() at /usr/local/lib/python3.5/asyncio/tasks.py:602]>
Task was destroyed but it is pending!
task: wait_for= cb=[gather.._done_callback(2)() at /usr/local/lib/python3.5/asyncio/tasks.py:602]>
Task was destroyed but it is pending!
task: wait_for= cb=[gather.._done_callback(3)() at /usr/local/lib/python3.5/asyncio/tasks.py:602]>
```
What happened? If you check the local logs, you will find that no request ever reached the server; in fact, no requests were made at all. The output first prints the <_gathering pending> object, then warns that pending tasks were destroyed. Once again: you forgot to await.
Changing
responses = asyncio.gather(*tasks)
to
responses = await asyncio.gather(*tasks)
solves the problem.
Lesson: whenever you are waiting for something, remember to await it.
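The forgotten-await bug above can be reproduced in a few lines. This is a minimal sketch (the `fetch` coroutine is a placeholder standing in for a real HTTP request, and `asyncio.run` from Python 3.7+ is used for brevity; on Python 3.5 you would use `loop.run_until_complete` instead):

```python
import asyncio

async def fetch(i):
    # placeholder for a real HTTP request
    await asyncio.sleep(0)
    return i

async def main():
    tasks = [fetch(i) for i in range(4)]
    # Writing "responses = asyncio.gather(*tasks)" without await would
    # only create a future-like object; the tasks would never run and
    # would be destroyed while still pending when the loop shuts down.
    responses = await asyncio.gather(*tasks)
    return responses

responses = asyncio.run(main())
print(responses)
```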
Sync vs. Async
Now for the main event. Let's check whether async is worth the extra coding effort: what is the difference in efficiency between a synchronous and an asynchronous client, and how many requests per minute can the async one make?
To find out, we first set up an asynchronous aiohttp server. The server returns the full HTML text of Mary Shelley's Frankenstein, and to each response it adds a random delay, from 0 up to a maximum of 3 seconds. This resembles a real app: while some apps respond with a fixed delay, in general every response takes a different amount of time.
The server code:
```python
#!/usr/local/bin/python3.5
import asyncio
from datetime import datetime
from aiohttp import web
import random

# set seed to ensure async and sync client get same distribution of delay values
# and tests are fair
random.seed(1)

async def hello(request):
    name = request.match_info.get("name", "foo")
    n = datetime.now().isoformat()
    delay = random.randint(0, 3)
    await asyncio.sleep(delay)
    headers = {"content_type": "text/html", "delay": str(delay)}
    # opening file is not async here, so it may block, to improve
    # efficiency of this you can consider using asyncio Executors
    # that will delegate file operation to separate thread or process
    # and improve performance
    # https://docs.python.org/3/library/asyncio-eventloop.html#executor
    # https://pymotw.com/3/asyncio/executors.html
    with open("frank.html", "rb") as html_body:
        print("{}: {} delay: {}".format(n, request.path, delay))
        response = web.Response(body=html_body.read(), headers=headers)
    return response

app = web.Application()
app.router.add_route("GET", "/{name}", hello)
web.run_app(app)
```
The synchronous client code:
```python
import requests

r = 100
url = "http://localhost:8080/{}"
for i in range(r):
    res = requests.get(url.format(i))
    delay = res.headers.get("DELAY")
    d = res.headers.get("DATE")
    print("{}:{} delay {}".format(d, res.url, delay))
```
On my machine, the code above took 2 minutes 45 seconds, while the async version needed only 3.48 seconds.
Interestingly, the async client's total time is very close to the longest single delay configured on the server. If you watch the output, you can see just how big the async client's advantage is. Some responses have 0 delay, others 3 seconds. A synchronous client blocks and waits on each one, and your machine does nothing in the meantime. The async client wastes no time: whenever a delay occurs, it goes off and does other work. You can see this in the logs as well: first the 0-delay responses arrive, then the 1-second ones, and finally the responses with the maximum delay.
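The "total time ≈ longest single delay" effect can be demonstrated without a network. In this sketch, `asyncio.sleep` stands in for the server's per-response delay (the `fake_request` coroutine and the scaled-down delay values are illustrative, not part of the original benchmark):

```python
import asyncio
import time

# stand-in delays, in seconds (scaled down so the demo runs fast)
delays = [0.0, 0.1, 0.2, 0.3, 0.0, 0.1, 0.2, 0.3]

async def fake_request(delay):
    # stand-in for an HTTP request whose server sleeps `delay` seconds
    await asyncio.sleep(delay)
    return delay

async def main():
    start = time.monotonic()
    await asyncio.gather(*(fake_request(d) for d in delays))
    return time.monotonic() - start

elapsed = asyncio.run(main())
# a synchronous client would take about sum(delays); the async client
# overlaps the waits, so elapsed is close to max(delays)
print("sum: %.1f  max: %.1f  elapsed: %.2f" % (sum(delays), max(delays), elapsed))
```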
Testing the limits
Now that we know async performs better, let's try to find its limits and make it crash. I'm going to send 1000 asynchronous requests; I'm curious how many my client can handle.
```
> time python3 bench.py
2.68user 0.24system 0:07.14elapsed 40%CPU (0avgtext+0avgdata 53704maxresident)k
0inputs+0outputs (0major+14156minor)pagefaults 0swaps
```
1000 requests took 7 seconds, a pretty good result. What about 10K? Unfortunately, it failed:
```
responses are <_gatheringfuture>
Traceback (most recent call last):
  File "/home/pawel/.local/lib/python3.5/site-packages/aiohttp/connector.py", line 581, in _create_connection
  File "/usr/local/lib/python3.5/asyncio/base_events.py", line 651, in create_connection
  File "/usr/local/lib/python3.5/asyncio/base_events.py", line 618, in create_connection
  File "/usr/local/lib/python3.5/socket.py", line 134, in __init__
OSError: [Errno 24] Too many open files
```
The traceback shows there are too many open files, which likely means too many open sockets. Why "files"? Sockets are just file descriptors, and the operating system limits how many can be open at once. How many is too many? Checking from Python, the limit turned out to be 1024. How do we get around this? A crude way is to raise the limit, but that doesn't sound like a great idea. A better way is to add some synchronization and cap the number of concurrent requests, so I used asyncio.Semaphore() with a maximum of 1000 concurrent tasks.
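The semaphore pattern looks roughly like this. This is a sketch, not the original benchmark code: `bound_fetch` and the `counter` dict are illustrative, `asyncio.sleep` stands in for the aiohttp request, and a small limit of 5 with 50 tasks is used so the bounded concurrency is easy to observe (the text uses a limit of 1000):

```python
import asyncio

async def bound_fetch(sem, i, counter):
    # acquire the semaphore before doing the work, release when done;
    # at most `limit` coroutines can be inside this block at once
    async with sem:
        counter["active"] += 1
        counter["peak"] = max(counter["peak"], counter["active"])
        await asyncio.sleep(0.01)  # stand-in for the HTTP request
        counter["active"] -= 1
        return i

async def main(n_requests, limit):
    sem = asyncio.Semaphore(limit)
    counter = {"active": 0, "peak": 0}
    await asyncio.gather(*(bound_fetch(sem, i, counter) for i in range(n_requests)))
    return counter["peak"]

# 50 tasks, but never more than 5 in flight at the same time
peak = asyncio.run(main(50, 5))
print("peak concurrency:", peak)
```

With the real client you would wrap the aiohttp request in `bound_fetch` the same way, keeping the number of simultaneously open sockets below the file-descriptor limit.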