I wrote a simple coroutine crawler to scrape Bilibili user profiles. The code is below:
import requests
import re
import json
import datetime
import asyncio

def get_info(uid):
    url_info = "http://space.bilibili.com/ajax/member/GetInfo?mid="  # basic profile info
    uid = str(uid)
    # requests.get is blocking, so it runs in the loop's default thread pool
    return loop.run_in_executor(None, requests.get, url_info + uid)

async def user_info(num):
    for uid in range(num, num + 10):
        info = await get_info(uid)
        info = json.loads(info.text)["data"]
        try:
            # print(datetime.datetime.fromtimestamp(info['regtime']))
            print("ok", uid)
            print(info)
        except UnicodeEncodeError as e:
            print("UnicodeEncodeError:", e)
        except TypeError:
            print(info)

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(asyncio.wait([user_info(x) for x in range(1, 1000, 10)]))
except Exception as e:
    print("Error:", e)
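One thing I noticed while writing this: run_in_executor with None uses the loop's default thread pool, whose worker count is fairly small, so only that many requests.get calls can actually be in flight at once no matter how many coroutines there are. A minimal stdlib-only sketch of installing a bigger pool (blocking_fetch is a hypothetical stand-in for requests.get, using time.sleep to mimic network latency):

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_fetch(uid):
    # Hypothetical stand-in for requests.get: sleeps to mimic ~0.2 s of latency.
    time.sleep(0.2)
    return uid  # a real fetch would return the HTTP response

async def main():
    loop = asyncio.get_running_loop()
    # Replace the small default pool so more blocking fetches can overlap.
    loop.set_default_executor(ThreadPoolExecutor(max_workers=50))
    tasks = [loop.run_in_executor(None, blocking_fetch, uid) for uid in range(50)]
    return await asyncio.gather(*tasks)  # preserves input order

results = asyncio.run(main())
print(len(results))
```

With 50 workers all 50 simulated fetches overlap and the whole batch takes roughly one request's latency; with the default pool they would be processed in waves.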
Scraping 1000 records takes about 50 seconds, and bandwidth usage is only around 220 Kbps. Is there any way to speed up the crawl? Bilibili has roughly 38 million users.
Thanks in advance.
PS: 1. I don't have machines for a distributed setup.
2. I know about multiprocessing, but I want to ask whether coroutines can be made more efficient.
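For reference, the pattern I'm considering trying next: each user_info coroutine above awaits its 10 uids one at a time, so launching every fetch concurrently with gather and capping in-flight requests with a Semaphore should help, ideally with a genuinely non-blocking HTTP client such as aiohttp instead of requests in a thread pool. A sketch of just the concurrency pattern, where fetch_info is hypothetical and asyncio.sleep stands in for the network call:

```python
import asyncio

async def fetch_info(uid, sem):
    # Hypothetical fetch: asyncio.sleep stands in for a non-blocking HTTP
    # request (e.g. an aiohttp session.get), ~0.1 s of simulated latency.
    async with sem:                # cap the number of in-flight requests
        await asyncio.sleep(0.1)
        return uid                 # a real fetch would return the parsed JSON

async def crawl(uids, concurrency=100):
    sem = asyncio.Semaphore(concurrency)
    # Every fetch is scheduled at once; the semaphore throttles them.
    return await asyncio.gather(*(fetch_info(u, sem) for u in uids))

results = asyncio.run(crawl(range(1, 1001)))
print(len(results))
```

With 100 permits, 1000 simulated 0.1 s fetches finish in about 10 waves rather than sequentially, and the concurrency knob is one number instead of the batch-of-10 split above.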