Making requests with Python aiohttp and asyncio coroutines

Issuing requests from coroutines is much faster than doing so sequentially. The snippet below adds a retry mechanism, a configurable concurrency limit, and collection of the response body.

# -*- coding: utf-8 -*-
import aiohttp
import asyncio
from loguru import logger

headers = {
  'accept': 'application/json',
  'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8',
  'cache-control': 'no-cache',
  'content-type': 'application/json; charset=utf-8',
  'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36'
}

async def fetch(session, sem, url):
    for attempt in range(3):
        try:
            async with sem:  # limit concurrency with the semaphore
                async with session.get(url=url, headers=headers, timeout=aiohttp.ClientTimeout(total=10), proxy='http://****@http-dyn.abuyun.com:9020') as response:
                    response_text = await response.text()
                    if response.status != 200:
                        raise Exception(f'request status error: {response.status}')
                    return url, response_text
        except Exception as e:
            logger.error(f"Error on attempt {attempt + 1}: {url} {e}")
            await asyncio.sleep(1)  # wait one second before retrying
    return url, ''

async def main(domain_urls, concurrency=5):
    sem = asyncio.Semaphore(concurrency)  # create the semaphore that caps in-flight requests
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, sem, url) for url in domain_urls]
        for url, response_text in await asyncio.gather(*tasks):
            print(url)

urls = ['https://www.baidu.com', ...]
concurrency = 5  # concurrency limit
asyncio.run(main(urls, concurrency))  # asyncio.run replaces the deprecated get_event_loop pattern
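To see that the semaphore really caps concurrency without hitting the network, here is a minimal, self-contained sketch of the same pattern. The `worker` and `demo` names and the shared counter are illustrative stand-ins (not part of the original code); `asyncio.sleep` plays the role of the network call.

```python
import asyncio

async def worker(sem, counter, i):
    # acquire the semaphore before "working", mirroring the fetch() pattern
    async with sem:
        counter["active"] += 1
        counter["peak"] = max(counter["peak"], counter["active"])
        await asyncio.sleep(0.01)  # stand-in for the network request
        counter["active"] -= 1
        return i

async def demo(n_tasks=20, concurrency=5):
    sem = asyncio.Semaphore(concurrency)
    counter = {"active": 0, "peak": 0}
    # gather preserves input order, so results line up with the task indices
    results = await asyncio.gather(*(worker(sem, counter, i) for i in range(n_tasks)))
    return results, counter["peak"]

results, peak = asyncio.run(demo())
print(len(results), peak)  # → 20 5
```

With 20 tasks and a semaphore of 5, the peak number of simultaneously active workers never exceeds 5, which is exactly the guarantee `fetch` relies on.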