Is scraping videos with Python faster with multithreading or with coroutines?

I've recently been learning Python web-scraping techniques and picked up the basic ideas behind multithreading and coroutines. Following tutorials from more experienced folks online, I wrote a few small crawlers for fun and got the hang of scraping text, images, and simple videos. Then I suddenly wanted to test whether downloading videos is faster with multithreading or with coroutines. So I set up a simple experiment: scrape one page of videos from Qiushibaike (about 25 videos) with a single thread, with multithreading, and with coroutines, and see which comes out on top.
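Each version below simply wraps its download workload in a pair of time.time() calls. As a rough sketch of that timing pattern (the run_timed helper is a made-up name for illustration, not part of the actual scripts), it boils down to:

 import time

 def run_timed(label, fn, *args):
     # made-up helper: time one download strategy from start to finish
     start = time.time()
     fn(*args)
     print(f"{label} took {time.time() - start:.2f}s")

 # e.g. run_timed("single thread", getVideo, 'https://www.qiushibaike.com/video/')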

The code is posted below; the core parts are largely the same across versions. Since I'm a beginner, the code is a bit rough, so please go easy on me.

Single-threaded:

 import requests
 from lxml import etree
 import time

 def getVideo(url):
     headers = {
         "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
     }
     html = requests.get(url=url, headers=headers).text
     tree = etree.HTML(html)
     div_list = tree.xpath('//*[@id="content"]/div/div[2]')
     for div in div_list:
         video_src_list = div.xpath('./div/video/source/@src')
         for video_src in video_src_list:
             name = video_src.rsplit("/", 1)[1]
             # the src attribute has no scheme, so prepend "http:" before downloading
             with open(f"video/{name}", mode='wb') as f:
                 f.write(requests.get("http:" + video_src, headers=headers).content)
                 print(f"{name} downloaded!")

 if __name__ == '__main__':
     t1 = time.time()
     url = 'https://www.qiushibaike.com/video/'
     getVideo(url)  # single-threaded download
     print(time.time() - t1)

Multithreaded:

 import requests
 from lxml import etree
 from concurrent.futures import ThreadPoolExecutor
 import time

 def getVideoSrcList(url):
     video_src_list = []
     headers = {
         "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
     }
     html = requests.get(url=url, headers=headers).text
     tree = etree.HTML(html)
     div_list = tree.xpath('//*[@id="content"]/div/div[2]')
     for div in div_list:
         # accumulate the srcs instead of overwriting the list on every iteration
         video_src_list += div.xpath('./div/video/source/@src')
     return video_src_list

 def getVideo(url):
     name = url.rsplit("/", 1)[1]
     with open(f"video/{name}", mode='wb') as f:
         f.write(requests.get("http:" + url).content)
         print(f"{name} downloaded!")

 if __name__ == '__main__':
     t1 = time.time()
     url = 'https://www.qiushibaike.com/video/'
     with ThreadPoolExecutor(50) as t:  # thread pool with 50 workers
         for src in getVideoSrcList(url):
             t.submit(getVideo, url=src)  # submit one download per video
     print("over")
     print(time.time() - t1)
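As an aside, since every video is an independent task, the same pool could also be driven with ThreadPoolExecutor.map instead of submitting futures one by one. Here's a quick sketch reusing the getVideoSrcList and getVideo functions above (just an illustration, not the version I actually timed):

 from concurrent.futures import ThreadPoolExecutor

 def download_all(page_url, workers=50):
     # map submits everything up front; the with-block waits for the pool to drain,
     # and consuming the iterator also surfaces any exceptions from the workers
     src_list = getVideoSrcList(page_url)
     with ThreadPoolExecutor(workers) as pool:
         list(pool.map(getVideo, src_list))

Either way, 50 workers for roughly 25 videos means every download starts immediately, so a smaller pool would behave much the same.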

Coroutines:

 import asyncio
 import aiohttp
 import aiofiles
 import time
 import requests
 from lxml import etree

 async def getVideo(url):
     tasks = []
     headers = {
         "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"
     }
     html = requests.get(url=url, headers=headers).text
     tree = etree.HTML(html)
     div_list = tree.xpath('//*[@id="content"]/div/div[2]')
     for div in div_list:
         video_src_list = div.xpath('./div/video/source/@src')
         for video_src in video_src_list:
             name = video_src.rsplit("/", 1)[1]
             # queue up one async download task per video
             tasks.append(asyncio.create_task(download(name, video_src)))
     if tasks:  # wait for every download to finish
         await asyncio.wait(tasks)

 async def download(name, src):
     async with aiohttp.ClientSession() as session:
         # the src string has no "http:" prefix, so prepend it
         async with session.get("http:" + src) as reqs:
             async with aiofiles.open(f"video/{name}", mode='wb') as f:
                 # save the video asynchronously; gotcha here: reqs.content raises
                 # an error, you have to use await reqs.read() instead
                 await f.write(await reqs.read())

 if __name__ == '__main__':
     t1 = time.time()
     url = 'https://www.qiushibaike.com/video/'
     # asyncio.run(getVideo(url))
     loop = asyncio.get_event_loop()
     loop.run_until_complete(getVideo(url))
     loop.close()
     print(time.time() - t1)
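About the gotcha noted in download(): unlike requests, where response.content is already the body as bytes, in aiohttp reqs.content is a StreamReader object, which is why you have to await reqs.read() to get the raw bytes. For large videos that same StreamReader can also be consumed chunk by chunk instead of holding the whole file in memory; here's a sketch of that alternative (just an illustration, not the script I timed):

 async def download_streaming(name, src):
     # write the response body in chunks via aiohttp's StreamReader
     async with aiohttp.ClientSession() as session:
         async with session.get("http:" + src) as reqs:
             async with aiofiles.open(f"video/{name}", mode='wb') as f:
                 async for chunk in reqs.content.iter_chunked(64 * 1024):
                     await f.write(chunk)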

I hit quite a few pitfalls writing the coroutine version and it took a lot of digging before it finally ran cleanly, which really wasn't easy. I deliberately added timing to each script to measure how long each approach takes, and found that multithreading used the least time, coroutines came second, and the single-threaded version took the longest, as shown below. Since the downloads are network-I/O bound (Python releases the GIL while waiting on the network), it makes sense that both concurrent versions easily beat the single thread; the small gap between threads and coroutines is probably just measurement noise, plus the coroutine version still fetches the listing page with synchronous requests.

Multithreading time: (screenshot of the timing output)

Coroutine time: (screenshot of the timing output)

Single-threaded time: (screenshot of the timing output)

Pointers from more experienced folks are welcome in the comments! You can also message me directly.
