Preface
I. Asynchronous Crawlers
requests.get is a blocking call: the rest of the program only runs after the request has completed, so fetching many URLs one after another is slow.
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36',
}
urls = [
    'url1',
    'url2',
    'url3',
]

def get_data(url):
    print('Fetching', url)
    response = requests.get(url=url, headers=headers)
    if response.status_code == 200:
        return response.content

def parser_content(content):
    print(len(content))

# Each get_data call blocks until the response arrives
for url in urls:
    content = get_data(url)
    if content is not None:
        parser_content(content)
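To see the cost of the blocking approach concretely, here is a minimal sketch that replaces the real network request with time.sleep (an assumption made so it runs without network access):

```python
import time

def fake_fetch(url):
    # Stand-in for a blocking requests.get call: each "request" takes 1 second
    time.sleep(1)
    return url.encode()

start = time.time()
# The fetches run one after another, so the delays add up
results = [fake_fetch(u) for u in ['url1', 'url2', 'url3']]
elapsed = time.time() - start
# 3 requests * 1 s each: roughly 3 seconds in total
print(f'{len(results)} pages in {elapsed:.1f} s')
```

With real requests the per-URL delay is the network round-trip instead of a fixed sleep, but the total still grows linearly with the number of URLs.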
II. Approaches to Asynchronous Crawling
1. Basic use of a thread pool
- Import the Pool class from the thread-pool module:
from multiprocessing.dummy import Pool
2. Instantiate a thread pool object
pool = Pool(thread_count)
3. The map method
pool.map(func, iterable)
import time
from multiprocessing.dummy import Pool

urls = ['a', 'b', 'c']

def get_page(url):
    print('Downloading', url)
    time.sleep(2)
    print(url, 'downloaded')

start_time = time.time()
# Instantiate a thread pool object
pool = Pool(4)
# Let the thread pool run get_page on the URLs concurrently
pool.map(get_page, urls)
pool.close()
pool.join()
end_time = time.time()
print('Total time:', end_time - start_time, 'seconds')
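The same pattern is also available in the standard library's concurrent.futures module, which handles shutdown via a context manager. A sketch of the same download simulation using ThreadPoolExecutor (a stdlib alternative, not part of the original text):

```python
import time
from concurrent.futures import ThreadPoolExecutor

urls = ['a', 'b', 'c']

def get_page(url):
    print('Downloading', url)
    time.sleep(2)
    print(url, 'downloaded')
    return url

start = time.time()
# The context manager waits for all tasks and shuts the pool down
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(get_page, urls))
elapsed = time.time() - start
# The three 2-second tasks run in parallel, so the total is about 2 s, not 6 s
print('Total time:', round(elapsed, 1), 'seconds')
```

Unlike multiprocessing.dummy.Pool, executor.map also collects each task's return value, which is convenient when the worker returns page content instead of just printing.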