Crawl each contestant's Baidu Baike pictures and save them
Crawler workflow: read the contestants' JSON file, request each contestant's Baidu Baike page, follow the summary-picture link to the photo album page, collect the image URLs there, and finally download every image into a folder named after the contestant.
Crawler code (course assignment)
import json
import requests
from bs4 import BeautifulSoup

def crawl_pic_urls():
    '''
    Crawl each contestant's Baidu Baike pictures and save them.
    '''
    # 'today' (a date string) and down_pic() are defined elsewhere in the assignment.
    with open('work/' + today + '.json', 'r', encoding='UTF-8') as file:
        json_array = json.loads(file.read())

    # Pretend to be a browser so the request is not blocked by anti-crawling checks.
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
    }

    for star in json_array:
        name = star['name']
        link = star['link']

        # !!! Crawl each contestant's pictures and collect every image URL in the list pic_urls !!!
        # Step 1: fetch the contestant's Baidu Baike page and find the link behind the
        # summary picture, which leads to the contestant's photo album page.
        response = requests.get(link, headers=headers)
        soup = BeautifulSoup(response.text, 'lxml')
        new_link = f"https://baike.baidu.com{soup.find('div', {'class': 'summary-pic'}).a.get('href')}"

        # Step 2: fetch the album page and collect the src of every <img> tag.
        response = requests.get(new_link, headers=headers)
        soup = BeautifulSoup(response.text, 'lxml')
        pic_urls = [image.get('src') for image in soup.find_all('img')]

        # !!! Download all images in pic_urls into a folder named after the contestant !!!
        down_pic(name, pic_urls)
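The helper down_pic() is defined elsewhere in the course notebook. A minimal sketch of what it might look like follows; the save path 'work/pics/<name>/' and the numbered .jpg file names are assumptions, not the assignment's actual code.

import os
import requests

def down_pic(name, pic_urls):
    '''
    Download every image in pic_urls into a folder named after the contestant.
    (Sketch only: folder layout and file naming are assumptions.)
    '''
    path = 'work/pics/' + name + '/'
    if not os.path.exists(path):
        os.makedirs(path)
    for i, pic_url in enumerate(pic_urls):
        try:
            pic = requests.get(pic_url, timeout=15)
            # Save each image as 1.jpg, 2.jpg, ... inside the contestant's folder.
            with open(path + str(i + 1) + '.jpg', 'wb') as f:
                f.write(pic.content)
        except Exception as e:
            print('Failed to download picture %s: %s' % (pic_url, e))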
Part of the source code of requests.get
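The relevant part, paraphrased from requests/api.py (the exact body varies between versions of the library):

def get(url, params=None, **kwargs):
    r"""Sends a GET request."""
    return request('get', url, params=params, **kwargs)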
Note: always pass headers when calling requests.get. Most websites have anti-crawling measures, and supplying headers makes the request look like it comes from a browser. Also be sure to write it as the keyword argument headers=headers; if you pass the dictionary positionally, it is bound to the second parameter, params, instead.
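For example (url and headers here are the same names used above):

# Wrong: the dict is bound to params and sent as query-string parameters,
# so the request still goes out without a User-Agent header.
response = requests.get(url, headers)

# Right: headers is passed as a keyword argument.
response = requests.get(url, headers=headers)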
Generic crawler code
import requests
from bs4 import BeautifulSoup

# url is the address of the page to crawl (set it before running).
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
}
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.text, 'lxml')
print(soup)
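Two small hardening steps are often worth adding before parsing; both use standard requests features and are shown here only as an optional sketch:

response = requests.get(url, headers=headers, timeout=10)
response.raise_for_status()  # raise an exception on HTTP errors (4xx/5xx)
response.encoding = response.apparent_encoding  # avoid garbled Chinese text when the declared charset is wrong
soup = BeautifulSoup(response.text, 'lxml')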