The four small examples below are meant to build familiarity with requests: starting with fetching an entire page, then exploring the parameters of get() and post(), and touching on dynamically loaded content and retrieving JSON data.
import requests
import json
# 1. Fetch an entire page with requests
url = 'https://www.baidu.com/'
response = requests.get(url=url)
page_content = response.text
with open('./baidu.html', 'w', encoding='utf-8') as fp:
    fp.write(page_content)
print('Page fetched successfully!')
# 2. Fetch the results page for a keyword search
url = 'https://www.baidu.com/s'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:98.0) Gecko/20100101 Firefox/98.0'
}
kw = input('Enter a search keyword: ')
params = {
    'wd': kw
}
response = requests.get(url=url, params=params, headers=headers)
page_content = response.text
filename = kw + '.html'
with open(filename, 'w', encoding='utf-8') as fp:
    fp.write(page_content)
print('Page fetched successfully!')
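The params dict above is percent-encoded into the URL's query string by requests. A minimal offline sketch of that same UTF-8 percent-encoding, using only the standard library's urllib.parse (so no network request is needed):

```python
from urllib.parse import urlencode

# requests turns the params dict into a query string equivalent to this:
params = {'wd': '爬虫'}
query = urlencode(params)  # non-ASCII values are UTF-8 percent-encoded
print(query)  # wd=%E7%88%AC%E8%99%AB

# The final URL requests sends is then:
full_url = 'https://www.baidu.com/s?' + query
print(full_url)
```

This is also a handy way to sanity-check what `response.url` will look like before making the request.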
# 3. Query the Baidu Translate sug endpoint (POST with form data)
post_url = 'https://fanyi.baidu.com/sug'
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36'
}
word = input('enter a word: ')
data = {
    'kw': word
}
response = requests.post(url=post_url, data=data, headers=headers)
# json() returns a deserialized Python object; only call it when the
# response body is actually JSON
dic_obj = response.json()
# Persist the result to disk
fileName = word + '.json'
with open(fileName, 'w', encoding='utf-8') as fp:
    json.dump(dic_obj, fp=fp, ensure_ascii=False)
print('over!!!')
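The sug endpoint typically answers with a JSON object whose `data` field holds suggestion entries keyed `k` (the word) and `v` (the translation). That shape is an assumption here, illustrated offline with a hand-built sample rather than a live response:

```python
import json

# Sample payload shaped like a typical sug response (an assumption,
# not captured from the live endpoint):
sample = '{"errno": 0, "data": [{"k": "dog", "v": "n. 狗; 家伙"}]}'

dic_obj = json.loads(sample)  # this is essentially what response.json() does
pairs = {item['k']: item['v'] for item in dic_obj.get('data', [])}
print(pairs['dog'])  # n. 狗; 家伙
```

Pulling out just the k/v pairs like this is usually more useful than dumping the whole raw object.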
# 4. Fetch the Douban movie ranking (dynamically loaded JSON)
url = 'https://movie.douban.com/j/chart/top_list'
param = {
    'type': '24',
    'interval_id': '100:90',
    'action': '',
    'start': '0',   # index of the first movie to fetch
    'limit': '20',  # number of movies returned per request
}
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36'
}
response = requests.get(url=url, params=param, headers=headers)
list_data = response.json()
with open('./douban.json', 'w', encoding='utf-8') as fp:
    json.dump(list_data, fp=fp, ensure_ascii=False)
print('over!!!')
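Since `start` and `limit` page through the ranking, fetching more than 20 movies just means stepping `start` by `limit`. A small sketch that builds the param dicts for several pages (the request call itself stays the same as above; `page_params` is a hypothetical helper, not part of requests):

```python
def page_params(total, limit=20, base=None):
    """Yield one param dict per page, stepping 'start' by 'limit'."""
    base = base or {'type': '24', 'interval_id': '100:90', 'action': ''}
    for start in range(0, total, limit):
        yield {**base, 'start': str(start), 'limit': str(limit)}

pages = list(page_params(total=50))
print([p['start'] for p in pages])  # ['0', '20', '40']
```

Each dict in `pages` can then be passed as `params=` to `requests.get`, one request per page.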