This lab is straightforward: it mainly practices the combined use of the requests, BeautifulSoup, and re libraries.
1. A generic code framework - crawling the Baidu homepage
import requests

url = "https://www.baidu.com"

def getHtmlText(url):
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()  # raise an exception for non-2xx status codes
        print(response.encoding, response.apparent_encoding)
        response.encoding = response.apparent_encoding  # use the detected encoding
        return response.text[:500]
    except requests.RequestException:  # connection, timeout, and HTTP errors
        return "request failed"

print(getHtmlText(url))
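The except branch of this framework can be exercised without any network access: a malformed URL scheme makes requests raise an exception locally (InvalidSchema is a subclass of requests.RequestException), before any I/O happens. A minimal sketch of that check:

```python
import requests

def get_html_text(url):
    """Same generic framework as above: return page text or an error marker."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()      # non-2xx status -> HTTPError
        response.encoding = response.apparent_encoding
        return response.text[:500]
    except requests.RequestException:    # connection, timeout, and HTTP errors
        return "request failed"

# "htp://" has no connection adapter, so requests raises InvalidSchema
# before touching the network, and the except branch returns the marker:
print(get_html_text("htp://www.baidu.com"))  # → request failed
```

This is why catching requests.RequestException is preferable to a bare except: it covers every failure mode of the request itself while still letting unrelated bugs surface normally.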
2. Crawling an Amazon book page
Key point: when calling requests.get, pass a headers parameter carrying a standard browser identity string; otherwise Amazon challenges the request with a verification page. First, the result of a direct request without headers:
import requests
import re

url = "https://www.amazon.cn/dp/B01ION3VWI/ref=sr_1_2?__mk_zh_CN=%E4%BA%9A%E9%A9%AC%E9%80%8A%E7%BD%91%E7%AB%99&keywords=python&qid=1561963273&s=books&sr=1-2"
# kv = {"user-agent": "Mozilla/5.0"}  # standard browser identity, not used yet
r = requests.get(url)
r.encoding = r.apparent_encoding
print(r.request.headers)
book_info = re.findall("[\u4e00-\u9fa5]+", r.text)  # extract runs of Chinese characters
for text in book_info:
    print(text)
Output:
{'User-Agent': 'python-requests/2.22.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
请输入您在下方看到的字符
抱歉
我们只是想确认一下当前访问者并非自动程序
为了达到最佳效果
请确保您浏览器上的
已启用
请输入您在这个图片中看到的字符
(These fragments are Amazon's CAPTCHA page: "Sorry, we just want to make sure the current visitor is not an automated program; please enter the characters you see in this image." The default User-Agent 'python-requests/2.22.0' gave the crawler away.)
After adding the standard header:
import requests
import re

url = "https://www.amazon.cn/dp/B01ION3VWI/ref=sr_1_2?__mk_zh_CN=%E4%BA%9A%E9%A9%AC%E9%80%8A%E7%BD%91%E7%AB%99&keywords=python&qid=1561963273&s=books&sr=1-2"
kv = {"user-agent": "Mozilla/5.0"}  # standard browser identity string
# r = requests.get(url)
r = requests.get(url, headers=kv)
r.encoding = r.apparent_encoding
print(r.request.headers)
book_info = re.findall("[\u4e00-\u9fa5]+", r.text)  # extract runs of Chinese characters
for text in book_info:
    print(text)
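Once the page HTML is in hand, BeautifulSoup (mentioned at the top but not yet used) pairs naturally with re: BeautifulSoup locates elements, re cleans the text inside them. A sketch on a small static stand-in page; the tag ids, class names, and price format here are assumptions for illustration, not Amazon's actual markup:

```python
import re
from bs4 import BeautifulSoup

# A static snippet stands in for a fetched product page (no network needed).
html = """
<html><body>
  <span id="productTitle">Python 基础教程</span>
  <span class="a-price">￥59.00</span>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# BeautifulSoup finds the elements by id / class...
title = soup.find("span", id="productTitle").get_text(strip=True)
price_text = soup.find("span", class_="a-price").get_text(strip=True)

# ...and re strips the currency symbol, keeping only the number.
price = re.search(r"\d+(\.\d+)?", price_text).group()

print(title)  # → Python 基础教程
print(price)  # → 59.00
```

In a real crawl, the html string would simply be r.text from the header-carrying request above, so the three libraries compose in exactly the order the lab title suggests: requests fetches, BeautifulSoup navigates, re extracts.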