(II) The Ethics of Web Crawling: Even Robbers Have a Code
(1) Restrictions on Web Crawlers
- Source review: inspect the User-Agent field of the incoming HTTP request header and respond only to visits from known browsers or friendly crawlers (see the sketch after this list).
- Public announcement: the Robots protocol, a robots.txt file placed in the website's root directory that tells crawlers which paths they may fetch.
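A minimal sketch makes both mechanisms concrete (assuming jd.com still serves a robots.txt at the standard path): it reads a site's published Robots protocol, then prints the identity that Requests would reveal by default, which is exactly what source review checks.

import requests

# Read the site's published crawling rules (assumes the standard path).
resp = requests.get("https://www.jd.com/robots.txt", timeout=30)
print(resp.text)  # User-agent / Disallow rules that crawlers should honor

# Source review targets this field: by default, Requests identifies
# itself as "python-requests/x.y.z", which some sites refuse to serve.
print(requests.utils.default_headers()['User-Agent'])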
(III) Hands-on Web Scraping with the Requests Library
(1) Scraping a JD.com Product Page
import requests

def getHTMLText(url):
    try:
        # Send a browser-style User-Agent so the request passes the
        # source review described above.
        Headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36',
        }
        r = requests.get(url, headers=Headers, timeout=30)
        r.raise_for_status()              # raise an exception for 4xx/5xx status codes
        r.encoding = r.apparent_encoding  # infer the real encoding from the content
        return r.text[:1000]              # first 1000 characters are enough for a demo
    except requests.RequestException:
        return "An exception occurred"

if __name__ == "__main__":
    url = "https://item.jd.com/100004323294.html"
    print(getHTMLText(url))
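Note the defensive framework here: timeout=30 bounds how long the request may hang, raise_for_status() turns 4xx/5xx responses into exceptions that the except clause catches, and apparent_encoding replaces a possibly wrong charset declared in the response header with one inferred from the page content. The Amazon example below reuses the same framework with a different URL.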
(2) Scraping an Amazon Product Page
import requests

def getHTMLText(url):
    try:
        Headers = {
            'User-Agent': 'Mozilla/5.0 (Windo