Environment: Python 3.6 + BeautifulSoup4
Target: JD mobile-phone product images at https://list.jd.com/list.html?cat=9987,653,655
Approach
First open the target page https://list.jd.com/list.html?cat=9987,653,655, look at the GET request it sends, and compare it with the URL of the second page:
https://list.jd.com/list.html?cat=9987,653,655
https://list.jd.com/list.html?cat=9987,653,655&page=2&sort=sort_rank_asc&trans=1&JL=6_0_0&ms=6#J_main
The page parameter selects which page of results is retrieved. Entering
https://list.jd.com/list.html?cat=9987,653,655&page=3
in the address bar opens the third page, which confirms that only page needs to change.
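The per-page URLs can therefore be built by appending a page parameter to the base URL (a minimal sketch of that idea):

```python
# Build URLs for the first five result pages by varying only the page parameter.
base = "https://list.jd.com/list.html?cat=9987,653,655"
urls = [base + "&page=" + str(i) for i in range(1, 6)]
print(urls[2])  # https://list.jd.com/list.html?cat=9987,653,655&page=3
```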
The second step is to write a function that handles one page. The crawler needs a simple disguise: add a User-Agent header so the request looks like it comes from a browser, then filter the parsed HTML with BeautifulSoup.
Inspecting the listing shows that the product thumbnails all have width and height of 220, which serves as the filter condition:
imglist = soup.find_all("img",width=220,height=220)
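To make the filter concrete, here is a minimal sketch run on a hand-written HTML fragment (the fragment is invented for illustration; BeautifulSoup normalizes the integer 220 to a string when matching attribute values):

```python
from bs4 import BeautifulSoup

html = """
<ul>
  <li><img width="220" height="220" src="//img10.360buyimg.com/n1/a.jpg"></li>
  <li><img width="100" height="100" src="//img10.360buyimg.com/n1/banner.jpg"></li>
  <li><img width="220" height="220" data-lazy-img="//img10.360buyimg.com/n1/b.jpg"></li>
</ul>
"""
soup = BeautifulSoup(html, "html.parser")
# Only the 220x220 product thumbnails survive the filter.
imglist = soup.find_all("img", width=220, height=220)
print(len(imglist))  # 2
```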
Some img tags have a src attribute while others do not, so the image URL is extracted from the tag's string representation with a regular expression:
src = re.compile(r'//img.+\.jpg').search(str(img))
imgurl = "https:" + src.group()
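A quick check of the pattern against a sample tag string (the tag below is made up for illustration) shows what it extracts:

```python
import re

tag = '<img width="220" height="220" src="//img12.360buyimg.com/n7/phone.jpg">'
match = re.compile(r'//img.+\.jpg').search(tag)
if match is not None:
    print("https:" + match.group())  # https://img12.360buyimg.com/n7/phone.jpg
```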
Finally, urlretrieve saves the image to disk:
request.urlretrieve(imgurl,filename=imagename)
Code
import re
from bs4 import BeautifulSoup
from urllib import request
from urllib import error

def craw(url, page):
    req = request.Request(url)
    # Disguise the crawler as a browser with a User-Agent header
    req.add_header('user-agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36')
    html = request.urlopen(req).read().decode("utf8")
    soup = BeautifulSoup(html, "html5lib")
    # Filter: the product thumbnails are the 220x220 images
    imglist = soup.find_all("img", width=220, height=220)
    x = 1
    for img in imglist:
        # Not every img tag has a src attribute; pull the URL out with a regex
        src = re.compile(r'//img.+\.jpg').search(str(img))
        if src is None:
            continue
        imgurl = "https:" + src.group()
        # Save to D:/img, named by page number plus index
        imagename = "D:/img/" + str(page) + str(x) + ".jpg"
        try:
            # Download the image to the given path under that name
            request.urlretrieve(imgurl, filename=imagename)
        except error.URLError as e:
            print(e.reason)
        x += 1
def test():
    for i in range(1, 6):  # the first five pages
        url = "https://list.jd.com/list.html?cat=9987,653,655&page=" + str(i)
        craw(url, i)

if __name__ == "__main__":
    test()