Crawling images from Mafengwo posts with Python

Preface

I have recently been learning Python web crawling, starting with image scraping. Over the weekend I scraped images from posts on an icon site, Guokr (果壳), and Dgtle (数字尾巴); now I am trying to scrape the images in a Mafengwo post. This is purely personal practice; images will be taken down on request.

Code framework

import urllib.request
import requests
import re
import os

def getHTML(url):
    # Fetch the HTML text of the page at url
    ...

def getImageUrl(html):
    # Extract the image URLs from the HTML
    ...

def crawlImage(urlList, savepath='folder'):
    # Download the images at the given URLs and save them locally in bulk
    ...

1. Fetching the page HTML

Following blogger cici_vivi's approach, I picked a travel post on Mafengwo at random and analyzed its request:
open the post's URL in Firefox,
press F12 to open the developer tools ("Inspect Element"),
switch to the "Network" tab,
press F5 to reload the page,
click the GET request that returned the HTML,
and look at its Request headers.
The page's complete Request headers are as follows:

Host: www.mafengwo.cn
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Language: zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2
Accept-Encoding: gzip, deflate
Referer: http://www.mafengwo.cn/
Connection: keep-alive
Cookie: mfw_uuid=5e15a493-892d-30fd-8f7b-f9b427d723e0; _r=baidu; _rp=a%3A2%3A%7Bs%3A1%3A%22p%22%3Bs%3A18%3A%22www.baidu.com%2Flink%22%3Bs%3A1%3A%22t%22%3Bi%3A1578476691%3B%7D; __jsluid_h=59e2d5954da4576186658ea27c253918; __mfwa=1578476692581.63366.8.1584940053205.1584945013269; __mfwlv=1584940053; __mfwvn=5; __mfwlt=1584945030; uva=s%3A307%3A%22a%3A4%3A%7Bs%3A13%3A%22host_pre_time%22%3Bs%3A10%3A%222020-01-08%22%3Bs%3A2%3A%22lt%22%3Bi%3A1578476693%3Bs%3A10%3A%22last_refer%22%3Bs%3A180%3A%22https%3A%2F%2Fwww.baidu.com%2Flink%3Furl%3DzIIeEAiySnkbqfJPHb9C5AVbpHakB4DYW5hweW9_1G6COmUNxEebXW-syv4VC0rFGyXqsd14AbrOhpkq0m-rhQ77RyXSl3CxJFSI0WQeew7%26wd%3D%26eqid%3D8aa9855e0014e809000000065e15a48a%22%3Bs%3A5%3A%22rhost%22%3Bs%3A13%3A%22www.baidu.com%22%3B%7D%22%3B; __mfwurd=a%3A3%3A%7Bs%3A6%3A%22f_time%22%3Bi%3A1578476693%3Bs%3A9%3A%22f_rdomain%22%3Bs%3A13%3A%22www.baidu.com%22%3Bs%3A6%3A%22f_host%22%3Bs%3A3%3A%22www%22%3B%7D; __mfwuuid=5e15a493-892d-30fd-8f7b-f9b427d723e0; Hm_lvt_8288b2ed37e5bc9b4c9f7008798d2de0=1584854152,1584885764,1584928736,1584945013; UM_distinctid=16f848aebd79cd-088c79933954f78-4c302a7b-1fa400-16f848aebd84a5; CNZZDATA30065558=cnzz_eid%3D1650508070-1578473002-null%26ntime%3D1584939849; oad_n=a%3A3%3A%7Bs%3A3%3A%22oid%22%3Bi%3A1029%3Bs%3A2%3A%22dm%22%3Bs%3A15%3A%22www.mafengwo.cn%22%3Bs%3A2%3A%22ft%22%3Bs%3A19%3A%222020-03-22+13%3A15%3A49%22%3B%7D; __mfwc=direct; bottom_ad_status=0; __jsl_clearance=1584945010.161|0|F0uqf9xJ1%2BXv02g3rPjNIDIGtu4%3D; PHPSESSID=d07def41hatb2or2i76ndgi0e7; Hm_lpvt_8288b2ed37e5bc9b4c9f7008798d2de0=1584945030; __mfwb=8d5c8d82f88c.2.direct
Upgrade-Insecure-Requests: 1
Cache-Control: max-age=0
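
Incidentally, the raw header block does not have to be quoted into Python by hand, line by line. A minimal sketch of a converter (parseRawHeaders is a hypothetical helper; raw_headers stands for the text block above):

def parseRawHeaders(raw_headers):
    # Split 'Key: value' lines copied from the browser into a dict
    headers = {}
    for line in raw_headers.strip().splitlines():
        key, _, value = line.partition(': ')
        headers[key] = value
    return headers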

As a beginner, I naturally had no idea what any of these fields meant, so I simply stuffed them all into the headers dictionary of the HTML-fetching code:

def getHTML(url):
    headers = {
        'Host': 'www.mafengwo.cn',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
        'Accept-Encoding': 'gzip, deflate',
        'Referer': 'http://www.mafengwo.cn/',
        'Connection': 'keep-alive',
				'Cookie': 'mfw_uuid=5e15a493-892d-30fd-8f7b-f9b427d723e0; _r=baidu; _rp=a%3A2%3A%7Bs%3A1%3A%22p%22%3Bs%3A18%3A%22www.baidu.com%2Flink%22%3Bs%3A1%3A%22t%22%3Bi%3A1578476691%3B%7D; __jsluid_h=59e2d5954da4576186658ea27c253918; __mfwa=1578476692581.63366.8.1584940053205.1584945013269; __mfwlv=1584940053; __mfwvn=5; __mfwlt=1584945030; uva=s%3A307%3A%22a%3A4%3A%7Bs%3A13%3A%22host_pre_time%22%3Bs%3A10%3A%222020-01-08%22%3Bs%3A2%3A%22lt%22%3Bi%3A1578476693%3Bs%3A10%3A%22last_refer%22%3Bs%3A180%3A%22https%3A%2F%2Fwww.baidu.com%2Flink%3Furl%3DzIIeEAiySnkbqfJPHb9C5AVbpHakB4DYW5hweW9_1G6COmUNxEebXW-syv4VC0rFGyXqsd14AbrOhpkq0m-rhQ77RyXSl3CxJFSI0WQeew7%26wd%3D%26eqid%3D8aa9855e0014e809000000065e15a48a%22%3Bs%3A5%3A%22rhost%22%3Bs%3A13%3A%22www.baidu.com%22%3B%7D%22%3B; __mfwurd=a%3A3%3A%7Bs%3A6%3A%22f_time%22%3Bi%3A1578476693%3Bs%3A9%3A%22f_rdomain%22%3Bs%3A13%3A%22www.baidu.com%22%3Bs%3A6%3A%22f_host%22%3Bs%3A3%3A%22www%22%3B%7D; __mfwuuid=5e15a493-892d-30fd-8f7b-f9b427d723e0; Hm_lvt_8288b2ed37e5bc9b4c9f7008798d2de0=1584854152,1584885764,1584928736,1584945013; UM_distinctid=16f848aebd79cd-088c79933954f78-4c302a7b-1fa400-16f848aebd84a5; CNZZDATA30065558=cnzz_eid%3D1650508070-1578473002-null%26ntime%3D1584939849; oad_n=a%3A3%3A%7Bs%3A3%3A%22oid%22%3Bi%3A1029%3Bs%3A2%3A%22dm%22%3Bs%3A15%3A%22www.mafengwo.cn%22%3Bs%3A2%3A%22ft%22%3Bs%3A19%3A%222020-03-22+13%3A15%3A49%22%3B%7D; __mfwc=direct; bottom_ad_status=0; __jsl_clearance=1584945010.161|0|F0uqf9xJ1%2BXv02g3rPjNIDIGtu4%3D; PHPSESSID=d07def41hatb2or2i76ndgi0e7; Hm_lpvt_8288b2ed37e5bc9b4c9f7008798d2de0=1584945030; __mfwb=8d5c8d82f88c.2.direct',
        'Upgrade-Insecure-Requests': '1',
        'Cache-Control': 'max-age=0'
    }
    try:
        resp = requests.get(url, headers=headers, timeout=5)
        resp.raise_for_status()
        resp.encoding = 'utf-8'  # or resp.apparent_encoding
        return resp.text
    except requests.RequestException:
        return ''  # fetch failed; callers will simply find no image URLs

The HTML text of the post was fetched successfully (screenshot omitted).

In most HTML-fetching code found online, headers contains only a 'User-Agent' entry. In practice, however, fetching a Mafengwo post with 'User-Agent' alone raised an HTTP error and no HTML came back. To find out which header fields are actually required, I ran a crude experiment: comment out the entries one at a time until the fetch fails. It turned out that headers must contain at least 'User-Agent' and 'Cookie'. Since 'User-Agent' stays constant while 'Cookie' changes between sessions, I turned the cookie into a function parameter. The improved code is:

def getHTML(url, cookie):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0',
        'Cookie': cookie}
    try:
        resp = requests.get(url, headers=headers, timeout=5)
        resp.raise_for_status()
        resp.encoding = 'utf-8'  # or resp.apparent_encoding
        return resp.text
    except requests.RequestException:
        return ''  # fetch failed; callers will simply find no image URLs
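
For the record, the elimination experiment itself can be scripted instead of commenting lines out by hand. A minimal sketch (probe_headers is a hypothetical helper; full_headers stands for the complete header dict copied from the browser) that drops one header at a time and reports whether the request still succeeds:

import requests

def probe_headers(url, full_headers):
    for key in list(full_headers):
        trimmed = {k: v for k, v in full_headers.items() if k != key}
        try:
            resp = requests.get(url, headers=trimmed, timeout=5)
            resp.raise_for_status()
            print('OK without', key)      # this header appears optional
        except requests.RequestException:
            print('FAILED without', key)  # this header appears required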

2. Extracting the image URLs

In this part, my approach is to study the pattern of the image-bearing tags in the HTML and write regular expressions that filter out and extract the image URLs.
To see that pattern, right-click an image in the body of a Mafengwo post and choose "Inspect Element" (screenshot omitted).
After examining several images, I found that the effective image URL sits between 'data-rt-src="' and the '?image…' query suffix, which led to the following code:

def getImageUrl(html):
    # Split the HTML on whitespace, then pick the image URLs out of the tokens
    html_splited = re.split(r'\s+', html)  # split the HTML text on whitespace
    targetURL = []  # list collecting the target URLs
    for i in html_splited:
        if re.match(r'.*data-rt-src', i):  # keep tokens carrying the image attribute
            if re.match(r'.*?png', i) or re.match(r'.*?jpg', i) or re.match(r'.*?jpeg', i) or re.match(r'.*?JPG', i):  # keep tokens naming an image format
                if re.match(r'.*http', i):  # drop tokens without a full URL path
                    if re.match(r'.*\?', i):
                        url = re.search(r'.*src="(.*)\?', i).group(1)  # strip the query suffix to get the effective URL
                    else:
                        url = re.search(r'.*src="(.*)"', i).group(1)
                    targetURL.append(url)
                    print(url)
    return targetURL

The function prints and returns the list of image URLs it found (screenshot omitted).
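
For comparison, the whitespace-splitting approach can be collapsed into a single regular expression over the whole HTML. A minimal sketch, assuming every image of interest carries a data-rt-src attribute holding an http URL that may end in a '?imageView…'-style suffix:

import re

def getImageUrlCompact(html):
    # Capture everything between data-rt-src=" and the closing quote or '?'
    return re.findall(r'data-rt-src="(http[^"?]+)', html)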

3. Saving the images locally

This part is straightforward: open each URL in the image URL list in turn and save the data locally.
The code checks the image format; the caller may pass a relative save path, and if none is given a folder named 'folder' is created and the images are saved there. Saved images are named with ascending numbers.

def crawlImage(urlList, savepath='folder'):
    # Download the images at the given URLs and save them locally in bulk
    if not os.path.exists(savepath):  # create the target folder if it does not exist
        os.mkdir(savepath)
    count = 1
    for i in urlList:
        img_data = urllib.request.urlopen(i).read()
        # Pick the file extension from the URL; unknown formats are saved as .png
        if re.match(r'.*jpg', i):
            imageType = '.jpg'
        elif re.match(r'.*jpeg', i):
            imageType = '.jpeg'
        elif re.match(r'.*gif', i):
            imageType = '.gif'
        else:
            imageType = '.png'
        # Mode 'wb' opens the file for binary writing: an existing file is
        # truncated and overwritten, a missing file is created
        with open(os.path.join(savepath, str(count) + imageType), 'wb') as save_image:
            save_image.write(img_data)
        count += 1
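
As an aside, the extension can also be read straight off the URL path instead of matched with regexes. A minimal sketch (guessExtension is a hypothetical helper; urllib.parse is part of the standard library), assuming the URL path ends in a real file name:

import os
from urllib.parse import urlparse

def guessExtension(url, default='.png'):
    # 'http://.../photo.jpeg?imageView...' -> '.jpeg'; fall back to default
    ext = os.path.splitext(urlparse(url).path)[1]
    return ext.lower() if ext else default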

Complete code

import urllib.request
import requests
import re
import os

def getHTML(url, cookie):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0',
        'Cookie': cookie}
    try:
        resp = requests.get(url, headers=headers, timeout=5)
        resp.raise_for_status()
        resp.encoding = 'utf-8'  # or resp.apparent_encoding
        return resp.text
    except requests.RequestException:
        return ''  # fetch failed; callers will simply find no image URLs
    
def getImageUrl(html):
    # Split the HTML on whitespace, then pick the image URLs out of the tokens
    html_splited = re.split(r'\s+', html)  # split the HTML text on whitespace
    targetURL = []  # list collecting the target URLs
    for i in html_splited:
        if re.match(r'.*data-rt-src', i):  # keep tokens carrying the image attribute
            if re.match(r'.*?png', i) or re.match(r'.*?jpg', i) or re.match(r'.*?jpeg', i) or re.match(r'.*?JPG', i):  # keep tokens naming an image format
                if re.match(r'.*http', i):  # drop tokens without a full URL path
                    if re.match(r'.*\?', i):
                        url = re.search(r'.*src="(.*)\?', i).group(1)  # strip the query suffix to get the effective URL
                    else:
                        url = re.search(r'.*src="(.*)"', i).group(1)
                    targetURL.append(url)
                    print(url)
    return targetURL

def crawlImage(urlList, savepath='folder'):
    # Download the images at the given URLs and save them locally in bulk
    if not os.path.exists(savepath):  # create the target folder if it does not exist
        os.mkdir(savepath)
    count = 1
    for i in urlList:
        img_data = urllib.request.urlopen(i).read()
        # Pick the file extension from the URL; unknown formats are saved as .png
        if re.match(r'.*jpg', i):
            imageType = '.jpg'
        elif re.match(r'.*jpeg', i):
            imageType = '.jpeg'
        elif re.match(r'.*gif', i):
            imageType = '.gif'
        else:
            imageType = '.png'
        # Mode 'wb' opens the file for binary writing: an existing file is
        # truncated and overwritten, a missing file is created
        with open(os.path.join(savepath, str(count) + imageType), 'wb') as save_image:
            save_image.write(img_data)
        count += 1
        
if __name__ == '__main__':    
    url = 'http://www.mafengwo.cn/i/18845218.html'
    cookie='mfw_uuid=5e15a493-892d-30fd-8f7b-f9b427d723e0; _r=baidu; _rp=a%3A2%3A%7Bs%3A1%3A%22p%22%3Bs%3A18%3A%22www.baidu.com%2Flink%22%3Bs%3A1%3A%22t%22%3Bi%3A1578476691%3B%7D; __jsluid_h=59e2d5954da4576186658ea27c253918; __mfwa=1578476692581.63366.8.1584940053205.1584945013269; __mfwlv=1584940053; __mfwvn=5; __mfwlt=1584946062; uva=s%3A307%3A%22a%3A4%3A%7Bs%3A13%3A%22host_pre_time%22%3Bs%3A10%3A%222020-01-08%22%3Bs%3A2%3A%22lt%22%3Bi%3A1578476693%3Bs%3A10%3A%22last_refer%22%3Bs%3A180%3A%22https%3A%2F%2Fwww.baidu.com%2Flink%3Furl%3DzIIeEAiySnkbqfJPHb9C5AVbpHakB4DYW5hweW9_1G6COmUNxEebXW-syv4VC0rFGyXqsd14AbrOhpkq0m-rhQ77RyXSl3CxJFSI0WQeew7%26wd%3D%26eqid%3D8aa9855e0014e809000000065e15a48a%22%3Bs%3A5%3A%22rhost%22%3Bs%3A13%3A%22www.baidu.com%22%3B%7D%22%3B; __mfwurd=a%3A3%3A%7Bs%3A6%3A%22f_time%22%3Bi%3A1578476693%3Bs%3A9%3A%22f_rdomain%22%3Bs%3A13%3A%22www.baidu.com%22%3Bs%3A6%3A%22f_host%22%3Bs%3A3%3A%22www%22%3B%7D; __mfwuuid=5e15a493-892d-30fd-8f7b-f9b427d723e0; Hm_lvt_8288b2ed37e5bc9b4c9f7008798d2de0=1584854152,1584885764,1584928736,1584945013; UM_distinctid=16f848aebd79cd-088c79933954f78-4c302a7b-1fa400-16f848aebd84a5; CNZZDATA30065558=cnzz_eid%3D1650508070-1578473002-null%26ntime%3D1584945249; oad_n=a%3A3%3A%7Bs%3A3%3A%22oid%22%3Bi%3A1029%3Bs%3A2%3A%22dm%22%3Bs%3A15%3A%22www.mafengwo.cn%22%3Bs%3A2%3A%22ft%22%3Bs%3A19%3A%222020-03-22+13%3A15%3A49%22%3B%7D; __mfwc=direct; bottom_ad_status=0; PHPSESSID=d07def41hatb2or2i76ndgi0e7; Hm_lpvt_8288b2ed37e5bc9b4c9f7008798d2de0=1584946062; __jsl_clearance=1584949023.292|0|gKKMHG1iaq8wReTJEWUf6ebgBMk%3D'
    html = getHTML(url,cookie)
    targetURL = getImageUrl(html)
    crawlImage(targetURL,savepath='www.mafengwo.cn')

The crawled images were saved to disk successfully (screenshot omitted).

Postscript

The code currently has one serious flaw: the HTML it fetches is incomplete, so I only captured the first 24 images of this travel post.
I have confirmed that this is caused by the asynchronous (lazy) loading of Mafengwo pages. The problem has since been solved; see my next blog post.
