Originally I wanted to use jandan.net (煎蛋网) as practice for scrapy's ImagesPipeline, crawling the site's 妹子图 (girl picture) board and saving the images locally. Unfortunately the ImagesPipeline refused to cooperate and kept returning 404 errors, and I couldn't figure out where the problem was, so in frustration I dropped the framework and wrote the crawler by hand instead.
Here is a sample first:
The screenshot below shows the total number of pages:
As you page through the site, only the page number in the URL changes, so the URLs are easy to construct. The image information is also present in the original response, meaning the image link addresses can be found directly in the raw HTML, which makes parsing straightforward as well:
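Before writing the full script, a minimal sketch can verify both assumptions on a single page: that the listing URL follows the page-{n} pattern and that the image links can be pulled straight out of the raw HTML with a pyquery selector (the test page number here is picked arbitrarily for illustration):

# Quick check: fetch one listing page and print the image links it contains.
import requests
from pyquery import PyQuery as pq

test_url = 'http://jandan.net/ooxx/page-2#comments'  # arbitrary page for testing
html = requests.get(test_url, headers={'User-Agent': 'Mozilla/5.0'}).text
doc = pq(html)
for a in doc('.commentlist .row .text p a'):
    # hrefs are protocol-relative, so prepend the scheme
    print('http:' + pq(a).attr('href'))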
With that settled, the code can be written:
import requests
from pyquery import PyQuery as pq
from requests.exceptions import RequestException
import os
from hashlib import md5
from multiprocessing import Pool

headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, sdch',
    'Referer': 'http://jandan.net/ooxx',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
    'Cookie': '__cfduid=d0f8f8aef303ad3b55cd071a426e7a59c1504854664; _ga=GA1.2.986719823.1501079288; _gid=GA1.2.1585289570.1506061387',
}


def get_page(url):
    """Fetch the HTML of one listing page; return None on HTTP or network errors."""
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        return None


def parse_page(html):
    """Yield the image URLs found in the comment list of a listing page."""
    doc = pq(html)
    links = doc('.commentlist .row .text p a')
    for link in links:
        # The hrefs are protocol-relative, so prepend the scheme.
        image_url = 'http:' + pq(link).attr('href')
        yield image_url


def download_image(url):
    """Fetch the raw bytes of a single image; return None on failure."""
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.content
        return None
    except RequestException:
        return None


def save_image(content):
    """Save image bytes, naming the file by the MD5 of its content so duplicates are skipped."""
    path_name = '{0}/{1}.{2}'.format(os.getcwd(), md5(content).hexdigest(), 'jpg')
    if not os.path.exists(path_name):
        with open(path_name, 'wb') as f:
            f.write(content)


def main(page):
    print('=============== Crawling page %r ===============' % page)
    url = 'http://jandan.net/ooxx/page-{}#comments'.format(page)
    html = get_page(url)
    if html:
        for image_url in parse_page(html):
            print('Downloading: %r' % image_url)
            content = download_image(image_url)
            if content:
                save_image(content)


if __name__ == '__main__':
    pool = Pool()
    pool.map(main, range(1, 137))
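One note on the last two lines: Pool() with no arguments starts one worker process per CPU core, and pool.map hands each worker a page number, so several pages are crawled in parallel. If that turns out to be too aggressive for the site, passing an explicit size such as Pool(processes=4) is an easy way to throttle the crawl.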
The output during a run looks like this:
In total, a little over 3,560 images were downloaded:
Honestly, after looking at so many of these pictures the aesthetic fatigue really sets in; busty, curvy, leggy, revealing, none of it holds any interest anymore. Thinking it over, none of that matters as much as the two words "气质" (grace).