Scrapy crawler framework: basic usage of ImagesPipeline and a walkthrough of image crawling

This article assumes a working knowledge of Scrapy; the walkthrough is not aimed at complete beginners. For an introduction, see my earlier post "Scrapy爬虫框架,入门案例".

Source code on GitHub

Contents

1. Overview of the ImagesPipeline module

2. Case study: crawling Baidu Images

1) URL analysis

2) Program design

3) Running the program and previewing the results


1. Overview of the ImagesPipeline module

ImagesPipeline is a plug-in item pipeline component that ships with Scrapy. It encapsulates the routine work of downloading images and is Scrapy's stock solution for image handling, so we do not have to reinvent the wheel: we simply hand it the image links and it takes care of the rest, including naming the files and choosing the extension, deciding where to store them (the storage root has no default and must be set via IMAGES_STORE), and filtering or resizing images by width and height.
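For reference, the stock pipeline can even be used without subclassing, as long as the item stores its links in the default image_urls list field. The snippet below is only a minimal sketch of the settings involved (the storage path is a placeholder, and Pillow must be installed for ImagesPipeline to work):

# settings.py -- minimal sketch: enable the stock ImagesPipeline directly
ITEM_PIPELINES = {
    'scrapy.pipelines.images.ImagesPipeline': 300,
}
IMAGES_STORE = '/tmp/images'   # storage root; required, there is no default
IMAGES_MIN_WIDTH = 0           # optional filters on image size
IMAGES_MIN_HEIGHT = 0

In our case the item will store a single link per item under a different field name, so we will subclass ImagesPipeline instead; that is covered in the case study below.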

Below is the source code of ImagesPipeline; here I will only explain two of its methods.

The first method is get_media_requests. It is what triggers the image downloads: it takes the image links stored in the item (we normally keep the links in the item) and wraps each one in a Request, which Scrapy then downloads. The default implementation is hard-coded to read a list from the item's image_urls field, so we still need a little extra handling of our own to feed it our image links:

def get_media_requests(self, item, info):
    return [Request(x) for x in item.get(self.images_urls_field, [])]
The second method is file_path, which defines the relative path (under IMAGES_STORE) where each downloaded image is saved. We will need to adjust it so that the images are stored where we want them:
def file_path(self, request, response=None, info=None):
    image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
    return 'full/%s.jpg' % (image_guid)
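The default file name is simply the SHA1 hash of the image URL plus a .jpg extension. A quick standalone check of what that produces (the URL here is just an example):

import hashlib

url = 'https://example.com/some/image.jpg'         # example URL
image_guid = hashlib.sha1(url.encode('utf-8')).hexdigest()
print('full/%s.jpg' % image_guid)                   # full/<40 hex characters>.jpg

For reference, the full source of the ImagesPipeline class follows: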
"""
Images Pipeline

See documentation in topics/media-pipeline.rst
"""
import functools
import hashlib
import six

try:
    from cStringIO import StringIO as BytesIO
except ImportError:
    from io import BytesIO

from PIL import Image

from scrapy.utils.misc import md5sum
from scrapy.utils.python import to_bytes
from scrapy.http import Request
from scrapy.settings import Settings
from scrapy.exceptions import DropItem
#TODO: from scrapy.pipelines.media import MediaPipeline
from scrapy.pipelines.files import FileException, FilesPipeline


class NoimagesDrop(DropItem):
    """Product with no images exception"""


class ImageException(FileException):
    """General image error exception"""


class ImagesPipeline(FilesPipeline):
    """Abstract pipeline that implement the image thumbnail generation logic

    """

    MEDIA_NAME = 'image'

    # Uppercase attributes kept for backward compatibility with code that subclasses
    # ImagesPipeline. They may be overridden by settings.
    MIN_WIDTH = 0
    MIN_HEIGHT = 0
    EXPIRES = 90
    THUMBS = {}
    DEFAULT_IMAGES_URLS_FIELD = 'image_urls'
    DEFAULT_IMAGES_RESULT_FIELD = 'images'

    def __init__(self, store_uri, download_func=None, settings=None):
        super(ImagesPipeline, self).__init__(store_uri, settings=settings,
                                             download_func=download_func)

        if isinstance(settings, dict) or settings is None:
            settings = Settings(settings)

        resolve = functools.partial(self._key_for_pipe,
                                    base_class_name="ImagesPipeline",
                                    settings=settings)
        self.expires = settings.getint(
            resolve("IMAGES_EXPIRES"), self.EXPIRES
        )

        if not hasattr(self, "IMAGES_RESULT_FIELD"):
            self.IMAGES_RESULT_FIELD = self.DEFAULT_IMAGES_RESULT_FIELD
        if not hasattr(self, "IMAGES_URLS_FIELD"):
            self.IMAGES_URLS_FIELD = self.DEFAULT_IMAGES_URLS_FIELD

        self.images_urls_field = settings.get(
            resolve('IMAGES_URLS_FIELD'),
            self.IMAGES_URLS_FIELD
        )
        self.images_result_field = settings.get(
            resolve('IMAGES_RESULT_FIELD'),
            self.IMAGES_RESULT_FIELD
        )
        self.min_width = settings.getint(
            resolve('IMAGES_MIN_WIDTH'), self.MIN_WIDTH
        )
        self.min_height = settings.getint(
            resolve('IMAGES_MIN_HEIGHT'), self.MIN_HEIGHT
        )
        self.thumbs = settings.get(
            resolve('IMAGES_THUMBS'), self.THUMBS
        )

    @classmethod
    def from_settings(cls, settings):
        s3store = cls.STORE_SCHEMES['s3']
        s3store.AWS_ACCESS_KEY_ID = settings['AWS_ACCESS_KEY_ID']
        s3store.AWS_SECRET_ACCESS_KEY = settings['AWS_SECRET_ACCESS_KEY']
        s3store.AWS_ENDPOINT_URL = settings['AWS_ENDPOINT_URL']
        s3store.AWS_REGION_NAME = settings['AWS_REGION_NAME']
        s3store.AWS_USE_SSL = settings['AWS_USE_SSL']
        s3store.AWS_VERIFY = settings['AWS_VERIFY']
        s3store.POLICY = settings['IMAGES_STORE_S3_ACL']

        gcs_store = cls.STORE_SCHEMES['gs']
        gcs_store.GCS_PROJECT_ID = settings['GCS_PROJECT_ID']
        gcs_store.POLICY = settings['IMAGES_STORE_GCS_ACL'] or None

        store_uri = settings['IMAGES_STORE']
        return cls(store_uri, settings=settings)

    def file_downloaded(self, response, request, info):
        return self.image_downloaded(response, request, info)

    def image_downloaded(self, response, request, info):
        checksum = None
        for path, image, buf in self.get_images(response, request, info):
            if checksum is None:
                buf.seek(0)
                checksum = md5sum(buf)
            width, height = image.size
            self.store.persist_file(
                path, buf, info,
                meta={'width': width, 'height': height},
                headers={'Content-Type': 'image/jpeg'})
        return checksum

    def get_images(self, response, request, info):
        path = self.file_path(request, response=response, info=info)
        orig_image = Image.open(BytesIO(response.body))

        width, height = orig_image.size
        if width < self.min_width or height < self.min_height:
            raise ImageException("Image too small (%dx%d < %dx%d)" %
                                 (width, height, self.min_width, self.min_height))

        image, buf = self.convert_image(orig_image)
        yield path, image, buf

        for thumb_id, size in six.iteritems(self.thumbs):
            thumb_path = self.thumb_path(request, thumb_id, response=response, info=info)
            thumb_image, thumb_buf = self.convert_image(image, size)
            yield thumb_path, thumb_image, thumb_buf

    def convert_image(self, image, size=None):
        if image.format == 'PNG' and image.mode == 'RGBA':
            background = Image.new('RGBA', image.size, (255, 255, 255))
            background.paste(image, image)
            image = background.convert('RGB')
        elif image.mode == 'P':
            image = image.convert("RGBA")
            background = Image.new('RGBA', image.size, (255, 255, 255))
            background.paste(image, image)
            image = background.convert('RGB')
        elif image.mode != 'RGB':
            image = image.convert('RGB')

        if size:
            image = image.copy()
            image.thumbnail(size, Image.ANTIALIAS)

        buf = BytesIO()
        image.save(buf, 'JPEG')
        return image, buf

    def get_media_requests(self, item, info):
        return [Request(x) for x in item.get(self.images_urls_field, [])]

    def item_completed(self, results, item, info):
        if isinstance(item, dict) or self.images_result_field in item.fields:
            item[self.images_result_field] = [x for ok, x in results if ok]
        return item

    def file_path(self, request, response=None, info=None):
        image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
        return 'full/%s.jpg' % (image_guid)

    def thumb_path(self, request, thumb_id, response=None, info=None):
        thumb_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
        return 'thumbs/%s/%s.jpg' % (thumb_id, thumb_guid)

 

2. Case study: crawling Baidu Images

1) URL analysis

Open Baidu Images: https://image.baidu.com/

Search for anything you like.

Open the developer tools (Inspect), switch to the Network tab, refresh with Ctrl+R, and scroll down a few pages. You can see that Baidu delivers the data asynchronously through AJAX (XHR) requests.

Open one of the XHR entries and copy its Request URL.

 

As you can see, the request carries quite a few query parameters:

 https://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&ct=201326592&is=&fp=result&queryWord=%E6%98%9F%E9%99%85&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=-1&z=&ic=0&hd=&latest=&copyright=&word=%E6%98%9F%E9%99%85&s=&se=&tab=&width=&height=&face=0&istype=2&qc=&nc=1&fr=&expermode=&force=&pn=30&rn=30&gsm=1e&1586664482091=

Paste this URL into the browser. Inspecting the response shows that each image URL appears as "thumbURL":"https://ss1.bdstatic.com/70cFuXSh_Q1YnxGkpoWK1HF6hhy/it/u=48630087,4029563252&fm=26&gp=0.jpg".

Searching the response with Ctrl+F for thumbURL shows that each XHR response provides a total of 30 image links.

Now look at the query parameters. It is easy to see that queryWord=星际 is exactly what we searched for.

Next, word also carries the search text, and then there is pn=30. pn is the page offset: it is 30 for the first page, 60 for the second, and increases by 30 each time.

Look at this URL again:

https://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&ct=201326592&is=&fp=result&queryWord=%E6%98%9F%E9%99%85&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=-1&z=&ic=0&hd=&latest=&copyright=&word=%E6%98%9F%E9%99%85&s=&se=&tab=&width=&height=&face=0&istype=2&qc=&nc=1&fr=&expermode=&force=&pn=30&rn=30&gsm=1e&1586664482091= 

We change word=%E6%98%9F%E9%99%85 and queryWord=%E6%98%9F%E9%99%85 to word=迪丽热巴 and queryWord=迪丽热巴. Why does the original URL show %E6%98%9F%E9%99%85 instead of the Chinese text? Because the browser has already percent-encoded (quoted) it for you: a URL cannot carry raw non-ASCII characters, so they have to be escaped. We will do the same encoding in our program later.
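A quick illustration of that encoding with urllib, using the same search term as above:

from urllib import parse

print(parse.quote('星际'))                     # %E6%98%9F%E9%99%85
print(parse.unquote('%E6%98%9F%E9%99%85'))     # 星际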

This gives us a new URL:

 https://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&ct=201326592&is=&fp=result&queryWord=迪丽热巴&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=-1&z=&ic=0&hd=&latest=&copyright=&word=迪丽热巴&s=&se=&tab=&width=&height=&face=0&istype=2&qc=&nc=1&fr=&expermode=&force=&pn=30&rn=30&gsm=1e&1586660816262=

Paste it into the browser to check the result: it works.
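If you want to verify it from code rather than in the browser, a small standalone script can do the same check. This is only a sketch outside of Scrapy: it uses the requests library and trims the query string down to the parameters identified above, which Baidu may or may not accept (if it does not, paste in the full URL from above instead):

import re
import requests
from urllib import parse

word = parse.quote('迪丽热巴')
url = ('https://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj'
       '&fp=result&queryWord=' + word + '&cl=2&lm=-1&ie=utf-8&oe=utf-8'
       '&word=' + word + '&pn=30&rn=30')
headers = {'User-Agent': 'Mozilla/5.0'}

resp = requests.get(url, headers=headers)
links = re.findall(r'"thumbURL":"(.*?)"', resp.text)
print(len(links))     # expect roughly 30 links per page
print(links[:3])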

 

2) Program design

Open a terminal and create the Scrapy project (a terminal on Linux, cmd on Windows).

Enter:

scrapy startproject baiduimage

cd baiduimage

scrapy genspider images image.baidu.com

Open the project in an editor (I use PyCharm) and the project structure appears.

Because this program is fairly simple, I will only go through a few of the tricky points. The full source is in the GitHub repository linked at the top of this post; download it, open it in an editor, and it can be run directly.

 

images.py

The first thing to note is the encoding (URLs cannot carry raw Chinese characters); calling parse.quote from urllib does the escaping:

 word_origin = input("Enter a search keyword: ")
 word = parse.quote(word_origin)

The next point is the regex that extracts the thumbURL image links. I won't go into regex syntax here; the web-crawler column on my blog has a beginner-friendly tutorial:

 regex = '"thumbURL":"(.*?)"'
 pattern = re.compile(regex, re.S)
 links = pattern.findall(response.text)

# -*- coding: utf-8 -*-
import scrapy
import json
import re
from urllib import parse
from ..items import BaiduimageItem


class ImagesSpider(scrapy.Spider):
    name = 'images'
    allowed_domains = ['image.baidu.com']
    word_origin = input("Enter a search keyword: ")
    word = parse.quote(word_origin)
    url = "https://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&is=&fp=result&queryWord=" + word + "&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=-1&z=&ic=0&hd=&latest=&copyright=&word=" + word + "&s=&se=&tab=&width=&height=&face=0&istype=2&qc=&nc=1&fr=&expermode=&force=&pn={}&rn=30&gsm=1e&1586660816262="

    def start_requests(self):
        # The page range is adjustable: start at offset 30 and increase by 30 each time
        for pn in range(30,151,30):
            url=self.url.format(pn)
            yield scrapy.Request(url=url,callback=self.parse)

    def parse(self, response):
        # Regex-match the thumbURL image links
        regex = '"thumbURL":"(.*?)"'
        pattern = re.compile(regex, re.S)
        links = pattern.findall(response.text)
        item=BaiduimageItem()
        # Store the search term in the item; it is used later to name the image folder
        item["word"]=self.word_origin
        for i in links:
            item["link"]=i

            yield item
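As a side note, the response body is JSON, so instead of the regex you could try json.loads (the spider already imports json). Treat the following only as a sketch: Baidu's responses sometimes contain invalid escape sequences that make json.loads fail, in which case the regex above is the safer fallback.

import json

def parse_links_with_json(text):
    # Alternative to the regex: read thumbURL out of the parsed JSON.
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return []   # malformed JSON; fall back to the regex approach
    return [d['thumbURL'] for d in data.get('data', []) if d.get('thumbURL')]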

 

pipelines.py

Here class BaiduimagePipeline inherits from ImagesPipeline, the image-handling pipeline that Scrapy provides. To make the module fit our program, two methods need to be overridden.

Here I copy the word field of the item into a class attribute so that it can be passed on to file_path. It could also be passed via scrapy.Request(meta={}); a sketch of that variant follows the code below.

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

from scrapy.pipelines.images import ImagesPipeline
import scrapy
import hashlib
from scrapy.utils.python import to_bytes

class BaiduimagePipeline(ImagesPipeline):

    word=""
    
    # Override get_media_requests from ImagesPipeline
    # The default implementation's requests do not fit this program
    def get_media_requests(self, item, info):
        self.word=item['word']
        yield scrapy.Request(url=item['link'])
    
    # Override file_path from ImagesPipeline
    # The original returns 'full/%s.jpg' % (image_guid)
    # We change it to the path where we want the images stored
    def file_path(self, request, response=None, info=None):
        image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
        return self.word + '/%s.jpg' % (image_guid)
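For completeness, here is a rough sketch of the meta-based variant mentioned above: the search term travels on the request itself, so the pipeline no longer needs a shared class attribute. It is an illustration only, not the code used in this project.

from scrapy.pipelines.images import ImagesPipeline
from scrapy.utils.python import to_bytes
import scrapy
import hashlib

class BaiduimageMetaPipeline(ImagesPipeline):

    # Pass the search term along with each request instead of keeping it
    # on the pipeline instance.
    def get_media_requests(self, item, info):
        yield scrapy.Request(url=item['link'], meta={'word': item['word']})

    # Read the search term back from request.meta when building the path.
    def file_path(self, request, response=None, info=None):
        image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
        return '%s/%s.jpg' % (request.meta['word'], image_guid)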

 

settings.py

In settings.py, IMAGES_STORE is the root directory where the images are stored. The final location of each file is IMAGES_STORE joined with the relative path returned by file_path, for example D:\图片\迪丽热巴\<sha1>.jpg.

BOT_NAME = 'baiduimage'

SPIDER_MODULES = ['baiduimage.spiders']
NEWSPIDER_MODULE = 'baiduimage.spiders'

ROBOTSTXT_OBEY = False
DOWNLOAD_DELAY = 0.5

DEFAULT_REQUEST_HEADERS = {
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  'Accept-Language': 'en',
  'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.116 Safari/537.36'
}

ITEM_PIPELINES = {
   'baiduimage.pipelines.BaiduimagePipeline': 300,
}


IMAGES_STORE= 'D:\\图片\\'

from_settings in ImagesPipeline reads IMAGES_STORE: store_uri = settings['IMAGES_STORE'].

    @classmethod
    def from_settings(cls, settings):
        s3store = cls.STORE_SCHEMES['s3']
        s3store.AWS_ACCESS_KEY_ID = settings['AWS_ACCESS_KEY_ID']
        s3store.AWS_SECRET_ACCESS_KEY = settings['AWS_SECRET_ACCESS_KEY']
        s3store.AWS_ENDPOINT_URL = settings['AWS_ENDPOINT_URL']
        s3store.AWS_REGION_NAME = settings['AWS_REGION_NAME']
        s3store.AWS_USE_SSL = settings['AWS_USE_SSL']
        s3store.AWS_VERIFY = settings['AWS_VERIFY']
        s3store.POLICY = settings['IMAGES_STORE_S3_ACL']

        gcs_store = cls.STORE_SCHEMES['gs']
        gcs_store.GCS_PROJECT_ID = settings['GCS_PROJECT_ID']
        gcs_store.POLICY = settings['IMAGES_STORE_GCS_ACL'] or None

        store_uri = settings['IMAGES_STORE']
        return cls(store_uri, settings=settings)

items.py 

Two fields: one for the image link, one for the search term.

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class BaiduimageItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    link=scrapy.Field()
    word=scrapy.Field()
    pass

 

3) Running the program and previewing the results

Create a launcher script:

from scrapy import cmdline

cmdline.execute('scrapy crawl images'.split())
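Save this as, say, run.py in the project root (the directory containing scrapy.cfg) and launch it with python run.py; it has the same effect as running scrapy crawl images from that directory. (run.py is just a suggested name, it is not generated by Scrapy.)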

 
