- ImagesPipeline:
    - Parse out only the src attribute of each img and submit it to the pipeline; the pipeline then issues a second request for that src to download the image data itself.
    - Goal: scrape high-resolution images from the 站长素材 site (sc.chinaz.com).
    - Usage workflow:
        - Parse the data (the image URLs).
        - Submit the item holding the image URL to the designated pipeline class.
        - In the pipeline file, define a custom pipeline class based on ImagesPipeline, overriding:
            - get_media_requests
            - file_path
            - item_completed
        - In the settings file:
            - Specify the directory for storing images: IMAGES_STORE = './imgs'
            - Enable the custom pipeline class in ITEM_PIPELINES.
Data parsing (the image URLs):
import scrapy
from imgsPro.items import ImgsproItem


class ImgSpider(scrapy.Spider):
    name = 'img'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://sc.chinaz.com/tupian/']

    def parse(self, response):
        div_list = response.xpath('//*[@id="container"]/div')
        for div in div_list:
            # The page lazy-loads images: the real URL lives in the
            # pseudo-attribute src2, not in src
            img_src = 'https:' + div.xpath('./div/a/img/@src2').extract_first()
            # print(img_src)
            # Submit the item holding the image URL to the designated pipeline class
            item = ImgsproItem()
            item['img_src'] = img_src
            yield item
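The src2 extraction above can be mimicked outside Scrapy. A stdlib-only sketch using a made-up HTML fragment (the real page's markup will differ):

```python
import xml.etree.ElementTree as ET

# Made-up fragment mimicking a lazy-loaded thumbnail: the real URL sits
# in the pseudo-attribute src2, while src holds a placeholder image.
html = '<div><a><img src="blank.gif" src2="//scpic.chinaz.net/files/pic.jpg"/></a></div>'
root = ET.fromstring(html)
img = root.find('.//img')

# Mirror the spider's 'https:' + @src2 concatenation
img_src = 'https:' + img.get('src2')
print(img_src)  # https://scpic.chinaz.net/files/pic.jpg
```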
In the pipeline file, define a custom pipeline class based on ImagesPipeline:
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
from itemadapter import ItemAdapter

# The default template pipeline is replaced by the image pipeline below:
# class ImgsproPipeline:
#     def process_item(self, item, spider):
#         return item

import scrapy
from scrapy.pipelines.images import ImagesPipeline


class ImgproPipeline(ImagesPipeline):

    def get_media_requests(self, item, info):
        # Issue a request for the image URL stored in the item
        yield scrapy.Request(item['img_src'])

    def file_path(self, request, response=None, info=None):
        # Customize the stored file name: keep the last segment of the URL
        img_name = request.url.split('/')[-1]
        return img_name

    def item_completed(self, results, item, info):
        return item  # this return value is passed to the next pipeline class in line
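The naming logic in file_path and the results argument that item_completed receives can be illustrated without running Scrapy. A plain-Python sketch (the URL and checksum values are made up):

```python
# file_path keeps only the last URL segment as the file name:
url = 'https://scpic.chinaz.net/files/pic/example.jpg'
img_name = url.split('/')[-1]
print(img_name)  # example.jpg

# item_completed receives results: a list of (success, info) two-tuples,
# one per request yielded from get_media_requests.  On success, info is a
# dict with 'url', 'path', and 'checksum'; on failure it is a Failure object.
results = [(True, {'url': url, 'path': img_name, 'checksum': 'abc123'})]
image_paths = [info['path'] for ok, info in results if ok]
print(image_paths)  # ['example.jpg']
```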
- In the settings file:
    - Specify the directory for storing images: IMAGES_STORE = './imgs'
    - Enable the custom pipeline class:
# Directory for storing downloaded images (note: ImagesPipeline requires Pillow)
IMAGES_STORE = './imgs'

ITEM_PIPELINES = {
    'imgsPro.pipelines.ImgproPipeline': 300,
}