Create the project:
Run scrapy startproject your_project_name, then inside the project directory run
scrapy genspider image_spider example.com
(genspider requires both a spider name and a domain.)
Define the Spider: in the spiders folder,
create a new file such as images_spider.py,
and write the crawling logic there, e.g. collecting the list of image URLs to download.
The implementation looks like this:
images_spider.py
import scrapy

class ImagesSpider(scrapy.Spider):
    name = 'image_spider'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/images/page1', 'http://example.com/images/page2']

    def parse(self, response):
        # Collect every image URL on the page; urljoin resolves relative src values
        image_urls = [response.urljoin(src) for src in response.css('img::attr(src)').getall()]
        yield {'image_urls': image_urls}
        # Follow the pagination link, if any
        next_page = response.css('a.next::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)
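Note that img src attributes are often relative, which is why the spider calls response.urljoin before yielding. Scrapy's response.urljoin(url) behaves like the standard library's urljoin(response.url, url); a minimal sketch with a hypothetical page URL and src values:

```python
from urllib.parse import urljoin

# Hypothetical page URL and src values a CSS selector might return
page_url = 'http://example.com/images/page1'
srcs = ['/static/a.jpg', 'img/b.png', 'http://cdn.example.com/c.gif']

# Relative paths are resolved against the page URL; absolute URLs pass through
absolute = [urljoin(page_url, src) for src in srcs]
print(absolute)
```

Root-relative paths replace everything after the host, plain relative paths replace the last path segment, and fully qualified URLs are left untouched.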
Configure the pipeline: the ITEM_PIPELINES switch belongs in settings.py (not pipelines.py), and IMAGES_STORE must point at the directory where downloaded images are saved:

ITEM_PIPELINES = {
    'your_project.pipelines.CustomImagesPipeline': 300,
}
IMAGES_STORE = 'downloaded_images'

Then implement the pipeline in pipelines.py:

import scrapy
from scrapy.exceptions import DropItem
from scrapy.pipelines.images import ImagesPipeline

class CustomImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # item['image_urls'] is the list of URLs yielded by the spider
        for image_url in item.get('image_urls', []):
            yield scrapy.Request(image_url)

    def file_path(self, request, response=None, info=None, *, item=None):
        # Name each file after the last segment of its URL
        image_name = request.url.split('/')[-1]
        return f"images/{image_name}"

    def item_completed(self, results, item, info):
        # results is a list of (success, info) tuples; keep only successful downloads
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Image download failed")
        item['image_paths'] = image_paths
        return item
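Naming files after the last URL segment is simple but can collide when different pages serve images with the same filename. Scrapy's built-in ImagesPipeline instead names files by a hash of the request URL; a minimal stdlib sketch of that idea, using a hypothetical helper name:

```python
import hashlib
from os.path import splitext
from urllib.parse import urlparse

def stable_image_path(url: str) -> str:
    """Derive a collision-resistant file path from an image URL.

    Mirrors the idea behind Scrapy's default naming scheme
    (a SHA1 digest of the URL), while keeping the original
    file extension when the URL has one.
    """
    digest = hashlib.sha1(url.encode('utf-8')).hexdigest()
    ext = splitext(urlparse(url).path)[1] or '.jpg'  # default extension is an assumption
    return f"images/{digest}{ext}"

print(stable_image_path('http://example.com/images/photo.png'))
```

Two distinct URLs always map to distinct paths, so re-crawls and duplicate filenames across pages cannot overwrite each other.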
Run the crawl:
scrapy crawl image_spider