Downloading Images with Scrapy

Create the project:
  First install the dependencies (Scrapy's image pipeline requires Pillow): pip install scrapy pillow
  Run scrapy startproject your_project_name
  cd your_project_name
  scrapy genspider image_spider example.com  (genspider needs both a spider name and a domain)
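For reference, those commands typically leave you with a layout like this (your_project_name is whatever name you chose):

  your_project_name/
      scrapy.cfg
      your_project_name/
          __init__.py
          items.py
          middlewares.py
          pipelines.py
          settings.py
          spiders/
              __init__.py
              image_spider.py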
Define the spider: in the spiders folder, create a file such as image_spider.py (the genspider command above already generates a stub) and write the crawling logic, e.g. collecting the URLs of the images you want to download.

The implementation looks like this:

# image_spider.py
import scrapy

class ImagesSpider(scrapy.Spider):
    name = 'image_spider'
    allowed_domains = ['example.com']  # replace with the site you want to crawl
    start_urls = ['http://example.com/images/page1', 'http://example.com/images/page2']  # likewise, replace with the real page URLs

    def parse(self, response):
        for img_url in response.css('img::attr(src)').getall():  # find image links with a CSS selector
            # urljoin resolves relative src values; the pipeline below expects a list under 'image_urls'
            yield {'image_urls': [response.urljoin(img_url)]}

        next_page = response.css('a.next::attr(href)').get()  # if the site is paginated, grab the next-page link
        if next_page is not None:
            yield response.follow(next_page, self.parse)
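Before running a full crawl, the selectors can be sanity-checked in Scrapy's interactive shell (the URL here is the placeholder from start_urls):

  scrapy shell 'http://example.com/images/page1'
  >>> response.css('img::attr(src)').getall()
  >>> response.css('a.next::attr(href)').get()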
Enable the pipeline. Note that ITEM_PIPELINES belongs in settings.py (not pipelines.py), and Scrapy only activates the image pipeline when IMAGES_STORE is set as well:

# settings.py
ITEM_PIPELINES = {
    'your_project_name.pipelines.MyImagesPipeline': 300,
}
IMAGES_STORE = 'downloaded_images'  # directory the images are saved under
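Optionally, the image pipeline can skip tiny images (icons, tracking pixels) and control re-download expiry; these are standard Scrapy settings, shown here with example values:

# settings.py (optional)
IMAGES_MIN_HEIGHT = 110  # ignore images shorter than 110px
IMAGES_MIN_WIDTH = 110   # ignore images narrower than 110px
IMAGES_EXPIRES = 90      # days before an already-downloaded image is re-fetched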

# pipelines.py
import scrapy
from scrapy.exceptions import DropItem
from scrapy.pipelines.images import ImagesPipeline

class MyImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # issue one download request per URL collected by the spider
        for image_url in item['image_urls']:
            yield scrapy.Request(image_url)

    def file_path(self, request, response=None, info=None, *, item=None):
        # name the file after the last segment of the image URL
        image_name = request.url.split('/')[-1]
        return f"images/{image_name}"

    def item_completed(self, results, item, info):
        # results is a list of (success, file_info) tuples, one per request
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Image download failed")
        item['image_paths'] = image_paths
        return item
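If you don't need custom file names, subclassing is optional: Scrapy's built-in ImagesPipeline already downloads every URL in an item's image_urls field and records the results in an images field. A minimal configuration, assuming the same IMAGES_STORE as above:

# settings.py — built-in pipeline, no subclass needed
ITEM_PIPELINES = {
    'scrapy.pipelines.images.ImagesPipeline': 300,
}
IMAGES_STORE = 'downloaded_images'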
Run the spider from the project root:
scrapy crawl image_spider
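With the file_path above, the downloaded images land under IMAGES_STORE/images/. Settings can also be overridden per run with -s, e.g. (the path is just an example):

  scrapy crawl image_spider -s IMAGES_STORE=/tmp/images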