Scrapy image scraping -- downloading images with urlretrieve in pipelines

1. Create the project and spider file

scrapy startproject biantuPro Projects
cd Projects
cd biantuPro
scrapy genspider biantu xxx   # xxx is a placeholder domain; the real start URL is set in step 2
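These commands generate the standard Scrapy scaffolding; the layout should look roughly like this (files touched in later steps are marked):

Projects/
├── scrapy.cfg              # project entry point; scrapy commands work here and below
└── biantuPro/
    ├── __init__.py
    ├── items.py            # edited in step 3
    ├── middlewares.py
    ├── pipelines.py        # edited in step 4
    ├── settings.py         # edited in step 5
    └── spiders/
        ├── __init__.py
        └── biantu.py       # generated by genspider, edited in step 2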

2. Edit the spider file biantu.py

import scrapy

from ..items import BiantuproItem


class BiantuSpider(scrapy.Spider):
    name = 'biantu'
    # allowed_domains = ['xxx']
    start_urls = ['https://pic.netbian.com/4kmeinv/']

    def parse(self, response):
        print('Starting crawl -------')
        li_list = response.xpath('//*[@id="main"]/div[3]/ul/li')
        for li in li_list:
            # Image title and thumbnail URL; @src is site-relative, so prefix the domain
            name = li.xpath('./a/b/text()').extract_first()
            src = 'https://pic.netbian.com' + li.xpath('./a/img/@src').extract_first()

            item = BiantuproItem(src=src, name=name)
            yield item
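Before running the full crawl, the two XPath expressions are easy to sanity-check interactively; a quick session in scrapy shell (same paths as the spider, assuming the page structure has not changed) might look like:

scrapy shell 'https://pic.netbian.com/4kmeinv/'
>>> li_list = response.xpath('//*[@id="main"]/div[3]/ul/li')
>>> li_list[0].xpath('./a/b/text()').extract_first()
>>> 'https://pic.netbian.com' + li_list[0].xpath('./a/img/@src').extract_first()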

3. Edit items.py

import scrapy


class BiantuproItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    src = scrapy.Field()
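Items behave like dicts restricted to the declared fields; a minimal sketch of the usage assumed by the spider and pipeline (values here are hypothetical):

item = BiantuproItem(name='demo', src='https://pic.netbian.com/demo.jpg')  # hypothetical values
print(item['name'])    # 'demo' -- same as item.get('name')
# item['other'] = 1    # would raise KeyError: only declared fields are allowed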

4. Edit pipelines.py

import os

import urllib.request

# The target folder must exist before urlretrieve can write into it
if not os.path.exists('./imgs'):
    os.makedirs('./imgs')


class BiantuproPipeline:
    def process_item(self, item, spider):
        name = item['name']   # item['name'] is equivalent to item.get('name')
        src = item['src']
        filename = './imgs/' + name + '.jpg'

        # Download the image synchronously with urllib (this blocks the pipeline)
        urllib.request.urlretrieve(url=src, filename=filename)
        return item
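One practical caveat: urlretrieve sends urllib's default User-Agent (Python-urllib/3.x), which image hosts often reject. Since urlretrieve routes through the globally installed opener, a workaround is to install one with browser-like headers before downloads start, e.g. in the pipeline's open_spider hook; a sketch (header values are assumptions, not tested against this site):

    def open_spider(self, spider):
        # urlretrieve uses the global opener, so headers set here
        # apply to every download in process_item.
        opener = urllib.request.build_opener()
        opener.addheaders = [
            ('User-Agent', 'Mozilla/5.0'),
            ('Referer', 'https://pic.netbian.com/4kmeinv/'),
        ]
        urllib.request.install_opener(opener)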

5. Edit settings.py

BOT_NAME = 'biantuPro'

SPIDER_MODULES = ['biantuPro.spiders']
NEWSPIDER_MODULE = 'biantuPro.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)'
# Note: COOKIE and REFERER are custom names, not built-in Scrapy settings --
# nothing in Scrapy (or in the urlretrieve pipeline) reads them; see the
# DEFAULT_REQUEST_HEADERS sketch after this file.
COOKIE = '__yjs_duid=1_c61ebacc9b33c4f7a158a3f30a9514871660543039095; Hm_lvt_c59f2e992a863c2744e1ba985abaea6c=1671718613,1671799228; zkhanecookieclassrecord=%2C54%2C66%2C58%2C; yjs_js_security_passport=072171d13d397c0eb73f1482f021b263f6d78201_1671807049_js; Hm_lpvt_c59f2e992a863c2744e1ba985abaea6c=1671808160'
REFERER = 'https://pic.netbian.com/4kmeinv/'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
LOG_LEVEL = 'ERROR'
ITEM_PIPELINES = {
   'biantuPro.pipelines.BiantuproPipeline': 300,
}
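If the site actually needs the Referer or Cookie on page requests, the built-in setting for that is DEFAULT_REQUEST_HEADERS; a sketch of the equivalent (with the usual cookie caveat):

DEFAULT_REQUEST_HEADERS = {
    'Referer': 'https://pic.netbian.com/4kmeinv/',
    # A Cookie header set here competes with Scrapy's CookiesMiddleware;
    # set COOKIES_ENABLED = False if the raw header must go through unchanged.
}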

6. Run the spider

scrapy crawl biantu
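The pipeline writes to the relative path ./imgs, so the files land under whatever directory the crawl was launched from; after the run, a quick check from that same directory:

ls imgs/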
