Scrapy learning case: scraping book information from Dangdang

1. Create the Scrapy project
scrapy startproject dangdang
2. Create the spider file (in ../dangdang/dangdang/spiders)
scrapy genspider dang https://category.dangdang.com/cp01.01.02.00.00.00.html
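
For orientation, these two commands produce the standard Scrapy project skeleton. The exact files can vary slightly between Scrapy versions, but it typically looks like this:
dangdang/
├── scrapy.cfg                 # deploy/configuration file
└── dangdang/
    ├── __init__.py
    ├── items.py               # data structure definitions
    ├── middlewares.py
    ├── pipelines.py           # item pipelines
    ├── settings.py            # project settings
    └── spiders/
        ├── __init__.py
        └── dang.py            # the spider generated by genspider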

Crawling approach

  1. Use XPath to parse the response and locate the outermost container of each record (here, an li tag)
li_list = response.xpath('//ul[@id="component_59"]/li')
  2. Loop over li_list and use XPath again on each element to extract the data (src, name, price); a note on the newer .get() API follows this snippet
# The first image differs from the others (it is not lazy-loaded)
src = li.xpath('.//img/@data-original').extract_first()
if src is None:
    src = li.xpath('.//img/@src').extract_first()

name = li.xpath('.//img/@alt').extract_first()
price = li.xpath('.//p[@class="price"]/span[1]/text()').extract_first()
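
A side note, not from the original write-up: recent Scrapy versions offer .get() and .getall() as shorter aliases for .extract_first() and .extract(), so the same extraction could be written as:
src = li.xpath('.//img/@data-original').get() or li.xpath('.//img/@src').get()
name = li.xpath('.//img/@alt').get()
price = li.xpath('.//p[@class="price"]/span[1]/text()').get()
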
  3. Instantiate the DangdangItem(scrapy.Item) class defined in items.py
from dangdang.items import DangdangItem

book = DangdangItem(src=src, name=name, price=price)
  4. Hand the book item over to the pipelines
yield book
  5. Save the data in a pipeline (../dangdang/dangdang/pipelines.py); a JSON Lines variant is sketched after this class
class DangdangPipeline:
    # Runs once before the spider starts
    def open_spider(self, spider):
        self.fp = open('book.json', 'w', encoding='utf-8')

    # item is whatever the spider yields
    def process_item(self, item, spider):
        # The commented-out pattern below is not recommended: the file would be reopened for every item, which is far too frequent
        # write() requires a string, not an item object
        # 'w' mode would reopen the file for every item and overwrite what was written before
        # with open('book.json', 'a', encoding='utf-8') as fp:
        self.fp.write(str(item))
        return item

    # Runs once after the spider finishes
    def close_spider(self, spider):
        self.fp.close()
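
One caveat: str(item) is Python repr text, so the resulting book.json is not strictly valid JSON. Below is a minimal sketch of an alternative pipeline (the class name DangdangJsonLinesPipeline is my own, not part of the original project) that writes one JSON object per line:
import json

from itemadapter import ItemAdapter


class DangdangJsonLinesPipeline:
    def open_spider(self, spider):
        self.fp = open('book.jsonl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # ItemAdapter(...).asdict() turns the Item into a plain dict for json.dumps
        self.fp.write(json.dumps(ItemAdapter(item).asdict(), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.fp.close()
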
  6. Add a second pipeline to download the cover images
class DangdangLoadPipeline:
    def process_item(self, item, spider):

        url = 'http:' + item.get('src')
        filename = './books/' + item.get('name') + '.jpg'

        urllib.request.urlretrieve(url=url, filename=filename)

        return item

Note: you need to create a books folder under ../dangdang/dangdang/spiders to hold the downloaded images (matching the path in the code), or create it in code as sketched below.
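
If you would rather not create the folder by hand, here is a sketch of the same pipeline that creates it automatically (a variant of the original code, assuming the relative path './books' resolves next to where the spider is run):
import os
import urllib.request


class DangdangLoadPipeline:
    def open_spider(self, spider):
        # create ./books once, before any image is downloaded
        os.makedirs('./books', exist_ok=True)

    def process_item(self, item, spider):
        url = 'http:' + item.get('src')
        filename = './books/' + item.get('name') + '.jpg'
        urllib.request.urlretrieve(url=url, filename=filename)
        return item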

  7. Enable the pipelines in settings (../dangdang/dangdang/settings.py)
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   "dangdang.pipelines.DangdangPipeline": 300,
   "dangdang.pipelines.DangdangLoadPipeline": 301,
}

You don't write this block from scratch; it already exists in settings.py but is commented out, so you only need to uncomment it.

There can be many pipelines; each has a priority in the range 1-1000, and lower values run first.
  8. Download multiple pages
if self.page < 100:
    self.page += 1
    url = 'https://category.dangdang.com/pg' + str(self.page) + '-cp01.01.02.00.00.00.html'

    # Request the next page and have parse handle it
    yield scrapy.Request(url=url, callback=self.parse)  # pass the callback without parentheses

The key line is yield scrapy.Request(url=url, callback=self.parse); an equivalent response.follow version is sketched below.
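
An equivalent way to queue the next page (assuming the same pgN URL pattern) is response.follow, which also resolves relative URLs for you:
if self.page < 100:
    self.page += 1
    next_url = f'https://category.dangdang.com/pg{self.page}-cp01.01.02.00.00.00.html'
    yield response.follow(next_url, callback=self.parse)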

3. Run the spider (from ../dangdang/dangdang/spiders)
scrapy crawl dang
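
If you only need the structured data and not the images, Scrapy's built-in feed exports can also write the items to a file without any custom pipeline, for example:
scrapy crawl dang -o books.json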

Code

  • ./dangdang/dangdang/spiders/dang.py
import scrapy
from dangdang.items import DangdangItem


class DangSpider(scrapy.Spider):
    name = "dang"
    allowed_domains = ["category.dangdang.com"]
    start_urls = ["https://category.dangdang.com/cp01.01.02.00.00.00.html"]

    page = 1

    def parse(self, response):
        # pipelines: save the data
        # items:     define the data structure

        # //ul[@id="component_59"]/li//img/@src
        # //ul[@id="component_59"]/li//img/@alt
        # //ul[@id="component_59"]/li//p[@class="price"]/span[1]/text()
        # Every Selector object can call xpath() again

        li_list = response.xpath('//ul[@id="component_59"]/li')

        for li in li_list:
            # The first image differs from the others (it is not lazy-loaded)
            src = li.xpath('.//img/@data-original').extract_first()
            if src is None:
                src = li.xpath('.//img/@src').extract_first()

            name = li.xpath('.//img/@alt').extract_first()
            price = li.xpath('.//p[@class="price"]/span[1]/text()').extract_first()

            book = DangdangItem(src=src, name=name, price=price)

            # Hand each book over to the pipelines as soon as it is built
            yield book

        # Pagination
        if self.page < 100:
            self.page += 1
            url = 'https://category.dangdang.com/pg' + str(self.page) + '-cp01.01.02.00.00.00.html'

            # Request the next page and have parse handle it
            yield scrapy.Request(url=url, callback=self.parse)  # pass the callback without parentheses

  • ./dangdang/dangdang/items.py
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


# Define the data structure to be scraped here
class DangdangItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    src = scrapy.Field()  # image URL
    name = scrapy.Field()  # book title
    price = scrapy.Field()  # price

  • ./dangdang/dangdang/pipelines.py
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import urllib.request

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


# To use pipelines, they must be enabled in settings
class DangdangPipeline:
    # Runs once before the spider starts
    def open_spider(self, spider):
        self.fp = open('book.json', 'w', encoding='utf-8')

    # item is whatever the spider yields
    def process_item(self, item, spider):
        # The commented-out pattern below is not recommended: the file would be reopened for every item, which is far too frequent
        # write() requires a string, not an item object
        # 'w' mode would reopen the file for every item and overwrite what was written before
        # with open('book.json', 'a', encoding='utf-8') as fp:
        self.fp.write(str(item))
        return item

    # Runs once after the spider finishes
    def close_spider(self, spider):
        self.fp.close()


# The second pipeline (multiple pipelines can be enabled)
class DangdangLoadPipeline:
    def process_item(self, item, spider):

        url = 'http:' + item.get('src')
        filename = './books/' + item.get('name') + '.jpg'

        # Download the resource at url and save it as filename
        urllib.request.urlretrieve(url=url, filename=filename)

        return item
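
As an alternative to urllib, Scrapy ships an ImagesPipeline that handles downloading and storage for you. The sketch below is my own adaptation, not part of the original project; it assumes Pillow is installed, an IMAGES_STORE path is set in settings.py, and a reasonably recent Scrapy version (older versions do not pass item into file_path):
import scrapy
from scrapy.pipelines.images import ImagesPipeline


class DangdangImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # let Scrapy's downloader fetch the cover image
        yield scrapy.Request('http:' + item.get('src'))

    def file_path(self, request, response=None, info=None, *, item=None):
        # save each image under IMAGES_STORE, named after the book title
        return item.get('name') + '.jpg'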

  • ./dangdang/dangdang/settings.py
# Scrapy settings for dangdang project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = "dangdang"

SPIDER_MODULES = ["dangdang.spiders"]
NEWSPIDER_MODULE = "dangdang.spiders"


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = "dangdang (+http://www.yourdomain.com)"

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
#    "Accept-Language": "en",
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    "dangdang.middlewares.DangdangSpiderMiddleware": 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    "dangdang.middlewares.DangdangDownloaderMiddleware": 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    "scrapy.extensions.telnet.TelnetConsole": None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # There can be many pipelines; priority range is 1-1000, lower values run first
   "dangdang.pipelines.DangdangPipeline": 300,
   "dangdang.pipelines.DangdangLoadPipeline": 301,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = "httpcache"
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"

# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"

Only the ITEM_PIPELINES block above needed to be uncommented.

All other files are unchanged.


Run results

  • A book.json file is generated in the ../dangdang/dangdang/spiders directory

  • All the images are saved in the ../dangdang/dangdang/spiders/books directory


From this example, you should learn how to:
  1. Write the items.py file

The format is basically a series of <xxx> = scrapy.Field() lines.

  2. Create a DangdangItem object from items.py

book = DangdangItem(src=src, name=name, price=price)

  3. Pass the item to the pipelines

yield book

  4. Write new pipelines by imitating the given ones, enable them, and uncomment ITEM_PIPELINES in settings.py
  5. Use a callback so a request calls parse again for pagination

yield scrapy.Request(url=url, callback=self.parse)

