Scrapy Framework Study (3): Crawling Suning Book Information

A case study: crawling book listings from Suning.

1. Project file structure

(screenshot: project file structure)

2. The target page to crawl

(screenshot: Suning book list page)

3. suning.py

# -*- coding: utf-8 -*-
import scrapy
from copy import deepcopy
import re


class SuningSpider(scrapy.Spider):
    name = 'suning'
    allowed_domains = ['suning.com']
    start_urls = ['https://list.suning.com/1-502325-0.html']

    def parse(self, response):
        # one <li> per book on the current list page
        li_list = response.xpath("//div[@id='filter-results']/ul/li")
        for li in li_list:
            item = {}
            item["introduction"] = li.xpath(".//img[@class='search-loading']/@alt").extract_first()

            item["img"] = li.xpath(".//div[@class='img-block']//img/@src2").extract_first()
            item["img"] = "http:" + item["img"]

            # the price is filled in by JavaScript, so it cannot be extracted from the static HTML:
            # item["price"] = li.xpath("//*[@id='filter-results']/ul/li[2]/div/div/div/div[2]/p[1]/em/b//text()").extract()
            item["store_name"] = li.xpath(".//p[@class='seller oh no-more ']/@salesname").extract_first()

            item["book_href"] = li.xpath(".//div[@class='img-block']/a/@href").extract_first()
            item["book_href"] = "https:" + item["book_href"]

            # yield scrapy.Request(
            #     item["book_href"],
            #     callback=self.get_price,
            #     meta={"item": deepcopy(item)}  # deepcopy so concurrent requests cannot overwrite each other
            # )
            # yield item  # each iteration builds a brand-new item dict, so there is no overwriting issue here
            print(item)

        # pagination: page count and current page are embedded in inline JavaScript
        page_count = int(re.findall("param.pageNumbers = \"(.*?)\";", response.body.decode())[0])
        current_page = int(re.findall("param.currentPage = \"(.*?)\";", response.body.decode())[0])
        print("page " + str(current_page) + " " + "*" * 50)
        if current_page < page_count:
            next_url = "https://list.suning.com/emall/showProductList.do?ci=502325&pg=03&cp={}".format(current_page + 1)
            yield scrapy.Request(
                next_url,
                callback=self.parse,
                # meta={"item": response.meta["item"]}  # pass along only the clean category info
                # meta={"item": item}
                dont_filter=True
            )

    # def get_price(self, response):
    #     pass
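The pagination logic pulls the page count and current page out of inline JavaScript with regular expressions. A minimal standalone sketch (with a made-up HTML fragment standing in for the real response body) behaves like this:

```python
import re

# Hypothetical fragment of the inline JavaScript found in the list page
body = '''
<script>
    param.pageNumbers = "100";
    param.currentPage = "0";
</script>
'''

# Same patterns as in the spider: capture the quoted value after each assignment
page_count = int(re.findall(r'param\.pageNumbers = "(.*?)";', body)[0])
current_page = int(re.findall(r'param\.currentPage = "(.*?)";', body)[0])
print(page_count, current_page)  # 100 0
```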

  • When subcategory items are built from data shared with their parent category, use a deep copy to prevent the data from being overwritten.
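Why the deep copy matters can be shown without Scrapy: if the same dict is reused across loop iterations (as happens when one item dict is passed through `meta` for several in-flight requests), later iterations mutate the dict that earlier references still point at. A minimal sketch:

```python
from copy import deepcopy

item = {}
shared, copied = [], []
for category in ["novel", "biography"]:
    item["category"] = category
    shared.append(item)            # every entry points at the same dict
    copied.append(deepcopy(item))  # each entry gets its own snapshot

print([d["category"] for d in shared])  # ['biography', 'biography'] - overwritten
print([d["category"] for d in copied])  # ['novel', 'biography'] - preserved
```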

4. pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BookPipeline(object):
    def process_item(self, item, spider):
        print(item)
        return item

  • Nothing special here; the pipeline simply prints the scraped data.
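To persist the data instead of just printing it, the pipeline could be extended to write one JSON line per item. This is a sketch, not part of the original project; the `books.jsonl` filename is an arbitrary choice:

```python
import json


class BookPipeline(object):
    def open_spider(self, spider):
        # opened once when the spider starts (filename is an assumption)
        self.file = open("books.jsonl", "w", encoding="utf-8")

    def process_item(self, item, spider):
        # one JSON object per line; ensure_ascii=False keeps Chinese text readable
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item

    def close_spider(self, spider):
        self.file.close()
```

`open_spider`/`close_spider` are standard Scrapy pipeline hooks, so the file is opened and closed exactly once per crawl rather than per item.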

5. settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for book project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'book'

SPIDER_MODULES = ['book.spiders']
NEWSPIDER_MODULE = 'book.spiders'

LOG_LEVEL = "WARNING"
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False  # must be False, or the disallowed requests will not be sent

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'book.middlewares.BookSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'book.middlewares.BookDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'book.pipelines.BookPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

  • Note: ROBOTSTXT_OBEY must be set to False, because from the second page onward the requests no longer satisfy the site's robots protocol.

6. Results

(screenshot: crawled results)
