Scrapy learning case: scraping book information from Dangdang
1. Create the Scrapy project
scrapy startproject dangdang
2. Create the spider file (in ../dangdang/dangdang/spiders)
scrapy genspider dang https://category.dangdang.com/cp01.01.02.00.00.00.html
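Running genspider creates a spider skeleton under spiders/. The exact output depends on the Scrapy version, but it looks roughly like this (the parse method gets filled in later in this note):
# ../dangdang/dangdang/spiders/dang.py -- approximate generated skeleton
import scrapy


class DangSpider(scrapy.Spider):
    name = "dang"
    allowed_domains = ["category.dangdang.com"]
    start_urls = ["https://category.dangdang.com/cp01.01.02.00.00.00.html"]

    def parse(self, response):
        pass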
Scraping approach
- Use XPath to parse the response and find the outermost element wrapping each record (here, the li tag)
li_list = response.xpath('//ul[@id="component_59"]/li')
- Loop over li_list and use XPath again on each entry to extract the fields (src, name, price)
# the first image differs from the others (it is not lazy-loaded)
src = li.xpath('.//img/@data-original').extract_first()
if src is None:
    src = li.xpath('.//img/@src').extract_first()
name = li.xpath('.//img/@alt').extract_first()
price = li.xpath('.//p[@class="price"]/span[1]/text()').extract_first()
- Instantiate the item class defined in items.py (class DangdangItem(scrapy.Item)):
from dangdang.items import DangdangItem
book = DangdangItem(src=src, name=name, price=price)
- Hand the book item over to the pipelines
yield book
- A pipeline writes the data to a file (../dangdang/dangdang/pipelines.py)
class DangdangPipeline:
    # runs once before the spider starts
    def open_spider(self, spider):
        self.fp = open('book.json', 'w', encoding='utf-8')

    # item is the object yielded by the spider
    def process_item(self, item, spider):
        # The pattern below is not recommended: it reopens the file for every item,
        # which touches the file far too often. write() also needs a string, not an
        # item object, and 'w' mode would overwrite the file on every open.
        # with open('book.json', 'a', encoding='utf-8') as fp:
        self.fp.write(str(item))
        return item

    # runs once after the spider finishes
    def close_spider(self, spider):
        self.fp.close()
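Note that str(item) writes a Python-repr line rather than valid JSON. A variation worth considering (a sketch, not part of the original case) is to serialize each item with json.dumps and write one object per line (JSON Lines):
import json

class DangdangPipeline:
    def open_spider(self, spider):
        self.fp = open('book.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # one JSON object per line; ensure_ascii=False keeps Chinese titles readable
        self.fp.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.fp.close()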
- Add a second pipeline to download the cover images
class DangdangLoadPipeline:
    def process_item(self, item, spider):
        url = 'http:' + item.get('src')
        filename = './books/' + item.get('name') + '.jpg'
        urllib.request.urlretrieve(url=url, filename=filename)
        return item
Note: a books folder must be created under ../dangdang/dangdang/spiders beforehand to hold the downloaded images (matching the relative path used in the code); a sketch for creating it automatically follows.
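A minimal sketch for creating the folder automatically, assuming the pipeline keeps the same ./books relative path:
import os
import urllib.request

class DangdangLoadPipeline:
    def open_spider(self, spider):
        # create ./books if it does not exist yet, so urlretrieve has a target directory
        os.makedirs('./books', exist_ok=True)

    def process_item(self, item, spider):
        url = 'http:' + item.get('src')
        filename = './books/' + item.get('name') + '.jpg'
        urllib.request.urlretrieve(url=url, filename=filename)
        return item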
- Enable the pipelines in settings (../dangdang/dangdang/settings.py)
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    "dangdang.pipelines.DangdangPipeline": 300,
    "dangdang.pipelines.DangdangLoadPipeline": 301,
}
You do not write this block yourself; it already exists in settings.py but is commented out, so you only need to uncomment it.
There can be many pipelines; each has a priority in the range 1-1000, and the lower the value, the higher the priority.
- Multi-page crawling
if self.page < 100:
    self.page += 1
    url = 'https://category.dangdang.com/pg' + str(self.page) + '-cp01.01.02.00.00.00.html'
    # schedule the next page and reuse parse as the callback
    yield scrapy.Request(url=url, callback=self.parse)  # pass the callback without parentheses
The key line is
yield scrapy.Request(url=url, callback=self.parse)
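An alternative sketch for pagination is to follow the page's own "next" link instead of counting pages; the XPath for that link is hypothetical here, since the real markup on Dangdang may differ:
# inside parse(), after yielding the items on the current page
next_href = response.xpath('//li[@class="next"]/a/@href').extract_first()  # hypothetical selector
if next_href:
    # response.follow resolves relative URLs against the current page
    yield response.follow(next_href, callback=self.parse)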
3. Run the spider (from the ../dangdang/dangdang/spiders directory)
scrapy crawl dang
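As an aside, Scrapy's feed exports can also dump items to a file without the custom file pipeline; the output name below is only for illustration and would duplicate what DangdangPipeline writes:
scrapy crawl dang -o book_feed.json
With FEED_EXPORT_ENCODING = "utf-8" (already set in settings.py below), Chinese titles stay readable in the exported file.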
Code
./dangdang/dangdang/spiders/dang.py
import scrapy
from dangdang.items import DangdangItem
class DangSpider(scrapy.Spider):
    name = "dang"
    allowed_domains = ["category.dangdang.com"]
    start_urls = ["https://category.dangdang.com/cp01.01.02.00.00.00.html"]
    page = 1

    def parse(self, response):
        # pipelines: persist the data
        # items: define the data structure
        # //ul[@id="component_59"]/li//img/@src
        # //ul[@id="component_59"]/li//img/@alt
        # //ul[@id="component_59"]/li//p[@class="price"]/span[1]/text()
        # every Selector object can call xpath() again
        li_list = response.xpath('//ul[@id="component_59"]/li')
        for li in li_list:
            # the first image differs from the others (it is not lazy-loaded)
            src = li.xpath('.//img/@data-original').extract_first()
            if src is None:
                src = li.xpath('.//img/@src').extract_first()
            name = li.xpath('.//img/@alt').extract_first()
            price = li.xpath('.//p[@class="price"]/span[1]/text()').extract_first()
            book = DangdangItem(src=src, name=name, price=price)
            # hand each book to the pipelines as soon as it is built
            yield book
        # multi-page crawling
        if self.page < 100:
            self.page += 1
            url = 'https://category.dangdang.com/pg' + str(self.page) + '-cp01.01.02.00.00.00.html'
            # schedule the next page and reuse parse as the callback
            yield scrapy.Request(url=url, callback=self.parse)  # pass the callback without parentheses
./dangdang/dangdang/items.py
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy
# define the data structure to scrape here
class DangdangItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    src = scrapy.Field()    # image URL
    name = scrapy.Field()   # book title
    price = scrapy.Field()  # price
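Because DangdangItem is a scrapy.Item, it behaves like a dict, which is why the pipelines can call item.get('src'). A quick sketch with placeholder values:
from dangdang.items import DangdangItem

book = DangdangItem(src='//example.ddimg.cn/cover.jpg', name='Example Book', price='¥19.90')  # placeholder values
print(book.get('name'))  # 'Example Book'
print(book['price'])     # '¥19.90'
print(dict(book))        # plain dict with src, name and price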
./dangdang/dangdang/pipelines.py
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import urllib.request
# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
# to use a pipeline, it must be enabled in settings
class DangdangPipeline:
    # runs once before the spider starts
    def open_spider(self, spider):
        self.fp = open('book.json', 'w', encoding='utf-8')

    # item is the object yielded by the spider
    def process_item(self, item, spider):
        # The pattern below is not recommended: it reopens the file for every item,
        # which touches the file far too often. write() also needs a string, not an
        # item object, and 'w' mode would overwrite the file on every open.
        # with open('book.json', 'a', encoding='utf-8') as fp:
        self.fp.write(str(item))
        return item

    # runs once after the spider finishes
    def close_spider(self, spider):
        self.fp.close()


# a second pipeline (multiple pipelines can be enabled)
class DangdangLoadPipeline:
    def process_item(self, item, spider):
        url = 'http:' + item.get('src')
        filename = './books/' + item.get('name') + '.jpg'
        # download the resource at url and save it as filename
        urllib.request.urlretrieve(url=url, filename=filename)
        return item
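As an alternative to urllib.request.urlretrieve, Scrapy also ships a built-in ImagesPipeline (it needs the Pillow package and an IMAGES_STORE setting, and by default it names files by a hash of the URL). A minimal sketch, not part of the original case:
import scrapy
from scrapy.pipelines.images import ImagesPipeline

# would replace DangdangLoadPipeline in ITEM_PIPELINES; set IMAGES_STORE = './books' in settings.py
class DangdangImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # the scraped src has no scheme, same as in DangdangLoadPipeline
        yield scrapy.Request('http:' + item.get('src'))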
../dangdang/dangdang/settings.py
# Scrapy settings for dangdang project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = "dangdang"
SPIDER_MODULES = ["dangdang.spiders"]
NEWSPIDER_MODULE = "dangdang.spiders"
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = "dangdang (+http://www.yourdomain.com)"
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
# "Accept-Language": "en",
#}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# "dangdang.middlewares.DangdangSpiderMiddleware": 543,
#}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# "dangdang.middlewares.DangdangDownloaderMiddleware": 543,
#}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# "scrapy.extensions.telnet.TelnetConsole": None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # there can be many pipelines; each has a priority from 1 to 1000, and lower values run first
    "dangdang.pipelines.DangdangPipeline": 300,
    "dangdang.pipelines.DangdangLoadPipeline": 301,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = "httpcache"
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"
# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"
The only change here is uncommenting the ITEM_PIPELINES block; every other file is left as generated.
Results
- A book.json file is generated in the ../dangdang/dangdang/spiders directory.
- All cover images are saved under the ../dangdang/dangdang/spiders/books directory.
Through this case study, you should learn:
- How to write items.py: it is essentially a set of fields of the form
<xxx> = scrapy.Field()
- How to create the DangdangItem object defined in items.py
book = DangdangItem(src=src, name=name, price=price)
- How to hand items to the pipelines
yield book
- How to model a new pipeline on the one provided, enable it, and uncomment ITEM_PIPELINES in settings.py
- How to use a callback that re-invokes the spider's own parse method
yield scrapy.Request(url=url, callback=self.parse)