Crawling Amazon books with a scrapy_redis distributed spider
- I have recently been learning about distributed crawling, so I picked Amazon books for a small practice run
- URL: https://www.amazon.cn/gp/book/all_category/ref=sv_b_0
- Prerequisites
- Install the Redis database (there are plenty of tutorials online; search for one if needed)
- Install Scrapy and scrapy-redis
- pip install scrapy (if this fails, search for a fix; on Windows it needs the Visual C++ build tools)
- pip install scrapy-redis
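- Before starting the crawl it can help to confirm that Redis is reachable with the password that settings.py will use; a minimal sanity-check sketch using the redis-py client (a dependency of scrapy-redis), assuming Redis listens locally on 6379 with password root as configured below:
import redis

# Connect with the same host/port/password that settings.py uses
r = redis.StrictRedis(host="127.0.0.1", port=6379, password="root")
print(r.ping())  # True means the connection and password are OK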
Crawling workflow
- Starting from the category page, crawl part of the book information on the list pages of each sub-category under every top-level category
Main code
- settings
# -*- coding: utf-8 -*-
# Scrapy settings for amazon project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'amazon'
SPIDER_MODULES = ['amazon.spiders']
NEWSPIDER_MODULE = 'amazon.spiders'
# scrapy-redis components: shared dupefilter, shared scheduler (persisted
# between runs), and the pipeline that stores scraped items in Redis
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 400,
}
REDIS_HOST = "127.0.0.1"
REDIS_PORT = 6379
REDIS_PARAMS = {
    'password': 'root',
}
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 0.5
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'amazon.middlewares.AmazonSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'amazon.middlewares.AmazonDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
# 'amazon.pipelines.AmazonPipeline': 300,
#}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
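- With SCHEDULER_PERSIST = True and RedisPipeline enabled, scrapy-redis keeps its state in Redis under keys derived from the spider name (by default book:requests for the shared request queue, book:dupefilter for request fingerprints, and book:items for scraped items). A small sketch for peeking at them while the crawl runs, using the same connection parameters as above:
import redis

r = redis.StrictRedis(host="127.0.0.1", port=6379, password="root")

# Default key names for a spider named "book"
print(r.zcard("book:requests"))    # pending requests (sorted set with the default priority queue)
print(r.scard("book:dupefilter"))  # request fingerprints already seen
print(r.llen("book:items"))        # items stored by RedisPipeline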
- spiders
# -*- coding: utf-8 -*-
import scrapy
from scrapy_redis.spiders import RedisSpider
from copy import deepcopy


class BookSpider(RedisSpider):
    name = 'book'
    allowed_domains = ['amazon.cn']
    # start_urls = ['http://amazon.cn/']
    redis_key = "amazon_book"

    def parse(self, response):
        # Each block on the category page holds one top-level category
        div_list = response.xpath('//div[@id="content"]/div[@class="a-row a-size-base"]')
        for div in div_list:
            item = {}
            item['first_title'] = div.xpath('./div[1]/h5/a/@title').extract_first()
            td_list = div.xpath('./div[2]//td')
            for td in td_list:
                item['second_title'] = td.xpath('./a/@title').extract_first()
                item['second_url'] = td.xpath('./a/@href').extract_first()
                if item['second_url']:
                    # One section has an incomplete URL, so check it first
                    if "http://www.amazon.cn/" in item['second_url']:
                        yield scrapy.Request(
                            url=item['second_url'],
                            callback=self.parse_book_list,
                            meta={'item': deepcopy(item)}
                        )

    def parse_book_list(self, response):
        item = response.meta['item']
        li_list = response.xpath('//div[@id="mainResults"]/ul/li')
        for li in li_list:
            item['book_name'] = li.xpath('.//div[@class="a-row a-spacing-small"]/div[1]/a/@title').extract_first()
            item['book_author'] = li.xpath('.//div[@class="a-row a-spacing-small"]/div[2]/span/text()').extract()
            item['book_type'] = li.xpath('.//div[@class="a-column a-span7"]/div[@class="a-row a-spacing-none"][1]//text()').extract_first()
            item['book_price'] = li.xpath('.//div[@class="a-column a-span7"]/div[@class="a-row a-spacing-none"][2]/a//text()').extract_first()
            print(item)
            # Yield a copy so the RedisPipeline enabled in settings can store it
            yield deepcopy(item)
        # Pagination: follow the "下一页" (next page) link
        next_url = response.xpath('(//a[text()="下一页"]|//a[@title="下一页"])/@href').extract_first()
        if next_url:
            next_url = "https://www.amazon.cn" + next_url
            yield scrapy.Request(
                url=next_url,
                callback=self.parse_book_list,
                meta={'item': item}
            )
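- The absolute-URL check in parse() simply skips the relative links mentioned in the summary below; an alternative sketch (not in the original code) that completes them with response.urljoin so those categories are followed as well:
# Drop-in replacement for the if-block inside BookSpider.parse():
if item['second_url']:
    # response.urljoin completes relative hrefs against the current page URL,
    # so the "Kindle今日特价书" section no longer has to be skipped
    yield scrapy.Request(
        url=response.urljoin(item['second_url']),
        callback=self.parse_book_list,
        meta={'item': deepcopy(item)}
    )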
Execution
- On the Master side, run (in redis-cli):
lpush amazon_book "https://www.amazon.cn/gp/book/all_category/ref=sv_b_0"
- On each Slave side, run:
scrapy crawl book
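- Alternatively, the start URL can be seeded from Python instead of redis-cli; a minimal sketch using redis-py with the same connection parameters as in settings.py:
import redis

r = redis.StrictRedis(host="127.0.0.1", port=6379, password="root")

# Push the start URL under the spider's redis_key; idle slaves running
# "scrapy crawl book" pick it up and begin crawling
r.lpush("amazon_book", "https://www.amazon.cn/gp/book/all_category/ref=sv_b_0")

# Length of the start-URL list (drops to 0 once a slave has consumed it)
print(r.llen("amazon_book"))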
Partial execution results
Summary
- The HTML of Amazon's pages is very regular, but that very regularity causes some trouble when extracting data with XPath.
- Most of the URLs are complete, but those under the "Kindle今日特价书" (Kindle daily deals) section are not, which is why the spider checks each URL before following it.
- It feels like a CrawlSpider-based approach might be a better fit for this site; a rough sketch of that idea follows.
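- A possible shape for that with scrapy-redis's RedisCrawlSpider; the rules, restrict_xpaths, and the parse_book callback here are illustrative assumptions, not tested against the site:
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule
from scrapy_redis.spiders import RedisCrawlSpider


class BookCrawlSpider(RedisCrawlSpider):
    name = 'book_crawl'
    allowed_domains = ['amazon.cn']
    redis_key = "amazon_book"

    rules = (
        # Follow category links on the start page (restrict_xpaths is a placeholder)
        Rule(LinkExtractor(restrict_xpaths='//div[@id="content"]'), follow=True),
        # Parse book list pages and keep following their pagination links
        Rule(LinkExtractor(restrict_xpaths='//div[@id="mainResults"]'),
             callback='parse_book', follow=True),
    )

    def parse_book(self, response):
        # Minimal item extraction, mirroring part of parse_book_list above
        for li in response.xpath('//div[@id="mainResults"]/ul/li'):
            yield {
                'book_name': li.xpath('.//a/@title').extract_first(),
            }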