Crawling JD books with scrapy_redis

The basic crawler workflow:

The crawler workflow in the Scrapy framework:

(workflow diagrams not included)

Crawling JD books with scrapy_redis:
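
The project was presumably created with the standard Scrapy commands; the project and spider names below are taken from the code that follows:

scrapy startproject jdbook
cd jdbook
scrapy genspider jd jd.com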

jd.py

# -*- coding: utf-8 -*-
import scrapy
from jdbook.items import JdbookItem
from copy import deepcopy
import json
import urllib

# Scratch note: items were not making it into MongoDB (the pipelines in ITEM_PIPELINES are commented out);
# for distributed crawling, add the four scrapy_redis lines in settings.py

class JdSpider(scrapy.Spider):
    name = 'jd'
    allowed_domains = ['jd.com','p.3.cn']
    start_urls = ['https://book.jd.com/booksort.html']

    def parse(self, response):
        dt_list = response.xpath("//div[@class='mc']/dl/dt")  # list of top-level categories
        for dt in dt_list:
            item = JdbookItem()
            item["b_cate"] = dt.xpath("./a/text()").extract_first()
            em_list = dt.xpath("./following-sibling::dd[1]/em")  # list of sub-categories
            for em in em_list:
                item["s_href"] = em.xpath("./a/@href").extract_first()
                item["s_cate"] = em.xpath("./a/text()").extract_first()
                if item["s_href"] is not None:
                    item["s_href"] = "https:" + item["s_href"]
                    yield scrapy.Request(item["s_href"],callback=self.parse_book_list,meta={"item": deepcopy(item)})

    def parse_book_list(self, response):  # parse a book list page
        item = response.meta["item"]
        li_list = response.xpath("//div[@id='plist']/ul/li")
        for li in li_list:
            item["book_img"] = li.xpath(".//div[@class='p-img']//img/@src").extract_first()
            if item["book_img"] is None:
                item["book_img"] = li.xpath(".//div[@class='p-img']//img/@data-lazy-img").extract_first()
            item["book_img"] = "https:" + item["book_img"] if item["book_img"] is not None else None
            item["book_name"] = li.xpath(".//div[@class='p-name']/a/em/text()").extract_first().strip()
            item["book_author"] = li.xpath(".//span[@class='author_type_1']/a/text()").extract()
            item["book_press"] = li.xpath(".//span[@class='p-bi-store']/a/@title").extract_first()
            item["book_publish_date"] = li.xpath(".//span[@class='p-bi-date']/text()").extract_first().strip()
            item["book_sku"] = li.xpath("./div/@data-sku").extract_first()
            url='https://sclub.jd.com/comment/productPageComments.action?&productId={}&score=0&sortType=5&page=0&pageSize=10'.format(item['book_sku'])
            yield scrapy.Request('https://p.3.cn/prices/mgets?skuIds=J_{}'.format(item["book_sku"]),callback=self.parse_book_price,meta={"item": deepcopy(item)})

            #yield scrapy.Request(url,callback=self.parse_book_comment, meta={"item": deepcopy(item)})

        # paginate through the list pages
        next_url = response.xpath("//a[@class='pn-next']/@href").extract_first()
        if next_url is not None:
            next_url = urllib.parse.urljoin(response.url, next_url)
            yield scrapy.Request(next_url,callback=self.parse_book_list,meta={"item": item})

    def parse_book_price(self, response):
        item = response.meta["item"]
        # the price API returns a JSON list with one entry per SKU; "op" holds the price string
        item["book_price"] = json.loads(response.body.decode())[0]["op"]
        yield item

    def parse_book_comment(self, response):  # only used if the comment request above is enabled
        item = response.meta["item"]
        data = json.loads(response.text)
        for comment in data["comments"]:
            item["comment"] = comment["content"]
            yield deepcopy(item)  # one item per comment
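
The spider above still inherits from scrapy.Spider, so only the request queue and the dupefilter are shared through Redis. To also have it read its start URLs from Redis, the class can be switched to RedisSpider. A minimal sketch is shown below; the redis_key name 'jd:start_urls' is an assumption, not taken from the original project:

from scrapy_redis.spiders import RedisSpider

class JdSpider(RedisSpider):
    name = 'jd'
    allowed_domains = ['jd.com', 'p.3.cn']
    # start requests are popped from this Redis list instead of start_urls
    redis_key = 'jd:start_urls'

    # parse(), parse_book_list() and parse_book_price() stay exactly as above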


items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class JdbookItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    b_cate = scrapy.Field()
    s_cate = scrapy.Field()
    book_img = scrapy.Field()
    book_name = scrapy.Field()
    book_author = scrapy.Field()
    book_press = scrapy.Field()
    book_publish_date = scrapy.Field()
    book_sku = scrapy.Field()
    book_price = scrapy.Field()
    s_href = scrapy.Field()
    comment = scrapy.Field()

settings.py (settings for the distributed crawler)

# -*- coding: utf-8 -*-

# Scrapy settings for jdbook project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'jdbook'

SPIDER_MODULES = ['jdbook.spiders']
NEWSPIDER_MODULE = 'jdbook.spiders'


# scrapy_redis: shared dupefilter, Redis-backed scheduler, queue persistence and the
# Redis connection. These four lines turn the project into a distributed crawler.
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True
REDIS_URL = "redis://127.0.0.1:6379"


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'jdbook (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'

#LOG_LEVEL = "WARNING"
# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'jdbook.middlewares.JdbookSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'jdbook.middlewares.JdbookDownloaderMiddleware': 543,
# }

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   # both pipelines are currently disabled; uncomment at least one to actually persist items
   #'jdbook.pipelines.JdbookPipeline': 300,
   #'jdbook.pipelines.JdbookPipeline1': 301,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
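
With the RedisSpider variant sketched earlier, every node is started with "scrapy crawl jd" and then waits for work; the crawl begins once a start URL is pushed onto the Redis list the spiders are watching. A minimal sketch using the redis-py client is shown below; the key name matches the redis_key assumed above, and the same thing can be done from redis-cli with the lpush command:

import redis

# connect to the same Redis instance configured in REDIS_URL
r = redis.StrictRedis.from_url("redis://127.0.0.1:6379")

# every idle spider node pops from this list and starts crawling
r.lpush("jd:start_urls", "https://book.jd.com/booksort.html")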

pipelines.py (storage pipelines)

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

from pymongo import MongoClient
import json

client = MongoClient('localhost',port=27017)
collection = client["jd"]["book1"]

class JdbookPipeline(object):
    def process_item(self, item, spider):
        collection.insert_one(dict(item))  # insert() was removed in pymongo 4.x; insert_one works across versions
        return item




class JdbookPipeline1(object):
    def process_item(self, item, spider):
        with open('temp.txt', 'a', encoding='utf-8') as f:
            f.write(json.dumps(dict(item), ensure_ascii=False, indent=2) + '\n')  # newline keeps records separated
        return item
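
Once one of the pipelines above is enabled in ITEM_PIPELINES, a quick sanity check from a Python shell shows whether items are actually reaching MongoDB; the database and collection names are the ones used in the pipeline:

from pymongo import MongoClient

client = MongoClient('localhost', port=27017)
books = client["jd"]["book1"]

print(books.count_documents({}))       # number of books stored so far
for doc in books.find().limit(3):      # peek at a few records
    print(doc.get("book_name"), doc.get("book_price"))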

 

Scrapy-Redis is a Python framework for distributed web crawling. It is an extension of the Scrapy framework that uses Redis as a distributed queue, so tasks and data can be shared between multiple crawler nodes. By importing the RedisSpider class from scrapy_redis.spiders and inheriting from it instead of scrapy.Spider, a spider gains distributed support. To run a distributed crawl, configure the project for distribution, copy it onto several servers, and start the crawler on every node so that each one runs independently. Then use the lpush command in redis-cli on the master Redis to push the start URLs onto the queue; all nodes will pull different tasks and data from the shared queue, which is what produces the distributed crawling effect. The scrapy_redis package itself is installed with "pip install scrapy_redis".

Reference: Scrapy基于scrapy_redis实现分布式爬虫部署, https://blog.csdn.net/baoshuowl/article/details/79701303
