Combining Scrapy with MongoDB: Reworking the Scrapy-Redis Source for a Solid Fingerprint Storage Mechanism!

This article shows how to combine Scrapy with MongoDB to build a solid request-fingerprint storage mechanism while relieving the memory pressure on Redis. We will dig into the Scrapy-Redis source code and rework it so that it can be configured flexibly for different scenarios. You are welcome to read along and join the discussion!

Special note: the articles on this public account are for academic research only and must not be used for any unlawful purpose; if there is any infringement, please contact the author to have it removed.


This is the 937th technical post from 「进击的Coder」.

Author: TheWeiJun

Source: 逆向与爬虫的故事

Reading this article takes about 17 minutes.


Contents

I. Introduction

II. Architecture Overview

III. Source Code Analysis

IV. Rewriting the Source

V. Summary



I. Introduction

When collecting data with Scrapy-Redis, we often run into the problem of Redis running out of memory. Once the number of fingerprints stored in Redis grows too large, Redis may crash and fingerprints may be lost, which in turn hurts the stability of the whole crawler. How should we deal with this kind of problem? In this article I share a solution: by reworking the Scrapy-Redis source code and introducing MongoDB for persistent storage, the problem above can be solved at its root. Read on and let's walk through how the solution is implemented, as well as the benefits and trade-offs it brings.

II. Architecture Overview

1. Before diving into the source code, we first need to look at the architecture diagrams of Scrapy and scrapy-redis. Compared with plain Scrapy, which parts were changed? With that question in mind, let's look at the architecture diagrams of the two frameworks:

[Figure 1: Scrapy architecture diagram]

[Figure 2: Scrapy-Redis architecture diagram]

2. Comparing Figure 2 with Figure 1, we can see that scrapy-redis adds Redis on top of the Scrapy architecture and, building on the characteristics of Redis, extends four components: Scheduler, DupeFilter, ItemPipeline and BaseSpider. This is also why three keys — spider:requests, spider:items and spider:dupefilter — appear in Redis. Next, let's move on to the source-code analysis and see how scrapy-redis handles fingerprints.


III. Source Code Analysis

1. Analyzing the scrapy-redis source: whenever we use scrapy-redis, we add configuration like the following to the settings module:

[Screenshot: scrapy-redis configuration in settings.py]

Summary: these three settings are what talk to Redis for, respectively, pushing/popping requests, request fingerprints and request priorities (a typical configuration is sketched below). If we want to change how fingerprints are stored, we need to rewrite the RFPDupeFilter component so that it can keep a large number of fingerprints in MongoDB. With that, let's dive into the source code.
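Since the original screenshot is not reproduced here, a typical scrapy-redis configuration along these lines looks roughly as follows (the class paths are the scrapy-redis defaults; REDIS_URL is a placeholder you would adjust to your own environment):

# Requests are pushed to and popped from Redis by the scrapy-redis scheduler.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Request fingerprints are kept in a Redis set by the Redis-based dupefilter.
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Request priorities are handled by a priority queue backed by a Redis sorted set.
SCHEDULER_QUEUE_CLASS = "scrapy_redis.queue.PriorityQueue"

# Redis connection string (placeholder).
REDIS_URL = "redis://localhost:6379/0"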

2. Now let's read through the RFPDupeFilter source. The complete source of RFPDupeFilter is attached below:

import logging
import time


from scrapy.dupefilters import BaseDupeFilter
from scrapy.utils.request import request_fingerprint


from . import defaults
from .connection import get_redis_from_settings




logger = logging.getLogger(__name__)




# TODO: Rename class to RedisDupeFilter.
class RFPDupeFilter(BaseDupeFilter):
    """Redis-based request duplicates filter.


    This class can also be used with default Scrapy's scheduler.


    """


    logger = logger


    def __init__(self, server, key, debug=False):
        """Initialize the duplicates filter.


        Parameters
        ----------
        server : redis.StrictRedis
            The redis server instance.
        key : str
            Redis key Where to store fingerprints.
        debug : bool, optional
            Whether to log filtered requests.


        """
        self.server = server
        self.key = key
        self.debug = debug
        self.logdupes = True


    @classmethod
    def from_settings(cls, settings):
        """Returns an instance from given settings.


        This uses by default the key ``dupefilter:<timestamp>``. When using the
        ``scrapy_redis.scheduler.Scheduler`` class, this method is not used as
        it needs to pass the spider name in the key.


        Parameters
        ----------
        settings : scrapy.settings.Settings


        Returns
        -------
        RFPDupeFilter
            A RFPDupeFilter instance.




        """
        server = get_redis_from_settings(settings)
        # XXX: This creates one-time key. needed to support to use this
        # class as standalone dupefilter with scrapy's default scheduler
        # if scrapy passes spider on open() method this wouldn't be needed
        # TODO: Use SCRAPY_JOB env as default and fallback to timestamp.
        key = defaults.DUPEFILTER_KEY % {'timestamp': int(time.time())}
        debug = settings.getbool('DUPEFILTER_DEBUG')
        return cls(server, key=key, debug=debug)


    @classmethod
    def from_crawler(cls, crawler):
        """Returns instance from crawler.


        Parameters
        ----------
        crawler : scrapy.crawler.Crawler


        Returns
        -------
        RFPDupeFilter
            Instance of RFPDupeFilter.


        """
        return cls.from_settings(crawler.settings)


    def request_seen(self, request):
        """Returns True if request was already seen.


        Parameters
        ----------
        request : scrapy.http.Request


        Returns
        -------
        bool


        """
        fp = self.request_fingerprint(request)
        # This returns the number of values added, zero if already exists.
        added = self.server.sadd(self.key, fp)
        return added == 0


    def request_fingerprint(self, request):
        """Returns a fingerprint for a given request.


        Parameters
        ----------
        request : scrapy.http.Request


        Returns
        -------
        str


        """
        return request_fingerprint(request)


    @classmethod
    def from_spider(cls, spider):
        settings = spider.settings
        server = get_redis_from_settings(settings)
        dupefilter_key = settings.get("SCHEDULER_DUPEFILTER_KEY", defaults.SCHEDULER_DUPEFILTER_KEY)
        key = dupefilter_key % {'spider': spider.name}
        debug = settings.getbool('DUPEFILTER_DEBUG')
        return cls(server, key=key, debug=debug)


    def close(self, reason=''):
        """Delete data on close. Called by Scrapy's scheduler.


        Parameters
        ----------
        reason : str, optional


        """
        self.clear()


    def clear(self):
        """Clears fingerprints data."""
        self.server.delete(self.key)


    def log(self, request, spider):
        """Logs given request.


        Parameters
        ----------
        request : scrapy.http.Request
        spider : scrapy.spiders.Spider


        """
        if self.debug:
            msg = "Filtered duplicate request: %(request)s"
            self.logger.debug(msg, {'request': request}, extra={'spider': spider})
        elif self.logdupes:
            msg = ("Filtered duplicate request %(request)s"
                   " - no more duplicates will be shown"
                   " (see DUPEFILTER_DEBUG to show all duplicates)")
            self.logger.debug(msg, {'request': request}, extra={'spider': spider})
            self.logdupes = False

3. Our analysis of the scrapy-redis dupefilter.py source is as follows:

[Screenshot: annotated RFPDupeFilter.request_seen source]

Explanation: inside request_seen, self.request_fingerprint runs a SHA1 hash over the request and produces a 40-character fingerprint fp. The fingerprint is then added to a Redis set with sadd. If the fingerprint did not exist yet, sadd returns 1, so added == 0 evaluates to False and request_seen returns False (the request is new). If the fingerprint already exists, sadd returns 0, added == 0 evaluates to True and request_seen returns True (the request is a duplicate). Next, let's see how the scheduler uses this result to decide whether to enqueue a request.
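To make the "40-character SHA1" point concrete, here is a tiny sketch that only assumes Scrapy is installed (request_fingerprint is deprecated in newer Scrapy releases in favour of a fingerprinter component, but it is the function used by the code above):

from scrapy import Request
from scrapy.utils.request import request_fingerprint

# The fingerprint is the hex-encoded SHA1 hash of the normalized request
# (method, canonical URL, body and, optionally, selected headers).
fp = request_fingerprint(Request("https://example.com/page?id=1"))
print(len(fp))  # 40
print(fp)       # a 40-character hex string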

4. Next we analyze the Scheduler source and look at how it decides whether a request should be enqueued:

[Screenshot: Scheduler.enqueue_request source]

Explanation: looking at enqueue_request, the logic is straightforward: if the request has deduplication enabled (dont_filter is False) and request_seen returns True, the request is not enqueued; otherwise it is pushed onto the queue and the enqueued counter is incremented.
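Since the screenshot is not reproduced here, the relevant method of scrapy_redis.scheduler.Scheduler looks roughly like this:

    def enqueue_request(self, request):
        # Drop the request if dedup is enabled and the dupefilter has already seen it.
        if not request.dont_filter and self.df.request_seen(request):
            self.df.log(request, self.spider)
            return False
        # Otherwise count it and push it onto the Redis-backed queue.
        if self.stats:
            self.stats.inc_value('scheduler/enqueued/redis', spider=self.spider)
        self.queue.push(request)
        return True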

Summary: at this point it is clear that all we need to change is the request_seen method in order to rework the scrapy-redis fingerprint handling and, combined with MongoDB, persist the fingerprints of every crawler. Without further ado, let's move on to rewriting the source.


IV. Rewriting the Source

1. First, we configure the MongoDB parameters in settings; the code is as follows:

MONGO_DB = "crawler"
MONGO_URL = "mongodb://localhost:27017"

2. Next, by subclassing and extending BaseDupeFilter, I implemented a custom deduplication module, MongoRFPDupeFilter; its source is as follows:

import logging
import time


from pymongo import MongoClient
from scrapy.dupefilters import BaseDupeFilter
from scrapy.utils.request import request_fingerprint
from scrapy_redis import defaults


logger = logging.getLogger(__name__)




class MongoRFPDupeFilter(BaseDupeFilter):
    """MongoDB-based request duplicates filter.
    This class can also be used with default Scrapy's scheduler.
    """


    logger = logger


    def __init__(self, key, debug=False, settings=None):
        self.key = key
        self.debug = debug
        self.logdupes: bool = True
        self.mongo_uri = settings.get('MONGO_URI')
        self.mongo_db = settings.get('MONGO_DB')
        self.client = MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]
        # One collection per dupefilter key, e.g. "<spider name>:dupefilter".
        self.collection = self.db[self.key]
        # MongoDB keeps a unique index on _id by default, so storing the
        # fingerprint as _id gives us deduplication for free.
        self.collection.create_index([("_id", 1)])


    @classmethod
    def from_settings(cls, settings):
        key = defaults.DUPEFILTER_KEY % {'timestamp': int(time.time())}
        debug = settings.getbool('DUPEFILTER_DEBUG')
        return cls(key=key, debug=debug, settings=settings)


    @classmethod
    def from_crawler(cls, crawler):
        """Returns instance from crawler.


        Parameters
        ----------
        crawler : scrapy.crawler.Crawler


        Returns
        -------
        MongoRFPDupeFilter
            Instance of MongoRFPDupeFilter.


        """
        return cls.from_settings(crawler.settings)


    def request_seen(self, request):
        """Returns True if request was already seen.
        """
        fp = self.request_fingerprint(request)
        # The fingerprint is stored as the document _id, so a lookup tells us
        # whether this request has been crawled before.
        if self.collection.find_one({'_id': fp}):
            return True
        self.collection.insert_one(
            {'_id': fp, "crawl_time": time.strftime("%Y-%m-%d")})
        return False


    def request_fingerprint(self, request):
        return request_fingerprint(request)


    @classmethod
    def from_spider(cls, spider):
        settings = spider.settings
        dupefilter_key = settings.get("SCHEDULER_DUPEFILTER_KEY", defaults.SCHEDULER_DUPEFILTER_KEY)
        key = dupefilter_key % {'spider': spider.name}
        debug = settings.getbool('DUPEFILTER_DEBUG')
        return cls(key=key, debug=debug, settings=settings)


    def close(self, reason=''):
        """Delete data on close. Called by Scrapy's scheduler.


        Parameters
        ----------
        reason : str, optional


        """
        self.clear()


    def clear(self):
        """Clears fingerprints data."""
        # Drop the whole fingerprint collection for this dupefilter key.
        self.collection.drop()


    def log(self, request, spider):
        """Logs given request.


        Parameters
        ----------
        request : scrapy.http.Request
        spider : scrapy.spiders.Spider


        """
        if self.debug:
            msg = "Filtered duplicate request: %(request)s"
            self.logger.debug(msg, {'request': request}, extra={'spider': spider})
        elif self.logdupes:
            msg = ("Filtered duplicate request %(request)s"
                   " - no more duplicates will be shown"
                   " (see DUPEFILTER_DEBUG to show all duplicates)")
            self.logger.debug(msg, {'request': request}, extra={'spider': spider})
            self.logdupes = False
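One caveat worth noting: find_one followed by insert_one is not atomic, so when many distributed spider processes share the same collection, two workers could in rare cases both treat the same fingerprint as new. A minimal sketch of a more robust variant (a drop-in replacement for the request_seen method above, same collection layout) lets the unique _id index decide and catches the duplicate-key error:

from pymongo.errors import DuplicateKeyError

def request_seen(self, request):
    """Atomic variant: rely on the unique _id index."""
    fp = self.request_fingerprint(request)
    try:
        self.collection.insert_one(
            {'_id': fp, "crawl_time": time.strftime("%Y-%m-%d")})
    except DuplicateKeyError:
        # Another process (or an earlier run) already stored this fingerprint.
        return True
    return False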

3. In the third step, we point the settings file at our MongoRFPDupeFilter; the code is as follows:

# Make sure every spider instance uses MongoDB for duplicate filtering
DUPEFILTER_CLASS = "test_scrapy.dupfilter.MongoRFPDupeFilter"

4. Write a test spider (the spider code itself is skipped here) and check the fp results directly in the MongoDB collection; the screenshot is as follows:

[Screenshot: fingerprint documents stored in the MongoDB collection]
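Since the screenshot is not reproduced here, you can verify the stored fingerprints yourself with a few lines of pymongo. The collection name below assumes the default dupefilter key format "<spider name>:dupefilter" and a hypothetical spider called myspider; adjust both to your own setup:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["crawler"]

# Default key format: "<spider name>:dupefilter"
collection = db["myspider:dupefilter"]

print(collection.count_documents({}))   # total number of stored fingerprints
for doc in collection.find().limit(3):
    print(doc)                          # e.g. {'_id': '<40-char sha1>', 'crawl_time': '<YYYY-MM-DD>'}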

Summary: that is the whole workflow. From now on, no matter how many spiders we build, they will all store their request fingerprints in MongoDB by default. Finally, let's compare the pros and cons of the scrapy-redis and the scrapy + MongoDB fingerprint approaches!

  • scrapy-redis: fast, but as the fingerprint set keeps growing, running out of memory can bring Redis down, and memory is expensive

  • scrapy + mongo: not quite as fast as Redis, but it can store huge numbers of fingerprints, and disk is cheap


V. Summary

Dear readers, thank you for exploring and learning together with me on this public account. To make it easier for us to communicate and grow together, I have enabled the comment feature. This is not only a place to share knowledge but also a corner where we grow side by side. Feel free to leave your learning notes, questions or suggestions in the comments so we can discuss and learn from each other. I look forward to sharing sparks of insight here and lighting the road ahead together. Thank you again for your company; let's keep learning and growing together!

That's all for this article. Stay tuned for the next one — see you there!

