Scrapy Source Code Analysis 3: Middlewares

1 Introduction

Scrapy has three types of middleware: downloader middlewares, spider middlewares, and extensions.

  • Downloader middlewares: sit between the engine and the downloader; they can apply custom logic before and after a page is downloaded;
  • Spider middlewares: sit between the engine and the spider; they process the downloaded result before it is fed to the spider, and process the requests / items after the spider outputs them;
  • Extensions: run throughout the whole crawl, mainly providing auxiliary features and statistics collection; all three kinds are enabled and ordered through the project settings, as sketched below.
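
A minimal, hypothetical settings.py sketch of how each kind is wired up (the myproject paths are placeholders, not part of Scrapy):

# settings.py -- each dict maps a class path to an order value; for the two
# middleware chains, lower values sit closer to the engine, while for
# extensions the value only affects load order.
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.MyDownloaderMiddleware': 543,  # hypothetical path
}
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.MySpiderMiddleware': 543,      # hypothetical path
}
EXTENSIONS = {
    'myproject.extensions.MyExtension': 500,              # hypothetical path
}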

2 Common base class: MiddlewareManager

The MiddlewareManager class lives in scrapy/middleware.py:

class MiddlewareManager:
    """Base class for implementing middleware managers"""

    component_name = 'foo middleware'

    def __init__(self, *middlewares):
        # Instances of all middlewares from settings, e.g. 'scrapy.extensions.corestats.CoreStats'
        self.middlewares = middlewares
        # Optional because process_spider_output and process_spider_exception can be None
        # A mapping from method name to the methods collected from each instance,
        # e.g. {"open_spider": [CoreStats.open_spider, ...]}
        self.methods: Dict[str, Deque[Optional[Callable]]] = defaultdict(deque)
        for mw in middlewares:
            self._add_middleware(mw)

    @classmethod
    def _get_mwlist_from_settings(cls, settings: Settings) -> list:
        # Each subclass implements this method; it returns the full class paths of all its middlewares
        raise NotImplementedError

    @classmethod
    def from_settings(cls, settings: Settings, crawler=None):
        mwlist = cls._get_mwlist_from_settings(settings)
        middlewares = []
        enabled = []
        for clspath in mwlist:
            try:
                # Load the class
                mwcls = load_object(clspath)
                # Create the middleware instance
                mw = create_instance(mwcls, settings, crawler)
                middlewares.append(mw)
                enabled.append(clspath)
            except NotConfigured as e:
                if e.args:
                    clsname = clspath.split('.')[-1]
                    logger.warning("Disabled %(clsname)s: %(eargs)s",
                                   {'clsname': clsname, 'eargs': e.args[0]},
                                   extra={'crawler': crawler})

        logger.info("Enabled %(componentname)ss:\n%(enabledlist)s",
                    {'componentname': cls.component_name,
                     'enabledlist': pprint.pformat(enabled)},
                    extra={'crawler': crawler})
        return cls(*middlewares)

    @classmethod
    def from_crawler(cls, crawler):
        return cls.from_settings(crawler.settings, crawler)

    def _add_middleware(self, mw) -> None:
        if hasattr(mw, 'open_spider'):
            self.methods['open_spider'].append(mw.open_spider)
        if hasattr(mw, 'close_spider'):
            self.methods['close_spider'].appendleft(mw.close_spider)

    def _process_parallel(self, methodname: str, obj, *args) -> Deferred:
        methods = cast(Iterable[Callable], self.methods[methodname])
        return process_parallel(methods, obj, *args)

    def _process_chain(self, methodname: str, obj, *args) -> Deferred:
        methods = cast(Iterable[Callable], self.methods[methodname])
        return process_chain(methods, obj, *args)

    def open_spider(self, spider: Spider) -> Deferred:
        # Call every middleware's open_spider method, passing the spider:
        # def open_spider(self, spider: Spider):
        #     ......
        return self._process_parallel('open_spider', spider)

    def close_spider(self, spider: Spider) -> Deferred:
        # Call every middleware's close_spider method, passing the spider:
        # def close_spider(self, spider: Spider):
        #     ......
        return self._process_parallel('close_spider', spider)

Note that open_spider methods run in forward (registration) order, while close_spider methods run in reverse order.
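
A standalone sketch of why: _add_middleware uses append for open_spider and appendleft for close_spider, so the two deques end up in opposite orders:

from collections import deque

opens, closes = deque(), deque()
for name in ('A', 'B', 'C'):     # registration order from settings
    opens.append(name)           # open_spider runs A, B, C
    closes.appendleft(name)      # close_spider runs C, B, A

print(list(opens))   # ['A', 'B', 'C']
print(list(closes))  # ['C', 'B', 'A']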

MiddlewareManager has three subclasses:

  • DownloaderMiddlewareManager manages the downloader middlewares
  • SpiderMiddlewareManager manages the spider middlewares
  • ExtensionManager manages the extensions
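
Note also the except NotConfigured branch in from_settings above: any component can opt out at construction time by raising NotConfigured. A minimal sketch (the class and the setting name are hypothetical):

from scrapy.exceptions import NotConfigured

class MyStatsExtension:

    def __init__(self, crawler):
        # MYSTATS_ENABLED is a hypothetical toggle; when it is off, the
        # manager logs "Disabled MyStatsExtension: ..." and skips the component
        if not crawler.settings.getbool('MYSTATS_ENABLED'):
            raise NotConfigured('MYSTATS_ENABLED is off')
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)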

3 ExtensionManager

The source lives in scrapy/extension.py:

class ExtensionManager(MiddlewareManager):

    component_name = 'extension'

    @classmethod
    def _get_mwlist_from_settings(cls, settings):
        return build_component_list(settings.getwithbase('EXTENSIONS'))

The defaults of EXTENSIONS and EXTENSIONS_BASE are:

EXTENSIONS = {}

EXTENSIONS_BASE = {
    'scrapy.extensions.corestats.CoreStats': 0,
    'scrapy.extensions.telnet.TelnetConsole': 0,
    'scrapy.extensions.memusage.MemoryUsage': 0,
    'scrapy.extensions.memdebug.MemoryDebugger': 0,
    'scrapy.extensions.closespider.CloseSpider': 0,
    'scrapy.extensions.feedexport.FeedExporter': 0,
    'scrapy.extensions.logstats.LogStats': 0,
    'scrapy.extensions.spiderstate.SpiderState': 0,
    'scrapy.extensions.throttle.AutoThrottle': 0,
}
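
Because build_component_list below drops entries whose value is None (via without_none_values), any default extension can be switched off from user settings, for example:

# settings.py -- disable the default telnet console
EXTENSIONS = {
    'scrapy.extensions.telnet.TelnetConsole': None,
}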

build_component_list, which merges these dicts into the final ordered list, lives in scrapy/utils/conf.py:

def build_component_list(compdict, custom=None, convert=update_classpath):
    """Compose a component list from a { class: order } dictionary."""

    def _check_components(complist):
        if len({convert(c) for c in complist}) != len(complist):
            raise ValueError(f'Some paths in {complist!r} convert to the same object, '
                             'please update your settings')

    def _map_keys(compdict):
        if isinstance(compdict, BaseSettings):
            compbs = BaseSettings()
            for k, v in compdict.items():
                prio = compdict.getpriority(k)
                if compbs.getpriority(convert(k)) == prio:
                    raise ValueError(f'Some paths in {list(compdict.keys())!r} '
                                     'convert to the same '
                                     'object, please update your settings'
                                     )
                else:
                    compbs.set(convert(k), v, priority=prio)
            return compbs
        else:
            _check_components(compdict)
            return {convert(k): v for k, v in compdict.items()}

    def _validate_values(compdict):
        """Fail if a value in the components dict is not a real number or None."""
        for name, value in compdict.items():
            if value is not None and not isinstance(value, numbers.Real):
                raise ValueError(f'Invalid value {value} for component {name}, '
                                 'please provide a real number or None instead')

    # BEGIN Backward compatibility for old (base, custom) call signature
    if isinstance(custom, (list, tuple)):
        _check_components(custom)
        return type(custom)(convert(c) for c in custom)

    if custom is not None:
        compdict.update(custom)
    # END Backward compatibility

    _validate_values(compdict)
    compdict = without_none_values(_map_keys(compdict))
    return [k for k, v in sorted(compdict.items(), key=itemgetter(1))]

As the final sorted(...) line shows, the components listed in EXTENSIONS are loaded in ascending order of their order values.
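
Both behaviors are easy to verify directly (the myproject paths are hypothetical):

from scrapy.utils.conf import build_component_list

components = {
    'myproject.ext.Late': 900,       # hypothetical class paths
    'myproject.ext.Early': 100,
    'myproject.ext.Disabled': None,  # dropped by without_none_values
}
print(build_component_list(components))
# ['myproject.ext.Early', 'myproject.ext.Late']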

4 DownloaderMiddlewareManager

The source lives in scrapy/core/downloader/middleware.py:

class DownloaderMiddlewareManager(MiddlewareManager):

    component_name = 'downloader middleware'

    @classmethod
    def _get_mwlist_from_settings(cls, settings):
        return build_component_list(
            settings.getwithbase('DOWNLOADER_MIDDLEWARES'))

    def _add_middleware(self, mw):
        if hasattr(mw, 'process_request'):
            self.methods['process_request'].append(mw.process_request)
        if hasattr(mw, 'process_response'):
            self.methods['process_response'].appendleft(mw.process_response)
        if hasattr(mw, 'process_exception'):
            self.methods['process_exception'].appendleft(mw.process_exception)

    def download(self, download_func: Callable, request: Request, spider: Spider):
        @defer.inlineCallbacks
        def process_request(request: Request):
            for method in self.methods['process_request']:
                method = cast(Callable, method)
                response = yield deferred_from_coro(method(request=request, spider=spider))
                if response is not None and not isinstance(response, (Response, Request)):
                    raise _InvalidOutput(
                        f"Middleware {method.__qualname__} must return None, Response or "
                        f"Request, got {response.__class__.__name__}"
                    )
                if response:
                    return response
            return (yield download_func(request=request, spider=spider))

        @defer.inlineCallbacks
        def process_response(response: Union[Response, Request]):
            if response is None:
                raise TypeError("Received None in process_response")
            elif isinstance(response, Request):
                return response

            for method in self.methods['process_response']:
                method = cast(Callable, method)
                response = yield deferred_from_coro(method(request=request, response=response, spider=spider))
                if not isinstance(response, (Response, Request)):
                    raise _InvalidOutput(
                        f"Middleware {method.__qualname__} must return Response or Request, "
                        f"got {type(response)}"
                    )
                if isinstance(response, Request):
                    return response
            return response

        @defer.inlineCallbacks
        def process_exception(failure: Failure):
            exception = failure.value
            for method in self.methods['process_exception']:
                method = cast(Callable, method)
                response = yield deferred_from_coro(method(request=request, exception=exception, spider=spider))
                if response is not None and not isinstance(response, (Response, Request)):
                    raise _InvalidOutput(
                        f"Middleware {method.__qualname__} must return None, Response or "
                        f"Request, got {type(response)}"
                    )
                if response:
                    return response
            return failure

        deferred = mustbe_deferred(process_request, request)
        deferred.addErrback(process_exception)
        deferred.addCallback(process_response)
        return deferred
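
Note how the three inner functions are chained: the errback is attached before the callback, so process_exception only sees failures raised during the request/download stage, and whatever value it recovers still flows into process_response. A minimal Twisted sketch of that semantics:

from twisted.internet.defer import Deferred
from twisted.python.failure import Failure

d = Deferred()
d.addErrback(lambda f: 'recovered')       # stands in for process_exception
d.addCallback(lambda r: f'final:{r}')     # stands in for process_response
d.errback(Failure(RuntimeError('boom')))  # simulate a failing download
d.addCallback(print)                      # prints 'final:recovered'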

The default middlewares are:

DOWNLOADER_MIDDLEWARES = {}

DOWNLOADER_MIDDLEWARES_BASE = {
    # Engine side
    'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
    'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,
    'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,
    'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
    'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,
    # Downloader side
}

DownloaderMiddlewareManager registers three methods:

  1. process_request runs before the download. It may return a Response, a Request, or None: None lets the chain continue down to download_func, while a Response or a Request skips the remaining middlewares and download_func. If it raises, control moves to process_exception.
  2. process_response runs after download_func completes. If the incoming result is a Request, it is returned immediately and re-added to the Slot's task queue. Each middleware must return a Response or a Request; a Request is returned immediately as above, otherwise the final Response instance is returned.
  3. process_exception runs when an exception occurs. It may return a Response, a Request, or None: when every middleware returns None the original failure is returned, while a Response or a Request skips the remaining middlewares.

process_request methods run in forward order; process_response and process_exception run in reverse order.

The method signatures are as follows; a usage sketch follows them.

def process_request(self, request: Request, spider: Spider) -> Union[None, Response, Request]:
    pass


def process_response(self, request: Request, response: Response, spider: Spider) -> Union[Response, Request]:
    pass


def process_exception(self, request: Request, exception: Exception, spider: Spider) -> Union[None, Response, Request]:
    pass
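
As a usage sketch of these rules (the class, the meta key, and the settings path are hypothetical, not part of Scrapy):

from scrapy.http import Response

class ShortCircuitMiddleware:

    def process_request(self, request, spider):
        # Returning a Response skips the remaining middlewares and the
        # real download; returning None lets the chain continue.
        if request.meta.get('skip_download'):  # hypothetical meta flag
            return Response(url=request.url, status=200, body=b'stub')
        return None

    def process_response(self, request, response, spider):
        # Must return a Response or a Request; a Request gets rescheduled.
        if response.status == 503:
            return request.replace(dont_filter=True)  # naive retry
        return response

    def process_exception(self, request, exception, spider):
        # None lets the failure propagate to the next middleware.
        return None

# settings.py:
# DOWNLOADER_MIDDLEWARES = {'myproject.middlewares.ShortCircuitMiddleware': 543}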

5 SpiderMiddlewareManager

The source lives in scrapy/core/spidermw.py:

class SpiderMiddlewareManager(MiddlewareManager):

    component_name = 'spider middleware'

    @classmethod
    def _get_mwlist_from_settings(cls, settings):
        return build_component_list(settings.getwithbase('SPIDER_MIDDLEWARES'))

    def _add_middleware(self, mw):
        super()._add_middleware(mw)
        if hasattr(mw, 'process_spider_input'):
            self.methods['process_spider_input'].append(mw.process_spider_input)
        if hasattr(mw, 'process_start_requests'):
            self.methods['process_start_requests'].appendleft(mw.process_start_requests)
        process_spider_output = getattr(mw, 'process_spider_output', None)
        self.methods['process_spider_output'].appendleft(process_spider_output)
        process_spider_exception = getattr(mw, 'process_spider_exception', None)
        self.methods['process_spider_exception'].appendleft(process_spider_exception)

    def _process_spider_input(self, scrape_func: ScrapeFunc, response: Response, request: Request,
                              spider: Spider) -> Any:
        for method in self.methods['process_spider_input']:
            method = cast(Callable, method)
            try:
                result = method(response=response, spider=spider)
                if result is not None:
                    msg = (f"Middleware {method.__qualname__} must return None "
                           f"or raise an exception, got {type(result)}")
                    raise _InvalidOutput(msg)
            except _InvalidOutput:
                raise
            except Exception:
                return scrape_func(Failure(), request, spider)
        return scrape_func(response, request, spider)

    def _evaluate_iterable(self, response: Response, spider: Spider, iterable: Iterable,
                           exception_processor_index: int, recover_to: MutableChain) -> Generator:
        try:
            for r in iterable:
                yield r
        except Exception as ex:
            exception_result = self._process_spider_exception(response, spider, Failure(ex),
                                                              exception_processor_index)
            if isinstance(exception_result, Failure):
                raise
            recover_to.extend(exception_result)

    def _process_spider_exception(self, response: Response, spider: Spider, _failure: Failure,
                                  start_index: int = 0) -> Union[Failure, MutableChain]:
        exception = _failure.value
        # don't handle _InvalidOutput exception
        if isinstance(exception, _InvalidOutput):
            return _failure
        method_list = islice(self.methods['process_spider_exception'], start_index, None)
        for method_index, method in enumerate(method_list, start=start_index):
            if method is None:
                continue
            result = method(response=response, exception=exception, spider=spider)
            if _isiterable(result):
                # stop exception handling by handing control over to the
                # process_spider_output chain if an iterable has been returned
                return self._process_spider_output(response, spider, result, method_index + 1)
            elif result is None:
                continue
            else:
                msg = (f"Middleware {method.__qualname__} must return None "
                       f"or an iterable, got {type(result)}")
                raise _InvalidOutput(msg)
        return _failure

    def _process_spider_output(self, response: Response, spider: Spider,
                               result: Iterable, start_index: int = 0) -> MutableChain:
        # items in this iterable do not need to go through the process_spider_output
        # chain, they went through it already from the process_spider_exception method
        recovered = MutableChain()

        method_list = islice(self.methods['process_spider_output'], start_index, None)
        for method_index, method in enumerate(method_list, start=start_index):
            if method is None:
                continue
            try:
                # might fail directly if the output value is not a generator
                result = method(response=response, result=result, spider=spider)
            except Exception as ex:
                exception_result = self._process_spider_exception(response, spider, Failure(ex), method_index + 1)
                if isinstance(exception_result, Failure):
                    raise
                return exception_result
            if _isiterable(result):
                result = self._evaluate_iterable(response, spider, result, method_index + 1, recovered)
            else:
                msg = (f"Middleware {method.__qualname__} must return an "
                       f"iterable, got {type(result)}")
                raise _InvalidOutput(msg)

        return MutableChain(result, recovered)

    def _process_callback_output(self, response: Response, spider: Spider, result: Iterable) -> MutableChain:
        recovered = MutableChain()
        result = self._evaluate_iterable(response, spider, result, 0, recovered)
        return MutableChain(self._process_spider_output(response, spider, result), recovered)

    def scrape_response(self, scrape_func: ScrapeFunc, response: Response, request: Request,
                        spider: Spider) -> Deferred:
        def process_callback_output(result: Iterable) -> MutableChain:
            return self._process_callback_output(response, spider, result)

        def process_spider_exception(_failure: Failure) -> Union[Failure, MutableChain]:
            return self._process_spider_exception(response, spider, _failure)

        dfd = mustbe_deferred(self._process_spider_input, scrape_func, response, request, spider)
        dfd.addCallbacks(callback=process_callback_output, errback=process_spider_exception)
        return dfd

    def process_start_requests(self, start_requests, spider: Spider) -> Deferred:
        return self._process_chain('process_start_requests', start_requests, spider)

The default middlewares are:

SPIDER_MIDDLEWARES = {}

SPIDER_MIDDLEWARES_BASE = {
    # Engine side
    'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': 50,
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': 500,
    'scrapy.spidermiddlewares.referer.RefererMiddleware': 700,
    'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware': 800,
    'scrapy.spidermiddlewares.depth.DepthMiddleware': 900,
    # Spider side
}

SpiderMiddlewareManager registers four methods:

  1. process_start_requests runs on the start requests after the engine (Engine) is created but before Engine.Slot is created; none of the default middlewares implement it.
  2. process_spider_input runs after the download finishes, before Request.callback (or Request.errback) is invoked; as _process_spider_input above shows, it must return None or raise an exception.
  3. process_spider_output receives the values returned from Request.callback and must return an iterable of requests / items.
  4. process_spider_exception handles exceptions raised anywhere in the flow above.

process_spider_input methods run in forward order; process_start_requests, process_spider_output, and process_spider_exception run in reverse order.

The method signatures are as follows; a usage sketch follows them.

def process_start_requests(self, start_requests: Iterable[Request], spider: Spider) -> Iterable[Request]:
    pass


def process_spider_input(self, response: Response, spider: Spider) -> None:
    pass


def process_spider_output(self, response: Response, result: Iterable, spider: Spider) -> Iterable[Union[Request, Item, dict]]:
    pass


def process_spider_exception(self, response: Response, exception: Exception, spider: Spider) -> Union[None, Iterable[Union[Request, Item, dict]]]:
    pass
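
And a usage sketch covering all four hooks (the class and the settings path are hypothetical):

class AuditSpiderMiddleware:

    def process_start_requests(self, start_requests, spider):
        for request in start_requests:
            request.meta['from_start'] = True  # hypothetical marker
            yield request

    def process_spider_input(self, response, spider):
        # Must return None or raise; raising routes a Failure into the
        # errback / process_spider_exception chain.
        if response.status >= 500:
            raise ValueError(f'server error on {response.url}')
        return None

    def process_spider_output(self, response, result, spider):
        # Must return an iterable of requests / items.
        for item_or_request in result:
            yield item_or_request

    def process_spider_exception(self, response, exception, spider):
        # None passes the exception to the next middleware; an iterable
        # recovers and re-enters the process_spider_output chain.
        return None

# settings.py:
# SPIDER_MIDDLEWARES = {'myproject.middlewares.AuditSpiderMiddleware': 543}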
