Scrapy retry mechanism: Python Scrapy does not retry on connection timeout (multiple middlewares)

While running a Scrapy crawler with both a retry policy and a proxy middleware configured, the author found that failed requests were not being retried as expected. Analysis showed that the retry middleware was executing before the proxy middleware, so requests were retried before they ever reached the proxy. The fix is to adjust the middleware priorities so that the proxy middleware runs before the retry middleware, ensuring that a failed request is retried with a fresh proxy.

Question

I've used some proxies to crawl some websites. Here is what I did in settings.py:

# Retry many times since proxies often fail
RETRY_TIMES = 10
# Retry on most error codes since proxies fail for different reasons
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]
DOWNLOAD_DELAY = 3  # 3 seconds of delay
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    'myspider.comm.rotate_useragent.RotateUserAgentMiddleware': 100,
    'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 200,
    'myspider.comm.random_proxy.RandomProxyMiddleware': 300,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 400,
}

And I also have a proxy downloader middleware which has the following methods:

def process_request(self, request, spider):
    log('Requesting url %s with proxy %s...' % (request.url, proxy))

def process_response(self, request, response, spider):
    log('Response received from request url %s with proxy %s' % (request.url, proxy if proxy else 'nil'))

def process_exception(self, request, exception, spider):
    log_msg('Failed to request url %s with proxy %s with exception %s' % (request.url, proxy if proxy else 'nil', str(exception)))
    # retry again.
    return request
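For context, the snippet above only shows the method bodies, and the proxy variable it logs is never defined in them. Below is a minimal sketch of what the full middleware might look like; the class name matches the settings above, but the hard-coded PROXIES pool is an assumption made purely for illustration. Attaching the proxy through request.meta['proxy'] is the standard hook that Scrapy's HttpProxyMiddleware reads. process_response is omitted here; see Answer 2 below for what it must return.

import logging
import random

logger = logging.getLogger(__name__)

class RandomProxyMiddleware(object):
    # Hypothetical hard-coded pool; in a real project this would more likely
    # be loaded from settings or a file.
    PROXIES = [
        'http://10.0.0.1:8080',
        'http://10.0.0.2:8080',
    ]

    def process_request(self, request, spider):
        # Pick a proxy and attach it where Scrapy's HttpProxyMiddleware
        # expects to find it.
        proxy = random.choice(self.PROXIES)
        request.meta['proxy'] = proxy
        logger.debug('Requesting url %s with proxy %s...', request.url, proxy)

    def process_exception(self, request, exception, spider):
        proxy = request.meta.get('proxy')
        logger.info('Failed to request url %s with proxy %s with exception %s',
                    request.url, proxy if proxy else 'nil', str(exception))
        # Returning nothing here leaves the exception to the built-in
        # RetryMiddleware; returning the request, as the question does,
        # reschedules it directly and bypasses the retry counter.

With the proxy assigned in process_request, a retried request automatically picks up a fresh proxy the next time it passes through the middleware.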

Since the proxies are not very stable, process_exception frequently reports the request-failure message. The problem here is that the failed request is never tried again.

As the above shows, I've set RETRY_TIMES and RETRY_HTTP_CODES in the settings, and I also return the request for a retry in the process_exception method of the proxy middleware.

Why does Scrapy never retry the failed request, and how can I make sure the request is tried at least the RETRY_TIMES I've set in settings.py?
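One way to see whether the built-in RetryMiddleware is re-issuing a request at all is to watch the retry_times key it stores in request.meta. The helper below is a hypothetical debugging middleware sketched for that purpose; the class and its logging are not part of the original setup.

import logging

logger = logging.getLogger(__name__)

class RetryDebugMiddleware(object):
    """Hypothetical helper, only for inspecting retry behaviour."""

    def process_request(self, request, spider):
        # RetryMiddleware stores the attempt count in 'retry_times';
        # the key is absent on the very first attempt.
        attempt = request.meta.get('retry_times', 0) + 1
        logger.info('Attempt %d for %s', attempt, request.url)

Register it in DOWNLOADER_MIDDLEWARES at any free priority and compare its output with the retry/count and retry/max_reached entries in the crawl stats that Scrapy prints at the end of a run.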


Best Answer

Thanks for the help from @nyov on the Scrapy IRC channel.

'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 200,

'myspider.comm.random_proxy.RandomProxyMiddleware': 300,

Here the Retry middleware gets run first, so it retries the request before the request ever makes it to the Proxy middleware. In my situation, Scrapy needs the proxies to crawl the website, otherwise it will time out endlessly.

So I reversed the priority of these two downloader middlewares:

'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 300,

'myspider.comm.random_proxy.RandomProxyMiddleware': 200,
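For reference, a sketch of the corrected DOWNLOADER_MIDDLEWARES block as a whole is shown below. Note that the scrapy.contrib.* paths date from pre-1.0 Scrapy; on Scrapy 1.0+ the same middlewares live under scrapy.downloadermiddlewares.*. The myspider.comm.* entries are the asker's own modules.

DOWNLOADER_MIDDLEWARES = {
    # Disable the stock user-agent middleware in favour of the rotating one.
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'myspider.comm.rotate_useragent.RotateUserAgentMiddleware': 100,
    # Proxy selection now sits closer to the engine than the retry logic.
    'myspider.comm.random_proxy.RandomProxyMiddleware': 200,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 300,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 400,
}

Lower numbers sit closer to the engine: process_request() is called in ascending order of these values, while process_response() and process_exception() are called on the way back in descending order, which is why swapping 200 and 300 changes which middleware gets to handle a failure first.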


Answer 2

It seems that your proxy downloader middleware's process_response is not playing by the rules and is hence breaking the middleware chain.

process_response() should either: return a Response object, return a Request object, or raise an IgnoreRequest exception.

If it returns a Response (it could be the same given response, or a brand-new one), that response will continue to be processed with the process_response() of the next middleware in the chain.

...
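Concretely, the process_response() in the question logs the response but never returns it, which is what breaks the chain. A minimal sketch of a version that follows that contract (the logger setup is an illustrative assumption):

import logging

logger = logging.getLogger(__name__)

class RandomProxyMiddleware(object):
    def process_response(self, request, response, spider):
        proxy = request.meta.get('proxy')
        logger.debug('Response received from request url %s with proxy %s',
                     request.url, proxy if proxy else 'nil')
        # Hand the response back (or return a new Request, or raise
        # IgnoreRequest) so the next middleware has something to process.
        return response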

