Question
I've used some proxies to crawl a website. Here is what I did in settings.py:
# Retry many times since proxies often fail
RETRY_TIMES = 10
# Retry on most error codes since proxies fail for different reasons
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]
DOWNLOAD_DELAY = 3  # 3 seconds (3,000 ms) of delay
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    'myspider.comm.rotate_useragent.RotateUserAgentMiddleware': 100,
    'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 200,
    'myspider.comm.random_proxy.RandomProxyMiddleware': 300,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 400,
}
I also have a proxy downloader middleware with the following methods:
def process_request(self, request, spider):
    log('Requesting url %s with proxy %s...' % (request.url, proxy))

def process_response(self, request, response, spider):
    log('Response received from request url %s with proxy %s'
        % (request.url, proxy if proxy else 'nil'))

def process_exception(self, request, exception, spider):
    log_msg('Failed to request url %s with proxy %s with exception %s'
            % (request.url, proxy if proxy else 'nil', str(exception)))
    # Retry again.
    return request
Since the proxies are not very stable, process_exception often reports request-failure messages. The problem is that the failed request is never tried again.
As shown above, I've set RETRY_TIMES and RETRY_HTTP_CODES, and I also return the request for a retry in the process_exception method of the proxy middleware.
Why does Scrapy never retry the failed request, and how can I make sure the request is tried at least the RETRY_TIMES I've set in settings.py?
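For context, Scrapy's own RetryMiddleware does more than just return the request when it retries: it copies the request, tracks the attempt count in request.meta['retry_times'], and sets dont_filter so the duplicate filter does not drop the re-issued request. A minimal sketch of a process_exception for the same middleware that mirrors this (it assumes log_msg is available as in the snippet above and that the proxy middleware stores the active proxy in request.meta['proxy']):

def process_exception(self, request, exception, spider):
    proxy = request.meta.get('proxy')  # assumed to be set by the proxy middleware
    log_msg('Failed to request url %s with proxy %s with exception %s'
            % (request.url, proxy if proxy else 'nil', str(exception)))
    retries = request.meta.get('retry_times', 0) + 1
    if retries <= 10:  # matches RETRY_TIMES in settings.py
        retryreq = request.copy()
        retryreq.meta['retry_times'] = retries
        retryreq.dont_filter = True  # don't let the dupefilter drop the retry
        return retryreq
    # After that many attempts, give up and let the exception propagate.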
Best Answer
Thanks for the help from @nyov on the Scrapy IRC channel. The problem was the ordering of these two middlewares:
'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 200,
'myspider.comm.random_proxy.RandomProxyMiddleware': 300,
Here the Retry middleware runs first, so it retries the request before it ever reaches the Proxy middleware. In my situation, Scrapy needs the proxies to crawl the website, or it will time out endlessly.
So I reversed the priorities of these two downloader middlewares:
'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 300,
'myspider.comm.random_proxy.RandomProxyMiddleware': 200,
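For reference, the full DOWNLOADER_MIDDLEWARES setting from the question with the two priorities swapped would look like the sketch below; the other entries are unchanged:

DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    'myspider.comm.rotate_useragent.RotateUserAgentMiddleware': 100,
    # Priorities swapped as described above: the proxy middleware now has the
    # lower number (closer to the engine), the retry middleware the higher one.
    'myspider.comm.random_proxy.RandomProxyMiddleware': 200,
    'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 300,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 400,
}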
Answer 2
It seems that your proxy downloader middleware's process_response is not playing by the rules and is therefore breaking the middleware chain.
process_response() should either: return a Response object, return a Request object, or raise an IgnoreRequest exception.
If it returns a Response (it could be the same given response, or a brand-new one), that response will continue to be processed with the process_response() of the next middleware in the chain.
...
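A minimal process_response that follows that contract might look like the sketch below; it keeps the logging from the question and always returns the response so the chain is not broken (the request.meta['proxy'] lookup assumes the proxy middleware stores the active proxy there):

def process_response(self, request, response, spider):
    proxy = request.meta.get('proxy')  # assumed to be set by the proxy middleware
    log('Response received from request url %s with proxy %s'
        % (request.url, proxy if proxy else 'nil'))
    # Returning the response keeps the middleware chain intact; returning
    # nothing (None) is what breaks it. RetryMiddleware can then decide,
    # based on the status code and RETRY_HTTP_CODES, whether to retry.
    return response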