Scrapy 2.6.2
In the latest release, Scrapy 2.6.2 (2022-07-25), the handling of the Proxy-Authorization header was changed, so the way proxies are configured has to change accordingly:
Official docs: Release notes — Scrapy 2.6.2 documentation
httpproxy source: scrapy.downloadermiddlewares.httpproxy — Scrapy 2.6.2 documentation
According to the official description, the relevant logic is implemented in the HttpProxyMiddleware class. Locally, the source file is at:
<Python installation path>\Lib\site-packages\scrapy\downloadermiddlewares\httpproxy.py. The relevant code:
def process_request(self, request, spider):
    creds, proxy_url = None, None
    if 'proxy' in request.meta:
        if request.meta['proxy'] is not None:
            creds, proxy_url = self._get_proxy(request.meta['proxy'], '')
    elif self.proxies:
        parsed = urlparse_cached(request)
        scheme = parsed.scheme
        if (
            (
                # 'no_proxy' is only supported by http schemes
                scheme not in ('http', 'https')
                or not proxy_bypass(parsed.hostname)
            )
            and scheme in self.proxies
        ):
            creds, proxy_url = self.proxies[scheme]
    self._set_proxy_and_creds(request, proxy_url, creds)

def _set_proxy_and_creds(self, request, proxy_url, creds):
    if proxy_url:
        request.meta['proxy'] = proxy_url
    elif request.meta.get('proxy') is not None:
        request.meta['proxy'] = None
    if creds:
        request.headers[b'Proxy-Authorization'] = b'Basic ' + creds
        request.meta['_auth_proxy'] = proxy_url
    elif '_auth_proxy' in request.meta:
        if proxy_url != request.meta['_auth_proxy']:
            if b'Proxy-Authorization' in request.headers:
                del request.headers[b'Proxy-Authorization']
            del request.meta['_auth_proxy']
    elif b'Proxy-Authorization' in request.headers:
        del request.headers[b'Proxy-Authorization']
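The _get_proxy helper called in process_request is not shown above. Roughly, it splits any user:password pair out of the proxy URL and base64-encodes it. The following is a simplified standalone sketch of that behaviour, not a verbatim quote of the Scrapy source; the function name get_proxy and the top-level form are mine:

import base64
from urllib.parse import unquote, urlunparse
from urllib.request import _parse_proxy

# Rough standalone equivalent of HttpProxyMiddleware._get_proxy: split any
# user:password pair out of the proxy URL and return it base64-encoded,
# together with the bare proxy URL (credentials removed).
def get_proxy(url, orig_type=''):
    proxy_type, user, password, hostport = _parse_proxy(url)
    proxy_url = urlunparse((proxy_type or orig_type, hostport, '', '', '', ''))
    creds = None
    if user:
        user_pass = f'{unquote(user)}:{unquote(password)}'.encode('latin-1')
        creds = base64.b64encode(user_pass)
    return creds, proxy_url

So a proxy URL with embedded credentials comes back as (base64 creds, bare URL), which _set_proxy_and_creds then writes into the Proxy-Authorization header and request.meta['proxy'] respectively.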
To support removing proxy credentials while keeping the proxy itself, the middleware must be able to delete the Proxy-Authorization header. If the proxy is defined on the request with the credentials embedded in the URL, e.g. request.meta['proxy'] = "http://username:password@some_proxy_server:port", Scrapy sets Proxy-Authorization in the request headers automatically. If, however, Proxy-Authorization was set manually (the proxy URL carries no credentials, so creds is None), the final elif branch above deletes it. Defining proxy and Proxy-Authorization separately therefore fails: the proxy rejects the now-unauthenticated request with an error such as: scrapy.core.downloader.handlers.http11.TunnelError: Could not open CONNECT tunnel with proxy XXXXXX:XXX [{'status': 407, 'reason': b'XXXXXXX'}]
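The broken combination can be reproduced outside a running crawl by invoking the middleware by hand. A minimal sketch, using a hypothetical proxy address:

from scrapy import Request
from scrapy.downloadermiddlewares.httpproxy import HttpProxyMiddleware
from w3lib.http import basic_auth_header

# Minimal reproduction of the 2.6.2 behaviour described above: the proxy
# URL carries no credentials, so creds is None, and the manually set
# Proxy-Authorization header hits the final elif branch and is deleted.
middleware = HttpProxyMiddleware()
request = Request("https://example.com")
request.meta['proxy'] = "http://some_proxy_server:8080"  # hypothetical proxy, no creds in URL
request.headers['Proxy-Authorization'] = basic_auth_header('username', 'password')

middleware.process_request(request, spider=None)
print(request.headers.get('Proxy-Authorization'))  # None: the header was stripped

In a real crawl the result is the 407/TunnelError shown above, because the CONNECT request reaches the proxy without credentials.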
The old ways of setting a proxy in Scrapy
1. Configure it in middlewares.py
Method 1:
from w3lib.http import basic_auth_header

class ProxyDownloaderMiddleware:
    def process_request(self, request, spider):
        proxy = "some_proxy_server:port"
        request.meta['proxy'] = "http://%(proxy)s" % {'proxy': proxy}
        # username/password authentication
        request.headers['Proxy-Authorization'] = basic_auth_header('username', 'password')
        request.headers["Connection"] = "close"
        return None
Method 2:
import base64

class ProxyDownloaderMiddleware:
    def process_request(self, request, spider):
        request.meta['proxy'] = "http://some_proxy_server:port"
        proxy_user_pass = "username:password"
        # base64.encodestring() was removed in Python 3.9; b64encode works on bytes
        encoded_user_pass = base64.b64encode(proxy_user_pass.encode()).decode()
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
        request.headers["Connection"] = "close"
        return None
Both methods also require the following configuration in settings.py:
DOWNLOADER_MIDDLEWARES = {
    'dltest.middlewares.ProxyDownloaderMiddleware': 100,
}
ROBOTSTXT_OBEY = False
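Alternatively, standard Scrapy behaviour (not specific to this change) lets you scope the same switches to a single spider through its custom_settings attribute. A sketch, reusing the dltest project name from above:

import scrapy

# Same configuration scoped to one spider via custom_settings instead of
# the project-wide settings.py.
class MySpider(scrapy.Spider):
    name = 'my_spider'
    custom_settings = {
        'DOWNLOADER_MIDDLEWARES': {
            'dltest.middlewares.ProxyDownloaderMiddleware': 100,
        },
        'ROBOTSTXT_OBEY': False,
    }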
2. Configure it in spider.py
def start_requests(self):
    url = "<proxy test URL>"
    proxy = {'proxy': 'https://username:password@some_proxy_server:port'}
    yield scrapy.Request(url, callback=self.parse, meta=proxy)

def parse(self, response):
    print(response.text)
Setting a proxy in Scrapy 2.6.2
In Scrapy 2.6.2 the first two methods (the middleware-based ones) raise the error above. They can be rewritten as follows, and HttpProxyMiddleware will then add the Proxy-Authorization header automatically:
class ProxyDownloaderMiddleware:
    def process_request(self, request, spider):
        proxy = "username:password@some_proxy_server:port"
        request.meta['proxy'] = "http://%s" % proxy
        request.headers["Connection"] = "close"
        return None
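To confirm that the middleware did inject the credentials, you can inspect the outgoing request from the response object. A small sketch, assuming the ProxyDownloaderMiddleware above is enabled and using httpbin.org/ip as one convenient IP-echo endpoint (any URL works for the header check):

import scrapy

class ProxyCheckSpider(scrapy.Spider):
    name = 'proxy_check'

    def start_requests(self):
        yield scrapy.Request('https://httpbin.org/ip', callback=self.parse)

    def parse(self, response):
        # The proxy's exit IP should appear in the body, and the header
        # added by HttpProxyMiddleware is visible on the request object.
        self.logger.info(response.text)
        self.logger.info(response.request.headers.get('Proxy-Authorization'))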
Configuring the proxy directly in spider.py, as before, also still works. A further option is to comment out the last two lines of httpproxy.py, which also runs fine, but it is not recommended: upstream made this change precisely to prevent accidental leaking of proxy credentials, and it was presumably settled on after careful consideration. For the detailed description of the security fix, read the official documentation:
# elif b'Proxy-Authorization' in request.headers:
# del request.headers[b'Proxy-Authorization']