Scrapy proxy IP error: twisted.web.error.SchemeNotSupported: Unsupported scheme: b''


Problem description:

The Scrapy spider ran fine on its own, but after I added a proxy IP in a DOWNLOADER_MIDDLEWARES middleware, requests started failing with the following traceback:

Traceback (most recent call last):
  File "F:\scrapy\venv\lib\site-packages\twisted\internet\defer.py", line 1416, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "F:\scrapy\venv\lib\site-packages\twisted\python\failure.py", line 491, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "F:\scrapy\venv\lib\site-packages\scrapy\core\downloader\middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
  File "F:\scrapy\venv\lib\site-packages\scrapy\utils\defer.py", line 45, in mustbe_deferred
    result = f(*args, **kw)
  File "F:\scrapy\venv\lib\site-packages\scrapy\core\downloader\handlers\__init__.py", line 65, in download_request
    return handler.download_request(request, spider)
  File "F:\scrapy\venv\lib\site-packages\scrapy\core\downloader\handlers\http11.py", line 67, in download_request
    return agent.download_request(request)
  File "F:\scrapy\venv\lib\site-packages\scrapy\core\downloader\handlers\http11.py", line 331, in download_request
    method, to_bytes(url, encoding='ascii'), headers, bodyproducer)
  File "F:\scrapy\venv\lib\site-packages\scrapy\core\downloader\handlers\http11.py", line 252, in request
    proxyEndpoint = self._getEndpoint(self._proxyURI)
  File "F:\scrapy\venv\lib\site-packages\twisted\web\client.py", line 1635, in _getEndpoint
    return self._endpointFactory.endpointForURI(uri)
  File "F:\scrapy\venv\lib\site-packages\twisted\web\client.py", line 1513, in endpointForURI
    raise SchemeNotSupported("Unsupported scheme: %r" % (uri.scheme,))
twisted.web.error.SchemeNotSupported: Unsupported scheme: b''
2019-04-24 14:54:40 [scrapy.core.engine] INFO: Closing spider (finished)

Cause analysis:

The middleware does set a proxy IP, but the proxy string is missing the "http://" scheme prefix, so Twisted cannot determine how to connect to the proxy endpoint.

Solution:

Prefix the proxy IP with "http://".

Working code:

    def process_request(self, request, spider):
        print("process_request")
        # The scheme prefix is required; without "http://" Twisted
        # raises SchemeNotSupported.
        proxy1 = "http://115.213.174.158:4223"
        # print(request.url)
        print("Using proxy IP:", proxy1)
        request.meta['proxy'] = proxy1

The code now runs successfully.
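If the proxy addresses come from a pool or a config file where some entries may lack the scheme, a small defensive helper can normalize them before assignment. This is a sketch only; `normalize_proxy` is a hypothetical name, not part of Scrapy's API:

```python
def normalize_proxy(proxy, scheme="http"):
    """Return the proxy URL with a scheme prefix, adding one if missing.

    Hypothetical helper for illustration; not part of Scrapy itself.
    """
    if "://" not in proxy:
        return f"{scheme}://{proxy}"
    return proxy


# In the middleware, each proxy would be normalized before use:
# request.meta['proxy'] = normalize_proxy(proxy_from_pool)
print(normalize_proxy("115.213.174.158:4223"))         # http://115.213.174.158:4223
print(normalize_proxy("http://115.213.174.158:4223"))  # unchanged
```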

Reference:

https://blog.csdn.net/jss19940414/article/details/85256424
