Running a Scrapy spider fails with a URLError. The log output is as follows:
scrapy crawl book
/home/xxxxxxx/work/python/tutorial/tutorial/spiders/book_spider.py:1: ScrapyDeprecationWarning: Module `scrapy.spider` is deprecated, use `scrapy.spiders` instead
from scrapy.spider import Spider
2017-06-09 11:34:15 [scrapy] INFO: Scrapy 1.0.1 started (bot: tutorial)
2017-06-09 11:34:15 [scrapy] INFO: Optional features available: ssl, http11, boto
2017-06-09 11:34:15 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2017-06-09 11:34:15 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2017-06-09 11:34:15 [boto] DEBUG: Retrieving credentials from metadata server.
2017-06-09 11:34:16 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/boto/utils.py", line 210, in retry_url
r = opener.open(req, timeout=timeout)
File "/usr/lib/python2.7/urllib2.py", line 429, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 447, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1228, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1198, in do_open
raise URLError(err)
URLError: <urlopen error timed out>
2017-06-09 11:34:16 [boto] ERROR: Unable to read instance data, giving up
2017-06-09 11:34:16 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, HttpProxyMiddleware, ChunkedTransferMiddleware, DownloaderStats
2017-06-09 11:34:16 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2017-06-09 11:34:16 [scrapy] INFO: Enabled item pipelines:
2017-06-09 11:34:16 [scrapy] INFO: Spider opened
2017-06-09 11:34:16 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-06-09 11:34:16 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-06-09 11:34:16 [scrapy] DEBUG: Crawled (200) <GET http://www.xxxxx.com/> (referer: None)
2017-06-09 11:34:17 [scrapy] INFO: Closing spider (finished)
2017-06-09 11:34:17 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 210,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 6280,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 6, 9, 3, 34, 17, 18397),
'log_count/DEBUG': 3,
'log_count/ERROR': 2,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2017, 6, 9, 3, 34, 16, 575498)}
2017-06-09 11:34:17 [scrapy] INFO: Spider closed (finished)
Cause:
That particular error message is generated by boto (boto 2.38.0 py27_0), which Scrapy uses to connect to Amazon S3. Even if the project never touches S3, Scrapy registers an S3 download handler by default, and at startup boto tries to read AWS credentials from the EC2 instance metadata server; on a machine that is not an EC2 instance, that request times out, producing the URLError above. Note that the crawl itself still succeeds (the page was fetched with a 200), so the error is noise rather than a failure.
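To confirm that the timeout really comes from this metadata lookup and not from the crawl, you can trigger the same probe directly. A quick diagnostic sketch (assuming boto 2.38 as in the log; run it on the same machine):

from boto.utils import get_instance_metadata

# On a machine that is not an EC2 instance this waits out the timeout
# and returns an empty dict; it is the same lookup that raises the
# URLError in the traceback above.
print(get_instance_metadata(timeout=1, num_retries=1))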
Solution:
The fix widely circulated online is to add the following to the project's settings.py:
DOWNLOAD_HANDLERS = {'S3': None,}
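One plausible explanation for why that exact snippet can fail to take effect: Scrapy registers its built-in handlers under lowercase URI schemes, so the override has to use the key 's3', not 'S3', to match the built-in entry. A hedged variant worth trying in settings.py:

DOWNLOAD_HANDLERS = {
    's3': None,  # disable the built-in S3 download handler so boto is never probed
}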
But after adding that, the error was still there when I ran the spider. What finally fixed it was adding the following at the top of the spider file:
from scrapy import optional_features
optional_features.remove('boto')
With that in place, the problem was solved.
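For reference, a minimal sketch of the whole spider file with the workaround applied. The file name book_spider.py, the spider name book, and the start URL are taken from the log above; the parse body is a placeholder, and the import follows the scrapy.spiders path recommended by the deprecation warning:

# book_spider.py (sketch). The workaround targets Scrapy 1.0.x, where
# scrapy.optional_features still exists; it was removed in later releases.
from scrapy import optional_features
optional_features.remove('boto')  # keep Scrapy from wiring boto into the S3 handler

from scrapy.spiders import Spider  # scrapy.spider is deprecated, per the warning above


class BookSpider(Spider):
    name = 'book'
    start_urls = ['http://www.xxxxx.com/']

    def parse(self, response):
        # placeholder: real extraction logic goes here
        self.logger.info('Fetched %s', response.url)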