Scrapy: AttributeError: 'HtmlResponse' object has no attribute 'follow' when using response.follow()

Running Scrapy raises AttributeError: 'HtmlResponse' object has no attribute 'follow'.

Full error log:

2017-05-20 22:58:44 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: my_project)
2017-05-20 22:58:44 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'my_project', 'FEED_FORMAT': 'jl', 'FEED_URI': 'author.jl', 'NEWSPIDER_MODULE': 'my_project.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['my_project.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 5.1; rv:5.0) Gecko/20100101 Firefox/5.0'}
Traceback (most recent call last):
  File "I:\Anaconda3\lib\site-packages\scrapy\spiderloader.py", line 53, in load
    return self._spiders[spider_name]
KeyError: 'author'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "I:\Anaconda3\Scripts\scrapy-script.py", line 5, in <module>
    sys.exit(scrapy.cmdline.execute())
  File "I:\Anaconda3\lib\site-packages\scrapy\cmdline.py", line 142, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "I:\Anaconda3\lib\site-packages\scrapy\cmdline.py", line 88, in _run_print_help
    func(*a, **kw)
  File "I:\Anaconda3\lib\site-packages\scrapy\cmdline.py", line 149, in _run_command
    cmd.run(args, opts)
  File "I:\Anaconda3\lib\site-packages\scrapy\commands\crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "I:\Anaconda3\lib\site-packages\scrapy\crawler.py", line 162, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "I:\Anaconda3\lib\site-packages\scrapy\crawler.py", line 190, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "I:\Anaconda3\lib\site-packages\scrapy\crawler.py", line 194, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "I:\Anaconda3\lib\site-packages\scrapy\spiderloader.py", line 55, in load
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: author'

(In the first run above, Scrapy could not find a spider named 'author'; the run below picks it up and reaches the actual error.)

E:\python\my_project>scrapy crawl author -o author.jl
2017-05-20 22:59:30 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: my_project)
2017-05-20 22:59:30 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'my_project', 'FEED_FORMAT': 'jl', 'FEED_URI': 'author.jl', 'NEWSPIDER_MODULE': 'my_project.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['my_project.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 5.1; rv:5.0) Gecko/20100101 Firefox/5.0'}
2017-05-20 22:59:30 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats']
2017-05-20 22:59:31 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-05-20 22:59:31 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-05-20 22:59:31 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-05-20 22:59:31 [scrapy.core.engine] INFO: Spider opened
2017-05-20 22:59:31 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-20 22:59:31 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-05-20 22:59:32 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://quotes.toscrape.com/robots.txt> (referer: None)
2017-05-20 22:59:32 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/> (referer: None)
2017-05-20 22:59:32 [scrapy.core.scraper] ERROR: Spider error processing <GET http://quotes.toscrape.com/> (referer: None)
Traceback (most recent call last):
  File "I:\Anaconda3\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
    yield next(it)
  File "I:\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "I:\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "I:\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "I:\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "E:\python\my_project\spiders\author_spider.py", line 12, in parse
    yield response.follow(href, self.parse_author)
AttributeError: 'HtmlResponse' object has no attribute 'follow'
2017-05-20 22:59:32 [scrapy.core.engine] INFO: Closing spider (finished)
2017-05-20 22:59:32 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 504,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 2701,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 5, 20, 14, 59, 32, 573099),
 'log_count/DEBUG': 3,
 'log_count/ERROR': 1,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/AttributeError': 1,
 'start_time': datetime.datetime(2017, 5, 20, 14, 59, 31, 285241)}
2017-05-20 22:59:32 [scrapy.core.engine] INFO: Spider closed (finished)

Solution:

Check which version of Scrapy you are running. Comparing the official documentation across releases shows that response.follow() was only introduced in Scrapy 1.4, so the method does not exist in earlier versions and calling it raises exactly this AttributeError. Consult the documentation that matches your installed version to see which API it supports.
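You can confirm the installed version from the command line before changing any code:

scrapy version

This prints, e.g., "Scrapy 1.3.3". From Python, scrapy.__version__ gives the same information.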
For example, I am on version 1.3.3, so the same logic has to be written with scrapy.Request() instead:

for next_page in response.css('li.next a::attr(href)').extract():
    # extract() returns relative URL strings, so make them absolute first
    next_page = response.urljoin(next_page)
    yield scrapy.Request(next_page, callback=self.parse)
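The line that actually failed in the traceback, yield response.follow(href, self.parse_author), can be rewritten the same way. A minimal sketch, assuming the spider collects author links from quotes.toscrape.com as in the official tutorial (the '.author + a' selector is taken from that tutorial, not from the log above; only the parse_author callback appears in the traceback):

for href in response.css('.author + a::attr(href)').extract():
    # response.follow() (Scrapy >= 1.4) accepts relative URLs directly;
    # with scrapy.Request() the URL must be made absolute by hand
    yield scrapy.Request(response.urljoin(href), callback=self.parse_author)

Alternatively, upgrading to Scrapy 1.4 or later (pip install --upgrade scrapy) lets you keep the response.follow() calls from the tutorial unchanged.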