I'm trying to crawl a local HTML file stored on my Desktop with the code below, but I run into errors such as "No such file or directory: '/robots.txt'" before the crawl starts. Is it possible to crawl a local HTML file on my local machine (Mac)?
If so, how should I set parameters such as allowed_domains and start_urls?
The command:
$ scrapy crawl test -o test01.csv
The spider:
^{pr2}$
The error log:
2018-11-16 01:57:52 [scrapy.core.engine] INFO: Spider opened
2018-11-16 01:57:52 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-11-16 01:57:52 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2018-11-16 01:57:52 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying (failed 1 times): [Errno 2] No such file or directory: '/robots.txt'
2018-11-16 01:57:56 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying (failed 2 times): [Errno 2] No such file or directory: '/robots.txt'
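Not a full answer, but a sketch of the two things the log points at: the retries come from Scrapy's robots.txt middleware trying to fetch `file:///robots.txt` (i.e. open `/robots.txt`), which does not exist for a local file, so `ROBOTSTXT_OBEY` needs to be off; and a local file is addressed with a `file://` URL rather than a domain, so `allowed_domains` can simply be omitted. The path `~/Desktop/test.html` below is an assumption, adjust it to your own file:

```python
from pathlib import Path

# Build a file:// URL for an HTML file on the Desktop.
# (The file name "test.html" is an assumption for illustration.)
html_path = Path.home() / "Desktop" / "test.html"
start_url = html_path.as_uri()
print(start_url)  # e.g. file:///Users/<you>/Desktop/test.html

# A minimal spider sketch using that URL (assumes Scrapy is installed):
#
# import scrapy
#
# class TestSpider(scrapy.Spider):
#     name = "test"
#     # No allowed_domains: a file:// URL has no domain to filter on.
#     start_urls = [start_url]
#     # Disable robots.txt checking; a local file has no /robots.txt,
#     # which is exactly what causes the "No such file or directory"
#     # retries in the log above.
#     custom_settings = {"ROBOTSTXT_OBEY": False}
#
#     def parse(self, response):
#         yield {"title": response.css("title::text").get()}
```

With that in place, `scrapy crawl test -o test01.csv` should read the local file instead of retrying the robots.txt request.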