1. Common commands
Global commands:
bench         Run quick benchmark test                                   # run a quick benchmark
fetch         Fetch a URL using the Scrapy downloader                    # fetch/download a given page
genspider     Generate new spider using pre-defined templates            # create a spider file
runspider     Run a self-contained spider (without creating a project)   # run a standalone spider
settings      Get settings values                                        # read configuration values
shell         Interactive scraping console                               # interactive scraping session (see the example after these lists)
startproject  Create new project                                         # create a new scraping project
version       Print Scrapy version                                       # show version info
view          Open URL in browser, as seen by Scrapy                     # open a URL in the browser, as Scrapy sees it
Project commands (available only inside a project):
check   Check spider contracts                              # run the spider's contract checks
crawl   Run a spider                                        # run a specific spider
edit    Edit spider                                         # open a spider in the editor
list    List available spiders                              # list all spiders in this project
parse   Parse URL (using its spider) and print the results  # parse a URL with its spider and print the results
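For example, shell starts an interactive session in which you can fetch a page and experiment with selectors. A rough sketch of a session (the URL is illustrative, output omitted; note that inside a project the robots.txt setting still applies):

```
D:\PyTest> scrapy shell http://example.com
>>> response.status                                    # HTTP status of the fetched page
>>> response.xpath('//title/text()').extract_first()   # try an XPath selector interactively
>>> view(response)                                     # open the fetched response in a browser
>>> exit()
```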
Creating a project
C:\Users\Administrator>D:              # switch to drive D:
D:\>cd scrapy                          # enter the scrapy folder
D:\scrapy>scrapy startproject shop     # create a project named shop under D:\scrapy
This generates the following structure:

D:\scrapy\
└── shop/                      # project root
    ├── scrapy.cfg             # configuration file
    └── shop/                  # the project's Python module
        ├── __pycache__/       # bytecode cache
        ├── spiders/           # spider files go here
        │   └── __init__.py    # package init file
        ├── __init__.py        # package init file
        ├── items.py           # declares the target data to scrape
        ├── middlewares.py     # middleware definitions
        ├── pipelines.py       # post-processing of scraped items
        └── settings.py        # project settings
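Of these, items.py is where you declare the fields you plan to scrape. A minimal sketch; the title and price fields are hypothetical examples, not part of the generated file:

```python
# shop/items.py -- declare the target fields for scraped data
import scrapy


class ShopItem(scrapy.Item):
    title = scrapy.Field()  # hypothetical field: e.g. a product title
    price = scrapy.Field()  # hypothetical field: e.g. a product price
```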
The fetch command
D:\PyTest\shop>scrapy fetch http://www.baidu.com
2019-10-29 09:55:07 [scrapy.utils.log] INFO: Scrapy 1.7.4 started (bot: shop)
2019-10-29 09:55:07 [scrapy.utils.log] INFO: Versions: lxml 4.4.1.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.7.0, Python 3.6.6 (v3.6.6:4cf1f54eb7, Jun 27 2018, 03:37:03) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.0j 20 Nov 2018), cryptography 2.4.2, Platform Windows-10-10.0.18362-SP0
2019-10-29 09:55:07 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'shop', 'NEWSPIDER_MODULE': 'shop.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['shop.spiders']}
2019-10-29 09:55:07 [scrapy.extensions.telnet] INFO: Telnet Password: ea2a77de7191b5be
2019-10-29 09:55:07 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2019-10-29 09:55:07 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-10-29 09:55:07 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-10-29 09:55:07 [scrapy.middleware] INFO: Enabled item pipelines:
['shop.pipelines.ShopPipeline']
2019-10-29 09:55:07 [scrapy.core.engine] INFO: Spider opened
2019-10-29 09:55:07 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-10-29 09:55:07 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-10-29 09:55:07 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.baidu.com/robots.txt> (referer: None)
2019-10-29 09:55:07 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET http://www.baidu.com>
2019-10-29 09:55:07 [scrapy.core.engine] INFO: Closing spider (finished)
2019-10-29 09:55:07 [scrapy.core.engine] ERROR: Scraper close failure
Traceback (most recent call last):
File "c:\program files\python\lib\site-packages\twisted\internet\defer.py", line 654, in _runCallbacks
current.result = callback(current.result, *args, **kw)
TypeError: close_spider() takes 1 positional argument but 2 were given
2019-10-29 09:55:07 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
'downloader/exception_type_count/scrapy.exceptions.IgnoreRequest': 1,
'downloader/request_bytes': 222,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 677,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'elapsed_time_seconds': 0.363772,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2019, 10, 29, 1, 55, 7, 959617),
'log_count/DEBUG': 2,
'log_count/ERROR': 1,
'log_count/INFO': 10,
'response_received_count': 1,
'robotstxt/forbidden': 1,
'robotstxt/request_count': 1,
'robotstxt/response_count': 1,
'robotstxt/response_status_count/200': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2019, 10, 29, 1, 55, 7, 595845)}
2019-10-29 09:55:07 [scrapy.core.engine] INFO: Spider closed (finished)
D:\PyTest\shop>
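Two things are worth noting in this log. First, the page itself was never downloaded: the project has 'ROBOTSTXT_OBEY': True, so the request to http://www.baidu.com is dropped as soon as robots.txt forbids it (setting ROBOTSTXT_OBEY = False in settings.py would allow it). Second, the "Scraper close failure" traceback is a bug in this project's shop/pipelines.py, not in Scrapy: Scrapy passes the spider to close_spider, so the method must accept it. A minimal sketch of a corrected pipeline:

```python
# shop/pipelines.py -- sketch; close_spider must accept the spider argument
# that Scrapy passes in (the cause of the TypeError in the log above).
class ShopPipeline(object):
    def process_item(self, item, spider):
        # post-process each scraped item here
        return item

    def close_spider(self, spider):
        # release resources (files, DB connections, ...) when the spider closes
        pass
```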
Listing the spider templates
D:\PyTest>scrapy genspider -l
Available templates:
basic    # the basic template
crawl    # CrawlSpider template for rule-based crawling; not the same as the crawl command above
csvfeed  # template for parsing CSV feeds
xmlfeed  # template for parsing XML feeds
D:\PyTest>
Generating a spider file
D:\PyTest\shop>scrapy genspider -t basic test baidu.com
Created spider 'test' using template 'basic' in module:
shop.spiders.test
D:\PyTest\shop>
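The generated shop/spiders/test.py looks roughly like this (the basic template as of Scrapy 1.7; the exact boilerplate may differ between versions):

```python
# -*- coding: utf-8 -*-
import scrapy


class TestSpider(scrapy.Spider):
    name = 'test'                       # the name used with `scrapy crawl test`
    allowed_domains = ['baidu.com']     # requests outside this domain are filtered out
    start_urls = ['http://baidu.com/']  # where the crawl starts

    def parse(self, response):
        # parse the response and yield items or follow-up requests here
        pass
```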
Checking that the spider is valid
D:\PyTest\shop>scrapy check test
----------------------------------------------------------------------
Ran 0 contracts in 0.000s
OK
D:\PyTest\shop>
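check ran 0 contracts here because the freshly generated spider does not declare any. Contracts live in a callback's docstring; a minimal sketch (the @url and @returns values are illustrative):

```python
import scrapy


class TestSpider(scrapy.Spider):
    name = 'test'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        """Contracts verified by `scrapy check`:

        @url http://baidu.com/
        @returns items 0 10
        @returns requests 0
        """
        pass
```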
Running the spider
D:\PyTest\shop>scrapy crawl test
2019-10-29 13:13:58 [scrapy.utils.log] INFO: Scrapy 1.7.4 started (bot: shop)
2019-10-29 13:13:58 [scrapy.utils.log] INFO: Versions: lxml 4.4.1.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.7.0, Python 3.6.6 (v3.6.6:4cf1f54eb7, Jun 27 2018, 03:37:03) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.0j 20 Nov 2018), cryptography 2.4.2, Platform Windows-10-10.0.18362-SP0
2019-10-29 13:13:58 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'shop', 'NEWSPIDER_MODULE': 'shop.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['shop.spiders']}
2019-10-29 13:13:58 [scrapy.extensions.telnet] INFO: Telnet Password: bc18b295f6e16b99
2019-10-29 13:13:58 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2019-10-29 13:13:59 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-10-29 13:13:59 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-10-29 13:13:59 [scrapy.middleware] INFO: Enabled item pipelines:
['shop.pipelines.ShopPipeline']
2019-10-29 13:13:59 [scrapy.core.engine] INFO: Spider opened
2019-10-29 13:13:59 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-10-29 13:13:59 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-10-29 13:13:59 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://baidu.com/robots.txt> (referer: None)
2019-10-29 13:13:59 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET http://baidu.com/>
2019-10-29 13:13:59 [scrapy.core.engine] INFO: Closing spider (finished)
2019-10-29 13:13:59 [scrapy.core.engine] ERROR: Scraper close failure
Traceback (most recent call last):
File "c:\program files\python\lib\site-packages\twisted\internet\defer.py", line 654, in _runCallbacks
current.result = callback(current.result, *args, **kw)
TypeError: close_spider() takes 1 positional argument but 2 were given
2019-10-29 13:13:59 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
'downloader/exception_type_count/scrapy.exceptions.IgnoreRequest': 1,
'downloader/request_bytes': 218,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 2680,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'elapsed_time_seconds': 0.333113,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2019, 10, 29, 5, 13, 59, 436161),
'log_count/DEBUG': 2,
'log_count/ERROR': 1,
'log_count/INFO': 10,
'response_received_count': 1,
'robotstxt/forbidden': 1,
'robotstxt/request_count': 1,
'robotstxt/response_count': 1,
'robotstxt/response_status_count/200': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2019, 10, 29, 5, 13, 59, 103048)}
2019-10-29 13:13:59 [scrapy.core.engine] INFO: Spider closed (finished)
D:\PyTest\shop>
Running without log output
D:\PyTest\shop>scrapy crawl test --nolog
D:\PyTest\shop>
Listing the spiders available in the current project
D:\PyTest\shop>scrapy list
test
D:\PyTest\shop>
XPath expressions
A quick comparison of XPath and regular expressions:
- XPath is usually a bit more efficient
- Regular expressions are a bit more powerful
- In general, prefer XPath, and fall back to regular expressions only for the problems XPath cannot solve.
Usage:
/                         : extract level by level
text()                    : the text inside a tag
//tag                     : extract every tag with the given name
//tag[@attribute='value'] : extract the tags whose attribute has the given value
@attribute                : take the value of an attribute
For example, to extract a page's title: /html/head/title/text()
To extract all div tags: //div
To extract the content of a <div class="tools"></div> tag: //div[@class='tools']/text()
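These expressions plug directly into a spider's parse callback (or the shell) through response.xpath. A minimal sketch; the spider name, start URL, and the tools class are illustrative:

```python
import scrapy


class XpathDemoSpider(scrapy.Spider):
    name = 'xpath_demo'                 # illustrative name
    start_urls = ['http://baidu.com/']  # illustrative start URL

    def parse(self, response):
        # /html/head/title/text(): descend level by level, then take the text node
        title = response.xpath('/html/head/title/text()').extract_first()
        # //div: every <div> anywhere in the document
        divs = response.xpath('//div').extract()
        # //div[@class="tools"]/text(): text of the divs whose class is "tools"
        tools = response.xpath('//div[@class="tools"]/text()').extract()
        self.logger.info('title=%r, %d divs, tools=%r', title, len(divs), tools)
```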