Scrapy Crawler Practice, Part 1: Getting Started
Preface
A friend recently asked me to help build a web crawler. Having barely done any scraping myself, I took it as a chance to catch up on the topic. This article walks through the process and the pitfalls I hit along the way.
I. Choosing a Crawler Framework: Scrapy
I had always heard that Python is the language best suited to web scraping, so naturally I picked Scrapy to practice with.
Anyone looking to get into scraping will run across Scrapy sooner or later. It is an application framework for crawling websites and extracting structured data, typically used for data mining, information processing, or historical archiving.
II. Installing Scrapy
I write my Python in Visual Studio Code, with Python 3.8 installed; I won't go over that setup here.
1. Required libraries
Scrapy is written in pure Python and depends on the following libraries:
- lxml
- parsel
- w3lib
- twisted
- cryptography and pyOpenSSL
All of the above can be installed directly with pip install; installing Scrapy itself also pulls them in automatically as dependencies.
2. Installation
pip install scrapy
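Once the install finishes, you can sanity-check it from Python itself; this is just a quick supplement to the CLI verification below:

```python
# Confirm Scrapy is importable and check which version was installed
import scrapy

print(scrapy.__version__)  # e.g. 2.5.0
```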
3. Verification
Open a shell and change into your workspace:
$ scrapy
Scrapy 2.5.0 - no active project

Usage:
  scrapy <command> [options] [args]

Available commands:
  bench          Run quick benchmark test
  commands
  fetch          Fetch a URL using the Scrapy downloader
  genspider      Generate new spider using pre-defined templates
  runspider      Run a self-contained spider (without creating a project)
  settings       Get settings values
  shell          Interactive scraping console
  startproject   Create new project
  version        Print Scrapy version
  view           Open URL in browser, as seen by Scrapy

  [ more ]       More commands available when run from project directory

Use "scrapy <command> -h" to see more info about a command
Done: the installation succeeded.
This step errored out for me at first; that's an old VS Code quirk, and simply closing and restarting VS Code fixed it.
III. The First Scrapy Crawler Project
1. Create a new project with the framework
scrapy startproject quote
After it is created, you can inspect the layout with the tree command:
E:\PythonWork\Scrapy\quote>tree /f
Folder PATH listing
Volume serial number is 78FD-091E
E:.
│  scrapy.cfg            # crawler configuration file
│
└─quote                  # the project's Python module; our code goes in here
    │  items.py          # item definitions
    │  middlewares.py    # middlewares; things like proxy/IP settings can be handled here
    │  pipelines.py      # item pipelines
    │  settings.py       # project settings
    │  __init__.py
    │
    └─spiders            # the folder holding the spiders
            __init__.py
For getting started, the spiders folder is essentially the only part we need to care about.
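A side note: instead of writing the spider file by hand as we do below, the genspider command from the verification output above can scaffold one for you, e.g. scrapy genspider quotes quotes.toscrape.com, which drops a template spider into the spiders folder. Here we write the file ourselves to see every part of it.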
2. The First Spider
In Scrapy, a spider is a class that defines how a site (or a set of pages) is scraped. It must be a subclass of Spider and define the initial requests, i.e. the URLs to crawl, plus how to process each downloaded page and extract data from it.
Create a file named quote_spider.py under the quote\spiders folder with the following content:
# This package will contain the spiders of your Scrapy project
#
# Please refer to the documentation for information on how to create and manage
# your spiders.
import scrapy


# Define our spider class, a subclass of Spider
class QuotesSpider(scrapy.Spider):
    # The spider's name, unique within the project; used later when
    # launching the crawl from the command line
    name = "quotes"

    # Return an iterable of requests; Scrapy calls this before crawling starts
    def start_requests(self):
        # The URLs to crawl
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    # Called after each request completes; response holds the downloaded data
    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f'quotes-{page}.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log(f'Saved file {filename}')
The parse() method is typically where the response is handled: you extract the scraped data into dicts, or find the next URLs to crawl and create new requests (via Request) for them.
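To make that concrete, here is a sketch of a data-extracting parse(), modeled on the official Scrapy tutorial; the CSS selectors assume the markup of quotes.toscrape.com:

```python
# Drop-in replacement for QuotesSpider.parse() above; the selectors
# (div.quote, span.text, ...) assume the quotes.toscrape.com markup
def parse(self, response):
    for quote in response.css('div.quote'):
        # Each yielded dict is collected by Scrapy as a scraped item
        yield {
            'text': quote.css('span.text::text').get(),
            'author': quote.css('small.author::text').get(),
            'tags': quote.css('div.tags a.tag::text').getall(),
        }
    # Queue the next page, if there is one
    next_page = response.css('li.next a::attr(href)').get()
    if next_page is not None:
        yield response.follow(next_page, callback=self.parse)
```

You can experiment with such selectors interactively via scrapy shell 'http://quotes.toscrape.com/page/1/', and the yielded dicts can be exported with a feed export, e.g. scrapy crawl quotes -o quotes.json.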
Note: if you get an encoding error here, deleting the non-ASCII (e.g. Chinese) comments from the file fixes it.
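Incidentally, Scrapy offers a shortcut for the spider above: instead of implementing start_requests(), you can set the class attribute start_urls, and Scrapy will build the initial requests and route each response to parse() by default:

```python
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    # With start_urls set, Scrapy generates the initial requests for us
    # and calls self.parse() with each response by default
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f'quotes-{page}.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
```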
3. Release the Spider
scrapy crawl quotes
This command runs the spider named quotes, the name we just defined in the file we added; run it from the project's top-level directory:
2021-05-20 16:11:29 [scrapy.utils.log] INFO: Scrapy 2.5.0 started (bot: quote)
2021-05-20 16:11:29 [scrapy.utils.log] INFO: Versions: lxml 4.6.3.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:48:03) [MSC v.1928 64 bit (AMD64)], pyOpenSSL 20.0.1 (OpenSSL 1.1.1k 25 Mar 2021), cryptography 3.4.7, Platform Windows-7-6.1.7601-SP1
2021-05-20 16:11:29 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2021-05-20 16:11:29 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'quote',
'NEWSPIDER_MODULE': 'quote.spiders',
'ROBOTSTXT_OBEY': True,
'SPIDER_MODULES': ['quote.spiders']}
2021-05-20 16:11:29 [scrapy.extensions.telnet] INFO: Telnet Password: cf9b8b15e70bb2c3
2021-05-20 16:11:29 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2021-05-20 16:11:30 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2021-05-20 16:11:30 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2021-05-20 16:11:30 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2021-05-20 16:11:30 [scrapy.core.engine] INFO: Spider opened
2021-05-20 16:11:30 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-05-20 16:11:30 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2021-05-20 16:11:31 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://quotes.toscrape.com/robots.txt> (referer: None)
2021-05-20 16:11:31 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/1/> (referer: None)
2021-05-20 16:11:31 [quotes] DEBUG: Saved file quotes-1.html
2021-05-20 16:11:31 [scrapy.core.scraper] DEBUG: Scraped from <200 http://quotes.toscrape.com/page/1/>
Output like this tells us the spider is working correctly. (The 404 on robots.txt is harmless: ROBOTSTXT_OBEY is enabled by default, so Scrapy checks for a robots.txt first, and this site simply doesn't have one.)
Check the working directory: quotes-1.html and quotes-2.html have been generated.
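As a final note, the crawl does not have to be launched from the command line. Here is a minimal sketch, following Scrapy's "run from a script" documentation; run it from the project root so the project settings can be located:

```python
# run.py: launch the 'quotes' spider from a script instead of the CLI
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl('quotes')  # the spider's name, as defined in quote_spider.py
process.start()          # blocks until the crawl finishes
```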
References
- https://docs.scrapy.org/en/latest/intro/tutorial.html