I. How the Scrapy Framework Works
1. Key Features of Scrapy
Scrapy is an application framework written in Python for crawling websites and extracting structured data.
It uses the Twisted asynchronous networking library to handle network communication.
With Scrapy, data crawling is efficient both in crawl speed and in development effort.
2. The Scrapy Framework
(1) Components and workflow:
The five core components:
Engine: the core of the whole framework
Scheduler: maintains the request queue
Downloader: fetches response objects
Spider: parses and extracts data
Item Pipeline: processes and stores the extracted data
Workflow:
The engine asks the spider for the first batch of URLs to crawl and hands them to the scheduler, which puts them in the queue.
The scheduler dequeues requests and passes them, through the downloader middleware, to the downloader to be fetched.
Once the downloader has a response object, it is passed, through the spider middleware, back to the spider.
The spider extracts data from the response:
Extracted items are handed to the item pipeline for storage.
URLs that still need to be followed are sent back to the scheduler and enqueued again, and the cycle repeats.
Scrapy framework and workflow diagram: (figure not included here)
(2) Scrapy architecture diagram (the green lines show the data flow): (figure not included here)
(3) The two middlewares and what they do:
Downloader middleware (Downloader Middlewares): request object -> engine -> downloader; wraps outgoing requests (random proxies, User-Agent rotation, etc.); a sketch follows this list
Spider middleware (Spider Middlewares): response object -> engine -> spider; can modify attributes of the response object
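As an illustration of the downloader-middleware hook, here is a minimal sketch of a middleware that attaches a random User-Agent to each outgoing request. The class name and the hard-coded User-Agent list are invented for this example; they are not part of the project shown later.

import random

class RandomUserAgentMiddleware:
    """Hypothetical downloader middleware: pick a random User-Agent per request."""

    # a tiny hard-coded list, purely for demonstration
    USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)',
    ]

    def process_request(self, request, spider):
        # called for every request passing through the downloader middleware
        request.headers['User-Agent'] = random.choice(self.USER_AGENTS)
        return None  # returning None lets the request continue on to the downloader

Such a class would live in middlewares.py and be enabled through DOWNLOADER_MIDDLEWARES in settings.py, as described in the next section.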
II. Scrapy Configuration Files in Detail
1. Common settings in settings.py
(1) Set the User-Agent: USER_AGENT = ' '
(2) Set the maximum number of concurrent requests (default: 16): CONCURRENT_REQUESTS = 16
(3) Set the download delay (how long to wait between page requests): DOWNLOAD_DELAY = 0.1
(4) Set the default request headers: DEFAULT_REQUEST_HEADERS = {}
(5) Set robots.txt handling: to keep the crawler from obeying robots.txt, set
ROBOTSTXT_OBEY = False
(6) Set the log level (default DEBUG; DEBUG < INFO < WARNING < ERROR < CRITICAL):
LOG_LEVEL = 'WARNING'  # only WARNING and higher-level messages appear on the console
(7) Set a log file:
LOG_FILE = 'xxx.log'  # usually left unset, in which case logs are printed to the console
(8) Set the export encoding:
FEED_EXPORT_ENCODING = 'gb18030'
FEED_EXPORT_ENCODING = 'utf8'  # mainly relevant for JSON files
(9) Enable item pipelines: priorities run from 1 to 1000, and the smaller the number, the higher the priority
ITEM_PIPELINES = {'project_package.pipelines.ClassName': priority}
(10) Cookie handling (enabled by default; set to False to disable it)
COOKIES_ENABLED = False
(11) Enable downloader middlewares (a consolidated example follows this list)
DOWNLOADER_MIDDLEWARES = {'project_package.middlewares.ClassName': priority}
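Putting these options together, a minimal settings.py fragment might look like the sketch below. The User-Agent string is a placeholder, the pipeline entry reuses the Quotes project names from the case study later in this article, and the middleware class is the hypothetical one sketched above.

USER_AGENT = 'Mozilla/5.0'        # placeholder User-Agent
CONCURRENT_REQUESTS = 16          # maximum concurrent requests
DOWNLOAD_DELAY = 0.1              # delay between requests, in seconds
ROBOTSTXT_OBEY = False            # ignore robots.txt
LOG_LEVEL = 'WARNING'             # only WARNING and above on the console
FEED_EXPORT_ENCODING = 'utf8'     # export encoding (mainly for JSON)
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
}
ITEM_PIPELINES = {
    'Quotes.pipelines.QuotesPipeline': 300,  # smaller number = higher priority
}
# DOWNLOADER_MIDDLEWARES = {
#     'Quotes.middlewares.RandomUserAgentMiddleware': 543,  # hypothetical class from above
# }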
2. items.py in detail
(1) Scrapy provides the Item class, which lets you declare the fields you want to scrape.
(2) An Item behaves like a dictionary: declare the required fields here, and once the spider instantiates the Item class and fills it in, the data is handed to the pipeline for processing (a small usage sketch follows the code below).
import scrapy

class QuotesItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # the fields we want to scrape
    text = scrapy.Field()
    author = scrapy.Field()
    tags = scrapy.Field()
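To make the "behaves like a dictionary" point concrete, a quick sketch (the quote text is invented for the example):

item = QuotesItem()
item['text'] = 'An example quote.'   # assign declared fields like dict keys
print(item['text'])                  # read them back the same way
# item['title'] = '...'              # would raise KeyError: 'title' was never declared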
3. The spider file in detail
(1) Common attributes:
name: the spider's name, used when running the project
allowed_domains: the domains allowed to be crawled; URLs outside them are filtered out
start_urls: the URLs the spider starts from when the project is launched
import scrapy
from ..items import QuotesItem

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['https://quotes.toscrape.com/page/1/']
    i = 1  # current page number, starts at 1

    def parse(self, response):
        pass
(2) How the spider file runs
When the project starts, the engine finds this spider and hands the URLs in start_urls to the scheduler to be enqueued; the scheduler passes queued requests to the downloader, which fetches a response; the response travels back through the engine to this spider, where it is parsed in the parse() method.
III. Commands Used
1. Create a Scrapy project
Example: scrapy startproject Baidu
scrapy startproject <project_name>
2. Create a spider file
Example: scrapy genspider baidu www.baidu.com
scrapy genspider <spider_name> <domain>
3. Run the spider
Example: scrapy crawl baidu
scrapy crawl <spider_name>
4. Run the spider and export a CSV file directly
Example: cmdline.execute('scrapy crawl baidu -o baidu.csv'.split())
scrapy crawl <spider_name> -o <output.csv>
Note: on Windows, the generated CSV file may show garbled Chinese characters when opened in Excel.
Fix:
Set the export encoding in settings.py:
FEED_EXPORT_ENCODING = 'gb18030'
FEED_EXPORT_ENCODING = 'utf8'
5. Run the spider and export a JSON file directly (a launcher-script version is sketched below)
Example: cmdline.execute('scrapy crawl baidu -o baidu.json'.split())
scrapy crawl <spider_name> -o <output.json>
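For convenience, the export commands in items 4 and 5 can be placed in a small launcher script (here assumed to be named run.py at the project root, like the run.py in the case study below):

from scrapy import cmdline

# equivalent to typing `scrapy crawl baidu -o baidu.csv` on the command line
cmdline.execute('scrapy crawl baidu -o baidu.csv'.split())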
IV. Commonly Used Functions
1. Spider file: parse() is the parsing/extraction function (see the sketch below)
response.xpath('')  # returns a SelectorList of matching nodes
response.xpath('').extract()  # returns all matches as a list of strings
response.xpath('').extract_first()  # equivalent to response.xpath('').get(): the first match, or None
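A small sketch of the difference, assuming this is the parse() method of a spider crawling the quotes page used in the case study below:

def parse(self, response):
    # .xpath() returns a SelectorList of matching nodes
    selectors = response.xpath('//span[@class="text"]/text()')
    # .extract() returns every match as a list of strings
    all_texts = selectors.extract()
    # .extract_first() / .get() return only the first match, or None if nothing matched
    first_text = selectors.get()
    print(len(all_texts), first_text)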
2. Spider file: hand the scraped data to the pipeline
yield item
3. Spider file: generate the next page's URL and hand it to the scheduler
yield scrapy.Request(url=url, callback=self.parse)
Note: callback takes the function object itself, not a call to the function (see the example below).
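For instance, inside the spider's parse() method, a follow-up request for a hypothetical next-page URL would be yielded like this:

    # inside parse():
    next_url = 'https://quotes.toscrape.com/page/2/'   # example next-page URL
    # correct: pass the method object itself as the callback
    yield scrapy.Request(url=next_url, callback=self.parse)
    # wrong: callback=self.parse() would call parse() immediately and pass its return value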
V. Worked Example
Example:
Target URL: Quotes to Scrape (https://quotes.toscrape.com/)
Columns needed: quote text, author, tags
Storage: CSV file
Code:
1. items.py (declare the names of the fields to scrape)
import scrapy

class QuotesItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # the fields we want to scrape
    text = scrapy.Field()
    author = scrapy.Field()
    tags = scrapy.Field()
2. Spider file: quotes.py (parse the response data)
import scrapy
from ..items import QuotesItem

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['https://quotes.toscrape.com/page/1/']
    i = 1  # current page number, starts at 1

    def parse(self, response):
        """Parse one listing page, yield one item per quote, then follow the next page."""
        # find the div block of each quote
        div_list = response.xpath('//div[@class="col-md-8"]/div')
        # extract text, author and tags from each block
        for div in div_list:
            item = QuotesItem()  # create a fresh item for each quote
            item['text'] = div.xpath('./*[@class="text"]/text()').get()                # text
            item['author'] = div.xpath('.//*[@class="author"]/text()').get()           # author
            item['tags'] = '-'.join(div.xpath('.//*[@class="tag"]/text()').extract())  # tags
            print(item)
            # hand the scraped item to the pipeline
            yield item
        # build the next page's URL and hand it to the scheduler
        if self.i < 6:
            self.i += 1
            url = 'https://quotes.toscrape.com/page/{}/'.format(self.i)
            # enqueue the next page via the scheduler
            yield scrapy.Request(url=url, callback=self.parse)
3. pipelines.py (process the scraped data; here, save it to a CSV file)
import csv

class QuotesPipeline:
    def open_spider(self, spider):
        """Runs only once, when the spider starts; typically used for opening database connections."""

    def process_item(self, item, spider):
        # pipeline processing: append the scraped data to a CSV file
        # open the CSV file (newline='' avoids blank lines on Windows)
        file = open('quotes.csv', 'a', newline='')
        # create the writer object
        writer = csv.writer(file)
        # write one row of data
        writer.writerow([item['text'], item['author'], item['tags']])
        file.close()
        return item
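Opening and closing the file for every single item works, but it is wasteful. Here is a sketch of a more typical variant that opens the file once in open_spider and closes it in close_spider (same output for this example; the explicit utf-8 encoding is an extra assumption):

import csv

class QuotesPipeline:
    def open_spider(self, spider):
        # open the output file once, when the spider starts
        self.file = open('quotes.csv', 'a', newline='', encoding='utf-8')
        self.writer = csv.writer(self.file)

    def process_item(self, item, spider):
        # write one row per scraped item
        self.writer.writerow([item['text'], item['author'], item['tags']])
        return item

    def close_spider(self, spider):
        # close the file once, when the spider finishes
        self.file.close()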
4. Configuration file: settings.py
# Scrapy settings for Quotes project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'Quotes'
SPIDER_MODULES = ['Quotes.spiders']
NEWSPIDER_MODULE = 'Quotes.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) Gecko/20100101 Firefox/6.0'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 8
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) Gecko/20100101 Firefox/6.0',
}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'Quotes.middlewares.QuotesSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'Quotes.middlewares.QuotesDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'Quotes.pipelines.QuotesPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
# set the encoding used for exported data
# FEED_EXPORT_ENCODING = 'gb18030'
FEED_EXPORT_ENCODING = 'utf8'
5. Run the spider: run.py
from scrapy import cmdline
cmdline.execute('scrapy crawl quotes'.split())
VI. Summary of the Scrapy Project Workflow
1. Create the project: scrapy startproject Quotes
2. cd into the project folder: cd Quotes
3. Create the spider file: scrapy genspider quotes quotes.toscrape.com
4. Define the data structure to scrape in items.py
import scrapy

class QuotesItem(scrapy.Item):
    text = scrapy.Field()
    author = scrapy.Field()
    tags = scrapy.Field()
5. Parse and extract the data in the spider file: quotes.py
6. Process the extracted data in the pipeline file: pipelines.py
class QuotesPipeline(object):
    def process_item(self, item, spider):
        # code that actually processes the data goes here
        return item
7. Global configuration: settings.py
8. Run the spider: run.py
from scrapy import cmdline
cmdline.execute('scrapy crawl quotes'.split())