1. CrawlSpider
- How it works:
sequenceDiagram
start_urls ->> Scheduler: seed URLs
Scheduler ->> Downloader: request
Downloader ->> Rules: response
Rules ->> DataExtraction: response
Rules ->> Scheduler: new URLs
- The following command quickly generates the CrawlSpider template code:
scrapy genspider -t crawl <spider_name> <allowed_domain>
First, a word about Spider: it is the base class of all spiders, and CrawlSpider is a subclass of Spider. Spider is designed to crawl only the pages in the start_urls list; when you also need to extract links from the crawled pages and keep following them, the CrawlSpider class is the better fit.
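For reference, the generated file looks roughly like this (a sketch based on the stock Scrapy template; the exact contents vary slightly between Scrapy versions, and myspider/example.com stand in for the names you pass on the command line):

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class MyspiderSpider(CrawlSpider):
    name = 'myspider'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/']

    # One Rule per kind of link to follow; the stub matches URLs containing Items/.
    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        # Fill the item from the response here.
        return item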
2. The Rule object
Both the Rule class and the CrawlSpider class live in the scrapy.spiders module (older Scrapy versions exposed them as scrapy.contrib.spiders).
class scrapy.spiders.Rule(
    link_extractor, callback=None, cb_kwargs=None, follow=None,
    process_links=None, process_request=None
)
Parameter meanings:
- link_extractor: a LinkExtractor object that defines which links to extract.
- callback: the function to call, given as a string, whenever link_extractor obtains a link. Note: avoid using parse as the callback. CrawlSpider uses the parse method to implement its own logic, so if you override parse the CrawlSpider will fail.
- follow: whether links extracted from a response matched by this rule should themselves be followed. When callback is None, follow defaults to True.
- process_links: mainly used to filter the links obtained by link_extractor.
- process_request: mainly used to filter the requests generated by this rule.
A minimal sketch of how these parameters fit together follows below.
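A minimal sketch of how these parameters fit together (DemoSpider, example.com, the URL patterns, and the drop_logout_links helper are all made-up names for illustration):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class DemoSpider(CrawlSpider):
    name = 'demo'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/']

    rules = (
        # No callback here, so follow defaults to True: category pages are
        # used only to discover more links.
        Rule(LinkExtractor(allow=r'/category/')),
        # Item pages are parsed; process_links runs the extracted links
        # through drop_logout_links before any requests are made.
        Rule(LinkExtractor(allow=r'/item/'), callback='parse_item',
             process_links='drop_logout_links'),
    )

    def drop_logout_links(self, links):
        # process_links hook: drop any link we never want to request.
        return [link for link in links if 'logout' not in link.url]

    def parse_item(self, response):
        yield {'url': response.url}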
3. LinkExtractors
3.1 Concept
As the name suggests: a link extractor.
3.2 Purpose
Extracts links from a Response object; those links are then crawled next.
Every LinkExtractor has a single public method, extract_links(), which receives a Response object and returns a list of scrapy.link.Link objects.
3.3 Usage
class scrapy.linkextractors.LinkExtractor(
    allow=(),
    deny=(),
    allow_domains=(),
    deny_domains=(),
    deny_extensions=None,
    restrict_xpaths=(),
    tags=('a', 'area'),
    attrs=('href',),
    canonicalize=True,
    unique=True,
    process_value=None
)
Main parameters (a short sketch combining several of them follows this list):
- allow: URLs matching the regular expression(s) given here are extracted; if empty, everything matches.
- deny: URLs matching this regular expression (or list of regular expressions) are always excluded.
- allow_domains: only links to these domains are extracted.
- deny_domains: links to these domains are never extracted.
- restrict_xpaths: XPath expression(s) that work together with allow to filter links (select the node, not the attribute).
- restrict_css: CSS expression(s) that work together with allow to filter links (select the node, not the attribute).
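A small sketch combining several of these parameters (the URL patterns and the pager div are made up for illustration):

from scrapy.linkextractors import LinkExtractor

le = LinkExtractor(
    allow=r'/read/\d+/',                      # keep only chapter-style URLs
    deny=r'/login',                           # never extract login links
    restrict_xpaths='//div[@class="pager"]',  # look only inside the pager node
)
# le.extract_links(response) now returns only the matching links.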
3.3.1 Seeing it in action (verify in the shell)
- First, run:
scrapy shell http://www.fhxiaoshuo.com/read/33/33539/17829387.shtml
- Then import the relevant module:
from scrapy.linkextractors import LinkExtractor
- Build a LinkExtractor for the link on the current page:
link = LinkExtractor(restrict_xpaths=r'//div[@class="bottem"]/a[4]')
- Call the extract_links() method on the LinkExtractor instance to inspect the matches:
link.extract_links(response)
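Each returned Link object carries the resolved URL and the anchor text, so you can inspect the matches directly in the shell:

for link in link.extract_links(response):
    print(link.url, link.text)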
3.3.2 Seeing it in action: the CrawlSpider version
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from xiaoshuo.items import XiaoshuoItem


class XiaoshuoSpiderSpider(CrawlSpider):
    name = 'xiaoshuo_spider'
    allowed_domains = ['fhxiaoshuo.com']
    start_urls = ['http://www.fhxiaoshuo.com/read/33/33539/17829387.shtml']
    rules = [
        # Follow the "next chapter" link and hand each page to parse_item.
        Rule(LinkExtractor(restrict_xpaths=r'//div[@class="bottem"]/a[4]'),
             callback='parse_item'),
    ]

    def parse_item(self, response):
        # The chapter text lives in the div with id TXT on this site.
        info = response.xpath("//div[@id='TXT']/text()").extract()
        it = XiaoshuoItem()
        it['info'] = info
        yield it
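Assuming the project is named xiaoshuo and XiaoshuoItem declares an info field (as the imports above imply), the spider runs with:

scrapy crawl xiaoshuo_spider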
Note:
rules = [
    Rule(LinkExtractor(restrict_xpaths=r'//div[@class="bottem"]/a[4]'),
         callback='parse_item'),
]
- The function name after callback goes in quotes (a string, not a function reference).
- The callback must not be named parse.
- Mind the formatting: rules is a list (or tuple) of Rule objects, so watch the brackets and commas.
4. Code example
Fetch the content of every chapter of the novel 《武动乾坤》 from the Biqu novel site (biqutxt.com).
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class Bqg1Spider(CrawlSpider):
    name = 'bqg2'
    allowed_domains = ['biqutxt.com']
    start_urls = ['https://www.biqutxt.com/69_69889/22672760.html']

    rules = (
        # Follow the "next chapter" link (the 4th <a> in the bottom nav bar)
        # and hand each chapter page to parse_item.
        Rule(LinkExtractor(restrict_xpaths=r'//div[@class="bottem1"]/a[4]'),
             callback='parse_item', follow=True),
    )

    def process_results(self, response, results):
        # CrawlSpider hook that post-processes the callback's results; as
        # written it rebuilds the same item, so it duplicates parse_item.
        title = response.xpath('//h1/text()').extract_first()
        content = '\n'.join(response.xpath('//div[@id="content"]/text()').extract())
        yield {'title': title, 'content': content}

    def parse_item(self, response):
        title = response.xpath('//h1/text()').extract_first()
        content = '\n'.join(response.xpath('//div[@id="content"]/text()').extract())
        yield {'title': title, 'content': content}
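The spider's name is bqg2, so (with an output file name of your choosing) a run looks like:

scrapy crawl bqg2 -o wudongqiankun.json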