Create the project
scrapy startproject mySpider
Create the spider
scrapy genspider -t crawl cf cbirc.gov.cn
Run the spider
scrapy crawl cf
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
import re  # needed by parse_item below


class CfSpider(CrawlSpider):
    name = 'cf'
    allowed_domains = ['csdn.net']
    start_urls = ['https://blog.csdn.net/qq_23586923']

    rules = (
        # extraction rules
        Rule(LinkExtractor(allow=r'https://blog.csdn.net/qq_23586923/article/details/\d+'), callback='parse_item'),
        # Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        item["title"] = re.findall(r'id="articleContentId">(.*?)</h1>', response.body.decode())[0]
        item["time"] = response.xpath("//div[@class='bar-content']/span[1]/text()").extract_first()
        print(item)
        # return item
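The title extraction in parse_item is a plain regex over the decoded page body. A minimal sketch of that same pattern, run against a hypothetical HTML fragment shaped like a CSDN article page:

```python
import re

# Hypothetical fragment mimicking the relevant part of a CSDN article page
html = '<h1 class="title-article" id="articleContentId">My First Scrapy Post</h1>'

# Same pattern as parse_item: lazily capture everything between the id marker and </h1>
titles = re.findall(r'id="articleContentId">(.*?)</h1>', html)
print(titles[0])  # → My First Scrapy Post
```

An XPath like `response.xpath('//h1[@id="articleContentId"]/text()')` would do the same job more robustly, but the regex works as long as the markup keeps this shape.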
- Creating a CrawlSpider: scrapy genspider -t crawl <spider_name> <allowed_domain>
- Set start_urls; the corresponding responses are run through rules to extract further URLs
- Fill in rules by adding Rule objects, e.g.:
  Rule(LinkExtractor(allow=r'/web/site0/tab5240/info\d+\.htm'), callback='parse_item'),
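The allow argument is a regular expression matched against each candidate link. A quick sketch of which URLs the pattern above would keep (the example URLs are hypothetical):

```python
import re

# The allow pattern from the Rule above
pattern = r'/web/site0/tab5240/info\d+\.htm'

links = [
    'http://www.cbirc.gov.cn/web/site0/tab5240/info12345.htm',  # detail page: matches
    'http://www.cbirc.gov.cn/web/site0/tab5240/index.htm',      # no info\d+ part: no match
]
matched = [url for url in links if re.search(pattern, url)]
print(matched)
```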
- Notes:
- If an extracted URL is incomplete (relative), CrawlSpider completes it automatically before sending the request
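The completion behaves like standard URL joining against the page's own URL, in the style of `urllib.parse.urljoin` (a sketch of the idea, with a hypothetical page URL and relative link, not Scrapy's actual code path):

```python
from urllib.parse import urljoin

# URL of the page the link was found on (hypothetical)
base = 'http://www.cbirc.gov.cn/web/site0/tab5240/module14430/page1.htm'

# A root-relative href as it might appear in the page source
full = urljoin(base, '/web/site0/tab5240/info12345.htm')
print(full)  # → http://www.cbirc.gov.cn/web/site0/tab5240/info12345.htm
```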
- Do not define a parse method: CrawlSpider reserves it for its own internal functionality
- callback: the response for each URL the link extractor pulls out is handed to this method for processing
- follow: whether the responses for the extracted URLs are themselves run through rules again to extract more links
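The two flags are independent decisions per rule: callback says where the response goes, follow says whether crawling continues from it. A pure-Python sketch of that decision logic (an illustration, not Scrapy's internals; the patterns and URLs are hypothetical):

```python
import re

# Each rule: (allow pattern, callback name or None, follow?)
rules = [
    (r'/article/details/\d+', 'parse_item', False),  # detail pages: parse, don't follow
    (r'/article/list/\d+',    None,         True),   # list pages: no parsing, keep crawling
]

def classify(url):
    """Return (callback, follow) for the first rule whose pattern matches."""
    for allow, callback, follow in rules:
        if re.search(allow, url):
            return callback, follow
    return None, False

print(classify('https://blog.csdn.net/qq_23586923/article/details/100'))
print(classify('https://blog.csdn.net/qq_23586923/article/list/2'))
```

In real Scrapy, if follow is omitted it defaults to True when callback is None and False otherwise, which matches the two rows sketched above.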