1. After installing Scrapy, create a Scrapy project with: scrapy startproject <project_name>
2. Change into the project's root directory and create a spider file with: scrapy genspider <spider_name> <domain>
After creation, the directory structure looks like this:
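A freshly generated project follows the standard Scrapy layout; project_name below stands for whatever name was passed to scrapy startproject:

project_name/
    scrapy.cfg            # deployment configuration
    project_name/
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider/downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # spider files (e.g. quotes.py) go here
            __init__.py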
3. Write the spider code in quotes.py (the full example is shown below).
4. Adjust the project settings in settings.py (a sketch of the usual changes follows this list).
5. From the project directory, run: scrapy crawl quotes -o quote.csv (the file extension passed to -o determines the export format, e.g. .csv, .json, or .xml).
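Step 4 does not list specific settings; for a small tutorial crawl the edits in settings.py are usually along these lines (the values below are illustrative assumptions, not requirements):

# settings.py -- only the lines typically touched for a small crawl
ROBOTSTXT_OBEY = False          # or leave True if the target site's robots.txt permits crawling
USER_AGENT = 'Mozilla/5.0'      # a browser-like User-Agent; some sites block Scrapy's default one
DOWNLOAD_DELAY = 1              # be polite: wait one second between requests
FEED_EXPORT_ENCODING = 'utf-8'  # keep non-ASCII characters readable in the exported file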
Example:
Scraping famous quotes:
import scrapy


class QuoteSpider(scrapy.Spider):
    name = 'quotes'  # must match the name used in "scrapy crawl quotes"
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        quotelist = []
        # each quote on the page sits in a <div class="quote"> block
        divs = response.xpath('//div[@class="quote"]')
        for div in divs:
            author = div.xpath('./span/small/text()').extract_first()
            text = div.xpath('./span[@class="text"]/text()').extract_first()
            dic = {
                'author': author,
                'text': text,
            }
            quotelist.append(dic)
        return quotelist
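Returning a full list from parse works, but Scrapy spiders more commonly yield each item as soon as it is extracted; the sketch below is an equivalent rewrite of the same spider in that style:

import scrapy


class QuoteSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # yield each quote as it is parsed instead of building a list first
        for div in response.xpath('//div[@class="quote"]'):
            yield {
                'author': div.xpath('./span/small/text()').extract_first(),
                'text': div.xpath('./span[@class="text"]/text()').extract_first(),
            }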
With quotes.py saved under the spiders/ folder, run the spider from the project directory:
scrapy crawl quotes -o quote.csv
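Once the crawl finishes, quote.csv can be opened in a spreadsheet, or inspected quickly with a few lines of Python (a sketch assuming the file sits in the current directory):

import csv

# print each exported row; the column names match the keys yielded by the spider
with open('quote.csv', newline='', encoding='utf-8') as f:
    for row in csv.DictReader(f):
        print(row['author'], '->', row['text'])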