1. Data Extraction
1. Extracting Items with Scrapy
- To extract data from web pages, Scrapy uses selectors, a technique based on XPath and CSS expressions. Some examples of XPath expressions:
  - `/html/head/title`: selects the `<title>` element inside the `<head>` element of an HTML document
  - `/html/head/title/text()`: selects the text of the `<title>` element
  - `//td`: selects all `<td>` elements
  - `//div[@class="slice"]`: selects all `div` elements that have the attribute class="slice"
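A quick sketch of these expressions run against a toy HTML document with Scrapy's standalone Selector (the snippet and values are made up for illustration):

```python
from scrapy.selector import Selector

html = """<html><head><title>Demo</title></head>
<body><table><tr><td>1</td><td>2</td></tr></table>
<div class="slice">hello</div></body></html>"""
sel = Selector(text=html)

print(sel.xpath('/html/head/title').extract())          # ['<title>Demo</title>']
print(sel.xpath('/html/head/title/text()').extract())   # ['Demo']
print(sel.xpath('//td/text()').extract())               # ['1', '2']
print(sel.xpath('//div[@class="slice"]').extract())     # ['<div class="slice">hello</div>']
```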
- Selectors expose five basic methods, as shown below:

| Method | Description |
|---|---|
| extract() | Returns a unicode string of the selected data |
| extract_first() | Returns the first unicode string of the selected data |
| re() | Returns a list of unicode strings, extracted by applying the regular expression given as an argument |
| xpath() | Returns a list of selectors representing the nodes selected by the XPath expression given as an argument |
| css() | Returns a list of selectors representing the nodes selected by the CSS expression given as an argument |
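A minimal sketch of the methods not shown above, against another toy snippet (the HTML is made up for illustration):

```python
from scrapy.selector import Selector

sel = Selector(text='<table><tr><td>10</td><td>20</td></tr></table>')

# css() selects nodes with a CSS expression instead of XPath
print(sel.css('td::text').extract())        # ['10', '20']
# extract_first() returns only the first match instead of a list
print(sel.css('td::text').extract_first())  # '10'
# re() applies a regular expression to the selected data
print(sel.xpath('//td/text()').re(r'\d+'))  # ['10', '20']
```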
2. Scrapy Shell
To see the effect of a selector quickly, we can use the Scrapy shell:

```
scrapy shell "http://www.163.com"
```

Note that on Windows the URL must be wrapped in double quotes.
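Once the shell opens, a `response` object for the fetched page is ready to query. A hypothetical session might look like this (the expressions are illustrative, not tied to 163.com's real markup):

```python
>>> response.xpath('//title/text()').extract_first()   # the page title text
>>> response.css('a::attr(href)').extract()[:5]        # the first five link URLs
```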
2.1. Example
To extract data from an ordinary HTML site, inspect the page source and work out the XPath of the data you want. After inspection, you can see that the data sits in a `ul` tag, so we select the elements inside its `li` tags.

The lines below show how to extract the different kinds of data (they are combined into a single loop in the sketch after this list):

- Select the data inside the `li` tags: `response.xpath('//ul/li')`
- To select the descriptions: `response.xpath('//ul/li/text()').extract()`
- To select the site titles: `response.xpath('//ul/li/a/text()').extract()`
- To select the site links: `response.xpath('//ul/li/a/@href').extract()`
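A sketch combining the expressions above into a single per-item loop, using relative XPath inside each `li` (the field names are illustrative):

```python
def parse(self, response):
    for li in response.xpath('//ul/li'):
        yield {
            'title': li.xpath('a/text()').extract_first(),
            'link': li.xpath('a/@href').extract_first(),
            'description': li.xpath('text()').extract_first(),
        }
```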
2. Saving Data with Scrapy
1. Data Extraction
1.1. Printing to the Console
```python
import scrapy

class DoubanSpider(scrapy.Spider):
    name = 'douban'
    allowed_domains = ['douban.com']
    start_urls = [
        'https://movie.douban.com/top250/'
    ]

    def parse(self, response):
        movie_name = response.xpath("//div[@class='item']//a/span[1]/text()").extract()
        movie_core = response.xpath("//div[@class='star']/span[2]/text()").extract()
        yield {
            'movie_name': movie_name,
            'movie_core': movie_core
        }
```
Running the code above, we can see the following in the console:
```
2018-01-24 15:17:14 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: spiderdemo1)
2018-01-24 15:17:14 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.3.1, w3lib 1.18.0, Twisted 17.9.0, Python 3.6.3 (v3.6.3:2c5fed8, Oct 3 2017, 18:11:49) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 17.5.0 (OpenSSL 1.1.0g 2 Nov 2017), cryptography 2.1.4, Platform Windows-10-10.0.10240-SP0
2018-01-24 15:17:14 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'spiderdemo1', 'NEWSPIDER_MODULE': 'spiderdemo1.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['spiderdemo1.spiders']}
2018-01-24 15:17:14 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2018-01-24 15:17:14 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-01-24 15:17:14 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-01-24 15:17:14 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-01-24 15:17:14 [scrapy.core.engine] INFO: Spider opened
2018-01-24 15:17:14 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-01-24 15:17:14 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-01-24 15:17:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://movie.douban.com/robots.txt> (referer: None)
2018-01-24 15:17:15 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://movie.douban.com/top250> from <GET https://movie.douban.com/top250/>
2018-01-24 15:17:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://movie.douban.com/top250> (referer: None)
2018-01-24 15:17:15 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250>
{'movie_name': ['肖申克的救赎', '霸王别姬', '这个杀手不太冷', '阿甘正传', '美丽人生', '千与千寻', '泰坦尼克号', '辛德勒的名单', '盗梦空间', '机器人总动员', '海上钢琴师', '三傻大闹宝莱坞', '忠犬八公的故事', '放牛班的春天', '大话西游之大圣娶亲', '教父', '龙猫', '楚门的世界', '乱世佳人', '熔炉', '触不可及', '天堂电影院', '当幸福来敲门', '无间道', '星际穿越'], 'movie_core': ['9.6', '9.5', '9.4', '9.4', '9.5', '9.2', '9.2', '9.4', '9.3', '9.3', '9.2', '9.1', '9.2', '9.2', '9.2', '9.2', '9.1', '9.1', '9.2', '9.2', '9.1', '9.1', '8.9', '9.0', '9.1']}
2018-01-24 15:17:15 [scrapy.core.engine] INFO: Closing spider (finished)
2018-01-24 15:17:15 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 651,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 3,
'downloader/response_bytes': 13900,
'downloader/response_count': 3,
'downloader/response_status_count/200': 2,
'downloader/response_status_count/301': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 1, 24, 7, 17, 15, 247183),
'item_scraped_count': 1,
'log_count/DEBUG': 5,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2018, 1, 24, 7, 17, 14, 784782)}
2018-01-24 15:17:15 [scrapy.core.engine] INFO: Spider closed (finished)
```
1.2. Writing Output to a File

- Native Python approach:

```python
with open("movie.txt", 'wb') as f:
    # write one "name:score" line per movie
    for n, c in zip(movie_name, movie_core):
        line = n + ":" + c + "\n"
        f.write(line.encode())
```
- Built-in Scrapy approach: Scrapy ships four main export formats: JSON, JSON lines, CSV, and XML. To export the results as JSON, the most common format, run:

```
scrapy crawl douban -o douban.json -t json
```

`-o` gives the output file name and `-t` the export format (see the note below).
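A side note, assuming a recent Scrapy release (the logs above show 1.5): the format can usually be inferred from the output file's extension, so `-t` is optional. And if non-ASCII text comes out as `\uXXXX` escapes in the JSON file, setting the feed encoding in `settings.py` fixes it:

```python
# settings.py: write feeds as readable UTF-8 instead of \uXXXX escapes
FEED_EXPORT_ENCODING = 'utf-8'
```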
2. Wrapping Extracted Content in an Item

- A Scrapy process extracts data from web pages by running spiders. Scrapy uses the Item class to produce output objects that hold the scraped data.
- An Item object is a custom Python dict; you can read and write its field values with standard dictionary syntax (a short illustration follows section 2.2).
2.1. Definition
```python
import scrapy

class InfoItem(scrapy.Item):
    # define the fields for your item here like:
    movie_name = scrapy.Field()
    movie_core = scrapy.Field()
```
2.2. Usage
```python
def parse(self, response):
    movie_name = response.xpath("//div[@class='item']//a/span[1]/text()").extract()
    movie_core = response.xpath("//div[@class='star']/span[2]/text()").extract()
    for n, c in zip(movie_name, movie_core):
        movie = InfoItem()
        movie['movie_name'] = n
        movie['movie_core'] = c
        yield movie
```
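The dict-style access mentioned earlier, in a minimal sketch (assuming the `InfoItem` class from 2.1; the values are made up):

```python
movie = InfoItem(movie_name='肖申克的救赎', movie_core='9.6')
print(movie['movie_name'])    # read a field with dict syntax
movie['movie_core'] = '9.7'   # write a field the same way
```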
3. Example

Crawl the novel titles and authors from the Qidian site:
```python
# -*- coding: utf-8 -*-
import scrapy

class QidianSpider(scrapy.Spider):
    name = 'qidian'
    allowed_domains = ['qidian.com']
    start_urls = ['https://www.qidian.com/finish']

    def parse(self, response):
        # extract the final strings from the nodes matched by the selectors
        names = response.xpath('//div[@class="book-mid-info"]/h4/a/text()').extract()
        authors = response.xpath('//p[@class="author"]/a[1]/text()').extract()
        book_list = []
        for name, author in zip(names, authors):
            # it is best to store the data as dictionaries
            book_list.append({
                'name': name,
                'author': author
            })
        return book_list
```
- Export as JSON (you can then inspect the crawled content on any JSON-parsing site).
- Export the data as CSV, which can be opened from File Explorer as an Excel file; however, blank lines may appear between rows, which requires manually adding a line of code to the source (see the sketch after this list).
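The one-line fix alluded to above is most likely the Windows newline-translation issue with Python's `csv` module. As a self-contained illustration of the same idea, assuming `book_list` as built in the spider above, opening the file with `newline=''` prevents the blank lines:

```python
import csv

# newline='' lets the csv module control line endings itself;
# without it, Windows inserts an extra blank line between rows
with open('books.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=['name', 'author'])
    writer.writeheader()
    writer.writerows(book_list)
```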