Scrapy: A First Test Run on Python 3

1. Introduction

The previous article, "A First Look at the Scrapy Architecture", explained how Scrapy is structured; this article actually installs Scrapy and runs a crawler. It uses the official tutorial as the example, and the complete code can be downloaded from GitHub.

2. Environment Setup

The test environment is: Windows 10, Python 3.4.3 (32-bit).

Install Scrapy:   $ pip install Scrapy                 # during installation the package server was unstable and the install aborted midway several times

3. Writing and Running the First Scrapy Spider

3.1. Generate a new project: tutorial

$ scrapy startproject tutorial

The project directory structure is as follows:
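This matches the standard skeleton that scrapy startproject creates:

tutorial/
    scrapy.cfg            # deploy configuration file
    tutorial/             # the project's Python module
        __init__.py
        items.py          # item definitions go here
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # spiders live in this directory
            __init__.py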

3.2. Define the item to scrape

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()
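An Item behaves much like a dict: fields declared with scrapy.Field() can be assigned and read with key syntax. A minimal sketch (the values below are made up purely for illustration):

from tutorial.items import DmozItem

item = DmozItem()
item['title'] = ['Example Python book list']   # illustrative value only
item['link'] = ['http://example.com/books']    # illustrative value only
print(dict(item))                              # an Item converts to a plain dict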

3.3. Define the Spider

import scrapy
from tutorial.items import DmozItem


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        # Each <li> under a <ul> is one directory entry; extract its fields.
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item
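Before running the full crawl, the XPath expressions can be checked interactively with Scrapy's built-in shell; this is just a quick sanity check, not part of the tutorial code:

$ scrapy shell "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
>>> response.xpath('//ul/li/a/text()').extract()    # candidate titles
>>> response.xpath('//ul/li/a/@href').extract()     # candidate links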

3.4. Run

$ scrapy crawl dmoz -o items.json

1) The run failed with the following errors:

A) ImportError: cannot import name '_win32stdio'

B) ImportError: No module named 'win32api'

2) Troubleshooting: the official FAQ and posts on Stack Overflow showed that Scrapy had not yet been fully tested on Python 3 at the time and still had a few rough edges.

3) Fix:

A) Manually download _win32stdio and _pollingfile from twisted/internet and place them under lib\site-packages\twisted\internet in the Python installation directory

B) Download and install pywin32 (one way to install it is sketched below)
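For reference, pywin32 can often be installed straight from PyPI; whether this works depends on the Python version, and older setups may need the installer executable from the pywin32 download page instead:

$ pip install pypiwin32    # thin PyPI wrapper around the pywin32 binaries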

Running the crawl again succeeded. Scrapy's log output appears on the console; after the run finishes, open the result file items.json in the project directory to see the scraped results stored in JSON format:

[

{"title": [" About "], "desc": [" ", " "], "link": ["/docs/en/about.html"]},

{"title": [" Become an Editor "], "desc": [" ", " "], "link": ["/docs/en/help/become.html"]},

{"title": [" Suggest a Site "], "desc": [" ", " "], "link": ["/docs/en/add.html"]},

{"title": [" Help "], "desc": [" ", " "], "link": ["/docs/en/help/helpmain.html"]},

{"title": [" Login "], "desc": [" ", " "], "link": ["/editors/"]},

{"title": [], "desc": [" ", " Share via Facebook "], "link": []},

{"title": [], "desc": [" ", " Share via Twitter "], "link": []},

{"title": [], "desc": [" ", " Share via LinkedIn "], "link": []},

{"title": [], "desc": [" ", " Share via e-Mail "], "link": []},

{"title": [], "desc": [" ", " "], "link": []},

{"title": [], "desc": [" ", " "], "link": []},

{"title": [" About "], "desc": [" ", " "], "link": ["/docs/en/about.html"]},

{"title": [" Become an Editor "], "desc": [" ", " "], "link": ["/docs/en/help/become.html"]},

{"title": [" Suggest a Site "], "desc": [" ", " "], "link": ["/docs/en/add.html"]},

{"title": [" Help "], "desc": [" ", " "], "link": ["/docs/en/help/helpmain.html"]},

{"title": [" Login "], "desc": [" ", " "], "link": ["/editors/"]},

{"title": [], "desc": [" ", " Share via Facebook "], "link": []},

{"title": [], "desc": [" ", " Share via Twitter "], "link": []},

{"title": [], "desc": [" ", " Share via LinkedIn "], "link": []},

{"title": [], "desc": [" ", " Share via e-Mail "], "link": []},

{"title": [], "desc": [" ", " "], "link": []},

{"title": [], "desc": [" ", " "], "link": []}

]
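The result file can be read back with Python's standard json module for further processing; a minimal sketch:

import json

with open('items.json', encoding='utf-8') as f:
    items = json.load(f)

# Every field is a list because Selector.extract() returns all matches.
print(len(items), items[0]['title'])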

The first test run of Scrapy was successful.

4. Next Steps

Next, we will use the GooSeeker API to implement the crawler, eliminating the manual work of writing and testing an XPath for each item. There are currently two candidate plans:

Wrap a method in gsExtractor that automatically extracts each item's XPath from the XSLT content

Automatically extract each item's result from gsExtractor's extraction output

Which plan to adopt will be decided in the upcoming experiments and released in a new version of gsExtractor.

5. Revision History

2016-06-17: V1.0, first release
