For business reasons I need to scrape data from the TOEFL 100 site. The source code is below; I wrote it by following a Scrapy tutorial I found online:
#coding:utf-8
from scrapy.spider import Spider, BaseSpider
from scrapy.http import Request
from scrapy.contrib.spiders import Rule, CrawlSpider
from scrapy.contrib.linkextractors import LinkExtractor
from tpo.items import TpoItem
from scrapy import log

class TPOSpider(Spider):
    name = "tpo"
    allowed_domains = ['http://toefl.100.com']
    start_urls = [
        "http://toefl.100.com/t/kouyu/1.html",  # starting URL
    ]

    def parse(self, response):
        # collect the de-duplicated set of on-site links under /t/kouyu/
        url_list = list(set([url for url in response.xpath('//a/@href').extract() if url.startswith('http://toefl.100.com/t/kouyu/')]))
        self.url_list = url_list
        for url in self.url_list:
            print url
            yield Request(url, callback=self.parse_item)

    def parse_item(self, response):
        # extract the fields for one question page
        print 'call !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!'
        item = TpoItem()
        title = ','.join(response.xpath('//h3[6]/text()').extract())
        question = ','.join(response.xpath('//div[@class="c-english"]/h6/text()').extract())
        analysis1 = ','.join(response.xpath('//div[@class="c-cont c-separate no-b"]/text()').extract())
        analysis2 = ','.join(response.xpath('//div[@class="c-cont c-separate "]/text()').extract())
        sample_anser_part1 = response.xpath('//div[@class="no-b c-separate"]/text()').extract()
        sample_anser_part2 = response.xpath('//div[@class=" c-separate"]/text()').extract()
        sample_anser_part1.extend(sample_anser_part2)
        sample_anser = ','.join(sample_anser_part1)
        item["title"] = title.encode('utf-8')
        item["question"] = question.encode('utf-8')
        item["analysis1"] = analysis1.encode('utf-8')
        item["analysis2"] = analysis2.encode('utf-8')
        item["sample_anser"] = sample_anser.encode('utf-8')
        yield item
The idea is: parse() first crawls the page given in start_urls, which contains links to every page whose data I want. It collects those URLs, then for each one yields a Request with parse_item as the callback; parse_item defines which fields to extract.
But when it runs, parse_item is never called at all. Here is the run log; where did I go wrong?
D:\tpo\tpo>C:\Python27\Scripts\scrapy.exe crawl tpo
2014-09-28 16:25:14+0800 [scrapy] INFO: Scrapy 0.24.4 started (bot: tpo)
2014-09-28 16:25:14+0800 [scrapy] INFO: Optional features available: ssl, http11, django
2014-09-28 16:25:14+0800 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tpo.spiders', 'SPIDER_MODULES': ['tpo.spiders'], 'BOT_NAME': 'tpo'}
2014-09-28 16:25:14+0800 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-09-28 16:25:14+0800 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-09-28 16:25:14+0800 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-09-28 16:25:14+0800 [scrapy] INFO: Enabled item pipelines: TpoPipeline
2014-09-28 16:25:14+0800 [tpo] INFO: Spider opened
2014-09-28 16:25:14+0800 [tpo] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-09-28 16:25:14+0800 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2014-09-28 16:25:14+0800 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2014-09-28 16:25:15+0800 [tpo] DEBUG: Crawled (200) <GET http://toefl.100.com/t/kouyu/1.html> (referer: None)
2014-09-28 16:25:15+0800 [tpo] DEBUG: Filtered offsite request to 'toefl.100.com':
2014-09-28 16:25:15+0800 [tpo] INFO: Closing spider (finished)
2014-09-28 16:25:15+0800 [tpo] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 226,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 10365,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 9, 28, 8, 25, 15, 163000),
'log_count/DEBUG': 4,
'log_count/INFO': 7,
'offsite/domains': 1,
'offsite/filtered': 202,
'request_depth_max': 1,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2014, 9, 28, 8, 25, 14, 764000)}
2014-09-28 16:25:15+0800 [tpo] INFO: Spider closed (finished)
Hi, how did you end up solving this? I've hit exactly the same problem and really can't figure it out. Urgently looking for help, thanks.
The official explanation is that the URLs you request conflict with what is in allowed_domains, so the requests get filtered out. You can disable the filtering on each request:
yield Request(url, callback=self.parse_item, dont_filter=True)
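Note that the log line "Filtered offsite request to 'toefl.100.com'" also points to the likely root cause: allowed_domains expects bare domain names, but the spider above lists a full URL with the http:// scheme, which never matches any request's hostname. A minimal sketch of that alternative fix, keeping the rest of the spider unchanged:

from scrapy.spider import Spider

class TPOSpider(Spider):
    name = "tpo"
    # bare domain, no scheme: 'http://toefl.100.com' can never equal a hostname,
    # so with the original value every yielded request was dropped as offsite
    allowed_domains = ['toefl.100.com']
    start_urls = ["http://toefl.100.com/t/kouyu/1.html"]

With allowed_domains written this way, OffsiteMiddleware no longer treats the requests as offsite, and dont_filter=True becomes unnecessary.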
Hi, I've recently run into a problem much like yours: everything before yield Request() executes, but the function that callback points to in Request is never invoked. How did you eventually solve it? Thanks.
Check whether your XPath can actually match the elements you want; you could try the XPath Helper extension for Chrome.
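You can also test the selectors inside Scrapy itself with the interactive shell; for example, against the start URL from the question, reusing the asker's own XPath expressions (response.xpath is available here, since the spider above already relies on it):

scrapy shell "http://toefl.100.com/t/kouyu/1.html"
>>> response.xpath('//h3[6]/text()').extract()
# an empty list here means the expression matches nothing on the live page
>>> response.xpath('//div[@class="c-english"]/h6/text()').extract()

If the shell returns empty lists, the XPath, not the callback, is the problem.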