Scrapy fetches the HTML but writes nothing to the file: why does the spider finish normally with an empty output file?

I wrote this code following a video tutorial, but I get no results: a correct run should produce a txt file with content, while the txt file I get is empty. Any advice would be appreciated.

Here is the code:

stocks.py

# -*- coding: utf-8 -*-
import scrapy
import re


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    start_urls = ['http://quote.eastmoney.com/stocklist.html']

    def parse(self, response):  # response: the page fetched from start_urls by the default Request
        headers = {
            'Accept-Encoding': 'gzip, deflate, sdch, br',
            'Accept-Language': 'zh-CN,zh;q=0.8',
            'Connection': 'keep-alive',
            'Referer': 'https://gupiao.baidu.com/',
            'User-Agent': ua  # NOTE: 'ua' is never defined anywhere -- this is what raises the NameError in the log below
        }
        for href in response.css("a::attr(href)").extract():  # collect the href attribute values
            try:
                stock = re.findall(r"[s][zh]\d{6}", href)[0]  # extract the stock code, e.g. sh600000
                url = "https://gupiao.baidu.com/stock/" + "".join(stock) + ".html"  # build the Baidu Gupiao URL
                yield scrapy.Request(url, callback=self.parse_stock, headers=headers)  # follow each per-stock page
            except:
                continue

    def parse_stock(self, response):
        infoDict = {}
        stockInfo = response.css(".stock-bets")
        name = stockInfo.css(".bets-name").extract()[0]
        keyList = stockInfo.css("dt").extract()    # field names
        valueList = stockInfo.css("dd").extract()  # field values
        for i in range(len(keyList)):
            key = re.findall(r">.*", keyList[i])[0][1:-5]
            try:
                val = re.findall(r"\d\.?.*", valueList[i])[0][0:-5]
            except:
                val = "--"
            infoDict[key] = val
        infoDict.update(
            {'股票名称': re.findall('\s.*\(', name)[0].split()[0] +
                          re.findall('\>.*\<', name)[0][1:-1]})  # stock name plus code
        yield infoDict
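
As a quick aside, the stock-code regex and URL construction in parse can be sanity-checked on their own; the sample hrefs below are made up for illustration:

import re

sample_hrefs = [
    "http://quote.eastmoney.com/sh600000.html",  # hypothetical list-page link with a code
    "http://quote.eastmoney.com/sz000001.html",
    "http://quote.eastmoney.com/help.html",      # no stock code, so it is skipped
]
for href in sample_hrefs:
    codes = re.findall(r"[s][zh]\d{6}", href)  # same pattern as in parse()
    if codes:
        print(codes[0], "->", "https://gupiao.baidu.com/stock/" + codes[0] + ".html")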

pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BaiduscrapyPipeline(object):
    def process_item(self, item, spider):
        return item


class BaidustockInfoPipline(object):
    def open_spider(self, spider):  # called automatically when the spider is opened
        self.f = open("BaiduStockInfo.txt", "w")

    def close_spider(self, spider):  # called automatically when the spider closes
        self.f.close()

    def process_item(self, item, spider):  # called once for every item
        try:
            line = str(dict(item)) + "\n"
            self.f.write(line)
        except:
            pass
        return item
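
One caveat about this pipeline: the bare except: pass in process_item silently discards any item whose write fails, and on Windows open("...", "w") uses the locale encoding, so some scraped characters can raise UnicodeEncodeError. A small hardening sketch, assuming UTF-8 output is acceptable:

class BaidustockInfoPipline(object):
    def open_spider(self, spider):
        # An explicit encoding avoids a locale-dependent UnicodeEncodeError on Windows.
        self.f = open("BaiduStockInfo.txt", "w", encoding="utf-8")

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        # Let write errors surface in the log instead of silently dropping items.
        self.f.write(str(dict(item)) + "\n")
        return item

That is not the cause of the empty file here, though; see the traceback analysis below.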

settings.py

BOT_NAME = 'BaiduScrapy'

SPIDER_MODULES = ['BaiduScrapy.spiders']

NEWSPIDER_MODULE = 'BaiduScrapy.spiders'

ROBOTSTXT_OBEY = True

ITEM_PIPELINES = {
    'BaiduScrapy.pipelines.BaidustockInfoPipline': 300,
}

Run log:

2018-01-30 19:33:13 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: BaiduScrapy)

2018-01-30 19:33:13 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.3.1, w3lib 1.19.0, Twisted 17.9.0, Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 17.5.0 (OpenSSL 1.1.0g 2 Nov 2017), cryptography 2.1.4, Platform Windows-10-10.0.14393-SP0

2018-01-30 19:33:13 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'BaiduScrapy', 'NEWSPIDER_MODULE': 'BaiduScrapy.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['BaiduScrapy.spiders']}

2018-01-30 19:33:13 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']

2018-01-30 19:33:13 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']

2018-01-30 19:33:13 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']

2018-01-30 19:33:13 [scrapy.middleware] INFO: Enabled item pipelines:
['BaiduScrapy.pipelines.BaidustockInfoPipline']

2018-01-30 19:33:13 [scrapy.core.engine] INFO: Spider opened

2018-01-30 19:33:13 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

2018-01-30 19:33:13 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023

2018-01-30 19:33:13 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (meta refresh) to <GET ...> from <GET http://quote.eastmoney.com/stocklist.html>

2018-01-30 19:33:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET ...> (referer: None)
2018-01-30 19:33:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET ...> (referer: None)
2018-01-30 19:33:14 [scrapy.core.scraper] ERROR: Spider error processing <GET ...> (referer: None)

Traceback (most recent call last):
  File "e:\software\python36\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
    yield next(it)
  File "e:\software\python36\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 30, in process_spider_output
    for x in result:
  File "e:\software\python36\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "e:\software\python36\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "e:\software\python36\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "F:\GitHubCode\Code\Python\BaiduScrapy\BaiduScrapy\spiders\stocks.py", line 15, in parse
    'User-Agent':ua
NameError: name 'ua' is not defined

2018-01-30 19:33:14 [scrapy.core.engine] INFO: Closing spider (finished)

2018-01-30 19:33:14 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 676,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 686187,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 2,
 'downloader/response_status_count/404': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 1, 30, 11, 33, 14, 925090),
 'log_count/DEBUG': 4,
 'log_count/ERROR': 1,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/NameError': 1,
 'start_time': datetime.datetime(2018, 1, 30, 11, 33, 13, 484838)}

2018-01-30 19:33:14 [scrapy.core.engine] INFO: Spider closed (finished)
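
The traceback above already answers the question: parse raises NameError: name 'ua' is not defined on the very first response, so no scrapy.Request is ever yielded, no item ever reaches the pipeline, and BaiduStockInfo.txt stays empty. The run still ends with "finished" because Scrapy catches the exception per response and keeps crawling. A minimal fix is to define the user-agent string before the headers dict references it; the value below is an arbitrary example, not taken from the original post:

# -*- coding: utf-8 -*-
import scrapy

# Define 'ua' at module level so parse() can reference it.
# This particular user-agent string is an example, not from the original code.
ua = ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
      'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36')

class StocksSpider(scrapy.Spider):
    name = 'stocks'
    start_urls = ['http://quote.eastmoney.com/stocklist.html']

    def parse(self, response):
        headers = {
            'Referer': 'https://gupiao.baidu.com/',
            'User-Agent': ua,  # now resolves to the module-level constant above
        }
        # ... rest of parse() unchanged

Alternatively, drop the custom headers entirely and set USER_AGENT in settings.py, which Scrapy then applies to every request.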
