Functional Description
• Goal: fetch the names and trading data of Shanghai A-share stocks
• Output: save the results to a file
• Technical approach: crawl with the Scrapy framework
This post targets pages whose stock information is stored statically in the HTML; a later post will cover crawling dynamically rendered pages.
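Before committing to this route, it is worth confirming that the data really is in the static markup. A minimal check, not part of the Scrapy project itself (requests is used only for this one-off probe, and the regex is the same one the spider uses later):

# One-off check: fetch the list page and see whether the stock links
# matched by the spider's regex already appear in the raw HTML.
import re
import requests

html = requests.get('http://quote.stockstar.com/stock/stock_index.htm').text
print(bool(re.search(r'/gs/sh_\d{6}\.shtml', html)))  # True suggests the links are static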
Program Structure Design
(1) First obtain the stock codes; here the Shanghai A-share codes are taken from 证券之星 (quote.stockstar.com)
(2) Use each stock code to fetch the detailed quote from 网易财经 (quotes.money.163.com); a short URL-construction sketch follows this list
(3) Store the results to a file
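The only glue between steps (1) and (2) is the URL format: a Shanghai A-share link on 证券之星 looks like /gs/sh_600000.shtml, and the spider below maps it to a 网易财经 quote page by prefixing the code with '0'. A small sketch of that conversion (the sample href is illustrative only):

# Illustrative: turn a stockstar link into the corresponding 163.com quote URL.
import re

href = '/gs/sh_600000.shtml'   # sample link, for illustration only
code = re.search(r'/gs/sh_\d{6}\.shtml', href).group(0).split('_')[1].split('.')[0]
url = 'http://quotes.money.163.com/' + '0' + code + '.html'
print(code, url)   # 600000 http://quotes.money.163.com/0600000.html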
Code Implementation
A new stocks.py file is created under the spiders directory (it can be generated with scrapy genspider or simply written by hand).
stocks.py
# -*- coding: utf-8 -*-
import re
import scrapy


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    # List page on 证券之星 that links to every Shanghai A-share
    start_urls = ['http://quote.stockstar.com/stock/stock_index.htm']

    def parse(self, response):
        # Walk every link on the list page and keep those shaped like /gs/sh_XXXXXX.shtml
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.search(r'/gs/sh_\d{6}\.shtml', href).group(0).split('_')[1].split('.')[0]
                # 网易财经 prefixes Shanghai codes with '0' in its quote-page URLs
                url = 'http://quotes.money.163.com/' + '0' + stock + '.html'
                yield scrapy.Request(url=url, callback=self.parse_stock)
            except AttributeError:
                # re.search returned None: not a stock link, skip it
                continue

    def parse_stock(self, response):
        # The quote page embeds the figures in an inline <script>; parse it with regexes
        infoDict = {}
        script = response.xpath('//div[@class="relate_stock clearfix"]/script[1]').extract()
        info = script[0].strip().split(',')
        infoDict['股票名称'] = eval(re.search(r'name\: \'.*\'', info[0]).group(0).split(':')[1])
        infoDict['股票代码'] = eval(re.search(r'code\: \'\d{6}\'', info[1]).group(0).split(':')[1])
        infoDict['现价'] = eval(re.search(r'price\: \'.*\'', info[2]).group(0).split(':')[1])
        infoDict['涨跌幅'] = re.search(r'change\: \'.*%', info[3]).group(0).split("'")[1]
        infoDict['昨收'] = eval(re.search(r'yesteday\: \'.*\'', info[4]).group(0).split(':')[1])
        infoDict['今开'] = eval(re.search(r'today\: \'.*\'', info[5]).group(0).split(':')[1])
        infoDict['最高'] = eval(re.search(r'high\: \'.*\'', info[6]).group(0).split(':')[1])
        infoDict['最低'] = eval(re.search(r'low\: \'.*\'', info[7]).group(0).split(':')[1])
        yield infoDict
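For reference, parse_stock assumes the quote page contains an inline script whose body looks roughly like name: '...', code: '...', price: '...' and so on; the field names are inferred from the regexes above, and the fragment below only mimics that format (it is not real page content). A toy example of the extraction pattern:

# Toy example of the regex + eval extraction used in parse_stock above.
import re

fragment = "name: '浦发银行', code: '600000', price: '7.50'"   # fabricated sample, format only
fields = fragment.split(',')
name = eval(re.search(r'name\: \'.*\'', fields[0]).group(0).split(':')[1])
code = eval(re.search(r'code\: \'\d{6}\'', fields[1]).group(0).split(':')[1])
print(name, code)   # 浦发银行 600000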
pipelines.py
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class ScrapystocksPipeline(object):
    def process_item(self, item, spider):
        return item


class ScrapystocksInfoPipeline(object):
    # Appends each crawled item to a text file, one dict per line
    def open_spider(self, spider):
        self.f = open('ScrapyStockInfo.txt', 'w', encoding='utf-8')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        try:
            line = str(dict(item)) + '\n'
            self.f.write(line)
        except Exception:
            pass
        return item
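If a machine-readable file is preferred over str(dict) lines, a minimal alternative is to write one JSON object per line. This is only a sketch: the class name ScrapystocksJsonPipeline and the .jsonl filename are my own, not from the post, and the class would have to be registered in ITEM_PIPELINES in place of the one above:

import json


class ScrapystocksJsonPipeline(object):
    # Hypothetical variant: write one JSON object per line (JSON Lines format)
    def open_spider(self, spider):
        self.f = open('ScrapyStockInfo.jsonl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        # ensure_ascii=False keeps the Chinese field names readable in the file
        self.f.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item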
Remember to uncomment the corresponding pipeline entry in settings.py so that pipelines.py actually runs:
ITEM_PIPELINES = {
    'ScrapyStocks.pipelines.ScrapystocksInfoPipeline': 300,
}
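Crawl speed and politeness can also be tuned in the same settings.py. The two settings below are standard Scrapy options; the values are only illustrative, not taken from the original run:

# settings.py — optional tuning (values are illustrative)
CONCURRENT_REQUESTS = 16   # number of requests Scrapy issues in parallel
DOWNLOAD_DELAY = 0.25      # pause between requests to the same site, in seconds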
Then run the crawler (scrapy crawl stocks) and the data is scraped successfully. Compared with the earlier requests + BeautifulSoup + re approach, Scrapy is much faster, finishing in only about four minutes. Because requests are handled asynchronously, the results come back out of order; this can be fixed later during data cleaning, where the records can be re-sorted (a small sorting sketch follows).
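A minimal post-processing sketch, assuming the ScrapyStockInfo.txt produced above (one dict literal per line): reload the records and sort them by stock code.

# Reload the dict-per-line output and sort it by stock code.
import ast

with open('ScrapyStockInfo.txt', encoding='utf-8') as f:
    records = [ast.literal_eval(line) for line in f if line.strip()]

records.sort(key=lambda r: r['股票代码'])
for r in records[:5]:
    print(r)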