Stock Data Scrapy Crawler
Functional description
Technical approach: Scrapy
Goal: fetch the names and trading information of every stock listed on the Shanghai and Shenzhen stock exchanges
Output: save the results to a file
Choosing the data sources
Stock list: Eastmoney: http://quote.eastmoney.com/stock_list.html
Per-stock information: Tencent Stocks: http://gu.qq.com/
A single stock's page: http://gu.qq.com/sh600859/gp
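The per-stock URL is simply gu.qq.com plus the exchange prefix (sh/sz), the six-digit code, and /gp. A minimal sketch of composing such a URL from an arbitrary link (the helper name build_stock_url is illustrative, not part of the project code):

import re

def build_stock_url(href):
    # Keep only links that contain an sh/sz code such as 'sh600859' or 'sz000001'.
    match = re.search(r"s[hz]\d{6}", href)
    if match:
        return 'http://gu.qq.com/' + match.group(0) + '/gp'
    return None

print(build_stock_url('http://quote.eastmoney.com/sh600859.html'))
# -> http://gu.qq.com/sh600859/gp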
Example
Writing the "Stock Data Scrapy Crawler" example
Steps
Step 1: create the project and the Spider template
- scrapy startproject BaiduStocks
- cd BaiduStocks
- scrapy genspider stocks baidu.com
- then edit spiders/stocks.py further
Complete this step on your own~ (the generated template looks roughly like the sketch below)
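For reference, a sketch of what scrapy genspider typically generates in spiders/stocks.py (the exact template varies slightly between Scrapy versions); Step 2 replaces almost all of it:

# -*- coding: utf-8 -*-
import scrapy

class StocksSpider(scrapy.Spider):
    name = 'stocks'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        pass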
Step 2: write the Spider
- Configure the stocks.py file
- Modify how the returned pages are parsed
- Modify how crawl requests for newly found URLs are generated (stocks.py)
# -*- coding: utf-8 -*-
import scrapy
import re


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    start_urls = ['http://quote.eastmoney.com/stock_list.html']

    def parse(self, response):
        # Scan every link on the stock-list page, keep those containing an
        # sh/sz stock code (e.g. 'sh600859'), and request that stock's page.
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r"[s][hz]\d{6}", href)[0]
                url = 'http://gu.qq.com/' + stock + '/gp'
                yield scrapy.Request(url, callback=self.parse_stock)
            except IndexError:
                continue

    def parse_stock(self, response):
        # Parse one stock page: the name/code header and the quote fields.
        infoDict = {}
        stockName = response.css('.title_bg')
        stockInfo = response.css('.col-2.fr')
        name = stockName.css('.col-1-1').extract()[0]
        code = stockName.css('.col-1-2').extract()[0]
        info = stockInfo.css('li').extract()
        for i in info[:13]:
            key = re.findall('>.*?<', i)[1][1:-1]
            key = key.replace('\u2003', '')
            key = key.replace('\xa0', '')
            try:
                val = re.findall('>.*?<', i)[3][1:-1]
            except IndexError:
                val = '--'
            infoDict[key] = val
        infoDict.update({'股票名称': re.findall('>.*<', name)[0][1:-1] +
                         re.findall('>.*<', code)[0][1:-1]})
        yield infoDict
The lines key = key.replace('\u2003', '') and key = key.replace('\xa0', '') strip useless characters from the scraped strings: whitespace entities such as &nbsp; on the page are decoded as \xa0 (and em spaces as \u2003) when the page is scraped, so we replace them to get clean, readable keys.
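A quick standalone illustration of this cleanup (the sample string is made up for the demo):

# A non-breaking space (&nbsp;) decodes to '\xa0', an em space to '\u2003'.
raw_key = '最\u2003高\xa0'
clean_key = raw_key.replace('\u2003', '').replace('\xa0', '')
print(clean_key)  # -> 最高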
Step 3: write the Item Pipelines
- Configure the pipelines.py file
- Define a class that processes the scraped items (Scrapy Items) (pipelines.py)
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


class ScrapyGupiaoPipeline:
    # Replaces the empty process_item stub generated by the project template:
    # open a text file when the spider starts, append one line per item,
    # and close the file when the spider finishes.
    def open_spider(self, spider):
        self.f = open('gupiao.txt', 'w')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        try:
            line = str(dict(item)) + '\n'
            self.f.write(line)
        except Exception:
            pass
        return item
- Configure the ITEM_PIPELINES option (settings.py)
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'scrapy_gupiao.pipelines.ScrapyGupiaoPipeline': 300,
}
Note that the dotted path must match your own project's package name: if you created the project as BaiduStocks in Step 1, the key would be 'BaiduStocks.pipelines.ScrapyGupiaoPipeline' (with whatever class name you defined in pipelines.py).
Run the crawler: scrapy crawl stocks
# Output
See the gupiao.txt file at the netdisk link below.
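As an alternative to the pipeline, Scrapy's built-in feed export can write the scraped items straight to a file from the command line; this is standard Scrapy usage rather than part of the original write-up:

scrapy crawl stocks -o gupiao.json    # JSON array
scrapy crawl stocks -o gupiao.jl      # JSON Lines, one item per line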
Optimizing the "Stock Data Scrapy Crawler" example
Configure the concurrency options (settings.py)
| Option | Description |
| --- | --- |
| CONCURRENT_REQUESTS | Maximum number of requests the Downloader processes concurrently; default 16 |
| CONCURRENT_ITEMS | Maximum number of items the Item Pipeline processes concurrently per response; default 100 |
| CONCURRENT_REQUESTS_PER_DOMAIN | Maximum number of concurrent requests per target domain; default 8 |
| CONCURRENT_REQUESTS_PER_IP | Maximum number of concurrent requests per target IP; default 0, effective only when non-zero |
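A minimal sketch of tuning these options in settings.py (the specific numbers are illustrative, not recommendations from the original article):

# settings.py -- concurrency tuning (illustrative values)
CONCURRENT_REQUESTS = 64             # total concurrent requests handled by the Downloader
CONCURRENT_ITEMS = 200               # items processed concurrently per response in the pipeline
CONCURRENT_REQUESTS_PER_DOMAIN = 16  # cap on concurrent requests per target domain
CONCURRENT_REQUESTS_PER_IP = 0       # 0 disables the per-IP cap (the per-domain cap applies instead)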