[Scrapy Tutorial] Scraping Stock Data

  • Install the library: pip3 install scrapy -i https://pypi.tuna.tsinghua.edu.cn/simple
  • Create the project: scrapy startproject Stocks
  • cd Stocks/
  • Generate the spider: scrapy genspider stocks qq.com (the resulting layout is sketched after this list)
  • Data sources: 东方财富网 (Eastmoney) for the stock list, 腾讯证券 (Tencent Stocks) for per-stock quotes
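
After genspider finishes, the project tree looks roughly like this (middlewares.py appears in the templates of recent Scrapy versions; details vary by release):

Stocks/
    scrapy.cfg
    Stocks/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            stocks.py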

stocks.py

# -*- coding: utf-8 -*-
import re

import scrapy


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    # Eastmoney's stock list page links to every listed stock
    start_urls = ['http://quote.eastmoney.com/stock_list.html']

    def parse(self, response):
        # Collect every link, keep only those containing a stock code
        # of the form sh/sz + 6 digits, and request its Tencent quote page
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r'[s][hz]\d{6}', href)[0]
            except IndexError:
                # Link without a stock code: skip it
                continue
            url = 'http://gu.qq.com/' + stock + '/gp'
            yield scrapy.Request(url, callback=self.parse_stock)

    def parse_stock(self, response):
        infoDict = {}
        stockName = response.css('.title_bg')
        stockInfo = response.css('.col-2.fr')
        name = stockName.css('.col-1-1').extract()[0]
        code = stockName.css('.col-1-2').extract()[0]
        info = stockInfo.css('li').extract()
        # Each <li> holds one key/value pair; keep the first 13
        for i in info[:13]:
            key = re.findall(r'>.*?<', i)[1][1:-1]
            # Strip the em-space and no-break-space padding
            key = key.replace('\u2003', '').replace('\xa0', '')
            try:
                val = re.findall(r'>.*?<', i)[3][1:-1]
            except IndexError:
                val = '--'
            infoDict[key] = val

        # Prepend the stock's display name and code
        infoDict.update({
            '股票名称': re.findall(r'>.*<', name)[0][1:-1] +
                        re.findall(r'>.*<', code)[0][1:-1]
        })
        yield infoDict
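
The non-greedy '>.*?<' pattern deserves a quick illustration. A minimal sketch with a made-up <li> fragment (the real markup on gu.qq.com differs in detail) shows why indices 1 and 3 pick out the key and the value:

import re

# Hypothetical fragment, shaped like one of the quote page's <li> items
li = '<li><span class="t">最高价</span><span>10.50</span></li>'
parts = re.findall(r'>.*?<', li)
# parts == ['><', '>最高价<', '><', '>10.50<', '><']
key = parts[1][1:-1]   # '最高价'
val = parts[3][1:-1]   # '10.50'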

pipelines.py (change the hard-coded file path to one that exists on your machine)

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


class StocksPipeline(object):
    # Default pipeline generated by Scrapy; left untouched
    def process_item(self, item, spider):
        return item


class StocksInfoPipeline(object):
    def open_spider(self, spider):
        # Change this path for your machine; an explicit encoding
        # keeps the Chinese field names intact on any platform
        self.f = open('/home/lwy/Spiders/Stocks/Stocks/spiders/StockInfo.txt',
                      'w', encoding='utf-8')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        # Write one dict per line; skip items that fail to serialize
        try:
            line = str(dict(item)) + '\n'
            self.f.write(line)
        except (TypeError, ValueError):
            pass
        return item
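
str(dict(item)) writes a Python repr, which is awkward to parse back later. A JSON-lines variant is a small change; the class and file name below are hypothetical, not part of the original tutorial:

# -*- coding: utf-8 -*-
import json

class StocksInfoJsonPipeline(object):
    """Hypothetical variant: one JSON object per line."""
    def open_spider(self, spider):
        self.f = open('StockInfo.jsonl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        # ensure_ascii=False keeps the Chinese field names readable
        self.f.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item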

settings.py: open the file, find the following block, and modify it as shown

ITEM_PIPELINES = {
    'Stocks.pipelines.StocksInfoPipeline': 300,
}
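
One caveat about recent Scrapy versions (an observation about the default template, not part of the original tutorial): new projects obey robots.txt out of the box (ROBOTSTXT_OBEY = True in the generated settings.py). If the crawl exits almost immediately with no items, the target sites' robots rules are a likely cause; whether to override this is your call:

ROBOTSTXT_OBEY = False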
  • Run the spider: scrapy crawl stocks
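
As an alternative to the custom pipeline, Scrapy's built-in feed exports can dump the yielded items directly (the output file name here is arbitrary):

scrapy crawl stocks -o StockInfo.json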