A Small Python Stock Crawler Example

Disclaimer: this example is for learning web scraping only.

Stock quotes index page: http://quote.stockstar.com/stock/stock_index.htm
Clicking a stock code opens a detail page; the URLs differ only in the code part, e.g.:
http://stock.quote.stockstar.com/600023.shtml
http://stock.quote.stockstar.com/600018.shtml
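
Since only the code changes, the detail-page URL can be built with a simple format string. A minimal sketch (`detail_url` is a hypothetical helper introduced here for illustration, not part of the site or of Scrapy):

```python
# Build a stock's detail-page URL from its 6-digit code.
# detail_url is a hypothetical helper used only for illustration.
def detail_url(code: str) -> str:
    return f"http://stock.quote.stockstar.com/{code}.shtml"

print(detail_url("600023"))  # http://stock.quote.stockstar.com/600023.shtml
```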

Crawler output: the results are written to XueQiuStock.txt, one dict per line.

1. Install the required modules

pip install scrapy -i https://pypi.tuna.tsinghua.edu.cn/simple/
pip install beautifulsoup4 -i https://pypi.tuna.tsinghua.edu.cn/simple/
pip install lxml -i https://pypi.tuna.tsinghua.edu.cn/simple/
pip install Twisted==22.10.0

(lxml is needed because the spider parses pages with BeautifulSoup(response.text, 'lxml').)

2. Create the project

In PyCharm's Terminal:

scrapy startproject gupiao


3. Create a Scrapy spider inside the project

1. In the terminal, cd into the project folder:

cd gupiao 

2. Generate a spider file with Scrapy's genspider command:

scrapy genspider stock quote.stockstar.com

Note: stock is the spider's name and quote.stockstar.com is the allowed domain. The command creates gupiao/spiders/stock.py containing a skeleton spider class.


3. Reference source for stock.py

The spider starts from the index page: http://quote.stockstar.com/stock/stock_index.htm

4. Complete code

stock.py

import re

import scrapy
from bs4 import BeautifulSoup


class StockSpider(scrapy.Spider):
    name = "stock"
    start_urls = ["http://quote.stockstar.com/stock/stock_index.htm"]

    def parse(self, response):
        for href in response.css('a::attr(href)').extract():
            # Extract the 6-digit stock code from the link, if present
            match = re.search(r"\d{6}", href)
            if match is None:
                continue
            stock = match.group(0)
            # Build the URL of the stock's detail page
            url = f'http://stock.quote.stockstar.com/{stock}.shtml'
            yield scrapy.Request(url, callback=self.parse_stock)

    def parse_stock(self, response):
        try:
            soup = BeautifulSoup(response.text, 'lxml')
            name = soup.h2.string.strip()  # stock name
            ltsz = soup.find(id="stock_quoteinfo_ltsz").string.strip()  # float market cap
            zsz = soup.find(id="stock_quoteinfo_zsz").string.strip()    # total market cap
            ltgb = soup.find(id="stock_quoteinfo_ltgb").string.strip()  # float share capital
            zgb = soup.find(id="stock_quoteinfo_zgb").string.strip()    # total share capital
        except AttributeError:
            # The page layout differs from what we expect; skip it
            self.logger.warning("failed to parse %s", response.url)
            return

        # Item keys are kept in Chinese to match the original output file
        yield {'股票名称': name, '流通市值': ltsz, '总市值': zsz,
               '流通股本': ltgb, '总股本': zgb}
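
The code-extraction step in `parse` can be exercised on its own, outside Scrapy. A small sketch (the sample hrefs below are made up for illustration):

```python
import re

# Sample hrefs like those on the index page (made-up examples).
hrefs = [
    "http://stock.quote.stockstar.com/600023.shtml",
    "/stock/600018.shtml",
    "javascript:void(0)",  # no 6-digit code, so it is skipped
]

codes = []
for href in hrefs:
    m = re.search(r"\d{6}", href)
    if m is not None:  # same filtering the spider performs
        codes.append(m.group(0))

print(codes)  # ['600023', '600018']
```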



pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


class DemoPipeline(object):
    def process_item(self, item, spider):
        return item


class stockPipeline(object):
    def open_spider(self, spider):
        # utf-8 so the Chinese keys are written correctly on any platform
        self.f = open('XueQiuStock.txt', 'w', encoding='utf-8')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        # Write each item as one dict literal per line
        self.f.write(str(dict(item)) + '\n')
        return item
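
The pipeline's write path can be sketched outside Scrapy as well. A minimal example (the item values are hypothetical, and a temp-file path stands in for XueQiuStock.txt):

```python
import os
import tempfile

# Hypothetical item, in the same shape the spider yields.
item = {'股票名称': 'demo', '流通市值': '100亿', '总市值': '120亿',
        '流通股本': '10亿', '总股本': '12亿'}

# Same logic as stockPipeline.process_item: one dict literal per line.
path = os.path.join(tempfile.gettempdir(), 'XueQiuStock_demo.txt')
with open(path, 'w', encoding='utf-8') as f:
    f.write(str(dict(item)) + '\n')

with open(path, encoding='utf-8') as f:
    print(f.read().strip())

os.remove(path)
```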

settings.py

# -*- coding: utf-8 -*-
BOT_NAME = 'gupiao'

SPIDER_MODULES = ['gupiao.spiders']
NEWSPIDER_MODULE = 'gupiao.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6"

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'gupiao.pipelines.stockPipeline': 300,
}

5. Run the crawler

cd gupiao/gupiao
scrapy crawl stock
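
Each line of XueQiuStock.txt is a Python dict literal, so the results can be loaded back with the standard library. A sketch (the sample line below is hypothetical):

```python
import ast

# A line in the same format stockPipeline writes (hypothetical values).
line = "{'股票名称': 'demo', '总市值': '120亿'}\n"

record = ast.literal_eval(line.strip())
print(record['总市值'])  # 120亿
```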
