1. Preparation
Functional description:
Goal: fetch the names and trading data of all stocks listed on the Shanghai and Shenzhen stock exchanges
Output: save the results to a file
Technologies used: requests, bs4, re, csv
Choosing a data site:
Sina Finance: http://finance.sina.com.cn/stock/ (dynamic)
NetEase Money: http://quotes.money.163.com/ (static)
ifeng Finance: http://app.finance.ifeng.com/list/stock.php (static)
Principle: prefer a site that embeds the stock data statically in the HTML page rather than generating it with JavaScript, and make sure crawling is permitted by the site's robots.txt.
How to check: use the browser's F12 developer tools or view the page source. Don't get stuck on any one site; try several information sources.
Program structure:
1. Fetch the page source from ifeng Finance
2. Extract the per-stock information
3. Save the results to a file
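The robots.txt check mentioned above can be automated with the standard library's urllib.robotparser. This is a minimal sketch; the rules string and URLs are illustrative, and a real check would first download the target site's /robots.txt:

```python
from urllib import robotparser

def allowed(robots_txt, url, agent="*"):
    """Parse robots.txt text and report whether `agent` may fetch `url`."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

# Illustrative rules; substitute the content of the real site's /robots.txt.
rules = "User-agent: *\nDisallow: /private/\n"
print(allowed(rules, "http://example.com/list/stock.php"))  # True under these rules
print(allowed(rules, "http://example.com/private/data"))    # False under these rules
```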
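Step 2 amounts to walking an HTML quote table row by row. A minimal sketch of how BeautifulSoup handles this, using a made-up two-row snippet in place of the real page:

```python
import bs4
from bs4 import BeautifulSoup

# Illustrative HTML shaped like the target page's quote table.
html = """
<table>
  <tr><td>600000</td><td>浦发银行</td><td>7.80</td></tr>
  <tr><td>600004</td><td>白云机场</td><td>10.55</td></tr>
</table>
"""

rows = []
soup = BeautifulSoup(html, "html.parser")
for tr in soup.find("table").children:
    if isinstance(tr, bs4.element.Tag):   # skip whitespace text nodes
        tds = tr("td")                    # shorthand for tr.find_all("td")
        if tds:
            rows.append([td.string for td in tds])

print(rows)  # [['600000', '浦发银行', '7.80'], ['600004', '白云机场', '10.55']]
```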
2. Code Implementation
import requests
import re
import bs4                      # needed for the bs4.element.Tag check below
from bs4 import BeautifulSoup
import csv

def getHTMLText(url):
    """Fetch a page and return its text, or "" on failure."""
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()                  # raise on HTTP error status codes
        r.encoding = r.apparent_encoding      # use the detected encoding
        return r.text
    except requests.RequestException:
        print("fetch failed")
        return ""

def getStockInfo(html, stock_info):
    """Parse the quote table in html and append [code, name, price] rows."""
    if html == "":
        return stock_info
    try:
        soup = BeautifulSoup(html, 'html.parser')
        for tr in soup.find('table').children:
            if isinstance(tr, bs4.element.Tag):    # skip whitespace text nodes
                tds = tr('td')                     # all <td> cells in this row
                if tds == []:
                    continue                       # header rows have no <td>
                elif re.findall(r'colspan="11"', str(tds)):
                    continue                       # skip full-width filler rows
                else:
                    stock_info.append([tds[0].string, tds[1].string, tds[2].string])
    except AttributeError:
        print("parse error: no <table> found")
    return stock_info

def createFile(stock_info, file_path):
    """Write the collected rows to a CSV file with a header line."""
    title = ["Code", "Name", "Price"]
    # newline="" stops csv.writer from emitting blank lines between rows on Windows
    with open(file_path, 'w', encoding="utf8", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(title)
        writer.writerows(stock_info)
    print("success!")

if __name__ == '__main__':
    base_url = "http://app.finance.ifeng.com/list/stock.php?"
    pages = 2
    file_path = "D://pachong/StockInfo.csv"
    stock_info = []
    for i in range(pages):
        url = base_url + 'p=' + str(i + 1)   # base_url already ends with '?'
        html = getHTMLText(url)
        getStockInfo(html, stock_info)
    createFile(stock_info, file_path)
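To confirm createFile produced a well-formed file, the CSV can be read back with csv.reader. A small self-contained sketch; the rows are demo data and the path is a temp file rather than the D://pachong path used above:

```python
import csv
import os
import tempfile

# Demo rows shaped like the crawler's output; written to a temp file.
rows = [["Code", "Name", "Price"], ["600000", "SPD Bank", "7.80"]]
path = os.path.join(tempfile.gettempdir(), "StockInfo_demo.csv")

# newline="" stops csv.writer from emitting blank lines between rows on Windows
with open(path, "w", encoding="utf8", newline="") as f:
    csv.writer(f).writerows(rows)

with open(path, encoding="utf8", newline="") as f:
    read_back = list(csv.reader(f))

print(read_back == rows)  # True: the file round-trips losslessly
```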