I. Purpose of Data Synchronization
In the previous chapter we did several things: first, we considered why a personal finance system is worth building; second, we evaluated data-storage options and selected a stable, secure, practical, and easy-to-use environment for storing financial data; third, we designed the database, laying out tables according to normalization requirements and the data to be stored; fourth, we designed the data flow, summarizing how data is collected, computed, and stored.
In this chapter we design the synchronization of data into the database, together with automatic computation of some financial (stock) indicators. We also design a robot that runs fully unattended.
Finally, the relevant code will be open-sourced. (For study and research only; please do not use it commercially.)
II. Data Synchronization and Computation Flow
1. Startup
The program is started by a scheduled task. I currently run it three times a day: at 4:00, by which time the day's data can usually already be synchronized; at 5:00, as a retry in case something went wrong the first time; and at 22:00 as a final catch-all (weekly data sometimes arrives very late).
I have been running it this way for almost ten years.
2. Synchronization control
Synchronization control mainly means skipping data that has already been collected, and updating previously collected data from the latest collection when needed.
3. Sync stock basic info
Stock basic info is synchronized every day: if it is not yet in the database it is collected; otherwise the step is skipped.
4. Sync daily bars
Daily bars are synchronized every day; if the day's data already exists, it is deleted and re-synchronized.
5. Sync adjustment factors
Adjustment factors are synchronized every day; if a factor has changed, the daily and weekly data must be recomputed.
6. Sync weekly bars
Weekly bars are synchronized every day; if weekly data comes back it is written to the database, and if nothing comes back there is simply no weekly bar for that day.
7. Compute forward-adjusted daily prices
When daily bars are synchronized, the raw prices are used as-is if the adjustment factor has not changed; if it has changed, the stock's prices must be recomputed.
8. Compute forward-adjusted weekly prices
When weekly bars are synchronized, if an adjustment factor changed during the week's trading days, the stock's prices must be recomputed.
9. Compute daily indicators
Compute daily indicators such as MACD, DEA, DIFF, MA5, MA10, and MA20.
10. Compute weekly indicators
Compute the same indicators (MACD, DEA, DIFF, MA5, MA10, MA20) on weekly bars.
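The forward adjustment in steps 7 and 8 reduces to scaling each raw price by the ratio of that day's cumulative adjustment factor to the latest one. A minimal sketch (the factor values below are made up purely for illustration):

```python
def qfq(price: float, adj_factor: float, last_adj_factor: float) -> float:
    """Forward-adjusted price: scale the raw price by adj_factor / last_adj_factor."""
    return price * adj_factor / last_adj_factor

# Hypothetical values: a stock closed at 10.00 when its cumulative factor was
# 120.5; the latest factor is 125.0, so the forward-adjusted close is lower.
print(round(qfq(10.00, 120.5, 125.0), 4))
```

When the factor has not changed since the latest date, the ratio is 1 and the adjusted price equals the raw price, which is why unchanged stocks can be skipped entirely.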
III. Choosing a Data Source
1. Data coverage
- Tushare: offers a rich set of financial data including stocks, funds, and futures, with broad coverage. Tushare provides not only historical data but also real-time data, which is convenient for backtesting and strategy research. It has also expanded into blockchain data such as cryptocurrency quotes, making it a financial big-data platform covering essentially every asset class.
- AkShare: focuses on fundamental data, real-time and historical quotes, and derived data for stocks, futures, options, funds, foreign exchange, bonds, indices, cryptocurrencies, and more. AkShare's data comes from relatively authoritative financial websites, which helps ensure accuracy and reliability.
- Baostock: mainly supports Chinese stock-market data, including stocks, indices, and funds. Its data is primarily historical, which suits research and development by individuals and small teams.
2. Ease of use
- Tushare: simple and convenient, with a rich set of API endpoints that are easy to call from Python. Its community is active, so problems encountered along the way can usually be resolved through community discussion.
- AkShare: the API design is clean and easy to learn. AkShare offers plenty of data interfaces for fetching financial data and then cleaning and analyzing it.
- Baostock: provides a concise API that makes it easy to fetch and process financial data. The documentation is clear, so users can get started quickly.
3. Pricing and openness
- Tushare: has a free tier and a Pro tier. The free tier offers a limited set of endpoints and call quotas, while Pro requires registration and credit points to access more data. Earning enough points can be difficult for some users.
- AkShare: open source, with all interfaces and tools free to use. Note, however, that as upstream data sources change, AkShare's ability to fetch certain data may be affected.
- Baostock: offers free financial data interfaces suitable for individuals and small teams, with no registration or points required.
4. Data processing and visualization
- Tushare: most data is returned as pandas DataFrames, which makes analysis and visualization with pandas/NumPy/Matplotlib very convenient. You can process the data further as your needs dictate.
- AkShare: likewise supports cleaning and consolidating raw data into analysis-ready formats, as well as statistical analysis and visualization.
- Baostock: its data can just as easily be analyzed and visualized, for example with pandas for processing and Matplotlib for plotting.
In my view Tushare, AkShare, and Baostock are the sources most widely used in China, and all of them are fairly stable. I personally prefer Tushare, because you can reach support staff to help with tricky problems; after all, completely free things often end up being the most expensive.
IV. Implementation
1. Startup
Runner.py starts the program: a scheduled task kicks off data synchronization at 4:00, 5:00, and 22:00 every day.
class Runner():
    def __init__(self, runner):
        self.logger = logUtil.Logger('stockstore')
        self.logger.info(runner)

    def initCron(self):
        self.executor = None
        self.db = ConfigCron()
        self.data = self.db.config_db_read()
        scheduler = BlockingScheduler(timezone='Asia/Shanghai')
        scheduler.add_job(RunnerSpiderRunner("").start_runner, 'cron',
                          month=str(self.data[0]["monthly"]),
                          day=str(self.data[0]["daily"]),
                          hour=str(self.data[0]["house"]),
                          minute=str(self.data[0]["minute"]),
                          second=str(self.data[0]["second"]))
        try:
            scheduler.start()
        except SystemExit as e:
            self.logger.error(e)

if __name__ == '__main__':
    Runner('').initCron()
[
{
"monthly": "*",
"daily": "*",
"house": "4,5,22",
"minute": "*",
"second": "*",
"status": 1
}
]
2. Synchronization control
Synchronization control mainly means skipping data that has already been collected, and updating previously collected data from the latest collection when needed.
class RunnerSpiderRunner():
    def __init__(self, runner):
        self.logger = logUtil.Logger('stockstore')
        self.logger.info("Automatic collection and computation started")

    def start_runner(self):
        df = daoCal.DaoQueryCalDateByLast4AnyTradeDate()
        for index, row in df.iterrows():
            self.spider_runner(row['cal_date'])

    def spider_runner(self, date):
        self.logger.info("Automatic collection running...")
        if date is None:
            date = time.strftime("%Y-%m-%d", time.localtime())
        df = daoCal.DaoQueryCalDateByLastTradeDate(date)
        if df.empty or len(df) < 1:
            self.logger.info("No collection plan found")
            return
        if df["stock_basic_spider"][0] != 1:
            self.logger.info("1. Collect stock basic info")
            RunerStockBasic().run(str(date).replace('-', ''))
        if df["daily_spider"][0] != 1:
            self.logger.info("2. Collect daily bars")
            RunnerSpiderDaily().runByDate(str(date).replace('-', ''))
        if df["adj_factor_spider"][0] != 1:
            self.logger.info("3. Collect adjustment factors")
            RunnerSpiderAdjfactor().runByDate(str(date).replace('-', ''), df["daily_spider"][0])
        if df["daily_spider"][0] != 1:
            self.logger.info("4. Compute forward-adjusted daily prices")
            RunnerDailyCompCommons().reCompQfqByDate(str(date).replace('-', ''))
        if df["daily_basic_spider"][0] != 1:
            self.logger.info("5. Collect daily basic indicators")
            RunnerSpiderDailyBasic().runByDate(str(date))
        df = daoCal.DaoQueryCalDateByTradeDate(date)
        if df["daily_compute"][0] != 1 and df["adj_factor_spider"][0] == 1 and df["daily_spider"][0] == 1:
            self.logger.info("6. Compute daily indicators")
            RunnerDailyCompCommons().runCompDailyCommonsByDate(str(date).replace('-', ''))
        df = daoCal.DaoQueryCalDateByTradeDate(date)
        if df["weekly_spider"][0] != 1:
            self.logger.info("7. Collect weekly bars")
            RunnerSpiderWeekly().runByDate(str(date).replace('-', ''))
        if df["weekly_spider"][0] != 1:
            self.logger.info("8. Compute forward-adjusted weekly prices")
            RunnerWeeklyCompCommons().reCompQfqByDate(str(date).replace('-', ''))
        df = daoCal.DaoQueryCalDateByTradeDate(date)
        if df["weekly_compute"][0] != 1 and df["adj_factor_spider"][0] == 1 and df["weekly_spider"][0] == 1:
            self.logger.info("9. Compute weekly indicators")
            RunnerWeeklyCompCommons().runCompWeeklyCommonsByDate(str(date))
        df = daoCal.DaoQueryCalDateByTradeDate(date)
        if df["daily_compute"][0] == 1 and df["daily_spider"][0] == 1:
            self.logger.info("10. Copy data")
            RunnerCopy("Preparing to copy data").runCopyByDate(date)
3. Sync stock basic info
Stock basic info is synchronized every day: if it has not been synchronized yet, collection starts; if it already has, the step is skipped. The data includes the stock code, name, listing date, delisting date, and so on.
class RunerStockBasic():
    def __init__(self):
        self.logger = logUtil.Logger('stockstore')

    def run(self, date):
        self.logger.info("Collecting stock basic info: start")
        try:
            start_time = time.time()
            cal_df = calDao.DaoQuaryBasicSpideredTradeDate(str(date))
            # Already collected today: return
            if cal_df["stock_basic_spider"][0] == 1: return
            # Not collected yet: collect using the most recent trade date
            trade_date = cal_df["cal_date"][0]
            df = TushareHelper().pro.stock_basic(exchange='',
                fields='ts_code,symbol,name,area,industry,fullname,enname,market,exchange,curr_type,list_status,list_date,delist_date,is_hs')
            dao.stockbasicSave(df, str(trade_date))
            end_time = time.time()
            self.logger.info("Collecting stock basic info: done, elapsed: " + f'{end_time - start_time:.4f}s')
            return 1
        except Exception as e:
            self.logger.error(e)
def stockbasicSave(df, trade_date):
    dbtype, db_info = ConfigDb().getDbInfo()
    if dbtype == 'mysql':
        engine = mysqlh.getEngine(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        session = mysqlh.getSession(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        if len(df) > 500:
            for index, row in tqdm(df.iterrows()):
                df2 = pd.read_sql_query(sql=r""" SELECT * FROM runner_stock_basic WHERE symbol ='{symbol}' LIMIT 1 """.format(symbol=row["symbol"]), con=engine)
                # Skip stocks that already exist
                if len(df2) > 0: continue
                # Derive ts_code (exchange suffix) from the symbol prefix
                if row["symbol"][0] in ("0", "3"): row["ts_code"] = row["symbol"] + ".SZ"
                if row["symbol"][0] in ("4", "8"): row["ts_code"] = row["symbol"] + ".BJ"
                if row["symbol"][0] == "6":        row["ts_code"] = row["symbol"] + ".SH"
                # Optional columns default to an empty string when absent
                optional = ["area", "industry", "fullname", "enname", "market", "exchange",
                            "curr_type", "list_status", "list_date", "delist_date", "is_hs"]
                values = {col: (row[col] if col in row.keys() else "") for col in optional}
                values.update({"symbol": row["symbol"], "name": row["name"], "ts_code": row["ts_code"]})
                session.execute(text(""" INSERT INTO runner_stock_basic
                    (`symbol`,`name`,`area`,`industry`,`fullname`,`enname`,`market`,`exchange`,
                     `curr_type`,`list_status`,`list_date`,`delist_date`,`is_hs`,`ts_code`)
                    values(
                    :symbol,:name,:area,:industry,:fullname,:enname,:market,:exchange,
                    :curr_type,:list_status,:list_date,:delist_date,:is_hs,:ts_code
                    ) """), params=values)
                session.commit()
                print("Updated stock: " + row["ts_code"])
        # Mark the trade-calendar row as collected
        session.execute(text(""" update runner_stock_trade_cal t set t.stock_basic_spider = 1 WHERE cal_date = :cal_date and ( t.stock_basic_spider != 1 or t.stock_basic_spider is NULL) """), params={"cal_date": trade_date})
        session.commit()
        session.remove()
    return 1
4. Sync daily bars
Daily bars are synchronized every day; if the day's data already exists, it is deleted and re-synchronized.
class RunnerSpiderDaily():
    def __init__(self):
        self.dailyData = []
        self.hist_df = pd.DataFrame()
        self.logger = logUtil.Logger('stockstore')

    def runByDate(self, trade_date):
        start_time = time.time()
        self.logger.info("Collecting daily bars: start")
        self.dailyData = TushareHelper().pro.daily(trade_date=str(trade_date).replace('-', ''))
        if len(self.dailyData) > 0:
            # Initialize the forward-adjusted columns with the raw prices
            self.dailyData["p_open"] = self.dailyData["open"]
            self.dailyData["p_close"] = self.dailyData["close"]
            self.dailyData["p_pre_close"] = self.dailyData["pre_close"]
            self.dailyData["p_high"] = self.dailyData["high"]
            self.dailyData["p_low"] = self.dailyData["low"]
            dailyDao.DaoSaveStockDailyByDate(self.dailyData, str(trade_date).replace('-', ''))
            calDao.DaoUpdateDailySpiderByCalDate(len(self.dailyData), str(trade_date).replace('-', ''))
        stop_time = time.time()
        self.logger.info("Collecting daily bars: done, elapsed: " + f'{stop_time - start_time:.4f}s')
        return 1

    def runAdjfactoryByDate(self, trade_date):
        start_time = time.time()
        self.logger.info("Collecting forward-adjusted daily bars: start")
        df = adjfactorDao.DaoQueryAllAdjFactorByDailyDate(trade_date, trade_date)
        for index, row in df.iterrows():
            # Skip stocks whose adjustment factor has not changed
            if row["pre_adj_factor"] == row["adj_factor"]:
                continue
            try:
                import tushare as ts
                self.dailyData = ts.pro_bar(ts_code=row["ts_code"], adj='qfq')
                if len(self.dailyData) > 0:
                    qfqDao.reCompQfqByDF(self.dailyData)
                    qfqDao.runComp(trade_date)
            except Exception as e:
                self.logger.error("Exception occurred: %s" % e)
        stop_time = time.time()
        self.logger.info("Collecting forward-adjusted daily bars: done, elapsed: " + f'{stop_time - start_time:.4f}s')
        return 1
def DaoSaveStockDailyByDate(df, trade_date):
    dbtype, db_info = ConfigDb().getDbInfo()
    if dbtype == 'mysql':
        engine = mysqlh.getEngine(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        session = mysqlh.getSession(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        # Delete and rewrite the day's bars
        session.execute(text(""" delete from runner_stock_daily WHERE trade_date =:trade_date"""), params={"trade_date": str(trade_date).replace('-', '')})
        session.commit()
        session.remove()
        pd.io.sql.to_sql(df, 'runner_stock_daily', con=engine, index=False, if_exists='append')
        # Rebuild the (ts_code, trade_date) index table for the day
        session.execute(text(""" delete FROM runner_stock_daily_index WHERE trade_date =:trade_date """), params={"trade_date": str(trade_date).replace('-', '')})
        session.commit()
        session.remove()
        pd.io.sql.to_sql(df[["ts_code", "trade_date"]], 'runner_stock_daily_index', con=engine, index=False, if_exists='append')
    return 1
5. Sync adjustment factors
Adjustment factors are synchronized every day; if factors for the day already exist, they are deleted and replaced with the freshly collected ones. After writing to the database, the previous trading day and the most recent trading day are also recomputed, so that historical data stays current and accurate when it is updated.
class RunnerSpiderAdjfactor():
    def __init__(self):
        self.adjfactorData = []
        self.daily_df = pd.DataFrame()
        self.logger = logUtil.Logger('stockstore')

    def runByDate(self, trade_date, daily_spider):
        start_time = time.time()
        self.logger.info("Collecting adjustment factors: start")
        self.fqyz_df = TushareHelper().pro.adj_factor(trade_date=str(trade_date).replace('-', ''))
        if len(self.fqyz_df) > 0:
            pre_date = calDao.DaoQuaryPreTradeDate(trade_date=str(trade_date).replace('-', ''))
            if pre_date is not None:
                # Join in the previous trading day's factor as pre_adj_factor
                fqyz_pre_df = TushareHelper().pro.adj_factor(trade_date=str(pre_date).replace('-', ''))
                self.fqyz_df["pre_adj_factor"] = self.fqyz_df["adj_factor"]
                self.fqyz_df.set_index('ts_code', inplace=True)
                fqyz_pre_df.set_index('ts_code', inplace=True)
                self.fqyz_df["pre_adj_factor"].update(fqyz_pre_df["adj_factor"])
                self.fqyz_df.reset_index(inplace=True)
            else:
                self.fqyz_df["pre_adj_factor"] = self.fqyz_df["adj_factor"]
            self.fqyz_df["symbol"] = self.fqyz_df["ts_code"].replace(".SZ", "", regex=True).replace(".SH", "", regex=True).replace(".BJ", "", regex=True)
            adjfactorDao.DaoSaveAdjfactorByDateWithPreAdjfactor(self.fqyz_df, str(trade_date).replace('-', ''))
            adjfactorDao.DaoUpdateAdjfactorByCalDate(len(self.fqyz_df), str(trade_date).replace('-', ''))
        stop_time = time.time()
        self.logger.info("Collecting adjustment factors: done, elapsed: " + f'{stop_time - start_time:.4f}s')
        return 1
def DaoSaveAdjfactorByDateWithPreAdjfactor(df, trade_date):
    lock.acquire()  # take the thread lock
    dbtype, db_info = ConfigDb().getDbInfo()
    if dbtype == 'mysql':
        engine = mysqlh.getEngine(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        session = mysqlh.getSession(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        try:
            print("1003. Adjustment factors: deleting data for " + trade_date)
            session.execute(text(""" delete from runner_stock_adjfactor WHERE trade_date =:trade_date"""), params={"trade_date": str(trade_date).replace('-', '')})
            session.commit()
            pd.io.sql.to_sql(df, 'runner_stock_adjfactor', con=engine, index=False, if_exists='append')
            print("1003. Adjustment factors: inserted data for " + trade_date)
            # Propagate each stock's newest factor into last_adj_factor
            session.execute(text(
                """ UPDATE runner_stock_adjfactor a ,
                    (select ts_code,max(trade_date) as max_trade_date FROM runner_stock_adjfactor GROUP BY ts_code ) as b1,
                    runner_stock_adjfactor b2
                    SET a.last_adj_factor = b2.adj_factor
                    WHERE b1.ts_code = b2.ts_code
                    AND b1.max_trade_date = b2.trade_date
                    AND b2.ts_code = a.ts_code
                    and (a.last_adj_factor != b2.adj_factor OR a.last_adj_factor is NULL ) """))
            session.commit()
            session.remove()
            print("1003. Adjustment factors: last_adj_factor updated")
        except Exception as e:
            print("error:" + str(e))
    lock.release()  # release the thread lock
    return 1
6. Compute forward-adjusted daily prices
When daily bars are synchronized, the raw prices are used as-is if the adjustment factor has not changed; if it has changed, the stock's prices must be recomputed.
First query the stocks whose adjustment factor changed that day, then recompute the forward-adjusted prices for just those stocks.
def reCompQfqByDate(trade_date):
    # Forward-adjustment recomputation (daily)
    dbtype, db_info = ConfigDb().getDbInfo()
    if dbtype == 'mysql':
        engine = mysqlh.getEngine(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        session = mysqlh.getSession(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        # Stocks whose adjustment factor changed on this trade date
        df = pd.read_sql_query(
            sql=r""" SELECT distinct ts_code FROM runner_stock_adjfactor a
                     WHERE a.adj_factor != a.pre_adj_factor and a.last_adj_factor is not null
                     and a.pre_adj_factor is not null AND a.trade_date ={trade_date} """.
            format(trade_date=str(trade_date).replace('-', '')), con=engine)
        if len(df) == 0:
            return
        for tup in zip(df['ts_code']):
            ts_codes = tup[0]
            sql = text(""" UPDATE runner_stock_daily d ,runner_stock_adjfactor a
                SET
                p_high = (d.high * a.adj_factor) /a.last_adj_factor ,
                p_low = (d.low * a.adj_factor) /a.last_adj_factor ,
                p_close = (d.close * a.adj_factor) /a.last_adj_factor ,
                p_open = (d.open * a.adj_factor) /a.last_adj_factor ,
                p_pre_close = (d.pre_close * a.adj_factor) /a.last_adj_factor
                where d.ts_code=a.ts_code and d.trade_date = a.trade_date
                AND d.ts_code =:ts_code and a.last_adj_factor !=0 """)
            session.execute(sql, params={"ts_code": ts_codes})
            session.commit()
            print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()) + " INFO recomputing forward-adjusted daily data for " + ts_codes + "...")
7. Sync daily basic indicators
On Tushare the daily basic indicators require more than 2,000 credit points to call; the main fields are turnover rate, P/E ratio, P/B ratio, total shares, float shares, total market cap, and so on.
The sync logic: if the day's data exists it is deleted and rewritten; otherwise it is written directly.
class RunnerSpiderDailyBasic():
    def __init__(self):
        self.dailyBasicData = []

    def runByDate(self, trade_date):
        try:
            self.dailyBasicData = TushareHelper().pro.daily_basic(trade_date=str(trade_date))
            if len(self.dailyBasicData) > 0:
                dailyBasicDao.DaoSaveStockDailyBasicByDate(self.dailyBasicData, str(trade_date))
                dailyBasicDao.DaoUpdateStockTradeCal4DailyBasic(str(trade_date), 1, len(self.dailyBasicData))
        except Exception as e:
            print("error:" + str(e))

def DaoSaveStockDailyBasicByDate(df, trade_date):
    dbtype, db_info = ConfigDb().getDbInfo()
    if dbtype == 'mysql':
        engine = mysqlh.getEngine(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        session = mysqlh.getSession(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        session.execute(text(""" delete FROM runner_stock_daily_basic WHERE trade_date =:trade_date """), params={"trade_date": trade_date})
        session.commit()
        session.remove()
        pd.io.sql.to_sql(df, 'runner_stock_daily_basic', con=engine, index=False, if_exists='append')
    return 1
8. Sync weekly bars
Weekly bars are synchronized every day; if weekly data comes back it is written to the database, and if nothing comes back there is no weekly bar for that day.
If weekly data for the date already exists in the database, it is deleted before the new data is written.
class RunnerSpiderWeekly():
    def __init__(self):
        self.logger = logUtil.Logger('stockstore')
        self.weeklyData = []
        self.stock_num = 0
        self.hist_df = pd.DataFrame()

    def runByDate(self, trade_date):
        start_time = time.time()
        self.logger.info("Collecting weekly bars: start")
        try:
            self.weeklyData = TushareHelper().pro.weekly(trade_date=str(trade_date).replace('-', ''))
            # Initialize the forward-adjusted columns with the raw prices
            self.weeklyData["p_open"] = self.weeklyData["open"]
            self.weeklyData["p_close"] = self.weeklyData["close"]
            self.weeklyData["p_pre_close"] = self.weeklyData["pre_close"]
            self.weeklyData["p_high"] = self.weeklyData["high"]
            self.weeklyData["p_low"] = self.weeklyData["low"]
        except Exception as e:
            print("error:", str(e))
        if len(self.weeklyData) > 0:
            weeklyDao.DaoSaveStockWeeklyByDate(self.weeklyData, str(trade_date).replace('-', ''))
            weeklyDao.DaoUpdateStockTradeCal4Weekly(str(trade_date).replace('-', ''), 1, self.weeklyData.shape[0])
        stop_time = time.time()
        self.logger.info("Collecting weekly bars: done, elapsed: " + f'{stop_time - start_time:.4f}s')
        return 1

def DaoSaveStockWeeklyByDate(df, trade_date):
    dbtype, db_info = ConfigDb().getDbInfo()
    if dbtype == 'mysql':
        engine = mysqlh.getEngine(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        session = mysqlh.getSession(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        session.execute(text(""" delete from runner_stock_weekly WHERE trade_date =:trade_date"""), params={"trade_date": str(trade_date).replace('-', '')})
        session.commit()
        session.remove()
        pd.io.sql.to_sql(df, 'runner_stock_weekly', con=engine, index=False, if_exists='append')
        session.execute(text(""" delete FROM runner_stock_weekly_index WHERE trade_date =:trade_date """), params={"trade_date": str(trade_date).replace('-', '')})
        session.commit()
        session.remove()
        pd.io.sql.to_sql(df[["ts_code", "trade_date"]], 'runner_stock_weekly_index', con=engine, index=False, if_exists='append')
    return 1
9. Compute forward-adjusted weekly prices
When weekly bars are synchronized, if an adjustment factor changed during the week's trading days, the stock's prices must be recomputed.
def reCompQfqByDate(trade_date):
    # Forward-adjustment recomputation (weekly)
    dbtype, db_info = ConfigDb().getDbInfo()
    if dbtype == 'mysql':
        engine = mysqlh.getEngine(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        session = mysqlh.getSession(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        # Stocks whose adjustment factor changed since the last synced weekly bar
        df = pd.read_sql_query(
            sql=r""" SELECT distinct ts_code FROM runner_stock_adjfactor a
                     WHERE a.adj_factor != a.pre_adj_factor and a.last_adj_factor is not null
                     and a.pre_adj_factor is not null AND ( a.trade_date > (SELECT max(cal_date) FROM runner_stock_trade_cal where weekly_spider = 1 and cal_date <{trade_date})
                     and a.trade_date <={trade_date} ) """.
            format(trade_date=str(trade_date).replace('-', '')), con=engine)
        if len(df) == 0:
            return
        for tup in zip(df['ts_code']):
            ts_codes = tup[0]
            sql = text(""" UPDATE runner_stock_weekly w ,runner_stock_adjfactor a
                SET
                p_high = (w.high * a.adj_factor) /a.last_adj_factor ,
                p_low = (w.low * a.adj_factor) /a.last_adj_factor ,
                p_close = (w.close * a.adj_factor) /a.last_adj_factor ,
                p_open = (w.open * a.adj_factor) /a.last_adj_factor ,
                p_pre_close = (w.pre_close * a.adj_factor) /a.last_adj_factor
                where w.ts_code=a.ts_code and w.trade_date = a.trade_date
                AND w.ts_code =:ts_code and a.last_adj_factor !=0 """)
            session.execute(sql, params={"ts_code": ts_codes})
            session.commit()
            print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()) + " INFO recomputing forward-adjusted weekly data for " + ts_codes + "...")
10. Compute daily indicators
Compute daily MACD, DEA, DIFF, MA5, MA10, MA20, and similar indicators. Since every stock has to be computed individually, a thread pool is used.
def runCompDailyCommonsByDate(trade_date):
    qh_path = os.path.abspath(os.path.join(os.getcwd(), ".."))  # parent of the current working directory
    sys.path.append(qh_path)  # add it to the module search path
    from concurrent.futures import ThreadPoolExecutor
    pool = ThreadPoolExecutor(max_workers=10)  # thread pool with 10 workers
    dbtype, db_info = ConfigDb().getDbInfo()
    if dbtype == 'mysql':
        engine = mysqlh.getEngine(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        session = mysqlh.getSession(db_info.loc[0, "user"], db_info.loc[0, "password"], db_info.loc[0, "url"], db_info.loc[0, "port"], db_info.loc[0, "database"])
        # Update by trade date
        df1 = pd.read_sql_query(sql=r""" SELECT cal_date FROM runner_stock_trade_cal c WHERE c.cal_date ='{trade_date}' ORDER BY cal_date ASC """.format(trade_date=str(trade_date).replace('-', '')), con=engine)
        for index1, row1 in df1.iterrows():
            df2 = pd.read_sql_query(sql=r""" SELECT ts_code,trade_date FROM runner_stock_daily_index d WHERE d.trade_date = '%s' """ % (str(row1['cal_date']).replace('-', '')), con=engine)
            df3 = pd.read_sql_query(
                sql=r""" SELECT DISTINCT ts_code FROM runner_stock_adjfactor a, runner_stock_trade_cal c WHERE c.cal_date = a.trade_date and a.adj_factor !=a.pre_adj_factor
                         and a.last_adj_factor is not null and a.pre_adj_factor is not null AND a.trade_date >= {trade_date} """.
                format(trade_date=str(trade_date).replace('-', '')), con=engine)
            # Compute indicators for every stock traded that day
            tasks = pool.map(runByByDate, df2['ts_code'], df2['trade_date'])
            for data in tasks:
                pass
            print("Stocks to recompute:")
            print(df3['ts_code'].values)
            # Fully recompute stocks whose adjustment factor changed
            tasks = pool.map(runByCode, df3['ts_code'])
            for data in tasks:
                pass
            session.execute(text(
                """ update runner_stock_trade_cal set daily_compute = :daily_compute WHERE cal_date =:cal_date """),
                params={"daily_compute": 1, "cal_date": str(row1['cal_date']).replace('-', '')})
            session.commit()
            session.remove()
    print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()) + "---Daily indicator computation finished")
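The orchestration above dispatches per-stock workers (runByByDate / runByCode) whose internals are not shown here; the indicator math itself is standard. A self-contained pandas sketch of how DIFF, DEA, MACD and the moving averages might be computed (the 12/26/9 spans are the conventional MACD parameters; compute_commons and the sample closes are illustrative, not the project's actual worker):

```python
import pandas as pd

def compute_commons(close: pd.Series) -> pd.DataFrame:
    """Standard MACD (12/26/9) and simple moving averages on a close series."""
    out = pd.DataFrame({"close": close})
    ema12 = close.ewm(span=12, adjust=False).mean()
    ema26 = close.ewm(span=26, adjust=False).mean()
    out["diff"] = ema12 - ema26                                # fast line
    out["dea"] = out["diff"].ewm(span=9, adjust=False).mean()  # signal line
    out["macd"] = 2 * (out["diff"] - out["dea"])               # histogram
    for n in (5, 10, 20):
        out[f"ma{n}"] = close.rolling(n).mean()                # simple MAs
    return out

closes = pd.Series([10.0, 10.2, 10.1, 10.4, 10.3, 10.6, 10.8, 10.7, 11.0, 11.2])
ind = compute_commons(closes)
print(ind[["diff", "dea", "macd", "ma5"]].tail(1))
```

In the real pipeline each worker would load one stock's forward-adjusted closes (p_close) from the database, run a computation like this, and write the indicator columns back.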
11. Compute weekly indicators
Same as "Compute daily indicators", but on weekly bars.
V. Summary
This application implements the following:
- Scheduled startup of data synchronization, with no manual intervention required.
- Synchronization control that prevents duplicate collection and duplicate data.
- Under normal conditions, synchronization finishes by about 4:30 each day, guaranteeing data for that day's analysis.
- The existing functionality can easily be extended as needed.
Appendix:
I will shortly organize the database scripts and source code for open-sourcing and sharing; cleaning up the existing code should take about two more days. It will then be uploaded to GitHub. Stay tuned.