A Simple Python Implementation of a Temporal Convolutional Network (TCN)

This post largely follows 【python量化】用时间卷积神经网络(TCN)进行股价预测 by 敲代码的quant (CSDN blog).

It gives a simple Python implementation of a TCN, intended to illustrate how the network works and to serve as a reference for later use. The runtime environment is Python 3.8.6, and the project directory is laid out as follows:
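
A sketch of the layout implied by the file paths used in run.py:

    run.py
    data/
        train.csv
        test.csv
    model.ckpt/      (created by do_train)
    result/
        pre_tcn.csv  (created by do_predict)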

1. test.csv and train.csv are the test and training data respectively. They are randomly generated regression data with columns = [a1, a2, a3, a4, a5, a6, a7, a8, y], where y is the label column (a minimal sketch for generating data in this format is given after the run.py listing below);

2. run.py is the entry script; it trains the model, saves it, runs the test set, and writes out the predictions. Run it with: python run.py

3. Output: running the script creates two directories, model.ckpt and result, under the project directory; they hold the saved model and the prediction results respectively.

4. Project files on Baidu Cloud:

   Link: https://pan.baidu.com/s/1CW1CLZ48bXo7vonyWtEIsg
   Extraction code: tua6

5. run.py source code:

# coding:utf-8
import math
import os

import keras
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from keras import layers
from sklearn.preprocessing import MinMaxScaler
from tcn.tcn import TCN  # TCN layer from the keras-tcn package (pip install keras-tcn)
# from darts.models import TCN

bpath = os.getcwd()  # project root; data/ and output directories are resolved against this

"""
TCN时空卷积模型
"""
class TCN_demo:
    def __init__(self,
               chunk_size=500000,                   #分块读取文件时,块的大小
               window_size = 7,                     #按窗口创建数据集时,时间窗口的大小
               nb_filters = 20,                     #tcn网络卷积核数
               kernel_size = 46,                    #卷积核大小
               optimizer ="Adam",                   #模型优化器
               loss = "mae",                        #模型损失定义
               epochs =5,                          #训练伦次
               batch_size = 128,                    #批次大小
               validation_split = 0.3,              #验证集比例
               dilations=[int(math.pow(2, i + 1)) for i in range(8)]  # 膨胀大小为2的次方
                 ):
        self.chunk_size = chunk_size
        self.scaler = None
        self.window_size = window_size
        self.nb_filters = nb_filters
        self.kernel_size = kernel_size
        self.optimizer = optimizer
        self.loss = loss
        self.epochs = epochs
        self.batch_size = batch_size
        self.validation_split = validation_split
        self.dilations = dilations
    def load_data(self, path):
        """
        Load a CSV file into a pandas DataFrame, reading it in chunks.
        :param path: path to the CSV file
        :return: pandas DataFrame
        """
        # read the file in chunks so large files do not have to fit in memory at once
        reader = pd.read_csv(path, header=0, iterator=True, encoding="gb18030")
        chunks = []
        loop = True
        while loop:
            try:
                chunk = reader.get_chunk(self.chunk_size)
                chunks.append(chunk)
            except StopIteration:
                loop = False
        # concatenate the chunks into a single DataFrame
        df = pd.concat(chunks, ignore_index=True)
        return df

    def maxmin_scaler(self, df):
        """
        Min-max normalisation. The scaler is fitted on the first call (training data)
        and reused on later calls (test data).
        :param df: pandas DataFrame
        :return: normalised numpy array
        """
        if self.scaler:
            df_s = self.scaler.transform(df.values)
        else:
            self.scaler = MinMaxScaler()
            df_s = self.scaler.fit_transform(df.values)
        return df_s

    def creat_feature(self, df_s, column_len):
        """
        Build feature and label arrays from the normalised data using a sliding window.
        :param df_s: normalised data, shape (rows, column_len)
        :param column_len: number of columns in the data
        :return: X of shape (samples, window_size * column_len, 1) and label of shape (samples,)
        """
        # slide a window of window_size rows over the data
        X = []
        label = []
        for i in range(len(df_s) - self.window_size):
            X.append(df_s[i:i + self.window_size, :].tolist())
            # the label is the first column at the next time step
            # (the data description calls the last column y the label; adjust the index
            # here and in do_predict's inverse transform if that is what you want)
            label.append(df_s[i + self.window_size, :1].tolist()[0])
        # flatten each window into a single sequence of length window_size * column_len
        X = np.array(X).reshape(-1, self.window_size * column_len, 1)
        label = np.array(label)
        return X, label

    def rmse(self, pred, true):
        """
        Root mean squared error between predictions and ground truth.
        :param pred: predicted values
        :param true: true values
        :return: RMSE
        """
        return np.sqrt(np.mean(np.square(pred - true)))

    def plot(self, pred, true):
        """
        Plot predicted and true values on the same axes.
        :param pred: predicted values
        :param true: true values
        """
        fig = plt.figure()
        ax = fig.add_subplot(111)
        ax.plot(range(len(pred)), pred)
        ax.plot(range(len(true)), true)
        plt.show()

    def do_train(self, path):
        """
        Train the model and save it to disk.
        :param path: path to the training CSV
        """
        ### 1. load the data, normalise it and build the training set
        df_train = self.load_data(path)
        df_s = self.maxmin_scaler(df_train)
        column_len = df_train.shape[1]
        x_train, y_train = self.creat_feature(df_s, column_len)

        ### 2. build the network
        inputs = layers.Input(shape=(x_train.shape[1], x_train.shape[2]), name='inputs')
        # TCN layer: nb_filters convolution filters of size kernel_size, dilation rates as powers of 2
        t = TCN(return_sequences=False, nb_filters=self.nb_filters, kernel_size=self.kernel_size, dilations=self.dilations)(inputs)
        # a sigmoid output works here because the targets are min-max scaled to [0, 1]
        outputs = layers.Dense(units=1, activation='sigmoid')(t)

        tcn_model = tf.keras.Model(inputs, outputs)
        tcn_model.compile(optimizer=self.optimizer,
                          loss=self.loss,
                          metrics=['mae'])

        ### 3. train and save the model
        tcn_model.fit(x_train, y_train, epochs=self.epochs, validation_split=self.validation_split, batch_size=self.batch_size)
        tcn_model.summary()
        tcn_model.save(os.path.join(bpath, 'model.ckpt'))

    def do_predict(self, path):
        """
        Run the saved model on the test set and write out the predictions.
        :param path: path to the test CSV
        """
        ### 1. load the data, normalise it and build the test set
        df_test = self.load_data(path)
        df_s = self.maxmin_scaler(df_test)
        column_len = df_test.shape[1]
        print("column_len:", column_len)
        x_test, y_test = self.creat_feature(df_s, column_len)

        ### 2. load the model and predict
        tcn_model = keras.models.load_model(os.path.join(bpath, 'model.ckpt'))
        predict = tcn_model.predict(x_test)

        ### 3. invert the min-max scaling
        # MinMaxScaler works column-wise, so tile the 1-d predictions across all columns,
        # inverse-transform, and keep only column 0 (the prediction target)
        pre_copies = np.repeat(predict, column_len, axis=-1)
        label_copies = np.repeat(y_test, column_len, axis=-1)
        pred = self.scaler.inverse_transform(np.reshape(pre_copies, (len(predict), column_len)))[:, 0].reshape(-1)
        test_label = self.scaler.inverse_transform(np.reshape(label_copies, (len(y_test), column_len)))[:, 0].reshape(-1)

        ### 4. evaluate and plot
        print('RMSE ', self.rmse(pred, test_label))
        self.plot(pred, test_label)

        ### 5. write the predictions to disk
        df_out = pd.DataFrame()
        df_out["predict"] = pred
        df_out["y"] = test_label
        result_dir = os.path.join(bpath, "result")
        if not os.path.exists(result_dir):
            os.makedirs(result_dir)
        df_out.to_csv(os.path.join(result_dir, "pre_tcn.csv"), index=False)

if __name__ == "__main__":
    demo = TCN_demo()                                               # instantiate the demo class
    demo.do_train(path=os.path.join(bpath, "data", "train.csv"))    # train
    demo.do_predict(path=os.path.join(bpath, "data", "test.csv"))   # test
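
For completeness, the sketch below shows one way the random regression data described in item 1 could be generated. This make_data.py helper is hypothetical and not part of the original project; it writes data/train.csv and data/test.csv with columns a1..a8 and y (note that run.py as written uses the first column as the prediction target).

# make_data.py -- hypothetical helper, not part of the original project
import os

import pandas as pd
from sklearn.datasets import make_regression

def write_csv(path, n_rows, seed):
    # 8 random features a1..a8 plus a continuous regression target y
    X, y = make_regression(n_samples=n_rows, n_features=8, noise=0.1, random_state=seed)
    df = pd.DataFrame(X, columns=[f"a{i}" for i in range(1, 9)])
    df["y"] = y
    # use the same encoding that run.py expects when reading the files
    df.to_csv(path, index=False, encoding="gb18030")

if __name__ == "__main__":
    os.makedirs("data", exist_ok=True)
    write_csv(os.path.join("data", "train.csv"), 2000, seed=0)
    write_csv(os.path.join("data", "test.csv"), 500, seed=1)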

Reference: 【python量化】用时间卷积神经网络(TCN)进行股价预测, 敲代码的quant, CSDN blog.
