The most complete Informer-based time-series forecasting Python code on the web (with a translation and summary of the original paper)

Informer is a Transformer model designed for long sequence time-series forecasting (LSTF). Compared with the vanilla Transformer, it has the following distinctive features:

1. ProbSparse self-attention: Informer introduces the ProbSparse self-attention mechanism, which reaches O(L log L) time complexity and memory usage and can effectively capture long-range dependencies between sequence elements (a minimal sketch follows this list).

2. Self-attention distilling: by shrinking the input of each cascaded layer, the distilling operation lets the model handle extremely long input sequences efficiently.

3. Generative decoder: Informer uses a generative-style decoder that predicts the entire long sequence in one forward pass instead of step-by-step autoregression, which greatly speeds up inference for long-horizon forecasting.
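
To make feature 1 concrete, here is a minimal, self-contained sketch of the ProbSparse idea: each query gets a sparsity score M(q, K) = max_j(qk_j^T/√d) − mean_j(qk_j^T/√d), and only the top u ≈ factor·ln(L_Q) queries receive full attention, while the remaining queries fall back to a mean of the values. This is an illustrative simplification: unlike the paper, it scores queries against all keys instead of a sampled subset, so it does not by itself reach the O(L log L) cost; the function name and shapes are assumptions, not the repository's API.

```python
import math
import torch

def probsparse_top_queries(Q, K, factor=5):
    """Toy sketch of ProbSparse query selection (scores against all keys,
    unlike the paper's sampled version, so the cost here is still O(L^2))."""
    # Q: [batch, heads, L_Q, d], K: [batch, heads, L_K, d]
    d = Q.shape[-1]
    L_Q = Q.shape[-2]
    scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(d)   # [B, H, L_Q, L_K]
    # Sparsity measure M(q, K): max score minus mean score per query
    M = scores.max(dim=-1).values - scores.mean(dim=-1)            # [B, H, L_Q]
    # Only the u most "active" queries get full attention
    u = min(L_Q, factor * math.ceil(math.log(L_Q)))
    return M.topk(u, dim=-1).indices                               # [B, H, u]

# With L_Q = L_K = 96 and factor = 5, only 25 of the 96 queries are kept
Q, K = torch.randn(2, 8, 96, 64), torch.randn(2, 8, 96, 64)
print(probsparse_top_queries(Q, K).shape)                          # torch.Size([2, 8, 25])
```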

Comparison with LSTM

The figure above (from the original paper) shows forecasting results on a real dataset, where an LSTM network predicts the hourly temperature of an electrical transformer station from short-term (12 points, 0.5 days) to long-term (480 points, 20 days) horizons. Once the prediction length exceeds 48 points (the solid star in Figure 1b), the overall performance gap becomes significant: the MSE rises, inference speed drops sharply, and the LSTM model starts to fail.

Informer model overview

The figure above gives an overview of the Informer model. Left: the encoder receives a large long-sequence input (the green series). The proposed ProbSparse self-attention replaces canonical self-attention. The blue trapezoids are the self-attention distilling operation, which extracts the dominant attention and sharply reduces the network size; the stacked layer replicas improve robustness. Right: the decoder receives a long-sequence input, pads the target elements with zeros, measures a weighted attention composition of the feature map, and instantly predicts the output elements (the orange series) in a generative style. A minimal sketch of the distilling step follows.
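
As a rough illustration of the blue-trapezoid distilling operation described above, the sketch below loosely follows the structure used in the public Informer code (Conv1d + normalization + ELU + max-pooling), which roughly halves the sequence length between encoder layers; the exact kernel and padding settings may differ from the repository, so treat this as an assumption-laden example rather than the repo's implementation.

```python
import torch
import torch.nn as nn

class DistillingLayer(nn.Module):
    """Minimal sketch of the self-attention distilling step between encoder
    layers: Conv1d + BatchNorm + ELU + max-pool, roughly halving the length."""
    def __init__(self, d_model):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm1d(d_model)
        self.act = nn.ELU()
        self.pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)

    def forward(self, x):              # x: [batch, seq_len, d_model]
        x = x.transpose(1, 2)          # -> [batch, d_model, seq_len] for Conv1d
        x = self.pool(self.act(self.norm(self.conv(x))))
        return x.transpose(1, 2)       # -> [batch, ~seq_len/2, d_model]

# A 48-step encoder input shrinks to 24 steps after one distilling layer
x = torch.randn(8, 48, 512)
print(DistillingLayer(512)(x).shape)   # torch.Size([8, 24, 512])
```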

Summary: the Informer model successfully improves prediction capability on the LSTF problem and validates the potential value of Transformer-like models for capturing individual long-range dependencies between the outputs and inputs of long time series.

1. It proposes the ProbSparse self-attention mechanism as an efficient replacement for canonical self-attention.

2. It proposes the self-attention distilling operation, which privileges the dominant attention scores across the J stacked layers and sharply reduces the total space complexity.

3. It proposes a generative-style decoder that obtains the long-sequence output in a single forward pass while avoiding the accumulation of errors during inference (see the decoder-input sketch right after this list).
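
To make contribution 3 concrete, here is a minimal sketch of how the generative decoder input is typically assembled: the last label_len known steps act as the start token, and the pred_len future positions are filled with zeros so the whole horizon is predicted in one forward pass. The function name and tensor shapes are illustrative assumptions; in this repo the equivalent logic lives inside Exp_Informer.

```python
import torch

def build_decoder_input(batch_y, label_len=32, pred_len=30):
    """Sketch of the generative-decoder input: known start token + zero padding."""
    # batch_y: [batch, label_len + pred_len, n_features] from the data loader
    start_token = batch_y[:, :label_len, :]
    zero_pad = torch.zeros_like(batch_y[:, -pred_len:, :])
    # Concatenate along the time axis: [batch, label_len + pred_len, n_features]
    return torch.cat([start_token, zero_pad], dim=1)
```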

Building on the above, the author has written original Informer-based time-series forecasting Python code. It is well organized, produces attractive plots, and is clearly commented, making it suitable for beginners to learn from or to adapt and improve for a publication.

Detailed code comments:

Attractive output plots:

Chinese translation of the original paper:

Part of the core code:

import argparse
import os
import torch

from exp.exp_informer import Exp_Informer

parser = argparse.ArgumentParser(description='[Informer] Long Sequences Forecasting')

# Model selection (the required flag was removed; defaults to the informer model)
parser.add_argument('--model', type=str, default='informer',help='model of experiment, options: [informer, informerstack, informerlight(TBD)]')

# Dataset selection (the required flag was removed)
#parser.add_argument('--data', type=str, default='WTH', help='data')
parser.add_argument('--data', type=str, default='65', help='data')
# Parent directory of the data file
parser.add_argument('--root_path', type=str, default='./data/', help='root path of the data file')
# Data file name
#parser.add_argument('--data_path', type=str, default='WTH.csv', help='data file')
parser.add_argument('--data_path', type=str, default='65.csv', help='data file')
# Forecasting mode (M: multivariate→multivariate, S: univariate→univariate, MS: multivariate→univariate)
#parser.add_argument('--features', type=str, default='M', help='forecasting task, options:[M, S, MS]; M:multivariate predict multivariate, S:univariate predict univariate, MS:multivariate predict univariate')
parser.add_argument('--features', type=str, default='MS', help='forecasting task, options:[M, S, MS]; M:multivariate predict multivariate, S:univariate predict univariate, MS:multivariate predict univariate')
# Target column to forecast (for S or MS tasks)
#parser.add_argument('--target', type=str, default='OT', help='target feature in S or MS task')
parser.add_argument('--target', type=str, default='temp', help='target feature in S or MS task')
# Frequency for time-feature encoding (d: daily here)
#parser.add_argument('--freq', type=str, default='h', help='freq for time features encoding, options:[s:secondly, t:minutely, h:hourly, d:daily, b:business days, w:weekly, m:monthly], you can also use more detailed freq like 15min or 3h')
parser.add_argument('--freq', type=str, default='d', help='freq for time features encoding, options:[s:secondly, t:minutely, h:hourly, d:daily, b:business days, w:weekly, m:monthly], you can also use more detailed freq like 15min or 3h')
# Where to save model checkpoints
parser.add_argument('--checkpoints', type=str, default='./checkpoints/', help='location of model checkpoints')

# Input sequence length of the Informer encoder
#parser.add_argument('--seq_len', type=int, default=96, help='input sequence length of Informer encoder')
parser.add_argument('--seq_len', type=int, default=48, help='input sequence length of Informer encoder')
# Start-token length for the decoder (known history preceding the forecast)
#parser.add_argument('--label_len', type=int, default=48, help='start token length of Informer decoder')
parser.add_argument('--label_len', type=int, default=32, help='start token length of Informer decoder')
# Prediction sequence length (forecast horizon)
#parser.add_argument('--pred_len', type=int, default=24, help='prediction sequence length')
parser.add_argument('--pred_len', type=int, default=30, help='prediction sequence length')
# Informer decoder input: concat[start token series(label_len), zero padding series(pred_len)]

# Encoder input size: set the default to the number of feature columns
parser.add_argument('--enc_in', type=int, default=7, help='encoder input size')
# Decoder input size: same as the encoder
parser.add_argument('--dec_in', type=int, default=7, help='decoder input size')
parser.add_argument('--c_out', type=int, default=7, help='output size')

# Model width (embedding dimension d_model)
parser.add_argument('--d_model', type=int, default=512, help='dimension of model')
# Number of attention heads
parser.add_argument('--n_heads', type=int, default=8, help='num of heads')
# Number of encoder layers
parser.add_argument('--e_layers', type=int, default=2, help='num of encoder layers')
# Number of decoder layers
parser.add_argument('--d_layers', type=int, default=1, help='num of decoder layers')
# Encoder layer counts for the stacked model variant (informerstack)
parser.add_argument('--s_layers', type=str, default='3,2,1', help='num of stack encoder layers')
# Hidden size of the feed-forward network
parser.add_argument('--d_ff', type=int, default=2048, help='dimension of fcn')
# Sampling factor for ProbSparse attention
parser.add_argument('--factor', type=int, default=5, help='probsparse attn factor')
# Padding type
parser.add_argument('--padding', type=int, default=0, help='padding type')
# Whether to use self-attention distilling in the encoder (halves the sequence length between layers)
parser.add_argument('--distil', action='store_false', help='whether to use distilling in encoder, using this argument means not using distilling', default=True)
# Dropout rate (regularization)
parser.add_argument('--dropout', type=float, default=0.05, help='dropout')
# Attention type used in the encoder
parser.add_argument('--attn', type=str, default='prob', help='attention used in encoder, options:[prob, full]')
# Time-feature encoding method
parser.add_argument('--embed', type=str, default='timeF', help='time features encoding, options:[timeF, fixed, learned]')
# Activation function
parser.add_argument('--activation', type=str, default='gelu',help='activation')
# Whether to output attention weights from the encoder
parser.add_argument('--output_attention', action='store_true', help='whether to output attention in encoder')
# Whether to forecast unseen future data (store_false: prediction is enabled by default here)
#parser.add_argument('--do_predict', action='store_true', help='whether to predict unseen future data')
parser.add_argument('--do_predict', action='store_false', help='whether to predict unseen future data')
parser.add_argument('--mix', action='store_false', help='use mix attention in generative decoder', default=True)
# Optional: specific columns from the data file to use as input features
parser.add_argument('--cols', type=str, nargs='+', help='certain cols from the data files as the input features')
# DataLoader workers (use 0 on Windows to avoid errors)
parser.add_argument('--num_workers', type=int, default=0, help='data loader num workers')
# Number of times the whole experiment is repeated
parser.add_argument('--itr', type=int, default=2, help='experiments times')
# Number of training epochs
parser.add_argument('--train_epochs', type=int, default=6, help='train epochs')
# Mini-batch size
parser.add_argument('--batch_size', type=int, default=32, help='batch size of train input data')
# Early-stopping patience
parser.add_argument('--patience', type=int, default=3, help='early stopping patience')
# Learning rate
parser.add_argument('--learning_rate', type=float, default=0.0001, help='optimizer learning rate')
parser.add_argument('--des', type=str, default='test',help='exp description')
# Loss function
parser.add_argument('--loss', type=str, default='mse',help='loss function')
# Learning-rate decay schedule
parser.add_argument('--lradj', type=str, default='type1',help='adjust learning rate')
# Whether to use automatic mixed-precision training
parser.add_argument('--use_amp', action='store_true', help='use automatic mixed precision training', default=False)
# Whether to inverse-transform (de-normalize) the output (default is True here)
#parser.add_argument('--inverse', action='store_true', help='inverse output data', default=False)
parser.add_argument('--inverse', action='store_true', help='inverse output data', default=True)
# Whether to use GPU acceleration
parser.add_argument('--use_gpu', type=bool, default=True, help='use gpu')
parser.add_argument('--gpu', type=int, default=0, help='gpu')
# Whether to use multiple GPUs
parser.add_argument('--use_multi_gpu', action='store_true', help='use multiple gpus', default=False)
# Device ids for multi-GPU training
parser.add_argument('--devices', type=str, default='0,1,2,3',help='device ids of multiple gpus')

# Parse the arguments
args = parser.parse_args()
# Use the GPU only if one is actually available
args.use_gpu = True if torch.cuda.is_available() and args.use_gpu else False

if args.use_gpu and args.use_multi_gpu:
    args.devices = args.devices.replace(' ','')
    device_ids = args.devices.split(',')
    args.device_ids = [int(id_) for id_ in device_ids]
    args.gpu = args.device_ids[0]

# Dataset-specific parameters
data_parser = {'65':{'data':'65.csv','T':'temp','M':[7,7,7],'S':[1,1,1],'MS':[7,7,1]},
}

# data_parser = {
#     'ETTh1':{'data':'ETTh1.csv','T':'OT','M':[7,7,7],'S':[1,1,1],'MS':[7,7,1]},
#     'ETTh2':{'data':'ETTh2.csv','T':'OT','M':[7,7,7],'S':[1,1,1],'MS':[7,7,1]},
#     'ETTm1':{'data':'ETTm1.csv','T':'OT','M':[7,7,7],'S':[1,1,1],'MS':[7,7,1]},
#     'ETTm2':{'data':'ETTm2.csv','T':'OT','M':[7,7,7],'S':[1,1,1],'MS':[7,7,1]},
#     # data: file name, T: target column, M/S/MS: [enc_in, dec_in, c_out] (e.g. [12,12,12] to forecast 12 features),
#     'WTH':{'data':'WTH.csv','T':'WetBulbCelsius','M':[12,12,12],'S':[1,1,1],'MS':[12,12,1]},
#     'ECL':{'data':'ECL.csv','T':'MT_320','M':[321,321,321],'S':[1,1,1],'MS':[321,321,1]},
#     'Solar':{'data':'solar_AL.csv','T':'POWER_136','M':[137,137,137],'S':[1,1,1],'MS':[137,137,1]},
# }
# Pull the dataset-specific parameters
if args.data in data_parser.keys():
    data_info = data_parser[args.data]
    args.data_path = data_info['data']
    args.target = data_info['T']
    args.enc_in, args.dec_in, args.c_out = data_info[args.features]

# Parse the stacked-encoder layer counts
args.s_layers = [int(s_l) for s_l in args.s_layers.replace(' ','').split(',')]
args.detail_freq = args.freq
args.freq = args.freq[-1:]

# Print all arguments
print('Args in experiment:')
print(args)

Exp = Exp_Informer

for ii in range(args.itr):
    # setting record of experiments
    setting = '{}_{}_ft{}_sl{}_ll{}_pl{}_dm{}_nh{}_el{}_dl{}_df{}_at{}_fc{}_eb{}_dt{}_mx{}_{}_{}'.format(args.model, args.data, args.features, 
                args.seq_len, args.label_len, args.pred_len,
                args.d_model, args.n_heads, args.e_layers, args.d_layers, args.d_ff, args.attn, args.factor, 
                args.embed, args.distil, args.mix, args.des, ii)

    exp = Exp(args) # set experiments
    exp.train(setting)

    exp.test(setting)

    if args.do_predict:
        exp.predict(setting, True)

    torch.cuda.empty_cache()
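
After the loop above finishes, the test and prediction results are written to disk as NumPy arrays. Below is a minimal sketch for loading and plotting them; the ./results/<setting>/ directory and the pred.npy / true.npy file names are assumptions based on the public Informer2020 repository and may differ in this codebase.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumption: exp.test(setting) saved its arrays under ./results/<setting>/
# as pred.npy / true.npy (as in the public Informer2020 repo); adjust if needed.
result_dir = './results/{}/'.format(setting)
preds = np.load(result_dir + 'pred.npy')   # shape: [num_windows, pred_len, n_targets]
trues = np.load(result_dir + 'true.npy')

plt.figure(figsize=(10, 4))
plt.plot(trues[0, :, -1], label='ground truth')
plt.plot(preds[0, :, -1], label='prediction')
plt.legend()
plt.title('Forecast vs. ground truth for the first test window')
plt.show()
```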


# If you are interested, you are welcome to follow the author's WeChat official account 年轻的战场ssd for further learning and discussion.

### Transformer time-series forecasting example

For time-series forecasting, the Transformer's encoder-decoder architecture can effectively capture long-range temporal dependencies[^1]. Below is a simplified PyTorch Transformer for time-series forecasting:

```python
import torch
import torch.nn as nn


class TimeSeriesTransformer(nn.Module):
    def __init__(self, input_dim, model_dim, num_heads, num_encoder_layers,
                 num_decoder_layers, target_dim, dropout=0.1):
        super(TimeSeriesTransformer, self).__init__()
        self.model_type = "Time Series Transformer"
        self.input_linear = nn.Linear(input_dim, model_dim)
        self.transformer = nn.Transformer(d_model=model_dim, nhead=num_heads,
                                          num_encoder_layers=num_encoder_layers,
                                          num_decoder_layers=num_decoder_layers,
                                          dropout=dropout)
        self.output_linear = nn.Linear(model_dim, target_dim)

    def forward(self, src, tgt):
        """
        :param src: encoder input (time-series data), shape [src_len, batch_size, input_dim]
        :param tgt: decoder input (initial part of the series or dummy values), shape [tgt_len, batch_size, input_dim]
        :return: output with shape [tgt_len, batch_size, target_dim]
        """
        src = self.input_linear(src)  # project the source into d_model dimensions
        tgt = self.input_linear(tgt)  # same for the target
        transformer_output = self.transformer(src, tgt)
        prediction = self.output_linear(transformer_output)
        return prediction


# Example usage:
input_dim = 1  # univariate time-series forecasting
model_dim = 512
num_heads = 8
num_encoder_layers = 6
num_decoder_layers = 6
target_dim = 1

device = 'cuda' if torch.cuda.is_available() else 'cpu'
transformer_model = TimeSeriesTransformer(
    input_dim=input_dim, model_dim=model_dim, num_heads=num_heads,
    num_encoder_layers=num_encoder_layers, num_decoder_layers=num_decoder_layers,
    target_dim=target_dim).to(device)

criterion = nn.MSELoss()
optimizer = torch.optim.AdamW(transformer_model.parameters(), lr=0.001, betas=(0.9, 0.98), eps=1e-9)
```

### Informer time-series forecasting example

Informer, as an improved variant, handles extremely long sequences better by reducing computational complexity and memory usage[^2]. The example below sets up a comparable long-horizon forecasting pipeline with the pytorch_forecasting library; note that it actually trains a TemporalFusionTransformer rather than an Informer model, so treat it as a template for the data preparation and training loop only.

```python
import pandas as pd
from pytorch_forecasting.models import TemporalFusionTransformer
from pytorch_forecasting.data import TimeSeriesDataSet
from pytorch_forecasting.metrics import QuantileLoss
from pytorch_lightning.callbacks import EarlyStopping
from pytorch_lightning import Trainer

data = pd.read_csv('your_time_series_data.csv')

max_prediction_length = 7
max_encoder_length = 30
training_cutoff = data["time_idx"].max() - max_prediction_length
context_length = max_encoder_length
prediction_length = max_prediction_length

training = TimeSeriesDataSet(
    data[lambda x: x.time_idx <= training_cutoff],
    time_idx="time_idx",
    target="value",                          # your value column name here
    group_ids=["series"],
    min_encoder_length=context_length // 2,  # keep the encoder length long enough
    max_encoder_length=context_length,
    min_prediction_length=1,
    max_prediction_length=prediction_length,
    static_categoricals=[],
    add_relative_time_idx=True,
    add_target_scales=True,
    time_varying_unknown_reals=["value"],    # your value column name here
    allow_missing_timesteps=True,
)
validation = TimeSeriesDataSet.from_dataset(training, data, predict=True, stop_randomization=True)

batch_size = 128
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=0)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size * 10, num_workers=0)

early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=10, verbose=False, mode="min")
trainer = Trainer(max_epochs=30, gpus=0, gradient_clip_val=0.1, limit_train_batches=30,
                  callbacks=[early_stop_callback])

tft = TemporalFusionTransformer.from_dataset(
    training,
    learning_rate=0.03,
    hidden_size=16,
    attention_head_size=1,
    dropout=0.1,
    hidden_continuous_size=8,
    output_size=7,
    loss=QuantileLoss(),
    log_interval=10,
    reduce_on_plateau_patience=4,
)

trainer.fit(
    tft,
    train_dataloaders=train_dataloader,
    val_dataloaders=val_dataloader,
)

best_model_path = trainer.checkpoint_callback.best_model_path
best_tft = TemporalFusionTransformer.load_from_checkpoint(best_model_path)
```