Hands-On 22: PyTorch Transformer for Time-Series Forecasting with Multivariate Input and Univariate Output (Complete Code and Data, Ready to Run)

Code demo video:

 

Complete code:

```python
# pip install openpyxl -i https://pypi.tuna.tsinghua.edu.cn/simple/
# pip install optuna -i https://pypi.tuna.tsinghua.edu.cn/simple/
import math
from datetime import datetime, timedelta

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
import optuna
from sklearn.model_selection import train_test_split

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as Data
```
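Whatever the downstream model, the first step of a multivariate-input, univariate-output pipeline is cutting the raw table into supervised samples: each input is a sliding window over all feature columns, and each label is the next value of the single target column. Here is a minimal sketch of that step, assuming a window length of 10 and that the target is the first column (both are assumptions for illustration, not the post's actual settings):

```python
import numpy as np
import torch

# Hedged sketch: sliding-window sample construction. The window length
# and the choice of target column are assumptions for illustration.
def make_windows(features: np.ndarray, target: np.ndarray, window: int = 10):
    """Return (N, window, n_features) inputs and (N, 1) next-step targets."""
    X, y = [], []
    for i in range(len(features) - window):
        X.append(features[i:i + window])  # all feature columns in the window
        y.append(target[i + window])      # next value of the target column
    X = torch.tensor(np.array(X), dtype=torch.float32)
    y = torch.tensor(np.array(y), dtype=torch.float32).unsqueeze(-1)
    return X, y

# Example: 200 time steps, 5 input variables; predict variable 0 one step ahead
data = np.random.rand(200, 5).astype(np.float32)
X, y = make_windows(data, data[:, 0], window=10)
print(X.shape, y.shape)  # torch.Size([190, 10, 5]) torch.Size([190, 1])
```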
Below is example code for a Transformer model for multivariate regression, implemented in PyTorch:

```python
import math

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Transformer encoder for multivariate regression
class Transformer(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_dim, num_layers, num_heads):
        super(Transformer, self).__init__()
        self.input_embedding = nn.Linear(input_dim, hidden_dim)
        self.positional_encoding = PositionalEncoding(hidden_dim)
        self.transformer_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden_dim, num_heads, batch_first=True),
            num_layers,
        )
        self.output_layer = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # x: (batch, seq_len, input_dim)
        x = self.input_embedding(x)             # (batch, seq_len, hidden_dim)
        x = self.positional_encoding(x)
        x = self.transformer_encoder(x)
        return self.output_layer(x[:, -1, :])   # regress from the last time step

# Sinusoidal positional encoding
class PositionalEncoding(nn.Module):
    def __init__(self, hidden_dim):
        super(PositionalEncoding, self).__init__()
        self.hidden_dim = hidden_dim

    def forward(self, x):
        # x: (batch, seq_len, hidden_dim)
        seq_len = x.size(1)
        position = torch.arange(0, seq_len, dtype=torch.float32,
                                device=x.device).unsqueeze(1)       # (seq_len, 1)
        div_term = torch.exp(
            torch.arange(0, self.hidden_dim, 2, dtype=torch.float32,
                         device=x.device)
            * -(math.log(10000.0) / self.hidden_dim)
        )
        pos_enc = torch.zeros(seq_len, self.hidden_dim, device=x.device)
        pos_enc[:, 0::2] = torch.sin(position * div_term)
        pos_enc[:, 1::2] = torch.cos(position * div_term)
        return x + pos_enc.unsqueeze(0)  # broadcast over the batch dimension

# Prepare toy training data: each sample is a length-1 sequence of 3 features
input_data = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
                          dtype=torch.float32).unsqueeze(1)  # (3, 1, 3)
target_data = torch.tensor([[10, 20], [30, 40], [50, 60]], dtype=torch.float32)
dataset = TensorDataset(input_data, target_data)
dataloader = DataLoader(dataset, batch_size=1)

# Model hyperparameters (hidden_dim must be divisible by num_heads)
input_dim = input_data.size(-1)
output_dim = target_data.size(1)
hidden_dim = 128
num_layers = 2
num_heads = 4

# Create model, optimizer, and loss
model = Transformer(input_dim, output_dim, hidden_dim, num_layers, num_heads)
optimizer = optim.Adam(model.parameters(), lr=0.001)
criterion = nn.MSELoss()

# Training loop
num_epochs = 100
for epoch in range(num_epochs):
    for batch_input, batch_target in dataloader:
        optimizer.zero_grad()
        output = model(batch_input)
        loss = criterion(output, batch_target)
        loss.backward()
        optimizer.step()
    print(f"Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}")

# Predict on new data
new_input = torch.tensor([[2, 3, 4]], dtype=torch.float32).unsqueeze(1)
model.eval()
with torch.no_grad():
    predicted_output = model(new_input)
print("Predicted Output:", predicted_output)
```

Note that the model and data above are for illustration only; you will need to adapt them to your actual problem. You can also add regularization, tune hyperparameters, and so on to improve the model's performance.
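The imports at the top of the post also pull in optuna, which is commonly used to tune exactly these hyperparameters. As a minimal sketch of how a search could be wired around the example above (the search space, trial budget, and use of the final training loss as the objective are all assumptions, not the author's actual setup):

```python
import optuna

# Hedged sketch: Optuna search over the Transformer hyperparameters above.
# Reuses Transformer, dataloader, input_dim, output_dim, optim, and nn
# from the preceding example.
def objective(trial):
    hidden_dim = trial.suggest_categorical("hidden_dim", [64, 128, 256])
    num_heads = trial.suggest_categorical("num_heads", [2, 4, 8])  # all divide hidden_dim
    num_layers = trial.suggest_int("num_layers", 1, 3)
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)

    model = Transformer(input_dim, output_dim, hidden_dim, num_layers, num_heads)
    optimizer = optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()

    for _ in range(20):  # short training budget per trial
        for batch_input, batch_target in dataloader:
            optimizer.zero_grad()
            loss = criterion(model(batch_input), batch_target)
            loss.backward()
            optimizer.step()
    return loss.item()  # ideally replace with a held-out validation loss

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print("Best params:", study.best_params)
```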
