LSTM-Based Remaining Useful Life Prediction (PyTorch Implementation)

First, we need to prepare the data. For remaining useful life (RUL) prediction we need historical data to train the model and held-out test data to verify its performance. Suppose we have a dataset containing readings from multiple sensors; we can cast it as a sequence prediction problem. Concretely, we use the sensor readings from a past window of time to predict the device's remaining useful life over a future window.

Assume the dataset contains N sequences, each made up of T time steps of sensor readings. For convenience, assume every sequence has the same length, and that the last time step of each sequence corresponds to the moment the device fails.
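To make the assumed input concrete, here is a minimal sketch that fabricates a `data.csv` in the long format the loading code below expects. The column names `sequence_id` and `sensor_reading` match that code, but the sizes N = 100 and T = 50 and the drifting signal are purely illustrative assumptions, not a real dataset:

```python
import numpy as np
import pandas as pd

# Hypothetical long-format layout: one row per (sequence, time step).
rng = np.random.default_rng(0)
records = []
for seq_id in range(100):                  # N = 100 sequences
    for t in range(50):                    # T = 50 time steps each
        # A noisy upward drift stands in for a real degradation signal
        records.append({'sequence_id': seq_id,
                        'sensor_reading': 0.1 * t + rng.normal(scale=0.5)})

df = pd.DataFrame(records)
df.to_csv('data.csv', index=False)
print(df.shape)  # (5000, 2)
```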

First, we load and preprocess the data. In this example we use the Pandas library for both steps.

import numpy as np
import pandas as pd
import torch

data = pd.read_csv('data.csv')

# For each sequence, use the reading at the last time step as the label
labels = data.groupby('sequence_id')['sensor_reading'].last().values

# Group the sensor readings by sequence (dropping the final, label step)
# and convert them to a PyTorch tensor
data = data.groupby('sequence_id')['sensor_reading'].apply(lambda x: x.values[:-1]).values
data = torch.tensor(np.stack(data), dtype=torch.float32)

# Convert the labels to a PyTorch tensor as well
labels = torch.tensor(labels, dtype=torch.float32)

Next, we split the data into a training set and a test set. The train_test_split function from the sklearn library handles this.

from sklearn.model_selection import train_test_split

train_data, test_data, train_labels, test_labels = train_test_split(data, labels, test_size=0.2, random_state=42)

Next, we define the LSTM model using the LSTM class from PyTorch. At each time step the model takes the sensor reading as input; the LSTM's output at the final time step is passed through a fully connected layer to produce the final RUL prediction.

import torch
import torch.nn as nn

class LSTMModel(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(LSTMModel, self).__init__()
        self.hidden_size = hidden_size
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        h0 = torch.zeros(1, x.size(0), self.hidden_size).to(x.device)
        c0 = torch.zeros(1, x.size(0), self.hidden_size).to(x.device)

        out, _ = self.lstm(x, (h0, c0))

        out = self.fc(out[:, -1, :])

        return out
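The tensor shapes flowing through forward can be checked in isolation. This sketch uses nn.LSTM directly with the same batch_first=True convention as the model above; the batch size 8 and sequence length 49 are arbitrary illustrative values:

```python
import torch
import torch.nn as nn

# With batch_first=True, nn.LSTM expects input of shape
# (batch, seq_len, input_size) and returns output of shape
# (batch, seq_len, hidden_size); out[:, -1, :] keeps only the
# last time step, which the linear layer maps to a single value.
lstm = nn.LSTM(input_size=1, hidden_size=64, batch_first=True)
fc = nn.Linear(64, 1)

x = torch.randn(8, 49, 1)          # 8 sequences, 49 steps, 1 feature
out, (h_n, c_n) = lstm(x)
pred = fc(out[:, -1, :])
print(out.shape, pred.shape)       # torch.Size([8, 49, 64]) torch.Size([8, 1])
```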

Next, we define some hyperparameters, including the batch size, learning rate, size of the hidden state, and so on.

batch_size = 64
num_epochs = 100
learning_rate = 0.001
input_size = 1
hidden_size = 64
output_size = 1

Next, we batch the data and load it into PyTorch DataLoaders. This step matters: it lets us shuffle the training data each epoch, and processing the data one mini-batch at a time makes training more efficient.

from torch.utils.data import TensorDataset, DataLoader

train_dataset = TensorDataset(train_data, train_labels)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

test_dataset = TensorDataset(test_data, test_labels)
# Do not shuffle the test set, so predictions stay aligned with test_labels
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
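Before training, it can help to pull one batch and confirm the shapes match what the model expects. The tensors below are random stand-ins with hypothetical sizes (80 sequences of 49 steps each), not the real dataset:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical stand-ins for train_data / train_labels: 80 sequences,
# 49 time steps each (the 50th step was held out as the label)
train_data = torch.randn(80, 49)
train_labels = torch.randn(80)

loader = DataLoader(TensorDataset(train_data, train_labels),
                    batch_size=64, shuffle=True)

inputs, labels = next(iter(loader))
# unsqueeze(-1) adds the feature dimension the LSTM expects
print(inputs.unsqueeze(-1).shape, labels.unsqueeze(-1).shape)
# torch.Size([64, 49, 1]) torch.Size([64, 1])
```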

Next, we define the loss function and the optimizer. Because the optimizer needs the model's parameters, we instantiate the model first and move it to the available device.

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = LSTMModel(input_size, hidden_size, output_size).to(device)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

Next, we define the training loop. In each epoch we update the model's parameters to minimize the loss. At the end of each epoch we compute the training and test losses and print them to track how they evolve.

for epoch in range(num_epochs):
    train_loss = 0.0
    test_loss = 0.0

    model.train()

    for i, (inputs, labels) in enumerate(train_loader):
        inputs = inputs.float().unsqueeze(-1).to(device)
        labels = labels.float().unsqueeze(-1).to(device)

        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs, labels)

        loss.backward()
        optimizer.step()

        train_loss += loss.item()

    model.eval()

    with torch.no_grad():
        for i, (inputs, labels) in enumerate(test_loader):
            inputs = inputs.float().unsqueeze(-1).to(device)
            labels = labels.float().unsqueeze(-1).to(device)

            outputs = model(inputs)
            loss = criterion(outputs, labels)

            test_loss += loss.item()

    train_loss /= len(train_loader)
    test_loss /= len(test_loader)

    print('Epoch [{}/{}], Train Loss: {:.4f}, Test Loss: {:.4f}'.format(epoch+1, num_epochs, train_loss, test_loss))

Finally, we use the trained model to predict on the test set and compute the mean absolute error (MAE) and root mean squared error (RMSE) to evaluate its performance.

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

predictions = []

model.eval()

with torch.no_grad():
    for inputs, labels in test_loader:
        inputs = inputs.float().unsqueeze(-1).to(device)
        labels = labels.float().unsqueeze(-1).to(device)

        outputs = model(inputs)

        predictions.append(outputs.cpu().numpy())

predictions = np.concatenate(predictions, axis=0)

mae = mean_absolute_error(test_labels.numpy(), predictions)
rmse = np.sqrt(mean_squared_error(test_labels.numpy(), predictions))
print('MAE: {:.4f}, RMSE: {:.4f}'.format(mae, rmse))
As additional references, here are two example implementations of LSTM-based bearing life prediction:

1. LSTM bearing life prediction with Keras

```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler

# Load the dataset
dataset = pd.read_csv('bearing.csv', header=None)
data = dataset.iloc[:, 1:2].values

# Feature scaling
scaler = MinMaxScaler(feature_range=(0, 1))
data = scaler.fit_transform(data)

# Split the data into train and test sets
train_size = int(len(data) * 0.8)
train_data, test_data = data[0:train_size, :], data[train_size:len(data), :]

# Convert the input sequence into a matrix
def create_dataset(dataset, look_back=1):
    X, Y = [], []
    for i in range(len(dataset) - look_back - 1):
        a = dataset[i:(i + look_back), 0]
        X.append(a)
        Y.append(dataset[i + look_back, 0])
    return np.array(X), np.array(Y)

look_back = 25  # Number of previous time steps to use as input features
trainX, trainY = create_dataset(train_data, look_back)
testX, testY = create_dataset(test_data, look_back)

# Reshape the input to be [samples, time steps, features]
trainX = np.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = np.reshape(testX, (testX.shape[0], testX.shape[1], 1))

# Build the LSTM model
model = Sequential()
model.add(LSTM(units=50, input_shape=(look_back, 1)))
model.add(Dense(units=1))
model.compile(optimizer='adam', loss='mean_squared_error')

# Train the model
model.fit(trainX, trainY, epochs=100, batch_size=32)

# Make predictions on the test data
predictions = model.predict(testX)

# Plot the results
plt.plot(scaler.inverse_transform(testY.reshape(-1, 1)), label='Actual')
plt.plot(scaler.inverse_transform(predictions), label='Predicted')
plt.legend()
plt.show()
```

2. LSTM bearing life prediction with PyTorch

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from sklearn.preprocessing import MinMaxScaler

# Load the dataset
dataset = pd.read_csv('bearing.csv', header=None)
data = dataset.iloc[:, 1:2].values

# Feature scaling
scaler = MinMaxScaler(feature_range=(0, 1))
data = scaler.fit_transform(data)

# Split the data into train and test sets
train_size = int(len(data) * 0.8)
train_data, test_data = data[0:train_size, :], data[train_size:len(data), :]

# Convert the input sequence into tensors of shape (samples, look_back, 1)
def create_dataset(dataset, look_back=1):
    X, Y = [], []
    for i in range(len(dataset) - look_back):
        X.append(dataset[i:(i + look_back)])
        Y.append(dataset[i + look_back])
    return (torch.tensor(np.array(X), dtype=torch.float32),
            torch.tensor(np.array(Y), dtype=torch.float32))

look_back = 25  # Number of previous time steps to use as input features
trainX, trainY = create_dataset(train_data, look_back)
testX, testY = create_dataset(test_data, look_back)

# Build the LSTM model
class LSTM(nn.Module):
    def __init__(self, input_size=1, hidden_size=50, output_size=1):
        super().__init__()
        self.hidden_size = hidden_size
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.linear = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        lstm_out, _ = self.lstm(x)               # (batch, look_back, hidden_size)
        return self.linear(lstm_out[:, -1, :])   # predict from the last time step

model = LSTM()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Train the model
for epoch in range(500):
    optimizer.zero_grad()
    loss = criterion(model(trainX), trainY)
    loss.backward()
    optimizer.step()

# Make predictions on the test data
with torch.no_grad():
    predictions = model(testX).numpy()

# Plot the results
plt.plot(scaler.inverse_transform(testY.numpy()), label='Actual')
plt.plot(scaler.inverse_transform(predictions), label='Predicted')
plt.legend()
plt.show()
```