Week R2: LSTM Fire-Temperature Prediction

Task description: the dataset records how fire temperature (Tem1), carbon monoxide concentration (CO 1), and soot concentration (Soot 1) change over time. The goal is to predict the fire temperature at a future time step from these data (this task is for learning purposes only).

🍺 Requirements:
Understand what an LSTM is and use it to build a complete program
Reach an R2 score of 0.83 (done)

🍻 Stretch goal:
Use the data from time steps 1-8 to predict the temperature at time steps 9-10 (done)

LSTM in one sentence: it is an upgraded RNN. If the most an RNN can understand is a sentence, the most an LSTM can understand is a paragraph. A more detailed introduction follows:

LSTM stands for Long Short-Term Memory network, a special kind of RNN that can learn long-term dependencies. It was introduced by Hochreiter & Schmidhuber (1997), and a long line of follow-up work has refined and popularized it. LSTMs perform very well on many problems and are now widely used.
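For reference, here is a standard formulation of the LSTM cell (the usual textbook notation, not something defined in the original post): given input $x_t$, previous hidden state $h_{t-1}$, and previous cell state $C_{t-1}$,

$$
\begin{aligned}
f_t &= \sigma(W_f\,[h_{t-1}, x_t] + b_f) &&\text{forget gate}\\
i_t &= \sigma(W_i\,[h_{t-1}, x_t] + b_i) &&\text{input gate}\\
\tilde{C}_t &= \tanh(W_C\,[h_{t-1}, x_t] + b_C) &&\text{candidate cell state}\\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t &&\text{cell state update}\\
o_t &= \sigma(W_o\,[h_{t-1}, x_t] + b_o) &&\text{output gate}\\
h_t &= o_t \odot \tanh(C_t) &&\text{hidden state}
\end{aligned}
$$

The forget and input gates decide what to discard from and add to the cell state $C_t$, which is what lets the network carry information across long spans.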

I. Preparation

1. Importing the data

import tensorflow as tf
import pandas     as pd
import numpy      as np

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_memory_growth(gpus[0], True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpus[0]],"GPU")
print(gpus)

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
df_1 = pd.read_csv("D:/woodpine2.csv")
df_1.head()
    Time  Tem1  CO 1  Soot 1
0  0.000  25.0   0.0     0.0
1  0.228  25.0   0.0     0.0
2  0.456  25.0   0.0     0.0
3  0.685  25.0   0.0     0.0
4  0.913  25.0   0.0     0.0

2. Data visualization

import matplotlib.pyplot as plt
import seaborn as sns

plt.rcParams['savefig.dpi'] = 500  # DPI of saved figures
plt.rcParams['figure.dpi']  = 500  # display resolution

fig, ax = plt.subplots(1, 3, constrained_layout=True, figsize=(14, 3))

sns.lineplot(data=df_1["Tem1"], ax=ax[0])
sns.lineplot(data=df_1["CO 1"], ax=ax[1])
sns.lineplot(data=df_1["Soot 1"], ax=ax[2])
plt.show()

[Figure: line plots of Tem1, CO 1, and Soot 1 over time (output_9_0.png)]

II. Building the Dataset

dataFrame = df_1.iloc[:, 1:]  # keep only Tem1, CO 1, Soot 1 (drop the Time column)
dataFrame.head()
   Tem1  CO 1  Soot 1
0  25.0   0.0     0.0
1  25.0   0.0     0.0
2  25.0   0.0     0.0
3  25.0   0.0     0.0
4  25.0   0.0     0.0

1. Setting up X and y

width_X=8
width_y=2

Take Tem1, CO 1, and Soot 1 from the first 8 time steps as X, and Tem1 from time steps 9 and 10 as y.

X = []
y = []

# Slide a window along the series: 8 input steps -> 2 output steps
for in_start in range(len(dataFrame)):
    in_end  = in_start + width_X
    out_end = in_end   + width_y

    if out_end < len(dataFrame):
        # 8 time steps x 3 features, flattened into 24 values
        X_ = dataFrame.iloc[in_start:in_end].to_numpy().reshape(width_X * 3)
        # Tem1 (column 0) at time steps 9 and 10
        y_ = dataFrame.iloc[in_end:out_end, 0].to_numpy()

        X.append(X_)
        y.append(y_)

X = np.array(X)
y = np.array(y)

X.shape, y.shape
((5938, 24), (5938, 2))
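As a cross-check, the same windows can be built without a Python loop. This is a minimal sketch using NumPy's sliding_window_view (requires NumPy >= 1.20); it is not part of the original notebook, and the names X_alt / y_alt are introduced here for illustration only:

from numpy.lib.stride_tricks import sliding_window_view

data = dataFrame.to_numpy()                                  # (N, 3)
win  = sliding_window_view(data, width_X + width_y, axis=0)  # (N-9, 3, 10)
win  = win.transpose(0, 2, 1)[:-1]   # (N-10, 10, 3); the strict '<' above drops the final window
X_alt = win[:, :width_X, :].reshape(len(win), width_X * 3)   # (5938, 24)
y_alt = win[:, width_X:, 0]                                  # (5938, 2)
assert np.allclose(X_alt, X) and np.allclose(y_alt, y)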

2. Normalization

from sklearn.preprocessing import MinMaxScaler

# Scale the data to the range [0, 1]
sc       = MinMaxScaler(feature_range=(0, 1))
X_scaled = sc.fit_transform(X)
X_scaled.shape
(5938, 24)
X_scaled = X_scaled.reshape(len(X_scaled), width_X, 3)  # back to (samples, time steps, features) for the LSTM
X_scaled.shape
(5938, 8, 3)

3. Splitting the dataset

Use the first 5000 samples as the training set and the remainder as the validation set.

X_train = X_scaled[:5000]
y_train = y[:5000]

X_test = X_scaled[5000:]
y_test = y[5000:]

X_train.shape
(5000, 8, 3)
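Note that the scaler above was fit on all 5938 samples before this split, so information from the validation rows leaks into the scaling. A leakage-free variant (a sketch of common practice, not what this notebook does; sc_safe is a hypothetical name) would fit on the training rows only:

# Hypothetical leakage-free variant: fit the scaler on the training portion only
sc_safe     = MinMaxScaler(feature_range=(0, 1))
X_tr_scaled = sc_safe.fit_transform(X[:5000]).reshape(-1, width_X, 3)  # fit + transform on train
X_te_scaled = sc_safe.transform(X[5000:]).reshape(-1, width_X, 3)      # transform only on validation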

III. Building the Model

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,LSTM,Bidirectional
from tensorflow.keras        import Input

model_lstm = Sequential()
model_lstm.add(LSTM(units=64, activation='relu', return_sequences=True,
               input_shape=(X_train.shape[1], 3)))
model_lstm.add(LSTM(units=64, activation='relu'))

model_lstm.add(Dense(width_y))
WARNING:tensorflow:Layer lstm_8 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer lstm_9 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
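These warnings appear because activation='relu' departs from the defaults (tanh activation, sigmoid recurrent activation) that TensorFlow requires before it will dispatch to the fused cuDNN kernel; training still works, just on a slower generic GPU kernel. A minimal sketch of a cuDNN-eligible variant with default activations (an alternative for speed, not the model trained below):

# Sketch: same stack with default activations, eligible for the fast cuDNN kernel on GPU
model_fast = Sequential([
    LSTM(64, return_sequences=True, input_shape=(X_train.shape[1], 3)),
    LSTM(64),
    Dense(width_y),
])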

IV. Model Training

1. Compiling the model

# We only monitor the loss value, not accuracy, so the metrics option is omitted
model_lstm.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss='mean_squared_error')

from tensorflow.keras.callbacks import ModelCheckpoint

ModelCheckPointer = ModelCheckpoint('best_model.h5',
                                    monitor='val_loss',
                                    save_best_only=True,     # keep only the best checkpoint
                                    save_weights_only=True)  # save weights, not the full model
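Optionally, an EarlyStopping callback could be paired with the checkpoint to halt training once val_loss stops improving. This is a sketch of a common addition, not something used in the run below (it would be passed as callbacks=[ModelCheckPointer, early_stop]):

from tensorflow.keras.callbacks import EarlyStopping

# Optional: stop after 10 epochs without val_loss improvement and restore the best weights
early_stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)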
X_train.shape,y_train.shape
((5000, 8, 3), (5000, 2))
history_lstm=model_lstm.fit(X_train,y_train,
                            batch_size=64,
                            epochs=50,
                            validation_data=(X_test,y_test),
                            validation_freq=1,
                            callbacks=[ModelCheckPointer])
Epoch 1/50
79/79 [==============================] - 1s 16ms/step - loss: 21.1715 - val_loss: 52.4376
Epoch 2/50
79/79 [==============================] - 1s 13ms/step - loss: 5.5628 - val_loss: 46.9862
Epoch 3/50
79/79 [==============================] - 1s 13ms/step - loss: 6.2596 - val_loss: 55.9755
Epoch 4/50
79/79 [==============================] - 1s 12ms/step - loss: 6.1024 - val_loss: 49.6590
Epoch 5/50
79/79 [==============================] - 1s 12ms/step - loss: 6.3087 - val_loss: 46.9177
Epoch 6/50
79/79 [==============================] - 1s 12ms/step - loss: 5.9893 - val_loss: 53.0820
Epoch 7/50
79/79 [==============================] - 1s 13ms/step - loss: 7.2166 - val_loss: 46.6139
Epoch 8/50
79/79 [==============================] - 1s 12ms/step - loss: 8.3203 - val_loss: 49.4651
Epoch 9/50
79/79 [==============================] - 1s 12ms/step - loss: 7.0174 - val_loss: 85.3318
Epoch 10/50
79/79 [==============================] - 1s 12ms/step - loss: 8.1098 - val_loss: 84.6306
Epoch 11/50
79/79 [==============================] - 1s 12ms/step - loss: 6.8533 - val_loss: 50.6454
Epoch 12/50
79/79 [==============================] - 1s 12ms/step - loss: 7.5618 - val_loss: 46.3002
Epoch 13/50
79/79 [==============================] - 1s 12ms/step - loss: 6.3271 - val_loss: 49.1259
Epoch 14/50
79/79 [==============================] - 1s 12ms/step - loss: 7.2708 - val_loss: 46.7672
Epoch 15/50
79/79 [==============================] - 1s 13ms/step - loss: 9.0250 - val_loss: 45.6259
Epoch 16/50
79/79 [==============================] - 1s 13ms/step - loss: 7.4962 - val_loss: 47.3104
Epoch 17/50
79/79 [==============================] - 1s 13ms/step - loss: 6.3288 - val_loss: 45.8246
Epoch 18/50
79/79 [==============================] - 1s 12ms/step - loss: 6.3963 - val_loss: 46.1642
Epoch 19/50
79/79 [==============================] - 1s 13ms/step - loss: 8.0644 - val_loss: 45.5498
Epoch 20/50
79/79 [==============================] - 1s 13ms/step - loss: 7.6460 - val_loss: 52.4149
Epoch 21/50
79/79 [==============================] - 1s 13ms/step - loss: 7.9108 - val_loss: 45.2785
Epoch 22/50
79/79 [==============================] - 1s 13ms/step - loss: 7.2452 - val_loss: 48.1493
Epoch 23/50
79/79 [==============================] - 1s 13ms/step - loss: 6.8818 - val_loss: 44.7868
Epoch 24/50
79/79 [==============================] - 1s 13ms/step - loss: 8.7648 - val_loss: 45.3644
Epoch 25/50
79/79 [==============================] - 1s 13ms/step - loss: 7.2923 - val_loss: 113.3569
Epoch 26/50
79/79 [==============================] - 1s 13ms/step - loss: 7.2891 - val_loss: 48.8831
Epoch 27/50
79/79 [==============================] - 1s 13ms/step - loss: 9.0204 - val_loss: 54.1173
Epoch 28/50
79/79 [==============================] - 1s 13ms/step - loss: 6.2687 - val_loss: 57.7011
Epoch 29/50
79/79 [==============================] - 1s 14ms/step - loss: 9.3178 - val_loss: 56.6076
Epoch 30/50
79/79 [==============================] - 1s 14ms/step - loss: 7.5619 - val_loss: 45.1185
Epoch 31/50
79/79 [==============================] - 1s 13ms/step - loss: 7.6877 - val_loss: 44.9177
Epoch 32/50
79/79 [==============================] - 1s 13ms/step - loss: 7.7795 - val_loss: 55.4915
Epoch 33/50
79/79 [==============================] - 1s 13ms/step - loss: 6.1562 - val_loss: 55.6177
Epoch 34/50
79/79 [==============================] - 1s 14ms/step - loss: 6.5898 - val_loss: 87.1372
Epoch 35/50
79/79 [==============================] - 1s 14ms/step - loss: 10.0048 - val_loss: 47.4471
Epoch 36/50
79/79 [==============================] - 1s 14ms/step - loss: 6.9753 - val_loss: 45.5101
Epoch 37/50
79/79 [==============================] - 1s 14ms/step - loss: 6.5068 - val_loss: 45.6193
Epoch 38/50
79/79 [==============================] - 1s 14ms/step - loss: 6.8584 - val_loss: 103.6446
Epoch 39/50
79/79 [==============================] - 1s 14ms/step - loss: 6.6473 - val_loss: 44.0263
Epoch 40/50
79/79 [==============================] - 1s 14ms/step - loss: 5.8646 - val_loss: 51.5055
Epoch 41/50
79/79 [==============================] - 1s 14ms/step - loss: 6.6104 - val_loss: 45.9765
Epoch 42/50
79/79 [==============================] - 1s 14ms/step - loss: 7.4796 - val_loss: 50.2358
Epoch 43/50
79/79 [==============================] - 1s 14ms/step - loss: 6.3155 - val_loss: 61.4754
Epoch 44/50
79/79 [==============================] - 1s 14ms/step - loss: 6.8296 - val_loss: 51.9304
Epoch 45/50
79/79 [==============================] - 1s 14ms/step - loss: 7.3016 - val_loss: 45.8511
Epoch 46/50
79/79 [==============================] - 1s 14ms/step - loss: 6.7835 - val_loss: 45.0044
Epoch 47/50
79/79 [==============================] - 1s 14ms/step - loss: 6.6329 - val_loss: 43.3930
Epoch 48/50
79/79 [==============================] - 1s 14ms/step - loss: 6.3484 - val_loss: 50.0709
Epoch 49/50
79/79 [==============================] - 1s 14ms/step - loss: 6.2621 - val_loss: 52.4607
Epoch 50/50
79/79 [==============================] - 1s 18ms/step - loss: 6.8254 - val_loss: 52.6669

V. Evaluation

1. Loss curves

# Matplotlib font settings from the original run: SimHei for CJK glyphs,
# and keep minus signs rendering correctly
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False

plt.figure(figsize=(5, 3),dpi=120)

plt.plot(history_lstm.history['loss']    , label='LSTM Training Loss')
plt.plot(history_lstm.history['val_loss'], label='LSTM Validation Loss')

plt.title('Training and Validation Loss')
plt.legend()
plt.show()

[Figure: LSTM training and validation loss curves (output_31_0.png)]

2. Predicting with the saved best model

model_lstm.load_weights('best_model.h5')
predicted_y_lstm = model_lstm.predict(X_test)

y_test_one = [i[0] for i in y_test]
predicted_y_lstm_one = [i[0] for i in predicted_y_lstm]

y_test_two = [i[1] for i in y_test]
predicted_y_lstm_two = [i[1] for i in predicted_y_lstm]


fig, ax = plt.subplots(1, 2, constrained_layout=True, figsize=(14, 3))
# Ground truth vs. prediction for time step 9
ax[0].plot(y_test_one[:1000], color='red', label='ground truth')
ax[0].plot(predicted_y_lstm_one[:1000], color='blue', label='prediction')

# Ground truth vs. prediction for time step 10
ax[1].plot(y_test_two[:1000], color='red', label='ground truth')
ax[1].plot(predicted_y_lstm_two[:1000], color='blue', label='prediction')

ax[0].set(xlabel='X', ylabel='Y', title='Time step 9')
ax[1].set(xlabel='X', ylabel='Y', title='Time step 10')

[Text(0.5, 0, 'X'), Text(0, 0.5, 'Y'), Text(0.5, 1.0, 'Time step 10')]

[Figure: ground truth vs. predicted temperature for time steps 9 and 10 (output_34_1.png)]

from sklearn import metrics
"""
RMSE : root mean squared error  ----->  the square root of the MSE
R2   : coefficient of determination; loosely, a statistic reflecting how well the model fits
"""
RMSE_lstm = metrics.mean_squared_error(y_test, predicted_y_lstm) ** 0.5
R2_lstm   = metrics.r2_score(y_test, predicted_y_lstm)  # note: the signature is (y_true, y_pred)

print('RMSE: %.5f' % RMSE_lstm)
print('R2: %.5f' % R2_lstm)
RMSE: 6.58734
R2: 0.85471
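Since y contains two output steps, it can also be informative to score each prediction horizon separately; a short sketch reusing the arrays above:

# Score time steps 9 and 10 separately
for step in range(width_y):
    rmse = metrics.mean_squared_error(y_test[:, step], predicted_y_lstm[:, step]) ** 0.5
    r2   = metrics.r2_score(y_test[:, step], predicted_y_lstm[:, step])
    print(f'step {9 + step}: RMSE = {rmse:.5f}, R2 = {r2:.5f}')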
