Visualizing training and validation error in Keras during training

Original article: https://machinelearningmastery.com/display-deep-learning-model-training-history-in-keras/

For the detailed explanation, open the link above; here I only summarize the most important points.

The fit() method returns a History object that records per-epoch metrics from training: training error, training accuracy, validation error, and validation accuracy. You can print the available keys like this:

# list all data in history
print(history.history.keys())
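`history.history` itself is just a plain dict mapping each metric name to a per-epoch list. In recent tf.keras versions the keys are `loss`, `accuracy`, `val_loss`, and `val_accuracy` (older releases used `acc`/`val_acc`). A minimal stand-in sketch of that structure, with made-up values and no training involved:

```python
# Stand-in for history.history: a plain dict of per-epoch metric lists.
# (Key names follow recent tf.keras; older versions used 'acc'/'val_acc'.
# The numbers here are hypothetical.)
history_dict = {
    "loss":         [0.69, 0.61, 0.55],
    "accuracy":     [0.52, 0.64, 0.71],
    "val_loss":     [0.70, 0.63, 0.58],
    "val_accuracy": [0.50, 0.61, 0.68],
}

print(list(history_dict.keys()))
# → ['loss', 'accuracy', 'val_loss', 'val_accuracy']

# every list has one entry per training epoch
assert all(len(v) == 3 for v in history_dict.values())
```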

Full code:

# Visualize training history
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt
import numpy

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model
history = model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10, verbose=0)

# list all data in history
print(history.history.keys())

# summarize history for accuracy
# (recent Keras records 'accuracy'/'val_accuracy'; older versions used 'acc'/'val_acc')
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()

# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
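Beyond plotting, the same history dict lets you read off the best epoch programmatically, e.g. the one with the lowest validation loss. A minimal sketch using a stand-in dict with hypothetical values (with a real model you would use `history.history` from `model.fit()` instead):

```python
# Stand-in history dict (hypothetical values), shaped like history.history.
hist = {
    "val_loss":     [0.70, 0.58, 0.55, 0.57, 0.61],
    "val_accuracy": [0.50, 0.66, 0.70, 0.69, 0.65],
}

# epoch index (0-based) with the lowest validation loss
best_epoch = min(range(len(hist["val_loss"])), key=hist["val_loss"].__getitem__)
print(best_epoch, hist["val_accuracy"][best_epoch])  # → 2 0.7
```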

[Figure: training vs. validation accuracy and loss curves over epochs]

Below is a Python implementation of multi-step prediction with error analysis and visualization.

First, import the required libraries:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM, Dropout
```

Next, prepare the data. Taking a time series as an example, read it in and preprocess it:

```python
# read the data
data = pd.read_csv('data.csv', index_col=0, parse_dates=True)
# sort by time
data = data.sort_index()
# scale the data to the 0-1 range
scaler = MinMaxScaler()
data_scaled = scaler.fit_transform(data)
```

Then split the dataset into training and test sets:

```python
# train/test split
train_size = int(len(data_scaled) * 0.8)
test_size = len(data_scaled) - train_size
train_data = data_scaled[:train_size, :]
test_data = data_scaled[train_size:, :]
```

Next, build the supervised training and test datasets, here using 10 time steps as input and 1 time step as output:

```python
# build supervised datasets with a sliding window
def create_dataset(dataset, lookback, lookahead):
    X, Y = [], []
    for i in range(len(dataset) - lookback - lookahead):
        X.append(dataset[i:(i + lookback), :])
        Y.append(dataset[(i + lookback):(i + lookback + lookahead), 0])
    return np.array(X), np.array(Y)

lookback = 10
lookahead = 1
X_train, Y_train = create_dataset(train_data, lookback, lookahead)
X_test, Y_test = create_dataset(test_data, lookback, lookahead)
```

Then build the LSTM model and train it:

```python
# build the LSTM model
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1], X_train.shape[2])))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50))
model.add(Dropout(0.2))
model.add(Dense(units=lookahead))

# compile the model
model.compile(optimizer='adam', loss='mean_squared_error')

# train the model
history = model.fit(X_train, Y_train, epochs=100, batch_size=32,
                    validation_data=(X_test, Y_test), shuffle=False)
```

Next, use the trained model for multi-step prediction. Note that this recursive scheme feeds each prediction back in as the only input feature, so it assumes a univariate series:

```python
# recursive multi-step prediction (assumes a univariate series)
def multi_step_predict(model, X_test, lookahead):
    predictions = []
    for i in range(len(X_test)):
        input_data = X_test[i]
        for j in range(lookahead):
            prediction = model.predict(input_data.reshape(1, lookback, -1))[0][0]
            predictions.append(prediction)
            # drop the oldest step and append the new prediction
            input_data = np.concatenate((input_data[1:], prediction.reshape(1, -1)), axis=0)
    return np.array(predictions)

predictions = multi_step_predict(model, X_test, lookahead)
```

Finally, we can do the error analysis and visualization:

```python
# error analysis: RMSE on the test set
rmse = np.sqrt(np.mean((predictions - Y_test.reshape(-1)) ** 2))
print('RMSE:', rmse)

# visualization: true vs. predicted values
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(Y_test, label='True Value')
ax.plot(predictions, label='Predicted Value')
ax.set_xlabel('Time')
ax.set_ylabel('Value')
ax.legend()
plt.show()
```

That is the complete code for multi-step prediction, error analysis, and visualization in Python.
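The sliding-window construction in create_dataset above can be sanity-checked on a tiny array without any model or data file. This sketch applies the same function to a toy univariate series and verifies the resulting shapes and values:

```python
import numpy as np

def create_dataset(dataset, lookback, lookahead):
    # same sliding-window construction as in the text
    X, Y = [], []
    for i in range(len(dataset) - lookback - lookahead):
        X.append(dataset[i:(i + lookback), :])
        Y.append(dataset[(i + lookback):(i + lookback + lookahead), 0])
    return np.array(X), np.array(Y)

# toy univariate series: 0, 1, ..., 19 as a (20, 1) column
series = np.arange(20, dtype=float).reshape(-1, 1)
X, Y = create_dataset(series, lookback=3, lookahead=1)

print(X.shape, Y.shape)  # → (16, 3, 1) (16, 1)
# the first window is [0, 1, 2] and its target is the next value, 3
print(X[0].ravel(), Y[0])
```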
