keras: Train, Evaluate & Predict a Model


model.compile()

model.compile(
    optimizer=keras.optimizers.RMSprop(),					# optimizer='rmsprop'
    loss=keras.losses.SparseCategoricalCrossentropy(),		# loss='sparse_categorical_crossentropy'
    metrics=["accuracy"],
)
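As the comments indicate, each object can be replaced by its string shortcut; a minimal equivalent of the call above (assuming the default hyperparameters are acceptable):

# Same configuration using the string shortcuts
model.compile(
    optimizer="rmsprop",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)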

optimizer

RMSprop()	# 'rmsprop'

SGD()		# 'sgd'
  • learning_rate=0.01	# accepted by every optimizer
  • momentum=0.9		# SGD-specific
  • nesterov=True		# SGD-specific

Adam()		# "adam"
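The hyperparameters above are passed straight to the optimizer constructor; a sketch with illustrative values:

# Configure the optimizer explicitly instead of relying on the string shortcut
opt = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy", metrics=["accuracy"])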

loss

MeanSquaredError()					# "mse"

CategoricalCrossentropy()			# 'categorical_crossentropy'
  • from_logits=True				# when the model outputs raw logits (no softmax)

SparseCategoricalCrossentropy()		# "sparse_categorical_crossentropy"
  • from_logits=True

KLDivergence()						# "kl_divergence"

CosineSimilarity()
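from_logits=True tells the cross-entropy losses that the model outputs raw, unnormalized logits rather than probabilities, so the loss applies the softmax itself. A minimal sketch (the layer sizes are illustrative, not from the original post):

# Final Dense layer has no softmax, so the loss must be told it receives logits
model = keras.Sequential([
    keras.Input(shape=(784,)),                  # illustrative input size
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10),                     # no activation: raw logits
])
model.compile(
    optimizer="rmsprop",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)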

metrics

"acc"	# "accuracy"

AUC()

Precision()

Recall()

MeanAbsoluteError()

MeanAbsolutePercentageError()

CategoricalAccuracy()

SparseCategoricalAccuracy()		# "sparse_categorical_accuracy"
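Several metrics can be tracked at once by passing a list of metric objects; a minimal sketch for a binary classifier (the loss and optimizer choices are illustrative):

# Track AUC, precision and recall together
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[
        keras.metrics.AUC(),
        keras.metrics.Precision(),
        keras.metrics.Recall(),
    ],
)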

model.fit()

fit() prints the training progress and the current metrics on its own.

history = model.fit(
	x_train, y_train, 
	batch_size=64, epochs=2, 
	validation_split=0.2
)
'''
Epoch 1/2
750/750 [==============================] - 2s 2ms/step - loss: 0.5648 - accuracy: 0.8473 - val_loss: 0.1793 - val_accuracy: 0.9474
Epoch 2/2
750/750 [==============================] - 1s 1ms/step - loss: 0.1686 - accuracy: 0.9506 - val_loss: 0.1398 - val_accuracy: 0.9576
313/313 - 0s - loss: 0.1401 - accuracy: 0.9580
'''

The 750 in the epoch progress bar is the number of batches, not the number of samples; training processes the data one batch at a time.
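As a sanity check, the batch count can be recomputed from the split sizes (a sketch assuming the 60,000-sample MNIST training set that produces the output above):

import math

n_train = 60000                          # assumed size of x_train (MNIST)
n_fit = int(n_train * (1 - 0.2))         # validation_split=0.2 leaves 48,000 samples
print(math.ceil(n_fit / 64))             # 750 batches per epoch with batch_size=64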

Basics

  • With a dataset in NumPy format, fit() needs batch_size to be specified.
# Train the model for 1 epoch from Numpy data
batch_size = 64
history = model.fit(
	x_train, y_train, 
	batch_size=batch_size, epochs=1
)
  • With a dataset in tf.data.Dataset format, do not pass batch_size to fit(): the batching is already defined on the dataset (and it has to be; passing batch_size to fit() as well just raises an error).
# Train the model for 1 epoch using a dataset
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)	# the batch size must be set here via .batch(batch_size)
history = model.fit(dataset, epochs=1)
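A slightly fuller pipeline, with shuffling before batching, might look like this (the shuffle buffer size is illustrative):

# Shuffle before batching so each epoch sees the samples in a different order
dataset = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(buffer_size=1024)		# illustrative buffer size
    .batch(batch_size)
)
history = model.fit(dataset, epochs=1)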

Advanced: Validation

  • NumPy data: use validation_split in fit() to carve out a fraction of the training set as the validation set.
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)
  • tf.data.Dataset: validation_split is not supported for datasets; a separate validation set must be supplied via validation_data, as sketched below.
model.fit(train_dataset, epochs=epochs, validation_data=val_dataset)
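A minimal sketch of how such a val_dataset could be built (the variable names x_val / y_val are illustrative):

# Both datasets are batched up front; fit() just iterates over them
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(batch_size)
model.fit(train_dataset, epochs=epochs, validation_data=val_dataset)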

Advanced: Callbacks

You can also use callbacks to do things like periodically changing the learning rate of your optimizer, streaming metrics to a Slack bot, sending yourself an email notification when training is complete, etc.

  • Save the model at the end of every epoch, just as model.save("path_to_my_model") would.
path_checkpoint = "path_to_my_model_{epoch}"
modelckpt_callback = keras.callbacks.ModelCheckpoint(
	filepath=path_checkpoint,
	save_freq='epoch'			# save at the end of every epoch
)
  • Early stopping
es_callback = keras.callbacks.EarlyStopping(
    monitor="val_loss",		# quantity to monitor
    min_delta=0,
    patience=5				# stop if there has been no improvement for 5 epochs
)
  • Save only the best model's weights
path_checkpoint = "model_checkpoint.h5"
modelckpt_callback = keras.callbacks.ModelCheckpoint(
    monitor="val_loss",			# quantity to monitor
    filepath=path_checkpoint,
    verbose=1,
    save_weights_only=True,		# save the weights only, not the full model
    save_best_only=True,		# keep only the best checkpoint seen so far
)
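Callbacks only take effect when passed to fit() through the callbacks argument; a sketch combining the two callbacks defined above:

history = model.fit(
    x_train, y_train,
    batch_size=64, epochs=50,
    validation_split=0.2,						# needed so "val_loss" exists to monitor
    callbacks=[es_callback, modelckpt_callback],
)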

model.evaluate()

# score = model.evaluate(test_dataset)
score = model.evaluate(x_test, y_test)
print("Test loss:", score[0])
print("Test accuracy:", score[1])

model.predict()

# predictions = model.predict(x_test, batch_size=batch_size)
predictions = model.predict(x_test)
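predict() returns one row of scores per sample; for a classifier the predicted label is the argmax of each row (a sketch assuming the classification model above):

import numpy as np

predictions = model.predict(x_test)
predicted_labels = np.argmax(predictions, axis=1)	# index of the highest score per sample
print(predicted_labels[:10])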