Training, Validation, and Test Sets

validation_data: tuple (x_val, y_val) or tuple (x_val, y_val, val_sample_weights) on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. This will override validation_split.
The validation data is evaluated once at the end of every epoch; the model is never trained on it.
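As a minimal sketch (assuming TensorFlow 2.x / tf.keras and toy random data, both of which are just illustrative), passing an explicit hold-out set through validation_data looks like this; loss is computed on the training arrays and val_loss on the validation tuple at the end of every epoch:

```python
import numpy as np
import tensorflow as tf

# Toy data: 1000 training samples, 200 held-out validation samples
x_train, y_train = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)
x_val,   y_val   = np.random.rand(200, 20),  np.random.randint(0, 2, 200)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# loss/accuracy come from (x_train, y_train); val_loss/val_accuracy come
# from (x_val, y_val), evaluated once at the end of every epoch.
history = model.fit(x_train, y_train,
                    epochs=5,
                    batch_size=32,
                    validation_data=(x_val, y_val))
```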
val_loss is the value of the cost function on your cross-validation data, and loss is the value of the cost function on your training data. On validation data, dropout layers do not drop any neurons. The reason is that during training we use dropout to add some noise and avoid over-fitting, whereas when computing the validation metrics we are in the recall phase, not the training phase, so we use the full capacity of the network.

During training, dropout is used to add some noise and avoid over-fitting.
On the validation data, dropout does not randomly drop any neurons; the full network is used.
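To make this concrete, here is a minimal sketch (assuming tf.keras; the layer and rate are only illustrative): the same Dropout layer zeros units only when called with training=True, and acts as an identity mapping when training=False, which is how the network is run when val_loss is computed.

```python
import tensorflow as tf

drop = tf.keras.layers.Dropout(rate=0.5)
x = tf.ones((1, 10))

# Training phase: roughly half of the units are zeroed, and the survivors
# are scaled up by 1 / (1 - rate) (inverted dropout).
print(drop(x, training=True))

# Validation / inference phase: no units are dropped, the input passes through unchanged.
print(drop(x, training=False))
```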

Thanks to one of our dear friends, I quote and explain the following contents, which are definitely useful.

validation_split: Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling.

validation_data: tuple (x_val, y_val) or tuple (x_val, y_val, val_sample_weights) on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. This will override validation_split.

As you can see

fit(self, x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None)
The fit method used in Keras has a parameter named validation_split, which specifies the fraction of the training data used to evaluate the model created after each epoch. After evaluating the model on this data, the result is reported as val_loss (printed when verbose is set to 1); moreover, as the documentation clearly specifies, you can use either validation_data or validation_split. The cross-validation data is used to investigate whether your model over-fits the training data or not, which tells us whether the model has generalization capability.
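A minimal sketch of this workflow (assuming tf.keras and toy random data): validation_split=0.2 carves the last 20% of the provided arrays off as validation data before shuffling, and the resulting val_loss can be read from the returned History object as well as from the verbose=1 log.

```python
import numpy as np
import tensorflow as tf

x, y = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# The last 200 samples are held out: loss comes from the first 800 samples,
# val_loss from the held-out 200, reported once per epoch.
history = model.fit(x, y, epochs=5, batch_size=32,
                    validation_split=0.2, verbose=1)

# If val_loss keeps rising while loss keeps falling, the model is over-fitting
# and is unlikely to generalize.
print(history.history["loss"])
print(history.history["val_loss"])
```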

Keras difference beetween val_loss and loss during training
https://datascience.stackexchange.com/questions/25267/keras-difference-beetween-val-loss-and-loss-during-training?rq=1
