This post introduces the loss functions available in Keras.
A loss function (also called an objective function or optimization score function) is one of the two required parameters for compiling a model.
For example:
from keras import losses
model.compile(loss='mean_squared_error', optimizer='sgd')
model.compile(loss=losses.mean_squared_error, optimizer='sgd')
In Keras you can either pass the name of an existing loss function, or pass a TensorFlow/Theano symbolic function. The symbolic function takes the following two arguments and returns one scalar loss value for each data point:
y_true: True labels. TensorFlow/Theano tensor.
y_pred: Predictions. TensorFlow/Theano tensor of the same shape as y_true.
The actual optimized objective is the mean of these per-datapoint losses across all datapoints.
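As an illustration of this contract, a custom loss might look like the following minimal sketch (`my_mse` is a hypothetical name; plain NumPy stands in for the backend tensor ops here so the example is runnable on its own):

```python
import numpy as np

def my_mse(y_true, y_pred):
    # Hypothetical custom loss: returns one scalar loss per data point.
    # In real Keras code you would use backend ops (e.g. K.mean, K.square)
    # instead of NumPy so the function stays symbolic.
    return np.mean(np.square(y_pred - y_true), axis=-1)

y_true = np.array([[1.0, 2.0], [3.0, 4.0]])
y_pred = np.array([[1.5, 2.0], [3.0, 3.0]])

per_sample = my_mse(y_true, y_pred)  # shape (2,): one loss per sample
objective = per_sample.mean()        # Keras minimizes the mean over samples
```

Passing `my_mse` to `model.compile(loss=my_mse, optimizer='sgd')` would then work the same way as passing a built-in loss.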
Available loss functions
- mean_squared_error
mean_squared_error(y_true, y_pred)
Mean squared error (MSE) is one of the most classic and widely used loss functions.
- mean_absolute_error
mean_absolute_error(y_true, y_pred)
Mean absolute error (MAE) averages the absolute differences between predictions and true values.
- mean_absolute_percentage_error
mean_absolute_percentage_error(y_true, y_pred)
- mean_squared_logarithmic_error
mean_squared_logarithmic_error(y_true, y_pred)
- squared_hinge
squared_hinge(y_true, y_pred)
- hinge
hinge(y_true, y_pred)
- categorical_hinge
categorical_hinge(y_true, y_pred)
- logcosh
logcosh(y_true, y_pred)
Logarithm of the hyperbolic cosine of the prediction error.
log(cosh(x)) is approximately equal to (x ** 2) / 2 for small x and to abs(x) - log(2) for large x. This means that 'logcosh' works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction.
Arguments
y_true: tensor of true targets.
y_pred: tensor of predicted targets.
Returns
Tensor with one scalar loss entry per sample.
- categorical_crossentropy
categorical_crossentropy(y_true, y_pred)
- sparse_categorical_crossentropy
sparse_categorical_crossentropy(y_true, y_pred)
- binary_crossentropy
binary_crossentropy(y_true, y_pred)
- kullback_leibler_divergence
kullback_leibler_divergence(y_true, y_pred)
- poisson
poisson(y_true, y_pred)
- cosine_proximity
cosine_proximity(y_true, y_pred)
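A few of the losses above can be sketched in plain NumPy to make the formulas concrete. These mirror the standard textbook definitions; the epsilon clipping in the cross-entropy is my own assumption for numerical safety, not necessarily Keras's exact implementation:

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    # Average absolute difference per sample
    return np.mean(np.abs(y_pred - y_true), axis=-1)

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # y_true is one-hot; clip predictions to avoid log(0)
    p = np.clip(y_pred, eps, 1.0 - eps)
    return -np.sum(y_true * np.log(p), axis=-1)

def logcosh(y_true, y_pred):
    # Log of the hyperbolic cosine of the prediction error
    return np.mean(np.log(np.cosh(y_pred - y_true)), axis=-1)

y_true = np.array([[0.0, 1.0, 0.0]])
y_pred = np.array([[0.1, 0.8, 0.1]])
print(mean_absolute_error(y_true, y_pred))      # about 0.1333
print(categorical_crossentropy(y_true, y_pred)) # -log(0.8), about 0.2231

# The approximation noted for logcosh above:
# quadratic for small errors, |x| - log(2) for large ones
x = 0.01
assert abs(np.log(np.cosh(x)) - x**2 / 2) < 1e-8
x = 20.0
assert abs(np.log(np.cosh(x)) - (abs(x) - np.log(2))) < 1e-8
```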
Note: when using the categorical_crossentropy loss, your targets should be in categorical format (e.g. if you have 10 classes, the target for each sample should be a 10-dimensional vector that is all-zeros except for a 1 at the index corresponding to the class of the sample). In order to convert integer targets into categorical targets, you can use the
Keras utility to_categorical:
from keras.utils.np_utils import to_categorical
categorical_labels = to_categorical(int_labels, num_classes=None)
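What to_categorical produces can be sketched in NumPy (a hypothetical re-implementation for illustration, not the library code itself):

```python
import numpy as np

def to_categorical_sketch(labels, num_classes=None):
    # One-hot encode integer labels, mimicking keras to_categorical
    labels = np.asarray(labels, dtype=int)
    if num_classes is None:
        num_classes = int(labels.max()) + 1
    one_hot = np.zeros((labels.size, num_classes))
    one_hot[np.arange(labels.size), labels] = 1.0
    return one_hot

print(to_categorical_sketch([0, 2, 1]))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```

Each integer label becomes a row that is all zeros except for a 1 at the label's index, which is exactly the target format categorical_crossentropy expects.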