Update 2020-12-03: isn't PyTorch just nicer…
1. Loss functions
First, the official docs link: https://keras.io/api/losses/
Variable definitions:
- y_true: ground-truth labels
- y_pred: predicted values
Loss functions are grouped as follows:
Probabilistic losses
BinaryCrossentropy class
- y_true and y_pred have the same shape, e.g. [batch_size] or [batch_size, num_outputs]
- each y_true value is either 0 or 1
# Example inputs
# y_true = [[0., 1.], [0., 0.]]
# y_pred = [[0.6, 0.4], [0.4, 0.6]]
tf.keras.losses.BinaryCrossentropy(
from_logits=False, label_smoothing=0, reduction="auto", name="binary_crossentropy"
)
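As a quick sanity check, a minimal sketch using the example inputs above (assuming TensorFlow 2.x and the default sum-over-batch reduction):

```python
import tensorflow as tf

# Binary cross-entropy on probabilities (from_logits=False)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=False)

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]

# Averages -[y*log(p) + (1-y)*log(1-p)] over all elements
loss = bce(y_true, y_pred).numpy()
print(loss)  # ≈ 0.815
```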
CategoricalCrossentropy class
- y_true and y_pred both have shape [batch_size, num_classes]
- y_true is in one-hot form
# Example inputs
# y_true = [[0, 1, 0], [0, 0, 1]]
# y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
tf.keras.losses.CategoricalCrossentropy(
from_logits=False,
label_smoothing=0,
reduction="auto",
name="categorical_crossentropy",
)
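With the example inputs above, the loss is just the mean of the negative log-probabilities at the true classes (a minimal sketch, assuming TensorFlow 2.x):

```python
import tensorflow as tf

cce = tf.keras.losses.CategoricalCrossentropy()

y_true = [[0, 1, 0], [0, 0, 1]]
y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]

# Mean of -log(0.95) and -log(0.1)
loss = cce(y_true, y_pred).numpy()
print(loss)  # ≈ 1.177
```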
SparseCategoricalCrossentropy class
- y_true and y_pred have shapes [batch_size] and [batch_size, num_classes], respectively
- y_true holds integer class indices, which is simpler to use than CategoricalCrossentropy
# y_true = [1, 2]
# y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=False, reduction="auto", name="sparse_categorical_crossentropy"
)
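The sparse variant gives the same result as the one-hot version on the inputs above; only the label format changes (sketch, assuming TensorFlow 2.x):

```python
import tensorflow as tf

scce = tf.keras.losses.SparseCategoricalCrossentropy()

# Integer class indices instead of one-hot rows
y_true = [1, 2]
y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]

loss = scce(y_true, y_pred).numpy()
print(loss)  # ≈ 1.177, same as CategoricalCrossentropy on one-hot labels
```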
Poisson class
Poisson loss
Reference: any detailed introduction to Poisson regression
Definition:
loss = y_pred - y_true * log(y_pred)
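A minimal check of the formula with hand-picked values (chosen here for illustration, assuming TensorFlow 2.x):

```python
import tensorflow as tf

poisson = tf.keras.losses.Poisson()

# Per-element loss is y_pred - y_true * log(y_pred)
y_true = [[1., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 1.]]

# Elements: 1 - 1*log(1) = 1 and 1 - 0*log(1) = 1, so the mean is 1.0
loss = poisson(y_true, y_pred).numpy()
print(loss)  # ≈ 1.0
```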
KLDivergence class
KL divergence:
Definition: loss = y_true * log(y_true / y_pred)
参考:https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
Regression losses
MeanSquaredError class
Mean squared error
loss = square(y_true - y_pred)
MeanAbsoluteError class
Mean absolute error
loss = abs(y_true - y_pred)
MeanAbsolutePercentageError class
Mean absolute percentage error
loss = 100 * abs(y_true - y_pred) / y_true
MeanSquaredLogarithmicError class
Mean squared logarithmic error
loss = square(log(y_true + 1.) - log(y_pred + 1.))
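A quick side-by-side of the four regression losses on small hand-picked inputs (values chosen here for illustration, assuming TensorFlow 2.x):

```python
import tensorflow as tf

y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]

# Mean of squared / absolute errors over all elements
mse = tf.keras.losses.MeanSquaredError()(y_true, y_pred).numpy()   # ≈ 0.5
mae = tf.keras.losses.MeanAbsoluteError()(y_true, y_pred).numpy()  # ≈ 0.5

# Mean of square(log(y_true + 1) - log(y_pred + 1))
msle = tf.keras.losses.MeanSquaredLogarithmicError()(y_true, y_pred).numpy()  # ≈ 0.240

# MAPE divides by y_true, so use targets away from zero
mape = tf.keras.losses.MeanAbsolutePercentageError()(
    [[2., 1.], [2., 3.]], [[1., 1.], [1., 3.]]
).numpy()  # 100 * mean(|e| / y_true) = 25.0
```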
CosineSimilarity class
Cosine similarity (note the negative sign: minimizing the loss maximizes the similarity)
loss = -sum(l2_norm(y_true) * l2_norm(y_pred))
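A minimal sketch with hand-picked vectors (values chosen for illustration, assuming TensorFlow 2.x):

```python
import tensorflow as tf

cos = tf.keras.losses.CosineSimilarity()  # similarity taken along the last axis

y_true = [[0., 1.], [1., 1.]]
y_pred = [[1., 0.], [1., 1.]]

# Per-sample similarities are 0 (orthogonal) and 1 (identical);
# the loss negates and averages them: -(0 + 1) / 2 = -0.5
loss = cos(y_true, y_pred).numpy()
print(loss)  # ≈ -0.5
```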
Huber class
Dampens the influence of outliers on the loss (quadratic for small errors, linear for large ones):
loss = 0.5 * x^2                  if |x| <= d
loss = 0.5 * d^2 + d * (|x| - d)  if |x| > d
where x = y_true - y_pred and d is the delta threshold.
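With the default d = 1.0, all errors below fall in the quadratic branch (values reused from the binary example above, assuming TensorFlow 2.x):

```python
import tensorflow as tf

huber = tf.keras.losses.Huber()  # default delta d = 1.0

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]

# All |x| <= 1 here, so each element is 0.5 * x^2:
# 0.5 * mean([0.36, 0.36, 0.16, 0.36]) = 0.155
loss = huber(y_true, y_pred).numpy()
print(loss)  # ≈ 0.155
```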
LogCosh class
logcosh = log((exp(x) + exp(-x))/2), where x is the error y_pred - y_true.
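A minimal check of the log-cosh formula (values chosen for illustration, assuming TensorFlow 2.x):

```python
import tensorflow as tf
import math

logcosh = tf.keras.losses.LogCosh()

y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [0., 0.]]

# Only one nonzero error x = 1: log(cosh(1)) ≈ 0.4338, averaged over 4 elements
loss = logcosh(y_true, y_pred).numpy()
print(loss)  # ≈ math.log(math.cosh(1.0)) / 4 ≈ 0.108
```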
2. Metrics
Interestingly, the official Keras docs also list the losses under metrics.
Classification metrics based on True/False positives & negatives
AUC class (Area under the curve)
tf.keras.metrics.AUC(
num_thresholds=200,
curve="ROC",
summation_method="interpolation",
name=None,
dtype=None,
thresholds=None,
multi_label=False,
label_weights=None,
)
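Unlike losses, metrics are stateful: you accumulate batches with update_state() and read the value with result(). A minimal sketch (values chosen for illustration, assuming TensorFlow 2.x):

```python
import tensorflow as tf

m = tf.keras.metrics.AUC(num_thresholds=3)

# Accumulate one batch of labels and scores
m.update_state([0, 0, 1, 1], [0.0, 0.5, 0.3, 0.9])

auc = m.result().numpy()
print(auc)  # ≈ 0.75 with these 3 thresholds
```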
Precision class
Recall class
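Both follow the same stateful pattern; a small sketch with hand-picked labels (values chosen for illustration, assuming TensorFlow 2.x):

```python
import tensorflow as tf

prec = tf.keras.metrics.Precision()
rec = tf.keras.metrics.Recall()

y_true = [0, 1, 1, 1]
y_pred = [1, 0, 1, 1]

prec.update_state(y_true, y_pred)  # TP=2, FP=1 -> precision = 2/3
rec.update_state(y_true, y_pred)   # TP=2, FN=1 -> recall    = 2/3

p = prec.result().numpy()
r = rec.result().numpy()
```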
PrecisionAtRecall class
Computes the best precision achievable at the point where recall >= the specified value.
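A sketch of how it behaves on a tiny example (values chosen for illustration, assuming TensorFlow 2.x):

```python
import tensorflow as tf

# Ask for the precision at the threshold where recall >= 0.5
m = tf.keras.metrics.PrecisionAtRecall(0.5)

m.update_state([0, 0, 0, 1, 1], [0.0, 0.3, 0.8, 0.3, 0.8])

# At threshold 0.5: TP=1, FP=1 -> recall = 0.5, precision = 0.5
p_at_r = m.result().numpy()
print(p_at_r)  # ≈ 0.5
```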
TruePositives class,TrueNegatives class, FalsePositives class, FalseNegatives class
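These four simply count confusion-matrix cells; a sketch reusing the labels from the precision/recall discussion above (assuming TensorFlow 2.x):

```python
import tensorflow as tf

tp = tf.keras.metrics.TruePositives()
fn = tf.keras.metrics.FalseNegatives()

y_true = [0, 1, 1, 1]
y_pred = [1, 0, 1, 1]

tp.update_state(y_true, y_pred)  # positives predicted as positive
fn.update_state(y_true, y_pred)  # positives predicted as negative

n_tp = tp.result().numpy()  # 2.0
n_fn = fn.result().numpy()  # 1.0
```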
SensitivityAtSpecificity class
https://en.wikipedia.org/wiki/Sensitivity_and_specificity
SpecificityAtSensitivity class
https://en.wikipedia.org/wiki/Sensitivity_and_specificity
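Both work like PrecisionAtRecall: fix one quantity, report the best value of the other. A sketch for SensitivityAtSpecificity (values chosen for illustration, assuming TensorFlow 2.x):

```python
import tensorflow as tf

# Best sensitivity (recall) achievable while specificity >= 0.5
m = tf.keras.metrics.SensitivityAtSpecificity(0.5)

m.update_state([0, 0, 0, 1, 1], [0.0, 0.3, 0.8, 0.3, 0.8])

# At threshold 0.5: TN=2, FP=1 -> specificity = 2/3 >= 0.5,
# and TP=1 of 2 positives -> sensitivity = 0.5
sens = m.result().numpy()
print(sens)  # ≈ 0.5
```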