cross-entropy loss

$$L = -\left[\, y \log \hat{y} + (1 - y) \log (1 - \hat{y}) \,\right]$$
where $y$ is the true label and $\hat{y}$ is the predicted value; the cross-entropy loss measures how well $\hat{y}$ matches $y$.
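As a quick numeric check of the formula (the helper `binary_cross_entropy` below is ours, not a library function): the loss is small when $\hat{y}$ agrees with $y$ and grows quickly when a confident prediction is wrong.

```python
import numpy as np

def binary_cross_entropy(y, y_hat):
    """L = -[y*log(y_hat) + (1 - y)*log(1 - y_hat)] for a single sample."""
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

print(binary_cross_entropy(1.0, 0.9))  # ~0.105: confident and correct
print(binary_cross_entropy(1.0, 0.1))  # ~2.303: confident and wrong
```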
Binary cross-entropy is used for two-class problems (labels 0 and 1) and should be paired with a sigmoid activation in the final layer. (For multi-class problems, categorical cross-entropy is used instead, paired with a softmax layer and one-hot encoded targets.)
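A minimal sketch of both pairings (layer sizes, input shape, and optimizer are arbitrary choices for illustration):

```python
import tensorflow as tf

# Binary classifier: sigmoid output + binary cross-entropy.
binary_model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # one probability in (0, 1)
])
binary_model.compile(optimizer='adam', loss='binary_crossentropy')

# Multi-class classifier: softmax output + categorical cross-entropy,
# with one-hot encoded targets.
multiclass_model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(3, activation='softmax'),  # class probabilities sum to 1
])
multiclass_model.compile(optimizer='adam', loss='categorical_crossentropy')
```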
In the example below there are 4 samples; both `y_true` and `y_pred` have shape `[batch_size]`.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()
loss = bce([0., 0., 1., 1.], [1., 1., 1., 0.])  # (y_true, y_pred)
print('Loss: ', loss.numpy())  # Loss: 11.522857
```
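Note that with predictions of exactly 0 or 1 the formula would hit $\log(0)$; Keras avoids this by clipping $\hat{y}$ away from the endpoints before taking logs. A rough manual reconstruction (the epsilon below is illustrative, not the exact backend constant, so the result is only in the same ballpark as the 11.52 above):

```python
import numpy as np

y_true = np.array([0., 0., 1., 1.])
y_pred = np.array([1., 1., 1., 0.])

eps = 1e-7  # assumed clipping threshold; Keras uses a backend epsilon of similar size
p = np.clip(y_pred, eps, 1 - eps)
manual = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
print(manual)  # large but finite, same order of magnitude as the Keras value
```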
__init__

```python
__init__(
    from_logits=False,
    label_smoothing=0,
    reduction=losses_utils.ReductionV2.AUTO,
    name='binary_crossentropy'
)
```
Parameter | Description
---|---
from_logits | Whether `y_pred` is a tensor of raw logits rather than probabilities; defaults to `False`.
label_smoothing | Float in `[0, 1]`. When greater than 0, the 0/1 labels are smoothed toward 0.5 before the loss is computed.
reduction | Type of reduction to apply to the loss; defaults to `AUTO` (in most cases, average over the batch).
name | Name for the op; defaults to `'binary_crossentropy'`.
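A short sketch of the two most commonly tuned constructor arguments (the input values are made up):

```python
import tensorflow as tf

# from_logits=True: pass raw scores and let the loss apply the sigmoid
# internally, which is more numerically stable than passing probabilities.
bce_logits = tf.keras.losses.BinaryCrossentropy(from_logits=True)
print(bce_logits([0., 0., 1., 1.], [2.0, -1.0, 3.0, -2.5]).numpy())

# label_smoothing=0.1: targets become 0.05 and 0.95 instead of hard 0 and 1.
bce_smooth = tf.keras.losses.BinaryCrossentropy(label_smoothing=0.1)
print(bce_smooth([0., 0., 1., 1.], [0.1, 0.2, 0.9, 0.8]).numpy())
```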
__call__

```python
__call__(
    y_true,
    y_pred,
    sample_weight=None
)
```
Parameter | Description
---|---
y_true | Ground-truth values.
y_pred | Predicted values.
sample_weight | Optional weighting coefficients; scales each sample's contribution to the loss.
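For example, `sample_weight` can up-weight some samples relative to others (the values here are made up):

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()
# The last two samples count twice as much as the first two.
loss = bce([0., 0., 1., 1.], [0.1, 0.2, 0.9, 0.8],
           sample_weight=[1., 1., 2., 2.])
print(loss.numpy())
```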