一、Summary
Original post:
标签平滑(Label Smoothing)详解 ("Label Smoothing Explained")
https://www.cnblogs.com/irvingluo/p/13873699.html
Goal: keep the model from predicting labels over-confidently during training, improving its generalization.
Why: the model maps logits z through softmax to probabilities, which are fit to one-hot targets where the positive class is 1. Driving the predicted probability all the way to 1 pushes the true class's logit z toward infinity; such an overly large gap between logits makes the model inflexible, leading to overfitting and poor generalization.
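A quick numeric sketch of this "why" (the values are my own illustration, not from the original post): softmax only reaches a hard target of 1 in the limit, so training with one-hot targets keeps widening the gap between the true logit and the rest.

import torch
import torch.nn.functional as F

# As the gap between the true logit and the others grows, softmax
# saturates toward one-hot; a hard target of exactly 1 is only
# reached as the gap goes to infinity.
for gap in [1.0, 5.0, 10.0]:
    logits = torch.tensor([gap, 0.0, 0.0])
    print(gap, F.softmax(logits, dim=0))
# gap=1.0  -> ~[0.576, 0.212, 0.212]
# gap=5.0  -> ~[0.987, 0.007, 0.007]
# gap=10.0 -> ~[0.9999, ~0, ~0]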
How: y_new = y_hot * (1 - a) + a/k, where a is the smoothing hyperparameter and k is the number of classes. This shrinks the gap between positive- and negative-class targets, curbing overfitting and improving generalization. Positions where y_hot is 0 are effectively filled with a/k, and the position where y_hot is 1 becomes 1 - a + a/k (the OpenNMT code below uses the slightly simpler variant that assigns exactly 1 - a to the true label).
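A minimal worked example of the formula (the values a = 0.1, k = 4 are my own illustration):

import torch

a, k = 0.1, 4                         # smoothing factor and number of classes
y_hot = torch.tensor([0., 0., 1., 0.])
y_new = y_hot * (1 - a) + a / k       # label smoothing
print(y_new)                          # tensor([0.0250, 0.0250, 0.9250, 0.0250])

Note that y_new still sums to 1: the true class keeps 1 - a + a/k = 0.925 and each other class gets a/k = 0.025.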
Code:
https://github.com/OpenNMT/OpenNMT-py/blob/e8622eb5c6117269bb3accd8eb6f66282b5e67d9/onmt/utils/loss.py#L186
二、Code Walkthrough
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelSmoothingLoss(nn.Module):
    """
    With label smoothing,
    KL-divergence between q_{smoothed ground truth prob.}(w)
    and p_{prob. computed by model}(w) is minimized.
    """
    def __init__(self, label_smoothing, tgt_vocab_size, ignore_index=-100):
        # the smoothing factor a must lie in (0, 1]
        assert 0.0 < label_smoothing <= 1.0
        # ignore_index defaults to -100; in OpenNMT it is passed the
        # padding token's index, which should receive no smoothing
        self.ignore_index = ignore_index
        super(LabelSmoothingLoss, self).__init__()
        # smoothed value for the non-target classes: a / (V - 2); the divisor
        # excludes the true label and the padding token from the smoothing mass
        smoothing_value = label_smoothing / (tgt_vocab_size - 2)
        # fill the whole template distribution with a / (V - 2)
        one_hot = torch.full((tgt_vocab_size,), smoothing_value)
        # the padding index gets no probability mass
        one_hot[self.ignore_index] = 0
        # register_buffer keeps the template as non-trainable module state:
        # it moves with .to(device) and is saved in state_dict, but gets no gradient
        self.register_buffer('one_hot', one_hot.unsqueeze(0))
        # smoothed value assigned to the true label: 1 - a
        self.confidence = 1.0 - label_smoothing
    def forward(self, output, target):
        """
        output (FloatTensor): batch_size x n_classes (log-probabilities)
        target (LongTensor): batch_size
        """
        # expand the template to one smoothed distribution per sample in the
        # batch, initialized everywhere to smoothing_value
        model_prob = self.one_hot.repeat(target.size(0), 1)
        # write confidence (1 - a) into each row at its target index
        model_prob.scatter_(1, target.unsqueeze(1), self.confidence)
        # zero out entire rows whose target is ignore_index (padding positions)
        model_prob.masked_fill_((target == self.ignore_index).unsqueeze(1), 0)
        # KL divergence (background: https://blog.csdn.net/Forrest97/article/details/109573994);
        # note that F.kl_div expects its first argument to be log-probabilities
        return F.kl_div(output, model_prob, reduction='sum')
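A minimal usage sketch of the class above (the vocabulary size, padding index, and tensor values are illustrative assumptions, not from the OpenNMT repo):

import torch
import torch.nn.functional as F

# assume a vocabulary of 5 tokens where index 0 is padding, and a = 0.1
criterion = LabelSmoothingLoss(label_smoothing=0.1, tgt_vocab_size=5, ignore_index=0)

logits = torch.randn(3, 5)                 # batch of 3 samples
log_probs = F.log_softmax(logits, dim=1)   # F.kl_div expects log-probabilities
target = torch.tensor([2, 4, 0])           # last sample is padding -> contributes 0
loss = criterion(log_probs, target)
print(loss)

For reference, PyTorch 1.10+ builds this in via nn.CrossEntropyLoss(label_smoothing=0.1), which avoids constructing the smoothed distribution by hand.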