torch.nn.CrossEntropyLoss

The cross-entropy loss function torch.nn.CrossEntropyLoss takes the following parameters:

  • weight (Tensor, optional): a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. This is the per-class weight applied when computing the loss.
  • size_average (bool, optional): Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
  • ignore_index (int, optional): Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets.
  • reduce (bool, optional): Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
  • reduction (string, optional): Specifies the reduction to apply to the output: ‘none’ | ‘mean’ | ‘sum’.
    • ‘none’: no reduction will be applied.
    • ‘mean’: the sum of the output will be divided by the number of elements in the output.
    • ‘sum’: the output will be summed.
    • Note: size_average and reduce are in the process of being deprecated; in the meantime, specifying either of those two args will override reduction. Default: ‘mean’

In short, there are three key parameters: weight, ignore_index, and reduction:

  • weight: rescales the loss contribution of each class (see the sketch after the snippet below)
  • ignore_index: a target index that is excluded from the loss, e.g. the padding index in sequence tasks
  • reduction: controls how the loss is reduced: none, mean, or sum
import torch
import torch.nn.functional as F

input = torch.randn(3, 5)
label = torch.empty(3, dtype=torch.long).random_(5)  # -> tensor([1, 3, 0])

res = F.cross_entropy(input, label)
>>> tensor(1.8942)
res_mean = F.cross_entropy(input, label, reduction='mean')
>>> tensor(1.8942)
res_sum = F.cross_entropy(input, label, reduction='sum')
>>> tensor(5.6826)
res_none = F.cross_entropy(input, label, reduction='none')
>>> tensor([1.3254, 2.9982, 1.3590])
res_ignore0 = F.cross_entropy(input, label, reduction='none', ignore_index=0)
>>> tensor([1.3254, 2.9982, 0.0000])
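The weight argument is not exercised above. A minimal sketch, reusing the shapes from the snippet above (the weight vector w is an illustrative assumption, not from the original), shows how per-class weights rescale each sample's loss:

import torch
import torch.nn.functional as F

input = torch.randn(3, 5)
label = torch.tensor([1, 3, 0])

# Illustrative assumption: class 0 counts twice as much as the others.
w = torch.tensor([2.0, 1.0, 1.0, 1.0, 1.0])

# Each sample's loss is multiplied by w[label[i]].
per_sample = F.cross_entropy(input, label, weight=w, reduction='none')

# With reduction='mean', the weighted sum is divided by the total weight
# of the targets (here 1 + 1 + 2 = 4), not by the batch size.
weighted_mean = F.cross_entropy(input, label, weight=w, reduction='mean')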

F.cross_entropy

torch.nn.CrossEntropyLoss calls the function F.cross_entropy. Unlike its TensorFlow counterpart, F.cross_entropy performs two steps internally: log_softmax followed by F.nll_loss.
log_softmax exists mainly to avoid overflow and underflow, to speed up the computation, and to improve numerical stability.
softmax exponentiates its inputs: when an input is large, the exponential overflows; when an input is negative with a large absolute value, both numerator and denominator become tiny and can round down to zero (underflow).
Mathematically this is just the logarithm of the softmax, but in practice it is computed via the following identity:
$$
\log[f(x_i)] = \log\left(\frac{e^{x_i}}{e^{x_1}+e^{x_2}+\dots+e^{x_n}}\right)
= \log\left(\frac{e^{x_i}/e^{M}}{e^{x_1}/e^{M}+e^{x_2}/e^{M}+\dots+e^{x_n}/e^{M}}\right)
= \log\left(\frac{e^{x_i-M}}{\sum_{j=1}^{n} e^{x_j-M}}\right)
= \log\left(e^{x_i-M}\right) - \log\left(\sum_{j=1}^{n} e^{x_j-M}\right)
= (x_i - M) - \log\left(\sum_{j=1}^{n} e^{x_j-M}\right)
$$
where $M$ is the maximum over all $x_i$; after the shift the largest exponent is $e^0 = 1$, so the sum can neither overflow nor round down to zero.
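A minimal sketch of this max-shift trick (log_softmax_stable is a hypothetical helper written for illustration, not a PyTorch API); it matches the built-in torch.log_softmax:

import torch

def log_softmax_stable(x, dim=-1):
    # Identity from above: log_softmax(x) = (x - M) - log(sum(exp(x - M))),
    # where M is the per-row maximum, so the largest exponent is e^0 = 1.
    M = x.max(dim=dim, keepdim=True).values
    shifted = x - M
    return shifted - shifted.exp().sum(dim=dim, keepdim=True).log()

x = torch.tensor([[1000.0, 1001.0, 1002.0]])  # naive exp(x) would overflow
print(log_softmax_stable(x))             # tensor([[-2.4076, -1.4076, -0.4076]])
print(torch.log_softmax(x, dim=-1))      # same values from the built-in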

F.nll_loss

F.nll_loss computes the negative log likelihood loss.
Where does log_softmax differ from softmax? log_softmax is just the logarithm of softmax, but it is computed in a single numerically stable pass using the max-shift identity above, instead of exponentiating first and taking the log afterwards.
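A small check, reusing tensors shaped like the earlier example, confirming that F.cross_entropy decomposes into these two calls:

import torch
import torch.nn.functional as F

input = torch.randn(3, 5)
label = torch.tensor([1, 3, 0])

# F.nll_loss expects log-probabilities and picks out (negates, then
# averages) the entry for each target class.
a = F.cross_entropy(input, label)
b = F.nll_loss(F.log_softmax(input, dim=1), label)
assert torch.allclose(a, b)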
PyTorch's cross-entropy function F.cross_entropy

torch.nn.CrossEntropyLoss is one of the most commonly used loss functions in PyTorch. It combines torch.nn.LogSoftmax and torch.nn.NLLLoss in a single module and is used for training multi-class classifiers; the cross entropy measures the discrepancy between the model output and the true labels.

The input to torch.nn.CrossEntropyLoss has shape (batch_size, num_classes), where batch_size is the number of samples per training batch and num_classes is the number of classes. During training, the model output is passed through torch.nn.LogSoftmax to obtain a log-probability distribution, and the cross entropy between the predicted distribution and the true labels is then computed.

The cross-entropy loss is computed as:

loss = -sum(y_true * log(y_pred))

where y_true is the true label distribution (one-hot) and y_pred is the model's predicted distribution.

torch.nn.CrossEntropyLoss normalizes the model output and applies the log internally, so there is no need to apply torch.nn.LogSoftmax yourself. Note that this means the module expects raw, unnormalized logits: if the model's last layer already applies softmax (or log-softmax), remove it, or use torch.nn.NLLLoss on the log-probabilities instead; if the model outputs logits, pass them directly to torch.nn.CrossEntropyLoss.

In summary, torch.nn.CrossEntropyLoss is PyTorch's cross-entropy loss for multi-class training; it fuses torch.nn.LogSoftmax and torch.nn.NLLLoss and accepts the model's raw logits directly.
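A minimal training-step sketch (the nn.Linear model and all shapes here are illustrative assumptions): the last layer emits raw logits and feeds them straight into nn.CrossEntropyLoss:

import torch
import torch.nn as nn

model = nn.Linear(10, 5)             # toy classifier; no softmax at the end
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 10)               # batch of 8 samples, 10 features
target = torch.randint(0, 5, (8,))   # class indices in [0, 5)

logits = model(x)                    # shape (batch_size, num_classes)
loss = criterion(logits, target)     # LogSoftmax + NLLLoss happen inside
loss.backward()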