PyTorch nn.NLLLoss

The negative log likelihood loss. It is useful to train a classification problem with C classes.
If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
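For example, here is a minimal sketch (the class weights are made up for illustration) of how weight shifts the loss toward a rare class:

import torch
import torch.nn as nn

weight = torch.tensor([3.0, 1.0])  # hypothetical: class 0 is rare, so weight it up
log_probs = torch.log_softmax(torch.randn(4, 2), dim=1)
target = torch.tensor([0, 1, 1, 1])
nn.NLLLoss(weight=weight)(log_probs, target)  # class-0 terms now count 3x relative to class-1 terms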

Usage

Concept walkthrough

1) Suppose we have m images. After the forward pass, the network outputs an m×n matrix (m is the number of images, n is the number of classes). In the example below, m=2 and n=2, i.e. two images and two classes to distinguish, say cat vs. dog. Let index 0 be cat and index 1 be dog.

import torch
input = torch.randn(2, 2)  # one random draw; your values will differ
input
------------------------
tensor([[-1.6243, -0.4164],
        [-0.2492, -0.9667]])
------------------------

2) Apply softmax to turn the outputs into probabilities. We can see that the first image is more likely a dog and the second more likely a cat.

soft = torch.nn.Softmax(dim=1)  # softmax along dim 1, i.e. across the classes in each row
soft(input)                     # convert the outputs to probabilities
-------------------------
tensor([[0.2301, 0.7699],
        [0.6721, 0.3279]])
--------------------------

3) Take the log of the result above. (Steps 2 and 3 can be fused into a single nn.LogSoftmax call; see the sketch after the output below.)

torch.log(soft(input))
---------------------------
tensor([[-1.4694, -0.2615],
        [-0.3974, -1.1149]])
---------------------------
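As noted in step 3, nn.LogSoftmax fuses the softmax and the log into one call, and it is also more numerically stable than computing torch.log(soft(input)) in two steps:

logsoft = torch.nn.LogSoftmax(dim=1)
logsoft(input)
---------------------------
tensor([[-1.4694, -0.2615],
        [-0.3974, -1.1149]])
---------------------------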

4) The NLLLoss result is obtained by taking, from the log values above, the entry at each row's label index, dropping the minus sign, then summing and averaging. Suppose target is [0, 1], i.e. the first image is a cat and the second is a dog. Take element 0 of the first row and element 1 of the second row, negate them, then sum and average:

(-(-1.4694) + -(-1.1149))/2 = 1.29215

Verify directly with the NLLLoss function:

import torch.nn as nn
nll = nn.NLLLoss()
target = torch.tensor([0,1])
nll(torch.log(soft(input)),target)
-----------------------------------------------
tensor(1.2921)
-------------------------------------------------
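The same pick-negate-average can be spelled out with plain tensor indexing, which makes the definition explicit (a sketch reusing the input and target above):

logp = torch.log(soft(input))
(-logp[torch.arange(2), target]).mean()
-----------------------------------------------
tensor(1.2921)
-------------------------------------------------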

5) CrossEntropyLoss is simply Softmax → Log → NLLLoss merged into a single step.

ce = nn.CrossEntropyLoss()
ce(input,target)
-----------------------------
tensor(1.2921)
------------------------------
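The same equivalence is available in the functional API; note that F.log_softmax is preferred over taking torch.log of a softmax for numerical stability:

import torch.nn.functional as F
F.nll_loss(F.log_softmax(input, dim=1), target)  # tensor(1.2921)
F.cross_entropy(input, target)                   # tensor(1.2921), the whole pipeline in one call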

Official example

m = nn.LogSoftmax(dim=1)
loss = nn.NLLLoss()
input = torch.randn(3, 5, requires_grad=True) # input is of size N x C = 3 x 5
target = torch.tensor([1, 0, 4]) # each element in target must satisfy 0 <= value < C; targets are class indices, not softmax-style vectors
output = loss(m(input), target)
output.backward()
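The official docs also include a 2D variant for image-like inputs, where the loss is computed per pixel over a C-channel map of log-probabilities:

# 2D loss example (used, for example, with image inputs)
N, C = 5, 4
loss = nn.NLLLoss()
data = torch.randn(N, 16, 10, 10)
conv = nn.Conv2d(16, C, (3, 3))  # output has shape N x C x 8 x 8
m = nn.LogSoftmax(dim=1)         # log-probabilities along the channel dimension
target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)  # one class index per pixel
output = loss(m(conv(data)), target)
output.backward()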

API

$$\ell(x, y) = L = \{l_1, \dots, l_N\}^\top, \qquad l_n = -w_{y_n}\, x_{n, y_n}, \qquad w_c = \text{weight}[c]$$

$$\ell(x, y) = \begin{cases} \sum_{n=1}^{N} \frac{1}{\sum_{n=1}^{N} w_{y_n}}\, l_n, & \text{if reduction} = \text{'mean'} \\ \sum_{n=1}^{N} l_n, & \text{if reduction} = \text{'sum'} \end{cases}$$
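A quick numeric check of the 'mean' branch (the class weights here are made up for illustration): the per-sample terms are divided by the sum of the selected weights, not by N:

w = torch.tensor([2.0, 1.0])                      # hypothetical class weights
logp = torch.log(soft(input))                     # from the walkthrough above
l_n = -w[target] * logp[torch.arange(2), target]  # l_n = -w_{y_n} * x_{n, y_n}
l_n.sum() / w[target].sum()                       # ≈ tensor(1.3512)
nn.NLLLoss(weight=w)(logp, target)                # same value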

CLASS torch.nn.NLLLoss(weight: Optional[torch.Tensor] = None, size_average=None, ignore_index: int = -100, reduce=None, reduction: str = 'mean')

Parameters

- weight (Tensor, optional): a manual rescaling weight given to each class; if provided, it must be a Tensor of size C.
- size_average (bool, optional): deprecated, see reduction.
- ignore_index (int, optional): a target value that is ignored and does not contribute to the input gradient (default: -100).
- reduce (bool, optional): deprecated, see reduction.
- reduction (string, optional): the reduction to apply to the output: 'none' | 'mean' | 'sum' (default: 'mean').
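A short sketch of ignore_index and reduction, reusing the walkthrough's tensors:

nn.NLLLoss(reduction='none')(torch.log(soft(input)), target)  # per-sample losses: tensor([1.4694, 1.1149])
nn.NLLLoss(ignore_index=0)(torch.log(soft(input)), target)    # samples with target 0 are skipped: tensor(1.1149)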

References:
https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html?highlight=nn%20nllloss#torch.nn.NLLLoss
https://blog.csdn.net/qq_22210253/article/details/85229988
