The relationship between the PyTorch loss functions NLLLoss() and CrossEntropyLoss()
import torch

input = torch.randn(3, 3)

# Apply Softmax along dim=1, then take the log
soft_input = torch.nn.Softmax(dim=1)
input1 = torch.log(soft_input(input))
print("log of Softmax over input #############")
print(input1)

# NLLLoss expects log-probabilities as its input
loss = torch.nn.NLLLoss()
target = torch.tensor([0, 1, 2])
print("Result of torch.nn.NLLLoss() #########")
print(loss(input1, target))

# CrossEntropyLoss takes raw logits and applies log-softmax internally
loss1 = torch.nn.CrossEntropyLoss()
print("Result of torch.nn.CrossEntropyLoss() #########")
print(loss1(input, target))
Output (values vary with the random input):
log of Softmax over input #############
tensor([[-0.3138, -1.5749, -2.7752],
        [-2.3118, -0.5529, -1.1219],
        [-2.6200, -0.6879, -0.8567]])
Result of torch.nn.NLLLoss() #########
tensor(0.5745)
Result of torch.nn.CrossEntropyLoss() #########
tensor(0.5745)
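The NLLLoss value above can be checked by hand: with the default reduction='mean', NLLLoss just picks the target-class entry from each row of the log-probabilities and averages their negatives, i.e. (0.3138 + 0.5529 + 0.8567) / 3 ≈ 0.5745. A minimal sketch, using the printed log-probability matrix:

```python
import torch

# The log-softmax matrix printed above
log_probs = torch.tensor([[-0.3138, -1.5749, -2.7752],
                          [-2.3118, -0.5529, -1.1219],
                          [-2.6200, -0.6879, -0.8567]])
target = torch.tensor([0, 1, 2])

# NLLLoss with reduction='mean' is just the mean of the negated
# target-class entries, one per row
manual = -log_probs[torch.arange(3), target].mean()
builtin = torch.nn.NLLLoss()(log_probs, target)

print(manual.item(), builtin.item())  # both ≈ 0.5745
```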
Conclusion:
log(softmax(x)) fed into nn.NLLLoss() is equivalent to passing the raw logits x directly to nn.CrossEntropyLoss().
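This equivalence can be verified numerically on a fresh input; a minimal sketch (the seed is only there to make the run reproducible, and torch.log_softmax is used as a shorthand for log(softmax(x))):

```python
import torch

torch.manual_seed(0)  # reproducibility only; any input works
logits = torch.randn(3, 3)
target = torch.tensor([0, 1, 2])

# Path 1: log-softmax followed by NLLLoss
log_probs = torch.log_softmax(logits, dim=1)
nll = torch.nn.NLLLoss()(log_probs, target)

# Path 2: CrossEntropyLoss directly on the raw logits
ce = torch.nn.CrossEntropyLoss()(logits, target)

print(torch.allclose(nll, ce))  # the two losses match
```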