Definition of torch.nn.functional.cross_entropy (for reference, see this blog post: https://blog.csdn.net/chao_shine/article/details/89925762)
def cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100,
                  reduce=None, reduction='mean'):
    # type: (Tensor, Tensor, Optional[Tensor], Optional[bool], int, Optional[bool], str) -> Tensor
    if size_average is not None or reduce is not None:
        reduction = _Reduction.legacy_get_string(size_average, reduce)
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
As you can see, input and target are required arguments, and both are Tensors. From the final return statement it is clear that functional.cross_entropy actually computes log_softmax on the input first and then applies nll_loss.
log_softmax simply applies the log function on top of softmax.
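As a quick sanity check of this decomposition, here is a minimal sketch (the shapes and values below are made up purely for illustration) verifying that F.cross_entropy matches nll_loss applied to log_softmax:

import torch
import torch.nn.functional as F

logits = torch.randn(4, 5)            # hypothetical batch of 4 samples, 5 classes
target = torch.tensor([1, 0, 3, 2])   # hypothetical class indices

loss_a = F.cross_entropy(logits, target)
loss_b = F.nll_loss(F.log_softmax(logits, dim=1), target)
print(torch.allclose(loss_a, loss_b))  # expect True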
Understanding softmax's dim argument: note that the dimension you pass is the one along which the values are normalized, i.e. that is where the computation happens (for reference: https://blog.csdn.net/weixin_41391619/article/details/104823086)
import torch
import torch.nn as nn

y = torch.rand(size=[2, 2, 3])
print('y=', y)
net1 = nn.Softmax(dim=0)
net2 = nn.Softmax(dim=1)
net3 = nn.Softmax(dim=2)
When dim=0: the softmax is taken over the first dimension, which has size 2 in this example.
y= tensor([[[0.1391, 0.1783, 0.8231],
[0.8878, 0.8061, 0.2732]],
[[0.7208, 0.6728, 0.9620],
[0.5471, 0.3034, 0.8006]]])
dim=0 result: tensor([[[0.3585, 0.3788, 0.4653],
[0.5843, 0.6231, 0.3711]],
[[0.6415, 0.6212, 0.5347],
[0.4157, 0.3769, 0.6289]]])
How is this computed?
for i in range(2):
    print(y[i, :, :].reshape(-1))
# output:
# tensor([0.1391, 0.1783, 0.8231, 0.8878, 0.8061, 0.2732])
# tensor([0.7208, 0.6728, 0.9620, 0.5471, 0.3034, 0.8006])
…
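Softmax with dim=0 normalizes corresponding elements of the two slices printed above: for example, exp(0.1391) / (exp(0.1391) + exp(0.7208)) ≈ 0.3585, which is the first entry of the dim=0 result. A quick illustrative check (continuing with the y and net1 defined above; this verification is not part of the original snippet):

manual = torch.exp(y) / torch.exp(y).sum(dim=0, keepdim=True)  # normalize across dim=0
print(torch.allclose(manual, net1(y)))  # expect True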
dim=1: over the second dimension, which has size 2 in this example.
print('dim=1 result', net2(y))
for i in range(2):
    print(y[:, i, :].reshape(-1))
dim=1 result tensor([[[0.3211, 0.3480, 0.6341],
[0.6789, 0.6520, 0.3659]],
[[0.5433, 0.5913, 0.5403],
[0.4567, 0.4087, 0.4597]]])
reshape output: tensor([0.1391, 0.1783, 0.8231, 0.7208, 0.6728, 0.9620])
tensor([0.8878, 0.8061, 0.2732, 0.5471, 0.3034, 0.8006])
dim=2: over the third dimension, which has size 3 in this example.
print('dim=2 result', net3(y))
for i in range(3):
    print(y[:, :, i].reshape(-1))
# output:
dim=2 result tensor([[[0.2486, 0.2586, 0.4928],
[0.4061, 0.3742, 0.2196]],
[[0.3100, 0.2955, 0.3946],
[0.3255, 0.2551, 0.4194]]])
# reshape output:
tensor([0.1391, 0.8878, 0.7208, 0.5471])
tensor([0.1783, 0.8061, 0.6728, 0.3034])
tensor([0.8231, 0.2732, 0.9620, 0.8006])
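Whatever dim you choose, the softmax outputs sum to 1 along that dimension. A short illustrative check (again reusing y, net1, net2, net3 from above):

print(net1(y).sum(dim=0))  # all ones, shape [2, 3]
print(net2(y).sum(dim=1))  # all ones, shape [2, 3]
print(net3(y).sum(dim=2))  # all ones, shape [2, 2]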
Now that we know what log_softmax does, let's look at nll_loss.
import torch.nn.functional as F

truth = torch.tensor([0], dtype=torch.int64)
predicted1 = torch.tensor([[0.5, 0.4, 0.1]], dtype=torch.float)
loss = nn.NLLLoss()
print(F.log_softmax(predicted1, 1))
print(loss(F.log_softmax(predicted1, 1), truth))
print(F.cross_entropy(predicted1, truth))
Output:
tensor([[-0.9459, -1.0459, -1.3459]])
tensor(0.9459)
tensor(0.9459)
As you can see, the values in truth are indices into the log_softmax output: truth's value 0 picks out -0.9459, and negating that value gives the NLLLoss result, 0.9459. Likewise, if truth were 1, the second and third printed lines would both be 1.0459.
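To make this gather-and-negate behaviour explicit, here is a small sketch of what NLLLoss does by hand (reusing predicted1 and truth from above; the indexing shown is just one way to illustrate it):

logp = F.log_softmax(predicted1, dim=1)
# pick the log-probability at each sample's target index, negate it, and average over the batch
manual_nll = -logp[torch.arange(truth.shape[0]), truth].mean()
print(manual_nll)  # tensor(0.9459), matching NLLLoss and cross_entropy above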