1. torch.nn.BCELoss
We previously covered torch.nn.CrossEntropyLoss, which can be used for both binary and multi-class classification. The two losses differ: CrossEntropyLoss combines LogSoftmax and NLLLoss in one function. This post introduces torch.nn.BCELoss; note that BCELoss does NOT include any preceding nonlinearity, so its input must already be a probability.
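As a quick sanity check of that claim about CrossEntropyLoss (a minimal sketch; the score values are arbitrary), CrossEntropyLoss on raw scores should match NLLLoss applied after LogSoftmax:
import torch
import torch.nn as nn

scores = torch.tensor([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]])
labels = torch.tensor([0, 1])                 # class indices
ce = nn.CrossEntropyLoss()(scores, labels)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(scores), labels)
print(torch.allclose(ce, nll))                # True: the two losses agree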
torch.nn.BCELoss
- Input: (N, ∗) where ∗ means any number of additional dimensions
- Target: (N, ∗), same shape as the input
The input and target must have the same shape.
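The (N, ∗) shape spec means BCELoss is not limited to 2-D tensors; any shape is accepted as long as input and target match. A quick sketch with random values:
loss = nn.BCELoss()
inp = torch.rand(2, 3, 4)                      # random probabilities in [0, 1)
tgt = torch.randint(0, 2, (2, 3, 4)).float()   # binary targets with the same shape
print(loss(inp, tgt))                          # a scalar: the mean over all 2*3*4 elements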
First, the binary cross-entropy formula:
L = \frac{1}{N}\sum_{i}L_{i} = \frac{1}{N}\sum_{i}-\left[y_{i}\log(p_{i})+(1-y_{i})\log(1-p_{i})\right]
The following example walks through the computation step by step:
import torch
import torch.nn as nn
import numpy as np
from math import exp
from math import log
input = torch.tensor([[0.9,0.1,0.3], [0.4,0.8,0.2]], dtype=torch.float)
target = torch.tensor([[1,0,0], [0,1,0]], dtype=torch.float)
print("input:", input)
print("target:", target)
'''
input: tensor([[0.9000, 0.1000, 0.3000],
[0.4000, 0.8000, 0.2000]])
target: tensor([[1., 0., 0.],
[0., 1., 0.]])
'''
# Compute BCELoss -- no sigmoid needed here, since input already holds probabilities in [0, 1]
loss = nn.BCELoss()
Bloss = loss(input, target)
print(Bloss)
'''
tensor(0.2541)
'''
My understanding of the shapes: the N rows are the batch dimension (one row per sample), and the columns are the individual binary predictions for each sample. Although input and target are matrices, BCELoss treats them element-wise: each input entry is paired with the target entry at the same position. For example, at row 1, column 2 the input is 0.1 and the target is 0, meaning the predicted probability is 0.1 while the true label is 0 (the reduction='none' sketch below makes this pairing explicit).
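One way to see the element-wise pairing concretely is reduction='none', which returns the per-element losses before averaging (a small check reusing input and target from above; the row 1, column 2 entry is -log(1-0.1) ≈ 0.1054):
loss_none = nn.BCELoss(reduction='none')
print(loss_none(input, target))
'''
tensor([[0.1054, 0.1054, 0.3567],
[0.5108, 0.2231, 0.2231]])
'''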
Now let's verify the BCELoss result by hand with the formula; the computation below indeed reproduces the value above.
# row 1: input [0.9, 0.1, 0.3], target [1, 0, 0]
L1 = (1*log(0.9) + 1*log(1-0.1) + 1*log(1-0.3))/(-3)
# row 2: input [0.4, 0.8, 0.2], target [0, 1, 0]
L2 = (1*log(1-0.4) + 1*log(0.8) + 1*log(1-0.2))/(-3)
L = (L1+L2)/2   # average over the two rows
print(L)
'''
0.2540847836081325
'''
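The same verification can be written in vectorized form directly on the tensors, which mirrors the formula more closely (a minimal sketch reusing input and target from above):
L_manual = -(target * torch.log(input) + (1 - target) * torch.log(1 - input)).mean()
print(L_manual)
'''
tensor(0.2541)
'''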
2. torch.nn.BCEWithLogitsLoss
In practice the network produces a raw score (a logit) for each prediction; this score corresponds to the predicted probability p_i in the formula above. But computing the loss requires values in [0, 1], so a sigmoid must be applied first, and torch.nn.Sigmoid provides exactly that.
torch.nn.BCEWithLogitsLoss combines the Sigmoid and BCELoss steps in a single function.
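A quick look at how the sigmoid squashes arbitrary real-valued scores into (0, 1) (the score values here are arbitrary):
print(torch.sigmoid(torch.tensor([-3.0, 0.0, 3.0])))
'''
tensor([0.0474, 0.5000, 0.9526])
'''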
An example:
batch_size = 2
class_num = 3
inputs = torch.empty(batch_size, class_num)   # to be filled deterministically below
for i in range(batch_size):
    for j in range(class_num):
        inputs[i][j] = (i + 1) * (j + 1)      # raw scores (logits), not probabilities
targets = torch.tensor([[1,0,0], [0,1,0]], dtype=torch.float)
print("inputs:", inputs)
print("targets:", targets)
'''
inputs: tensor([[1., 2., 3.],
[2., 4., 6.]])
targets: tensor([[1., 0., 0.],
[0., 1., 0.]])
'''
loss2 = nn.BCEWithLogitsLoss()   # applies the sigmoid internally, then BCELoss
Bloss2 = loss2(inputs, targets)
print(Bloss2)
'''
tensor(2.2727)
'''
Now compute the same value with torch.nn.Sigmoid() followed by torch.nn.BCELoss:
m = nn.Sigmoid()
loss = nn.BCELoss()
inputs_ = m(inputs)              # squash the logits into (0, 1) first
Bloss2_ = loss(inputs_, targets)
print(Bloss2_)
'''
tensor(2.2727)
'''
The two results are identical.
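Although the two routes agree here, torch.nn.BCEWithLogitsLoss is the numerically safer choice: by fusing the sigmoid into the loss it can use the log-sum-exp trick, while the manual Sigmoid + BCELoss route saturates for large-magnitude logits. A small sketch of the failure mode (the -200 logit is deliberately extreme):
x = torch.tensor([-200.0])   # extreme logit
y = torch.tensor([1.0])
print(nn.BCEWithLogitsLoss()(x, y))        # tensor(200.) -- the exact loss
print(nn.BCELoss()(torch.sigmoid(x), y))   # tensor(100.) -- sigmoid(-200) underflows to 0 in float32,
                                           # and BCELoss clamps its log() outputs at -100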