How to use torch.nn.BCEWithLogitsLoss

self.bce = nn.BCEWithLogitsLoss(reduction='none') is a common usage; the behavior of each reduction mode is described in the official PyTorch documentation for BCELoss.

Code example:

import torch
a = torch.rand((1, 3, 3))
target = torch.tensor([[[1, 0, 0],
                        [0, 1, 0],
                        [0, 0, 0]]])
print(a)
'''
output: tensor([[[0.2070, 0.8432, 0.2494],
         [0.5782, 0.4587, 0.1135],
         [0.9794, 0.8516, 0.4418]]])
'''
b = torch.nn.BCEWithLogitsLoss(reduction='none')(a, target.to(a.dtype))
print(b)
'''
output: tensor([[[0.5950, 1.2011, 0.8256],
         [1.0234, 0.4899, 0.7515],
         [1.2982, 1.2070, 0.9383]]])
'''
c = torch.nn.BCEWithLogitsLoss(reduction='mean')(a, target.to(a.dtype))
print(c)
'''
output: tensor(0.9256)
'''
d = torch.nn.BCEWithLogitsLoss(reduction='sum')(a, target.to(a.dtype))
print(d)
'''
output: tensor(8.3301)
'''

In the example above, with reduction='none', each element of the loss is computed from target as follows.

For target[0][0][0] = 1 = y_n and a[0][0][0] = 0.2070 = x_n, the loss term l_n is:

0.5950 = -(1 × ln σ(0.2070) + 0 × ln(1 − σ(0.2070)))
For target[0][0][1] = 0 = y_n and a[0][0][1] = 0.8432 = x_n, the loss term l_n is:
1.2011 = -(0 × ln σ(0.8432) + 1 × ln(1 − σ(0.8432)))

where σ is the sigmoid function. Applying the same formula to every element gives output: tensor([[[0.5950, 1.2011, 0.8256], [1.0234, 0.4899, 0.7515], [1.2982, 1.2070, 0.9383]]]). With reduction='mean', the result is the mean of the reduction='none' output, tensor(0.9256); with reduction='sum', it is the sum, tensor(8.3301).
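The first element can be checked by hand; a minimal sketch using only the standard library (the logit 0.2070 and the expected 0.5950 are taken from the example above):

```python
import math

x = 0.2070  # logit a[0][0][0]
y = 1.0     # target[0][0][0]

sigma = 1.0 / (1.0 + math.exp(-x))  # sigmoid(x)
loss = -(y * math.log(sigma) + (1.0 - y) * math.log(1.0 - sigma))
print(round(loss, 4))  # 0.595
```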

We can verify this against the code above; the results match (up to floating-point rounding in the last digit).

a = torch.tensor([[[0.2070, 0.8432, 0.2494],
                   [0.5782, 0.4587, 0.1135],
                   [0.9794, 0.8516, 0.4418]]])
target = torch.tensor([[[1, 0, 0],
                        [0, 1, 0],
                        [0, 0, 0]]])
sa = torch.nn.Sigmoid()(a)  # apply sigmoid to the logits
result = -(target * torch.log(sa) + (1 - target) * torch.log(1 - sa))
print(result)
'''
output: tensor([[[0.5950, 1.2011, 0.8256],
         [1.0235, 0.4899, 0.7515],
         [1.2982, 1.2070, 0.9382]]])
'''
result_mean = result.mean()
print(result_mean)
'''
output: tensor(0.9256)
'''
result_sum = result.sum()
print(result_sum)
'''
output: tensor(8.3300)
'''
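Rather than comparing the printed tensors by eye, the two results can be compared programmatically with torch.allclose; a small sketch on the same inputs:

```python
import torch

a = torch.tensor([[[0.2070, 0.8432, 0.2494],
                   [0.5782, 0.4587, 0.1135],
                   [0.9794, 0.8516, 0.4418]]])
target = torch.tensor([[[1., 0., 0.],
                        [0., 1., 0.],
                        [0., 0., 0.]]])

# loss computed by the fused module
b = torch.nn.BCEWithLogitsLoss(reduction='none')(a, target)

# manual sigmoid + binary cross-entropy
sa = torch.sigmoid(a)
manual = -(target * torch.log(sa) + (1 - target) * torch.log(1 - sa))

print(torch.allclose(b, manual, atol=1e-5))  # True
```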

With a batch size of 2 (bs = 2), reduction='mean' and reduction='sum' behave the same way: each still returns a single scalar. In the code below, the mean stays the same, while the sum doubles.

import torch
a = torch.tensor([[[0.2070, 0.8432, 0.2494],
                   [0.5782, 0.4587, 0.1135],
                   [0.9794, 0.8516, 0.4418]],
                  [[0.2070, 0.8432, 0.2494],
                   [0.5782, 0.4587, 0.1135],
                   [0.9794, 0.8516, 0.4418]]])
target = torch.tensor([[[1, 0, 0],
                        [0, 1, 0],
                        [0, 0, 0]],
                       [[1, 0, 0],
                        [0, 1, 0],
                        [0, 0, 0]]])

b = torch.nn.BCEWithLogitsLoss(reduction='none')(a, target.to(a.dtype))
print(b)
'''
output: tensor([[[0.5950, 1.2011, 0.8256],
                 [1.0235, 0.4899, 0.7515],
                 [1.2982, 1.2070, 0.9382]],

                [[0.5950, 1.2011, 0.8256],
                 [1.0235, 0.4899, 0.7515],
                 [1.2982, 1.2070, 0.9382]]])
'''

c = torch.nn.BCEWithLogitsLoss(reduction='mean')(a, target.to(a.dtype))
print(c)
'''
output: tensor(0.9256)
'''

d = torch.nn.BCEWithLogitsLoss(reduction='sum')(a, target.to(a.dtype))
print(d)
'''
output: tensor(16.6601)
'''

sa = torch.nn.Sigmoid()(a)
result = -(target * torch.log(sa) + (1 - target) * torch.log(1 - sa))
print(result)
'''
output: tensor([[[0.5950, 1.2011, 0.8256],
                 [1.0235, 0.4899, 0.7515],
                 [1.2982, 1.2070, 0.9382]],

                [[0.5950, 1.2011, 0.8256],
                 [1.0235, 0.4899, 0.7515],
                 [1.2982, 1.2070, 0.9382]]])
'''

result_mean = result.mean()
print(result_mean)
'''
output: tensor(0.9256)
'''
result_sum = result.sum()
print(result_sum)
'''
output: tensor(16.6601)
'''
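A note on why the fused class exists at all: BCEWithLogitsLoss folds the sigmoid into the loss using the log-sum-exp trick, so it stays finite even for extreme logits, whereas the manual Sigmoid-then-log computation saturates and blows up to inf. A sketch (the logit magnitude 200 is chosen here just to force float32 saturation):

```python
import torch

a = torch.tensor([[200.0, -200.0]])   # extreme logits
target = torch.tensor([[0.0, 1.0]])   # confidently wrong predictions

# fused version: numerically stable, loss is roughly |logit| per element
fused = torch.nn.BCEWithLogitsLoss(reduction='none')(a, target)
print(fused)   # tensor([[200., 200.]])

# manual version: sigmoid saturates to exactly 0/1 in float32,
# so log(0) appears and the loss becomes inf
sa = torch.sigmoid(a)
manual = -(target * torch.log(sa) + (1 - target) * torch.log(1 - sa))
print(manual)  # tensor([[inf, inf]])
```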