Loss functions: BCELoss and BCEWithLogitsLoss

BCELoss

Binary cross-entropy loss

Single-label binary classification

Each input sample corresponds to a single classification output, e.g., positive vs. negative in sentiment classification.

For a batch of $N$ samples, the loss is computed as:

$loss=\frac{1}{N} \sum_{n=1}^{N} l_{n}$

where $l_{n}=-w\left[y_{n} \cdot \log x_{n}+\left(1-y_{n}\right) \cdot \log \left(1-x_{n}\right)\right]$ is the loss for the $n$-th sample.

$x_{n}$ is the model output for the $n$-th sample, after the sigmoid activation.

$y_{n}$ is the class of the $n$-th sample.

$w$ is a hyperparameter. For single-label binary classification, a scalar $w$ merely rescales the loss uniformly, so whether it is set makes no real difference.

In general, the elements of $y$ take the value 0 or 1, representing the true class.
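To make the formula concrete, here is a minimal hand computation of $l_n$ for one sample; the numbers are arbitrary, chosen to match the PyTorch examples later in this post:

import math

x = 1 / (1 + math.exp(-0.8))  # sigmoid(0.8) ≈ 0.6900, the model output
y = 1                         # true class
w = 0.8                       # weight
l_n = -w * (y * math.log(x) + (1 - y) * math.log(1 - x))
print(l_n)  # ≈ 0.2969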

# Source excerpt: PyTorch's BCELoss (torch/nn/modules/loss.py)
class BCELoss(_WeightedLoss):
    __constants__ = ['reduction', 'weight']
    def __init__(self, weight=None, size_average=None, reduce=None, reduction='mean'):
        super(BCELoss, self).__init__(weight, size_average, reduce, reduction)
    def forward(self, input, target):
        # delegates to the functional form
        return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)

PyTorch implements this in the torch.nn.BCELoss class; you can also call the F.binary_cross_entropy function directly. The weight argument in the code is $w$; size_average and reduce are deprecated. reduction takes one of three values, mean, sum, and none, corresponding to different returned values of $\ell(x, y)$. The default is mean, which corresponds to the loss computation above.

$L=\left\{l_{1}, \ldots, l_{N}\right\}$

$\ell(x, y)=\begin{cases} L, & \text{if reduction = 'none'} \\ \operatorname{mean}(L), & \text{if reduction = 'mean'} \\ \operatorname{sum}(L), & \text{if reduction = 'sum'} \end{cases}$
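As a quick sketch of the three modes, reusing the toy inputs from the examples below: reduction='none' returns the per-element matrix $L$, 'mean' averages it over all elements, and 'sum' adds them up.

import torch
import torch.nn as nn

output = torch.sigmoid(torch.tensor([[0.8], [0.9], [0.3]]))
target = torch.tensor([[1.], [1.], [0.]])
for reduction in ("none", "mean", "sum"):
    print(reduction, nn.BCELoss(reduction=reduction)(output, target))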

Verification function, to confirm that the loss function's output matches our understanding:

def validate_loss(output, target, weight=None, pos_weight=None):
    # pos_weight: per-label weight on the positive term, for handling
    # positive/negative sample imbalance (defaults to all ones)
    if pos_weight is None:
        label_size = output.size()[1]
        pos_weight = torch.ones(label_size)
    # weight: per-label weight on the whole term, for handling
    # imbalance across labels (defaults to all ones)
    if weight is None:
        label_size = output.size()[1]
        weight = torch.ones(label_size)

    val = 0
    for li_x, li_y in zip(output, target):
        for i, xy in enumerate(zip(li_x, li_y)):
            x, y = xy
            loss_val = pos_weight[i] * y * math.log(x) + (1 - y) * math.log(1 - x)
            val += weight[i] * loss_val
    # mean reduction: average over all N * M elements
    return -val / (output.size()[0] * output.size()[1])

Computing the loss with the torch.nn.BCELoss class:

import torch
import torch.nn.functional as F
import torch.nn as nn
import math

# single-label binary classification
m = nn.Sigmoid()
weight = torch.tensor([0.8])
loss_fct = nn.BCELoss(reduction="mean", weight=weight)
input_src = torch.Tensor([[0.8], [0.9], [0.3]])
target = torch.Tensor([[1], [1], [0]])
print(input_src.size())
print(target.size())
output = m(input_src)
loss = loss_fct(output, target)
print(loss.item())

# verify the computation
validate = validate_loss(output, target, weight)
print(validate.item())
# output
torch.Size([3, 1])
torch.Size([3, 1])
0.4177626073360443
0.4177626371383667
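This also illustrates the earlier remark about $w$: for single-label binary classification, a scalar weight only rescales the loss, so the same number can be recovered from the unweighted loss:

loss_unweighted = nn.BCELoss(reduction="mean")(output, target)
print((weight * loss_unweighted).item())  # ≈ 0.41776, same as above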

Computing the loss with the binary_cross_entropy function:

# single-label binary classification
weight = torch.tensor([0.8])
input_src = torch.Tensor([[0.8], [0.9], [0.3]])
target = torch.Tensor([[1], [1], [0]])
print(input_src.size())
print(target.size())
output = torch.sigmoid(input_src)
loss = F.binary_cross_entropy(output, target, weight=weight, reduction='mean')
print(loss.item())

# verify the computation
validate = validate_loss(output, target, weight)
print(validate.item())
torch.Size([3, 1])
torch.Size([3, 1])
0.4177626073360443
0.4177626371383667

Multi-label binary classification

What is multi-label classification?

For example, when assigning topics to an article, it can be both technology and education; technology and education are then two labels of that article. Or consider determining what a picture contains: it might contain both a house and a road, so house and road are the two labels of that picture. Each label is treated as its own binary classification: one input sample corresponds to multiple labels, and each label gets a yes/no decision.
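As a toy illustration (the label set here is made up), a multi-label target is typically encoded as a multi-hot vector with one 0/1 entry per label:

import torch

labels = ["technology", "education", "sports"]  # hypothetical label set
# an article tagged as technology and education, but not sports:
target = torch.tensor([1., 1., 0.])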

The loss is computed in the same way as above. For a batch of $N$ samples, where each sample may have $M$ labels, the loss is:

$loss=\frac{1}{N} \sum_{n=1}^{N} l_{n}$

where $l_{n}=\frac{1}{M} \sum_{i=1}^{M} l_{n}^{i}$ is the loss for the $n$-th sample, and

$l_{n}^{i}=-w_{i}\left[y_{n}^{i} \cdot \log x_{n}^{i}+\left(1-y_{n}^{i}\right) \cdot \log \left(1-x_{n}^{i}\right)\right]$

$x_{n}^{i}$ is the model output for the $i$-th label of the $n$-th sample, after the sigmoid activation.

$y_{n}^{i}$ is the class of the $i$-th label of the $n$-th sample.

$w_{i}$ is a hyperparameter used to handle sample imbalance across labels: if some label appears only rarely in the training set, it should be given a higher weight when computing the loss.

$L=\left(\begin{array}{c}l_{1}^{1}, l_{1}^{2}, \ldots, l_{1}^{M} \\ l_{2}^{1}, l_{2}^{2}, \ldots, l_{2}^{M} \\ \vdots \\ l_{N}^{1}, l_{N}^{2}, \ldots, l_{N}^{M}\end{array}\right)$

$\ell(x, y)=\begin{cases} L, & \text{if reduction = 'none'} \\ \operatorname{mean}(L), & \text{if reduction = 'mean'} \\ \operatorname{sum}(L), & \text{if reduction = 'sum'} \end{cases}$

A multi-label binary classification example:

import torch
import torch.nn.functional as F
import torch.nn as nn
import math
weight = torch.Tensor([0.8, 1, 0.8])
input = torch.Tensor([[0.8, 0.9, 0.3], [0.8, 0.9, 0.3], [0.8, 0.9, 0.3], [0.8, 0.9, 0.3]])
target = torch.Tensor([[1, 1, 0], [1, 1, 0], [1, 1, 0], [1, 1, 0]])
print(input.size())
print(target.size())
output = torch.sigmoid(input)
loss = F.binary_cross_entropy(output, target, reduction='none', weight=weight)
print(loss)  # none

loss = F.binary_cross_entropy(output, target, reduction='mean', weight=weight)
print(loss.item())
# verify the computation
validate = validate_loss(output, target, weight)
print(validate.item())
torch.Size([4, 3])
torch.Size([4, 3])
tensor([[0.2969, 0.3412, 0.6835],
        [0.2969, 0.3412, 0.6835],
        [0.2969, 0.3412, 0.6835],
        [0.2969, 0.3412, 0.6835]])
0.4405061900615692
0.4405062198638916
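As a sanity check, averaging the reduction='none' matrix over all $N \times M$ elements reproduces the reduction='mean' scalar:

loss_none = F.binary_cross_entropy(output, target, reduction='none', weight=weight)
print(loss_none.mean().item())  # ≈ 0.4405, same as reduction='mean'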

BCEWithLogitsLoss

Combines the Sigmoid layer and the BCELoss class in a single class.

BCEWithLogitsLoss = Sigmoid + BCELoss. Example:

m = nn.Sigmoid()
weight = torch.tensor([0.8])
loss_fct = nn.BCELoss(reduction="mean", weight=weight)
loss_fct_logit = nn.BCEWithLogitsLoss(reduction="mean", weight=weight)
input_src = torch.Tensor([0.8, 0.9, 0.3])
target = torch.Tensor([1, 1, 0])
print(input_src)
print(target)
output = m(input_src)
loss = loss_fct(output, target)
loss_logit = loss_fct_logit(input_src, target)
print(loss.item())
print(loss_logit.item())
# identical results
tensor([0.8000, 0.9000, 0.3000])
tensor([1., 1., 0.])
0.4177626371383667
0.4177626371383667
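One practical reason to prefer BCEWithLogitsLoss over the two-step Sigmoid + BCELoss pipeline: it combines the two operations in log space (the log-sum-exp trick), which is more numerically stable. A minimal sketch; on a large logit the two-step version saturates in float32 while the fused version stays exact:

big_logit = torch.tensor([50.])
neg_target = torch.tensor([0.])
# fused: computed in log space, returns ≈ 50.0
print(nn.BCEWithLogitsLoss()(big_logit, neg_target).item())
# two-step: sigmoid(50) rounds to 1.0 in float32, so log(1 - x) degenerates
# (BCELoss clamps the log term instead of returning inf)
print(nn.BCELoss()(torch.sigmoid(big_logit), neg_target).item())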

Note that BCEWithLogitsLoss has one more parameter than BCELoss: pos_weight, which handles the imbalance between positive and negative samples within each label by acting as a weight on the positive class. Specifically, if positive samples dominate, set pos_weight < 1; if negative samples dominate, set pos_weight > 1.

input = torch.Tensor([[0.8, 0.9, 0.3], [0.8, 0.9, 0.3], [0.8, 0.9, 0.3], [0.8, 0.9, 0.3]])
target = torch.Tensor([[1, 1, 0], [1, 1, 0], [1, 1, 0], [1, 1, 0]])
print(input.size())
print(target.size())
output = torch.sigmoid(input)
weight = torch.tensor([0.8, 1, 0.8])
loss = F.binary_cross_entropy_with_logits(input, target, reduction='mean', pos_weight=weight)
print(loss.item())
# verify the computation
validate = validate_loss(output, target, pos_weight=weight)
print(validate.item())
torch.Size([4, 3])
torch.Size([4, 3])
0.49746325612068176
0.49746325612068176
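A common heuristic for choosing pos_weight (also mentioned in the PyTorch docs) is the per-label ratio of negative to positive examples in the training targets. A minimal sketch with made-up targets, since the toy batch above has degenerate label counts:

# hypothetical training targets, shape [N, M]; label 2 is positive in
# only 1 of 4 samples, so it gets the largest positive weight
train_targets = torch.tensor([[1., 1., 0.],
                              [0., 1., 0.],
                              [1., 0., 0.],
                              [0., 1., 1.]])
n_pos = train_targets.sum(dim=0)     # positives per label: [2., 3., 1.]
n_neg = len(train_targets) - n_pos   # negatives per label: [2., 1., 3.]
pos_weight = n_neg / n_pos           # [1.0000, 0.3333, 3.0000]
loss_fct = nn.BCEWithLogitsLoss(pos_weight=pos_weight)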