Loss Functions in the YOLO Series

Reference: https://blog.csdn.net/geek0105/article/details/129549229?utm_medium=distribute.pc_relevant.none-task-blog-2defaultbaidujs_baidulandingword~default-0-129549229-blog-134171750.235v40pc_relevant_3m_sort_dl_base3&spm=1001.2101.3001.4242.1&utm_relevant_index=3

Contents

1.BCEBlurWithLogitsLoss

2.FocalLoss

3.QFocalLoss

4.APLoss

5.aLRPLoss

6.RankSortLoss

7.IOULoss

GIoU

DIoU

CIoU (Complete IoU Loss)

Enhanced Completed IoU

Efficient IoU Loss

αIoU

SIoU

1.BCEBlurWithLogitsLoss
BCEBlurWithLogitsLoss is a variant of the BCE loss introduced in YOLOv5. Its goal is to weaken the negative impact of missing-label samples (objects that are present in the image but were not annotated). It does this by lowering the loss weight of suspected missing-label samples, which reduces their contribution during backpropagation. Concretely, dx = pred - true: for a missing-label sample the prediction is close to 1 while the label is 0, so dx approaches 1 and the weight alpha_factor = 1 - exp((dx - 1) / (alpha + 1e-4)) approaches 0, suppressing that sample's loss. In my view, however, this may also make the model less able to distinguish real objects from confusing ones, raising the false-detection rate on such confusers. The code is as follows:

import torch
import torch.nn as nn


class BCEBlurWithLogitsLoss(nn.Module):
    # BCEWithLogitsLoss() with reduced missing-label effects.
    def __init__(self, alpha=0.05):
        super().__init__()
        self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none')  # must be nn.BCEWithLogitsLoss()
        self.alpha = alpha

    def forward(self, pred, true):
        loss = self.loss_fcn(pred, true)
        pred = torch.sigmoid(pred)  # prob from logits
        dx = pred - true  # reduce only missing label effects
        # dx = (pred - true).abs()  # reduce missing label and false label effects
        alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4))  # -> 0 as dx -> 1 (likely missing label)
        loss *= alpha_factor
        return loss.mean()
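
A minimal usage sketch for the class above; the tensor shapes and target layout here are illustrative assumptions, not taken from the YOLOv5 source:

import torch

criterion = BCEBlurWithLogitsLoss(alpha=0.05)
pred = torch.randn(8, 80, requires_grad=True)  # raw logits, e.g. 8 predictions x 80 classes
true = torch.zeros(8, 80)                      # binary targets
true[:, 0] = 1.0
loss = criterion(pred, true)                   # scalar (mean over all elements)
loss.backward()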

2.FocalLoss
FocalLoss was proposed by Kaiming He et al. in the 2017 paper Focal Loss for Dense Object Detection.
Paper: https://arxiv.org/abs/1708.02002
The main idea: increase the weight of hard samples so the model concentrates on learning them (hard_sample), and keep the large number of easy samples (easy_sample) from dominating training; this mitigates the problem of hard samples being too few. In formula form, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), where the modulating factor (1 - p_t)^gamma shrinks the loss of well-classified samples. The code is as follows:

class FocalLoss(nn.Module):
    # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
    def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
        super(FocalLoss, self).__init__()
        self.loss_fcn = loss_fcn  # must be nn.BCEWithLogitsLoss()
        self.gamma = gamma
        self.alpha = alpha
        self.reduction = loss_fcn.reduction
        self.loss_fcn.reduction = 'none'  # required to apply FL to each element
 
    def forward(self, pred, true):
        loss = self.loss_fcn(pred, true)
        # p_t = torch.exp(-loss)
        # loss *= self.alpha * (1.000001 - p_t) ** self.gamma  # non-zero power for gradient stability
 
        # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py
        pred_prob = torch.sigmoid(pred)  # prob from logits
        p_t = true * pred_prob + (1 - true) * (1 - pred_prob)
        alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
        modulating_factor = (1.0 - p_t) ** self.gamma
        loss *= alpha_factor * modulating_factor
 
        if self.reduction == 'mean':
            return loss.mean()
        elif self.reduction == 'sum':
            return loss.sum()
        else:  # 'none'
            return loss
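
A usage sketch following the comment at the top of the class (the logits and target below are illustrative), showing that a well-classified sample is down-weighted far more than a poorly classified one:

import torch
import torch.nn as nn

criterion = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5, alpha=0.25)
target = torch.tensor([[1.0]])
easy = torch.tensor([[4.0]])    # sigmoid(4.0) ~ 0.98: easy positive
hard = torch.tensor([[-1.0]])   # sigmoid(-1.0) ~ 0.27: hard positive
print(criterion(easy, target).item())  # tiny, (1 - p_t) ** gamma suppresses it
print(criterion(hard, target).item())  # much larger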

3.QFocalLoss
QFocalLoss (Quality Focal Loss) comes from a 2020 paper (Generalized Focal Loss). It addresses the limitation that FocalLoss only fits binary or multi-class tasks with hard 0/1 labels; for tasks that use label smoothing or other soft targets, FocalLoss cannot be applied. The code is as follows:

class QFocalLoss(nn.Module):
    # Wraps Quality focal loss around existing loss_fcn(), i.e. criteria = QFocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
    def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
        super(QFocalLoss, self).__init__()
        self.loss_fcn = loss_fcn  # must be nn.BCEWithLogitsLoss()
        self.gamma = gamma
        self.alpha = alpha
        self.reduction = loss_fcn.reduction
        self.loss_fcn.reduction = 'none'  # required to apply FL to each element
 
    def forward(self, pred, true):
        loss = self.loss_fcn(pred, true)
 
        pred_prob = torch.sigmoid(pred)  # prob from logits
        alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
        modulating_factor = torch.abs(true - pred_prob) ** self.gamma
        loss *= alpha_factor * modulating_factor
 
        if self.reduction == 'mean':
            return loss.mean()
        elif self.reduction == 'sum':
            return loss.sum()
        else:  # 'none'
            return loss
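
Because the modulating factor is |true - pred_prob| ** gamma rather than (1 - p_t) ** gamma, the target no longer has to be a hard 0/1 label; a soft quality label such as an IoU score also works. A small illustrative sketch (the soft-label value is an assumption):

import torch
import torch.nn as nn

criterion = QFocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5, alpha=0.25)
pred = torch.randn(4, 80)   # logits
true = torch.zeros(4, 80)
true[0, 5] = 0.83           # soft quality target, e.g. the IoU of the matched box
loss = criterion(pred, true)
print(loss.item())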

From my own experience on a private task, QFocalLoss gives essentially no mAP improvement and even increases false detections (recall goes up). The likely cause is the same issue as with FocalLoss: raising the weight of hard samples makes the model over-focus on ambiguous samples. It may help on small or few-sample datasets, but for ordinary tasks BCEBlurWithLogitsLoss is probably the better choice.

7.IOULoss

IoU, GIoU, DIoU, CIoU, EIoU, αIoU, SIoU
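
Since this section only lists the variants, here is a minimal sketch of plain IoU and GIoU for axis-aligned boxes in (x1, y1, x2, y2) format. It is an illustrative reimplementation, not the YOLO source code; the regression loss is then taken as 1 - IoU (or 1 - GIoU, 1 - DIoU, and so on):

import torch

def bbox_iou(box1, box2, GIoU=False, eps=1e-7):
    # Intersection area
    inter_w = (torch.min(box1[..., 2], box2[..., 2]) - torch.max(box1[..., 0], box2[..., 0])).clamp(0)
    inter_h = (torch.min(box1[..., 3], box2[..., 3]) - torch.max(box1[..., 1], box2[..., 1])).clamp(0)
    inter = inter_w * inter_h

    # Union area
    area1 = (box1[..., 2] - box1[..., 0]) * (box1[..., 3] - box1[..., 1])
    area2 = (box2[..., 2] - box2[..., 0]) * (box2[..., 3] - box2[..., 1])
    union = area1 + area2 - inter + eps

    iou = inter / union
    if GIoU:
        # Smallest enclosing box C; GIoU = IoU - (|C| - union) / |C|
        cw = torch.max(box1[..., 2], box2[..., 2]) - torch.min(box1[..., 0], box2[..., 0])
        ch = torch.max(box1[..., 3], box2[..., 3]) - torch.min(box1[..., 1], box2[..., 1])
        c_area = cw * ch + eps
        return iou - (c_area - union) / c_area
    return iou

pred_box = torch.tensor([[10., 10., 50., 50.]])
gt_box = torch.tensor([[12., 12., 48., 52.]])
giou_loss = 1.0 - bbox_iou(pred_box, gt_box, GIoU=True)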
