Common Loss Functions for Object Detection and Image Segmentation, Explained

Focal Loss

import tensorflow as tf

def focal_loss(logits, targets, alpha, gamma, normalizer):
  """Compute the focal loss between `logits` and the golden `target` values.
  Focal loss = -(1-pt)^gamma * log(pt)
  where pt is the probability of being classified to the true class.
  Args:
    logits: A float32 tensor of size
      [batch, height_in, width_in, num_predictions].
    targets: A float32 tensor of size
      [batch, height_in, width_in, num_predictions].
    alpha: A float32 scalar multiplying alpha to the loss from positive examples
      and (1-alpha) to the loss from negative examples.
    gamma: A float32 scalar modulating loss from hard and easy examples.
    normalizer: A float32 scalar normalizing the total loss from all examples.
  Returns:
    loss: A float32 Tensor of size [batch, height_in, width_in, num_predictions]
      representing normalized loss on the prediction map.
  """
  positive_label_mask = tf.equal(targets, 1.0)
  cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(
      labels=targets, logits=logits)
  probs = tf.sigmoid(logits)
  # pt is the model's probability for the true class of each prediction.
  probs_gt = tf.where(positive_label_mask, probs, 1.0 - probs)
  # (1 - pt)^gamma down-weights well-classified (easy) examples.
  modulator = tf.pow(1.0 - probs_gt, gamma)
  loss = modulator * cross_entropy
  # alpha balances the contribution of positive vs. negative examples.
  weighted_loss = tf.where(positive_label_mask, alpha * loss,
                           (1.0 - alpha) * loss)
  return weighted_loss / normalizer
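As a quick sanity check, the per-element focal-loss formula above can be written directly in NumPy. This is a minimal illustrative sketch; `focal_loss_np` and its inputs are hypothetical names, not from any library:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def focal_loss_np(logits, targets, alpha=0.25, gamma=2.0, normalizer=1.0):
    """Per-element sigmoid focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t)."""
    probs = sigmoid(logits)
    # p_t: probability the model assigns to the true class.
    p_t = np.where(targets == 1.0, probs, 1.0 - probs)
    # alpha for positives, (1 - alpha) for negatives.
    alpha_t = np.where(targets == 1.0, alpha, 1.0 - alpha)
    loss = -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
    return loss / normalizer

# With gamma = 0 and alpha = 0.5, this reduces to a scaled
# binary cross-entropy, which is a useful correctness check.
example = focal_loss_np(np.array([2.0, -1.0]), np.array([1.0, 0.0]))
```

Note how a confidently correct prediction (large positive logit for a positive target) contributes almost nothing once the `(1 - p_t)^gamma` modulator is applied.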

SSD Loss

    """SSD Weighted Loss Function
    Compute Targets:
        1) Produce confidence target indices by matching ground truth boxes
           with (default) 'priorboxes' that have a jaccard index > threshold
           (default threshold: 0.5).
        2) Produce localization targets by 'encoding' variance into offsets
           between ground truth boxes and their matched 'priorboxes'.
        3) Hard negative mining to filter the excessive number of negative
           examples that comes with using a large number of default bounding
           boxes (default negative:positive ratio 3:1).
    Objective Loss:
        L(x,c,l,g) = (Lconf(x, c) + αLloc(x,l,g)) / N
        where Lconf is the cross-entropy loss and Lloc is the SmoothL1 loss,
        weighted by α, which is set to 1 by cross-validation.
        Args:
            c: class confidences,
            l: predicted boxes,
            g: ground truth boxes
            N: number of matched default boxes
        See: https://arxiv.org/pdf/1512.02325.pdf for more details.
    """
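Step 3 above, hard negative mining, can be sketched as a standalone function. This is a simplified single-image NumPy sketch; `hard_negative_mining` is a hypothetical name, and real SSD implementations do this batched on tensors:

```python
import numpy as np

def hard_negative_mining(conf_loss, positive_mask, neg_pos_ratio=3):
    """Select the hardest negatives at a fixed negative:positive ratio.

    conf_loss:     (num_priors,) per-prior confidence loss.
    positive_mask: (num_priors,) bool, True for matched (positive) priors.
    Returns a bool mask of priors kept for the classification loss.
    """
    num_pos = int(positive_mask.sum())
    num_neg = min(neg_pos_ratio * num_pos, int((~positive_mask).sum()))
    # Rank negatives by loss, highest first; exclude positives from ranking.
    neg_loss = np.where(positive_mask, -np.inf, conf_loss)
    order = np.argsort(-neg_loss)
    neg_mask = np.zeros_like(positive_mask)
    neg_mask[order[:num_neg]] = True
    # Keep all positives plus the hardest negatives.
    return positive_mask | neg_mask
```

Keeping only the highest-loss negatives prevents the tens of thousands of easy background priors from swamping the classification loss.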

YOLACT Loss

'''
# During training, first compute the maximum gt IoU for each prior.
# Then priors whose maximum IoU is above the positive threshold are marked as positive.
# Priors whose maximum IoU is below the negative threshold are marked as negative.
# The rest are neutral and are not used to calculate the loss.


The loss has three parts: a classification loss Lconf, a localization loss Lloc, and a mask loss Lmask:
Loss = Lconf + Lloc + Lmask

Lconf is computed with cross-entropy:
loss_c = F.cross_entropy(class_p_selected, class_gt_selected, reduction='sum')

Lloc is computed with Smooth L1:
loss_b = F.smooth_l1_loss(pos_box_p, pos_offsets, reduction='sum') * cfg.bbox_alpha

Lmask is computed with pixel-wise binary cross-entropy:
loss_s += F.binary_cross_entropy_with_logits(cur_segment, segment_gt, reduction='sum')
'''
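The positive/negative/neutral assignment described in the comments above can be sketched as a small helper. This is a hypothetical NumPy sketch (`assign_priors` and the threshold defaults are illustrative):

```python
import numpy as np

def assign_priors(iou, pos_thresh=0.5, neg_thresh=0.4):
    """Label each prior from its IoU against all ground-truth boxes.

    iou: (num_priors, num_gt) pairwise IoU matrix.
    Returns labels: 1 = positive, 0 = negative, -1 = neutral (ignored).
    """
    # Maximum gt IoU for each prior.
    best_iou = iou.max(axis=1)
    # Start neutral; neutral priors contribute nothing to the loss.
    labels = np.full(iou.shape[0], -1, dtype=np.int64)
    labels[best_iou >= pos_thresh] = 1
    labels[best_iou < neg_thresh] = 0
    return labels
```

Priors falling between the two thresholds stay at -1, matching the "neutral ones are not used to calculate the loss" rule above.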

Faster R-CNN Loss

The Faster R-CNN loss has two parts, a classification loss and a localization loss:

L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)

Here p_i is the predicted probability that anchor i is an object and p_i* is its ground-truth label (1 for positive anchors, 0 otherwise); t_i and t_i* are the predicted and ground-truth box offsets. L_cls is log loss, and L_reg is the smooth L1 loss, applied only to positive anchors because of the p_i* factor. N_cls and N_reg normalize the two terms, and λ balances them.
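Faster R-CNN's classification-plus-regression objective can be sketched in NumPy. All names here are hypothetical, and this is a simplified single-image sketch, not the actual Faster R-CNN code:

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Smooth L1: 0.5*x^2/beta for |x| < beta, else |x| - 0.5*beta."""
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * ax ** 2 / beta, ax - 0.5 * beta)

def two_term_loss(cls_log_probs, labels, t_pred, t_gt, lam=1.0):
    """L = (1/N_cls) * sum CE  +  lam * (1/N_reg) * sum over positives of smooth L1.

    cls_log_probs: (num_anchors, num_classes) log-probabilities.
    labels:        (num_anchors,) ground-truth class ids (1 = object).
    t_pred, t_gt:  (num_anchors, 4) predicted / target box offsets.
    """
    n_cls = len(labels)
    # Cross-entropy: negative log-probability of the true class.
    ce = -cls_log_probs[np.arange(n_cls), labels].sum() / n_cls
    # Regression only over positive anchors (the p_i* gate).
    pos = labels == 1
    n_reg = max(int(pos.sum()), 1)
    reg = smooth_l1(t_pred[pos] - t_gt[pos]).sum() / n_reg
    return ce + lam * reg
```

The smooth L1 term is quadratic near zero (stable gradients for small errors) and linear for large errors, making it less sensitive to outlier boxes than plain L2.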

YOLO Loss

The YOLO (v1) loss is a sum of squared errors with weighting terms:

L = λ_coord Σ_{i,j} 1_{ij}^{obj} [(x_i - x̂_i)² + (y_i - ŷ_i)²]
  + λ_coord Σ_{i,j} 1_{ij}^{obj} [(√w_i - √ŵ_i)² + (√h_i - √ĥ_i)²]
  + Σ_{i,j} 1_{ij}^{obj} (C_i - Ĉ_i)²
  + λ_noobj Σ_{i,j} 1_{ij}^{noobj} (C_i - Ĉ_i)²
  + Σ_i 1_i^{obj} Σ_c (p_i(c) - p̂_i(c))²

where 1_{ij}^{obj} indicates that box predictor j in cell i is responsible for an object, λ_coord = 5 up-weights the coordinate terms, and λ_noobj = 0.5 down-weights confidence errors from cells containing no object. Square roots of width and height soften the penalty difference between large and small boxes.
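As a rough illustration, the sum-squared YOLO v1 objective can be sketched per-image in NumPy. The layout is hypothetical (each row holds [x, y, w, h, conf, class probs...]) and this is a simplified sketch, not the paper's exact per-predictor formulation:

```python
import numpy as np

def yolo_v1_loss(pred, gt, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """Sum-squared YOLO v1-style loss over grid cells.

    pred, gt: (cells, 5 + C) arrays of [x, y, w, h, conf, class probs...].
    obj_mask: (cells,) bool, True where the predictor is responsible
              for an object.
    """
    noobj = ~obj_mask
    # Coordinate terms, up-weighted by lambda_coord.
    xy = ((pred[obj_mask, :2] - gt[obj_mask, :2]) ** 2).sum()
    # Square roots soften the large-box vs. small-box penalty gap.
    wh = ((np.sqrt(pred[obj_mask, 2:4]) - np.sqrt(gt[obj_mask, 2:4])) ** 2).sum()
    # Confidence terms: full weight with objects, down-weighted without.
    conf_obj = ((pred[obj_mask, 4] - gt[obj_mask, 4]) ** 2).sum()
    conf_noobj = ((pred[noobj, 4] - gt[noobj, 4]) ** 2).sum()
    # Class probability term, only where an object is present.
    cls = ((pred[obj_mask, 5:] - gt[obj_mask, 5:]) ** 2).sum()
    return (lambda_coord * (xy + wh) + conf_obj
            + lambda_noobj * conf_noobj + cls)
```

Down-weighting the no-object confidence term keeps the many empty cells from drowning out the gradient signal of the few cells that contain objects.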
