Object Detection: Bounding Box Regression Loss Functions

Table of Contents

1 IoU
1.1 Paper
1.2 Concept and Formula
1.3 Limitations
2 GIoU
2.1 Paper
2.2 Concept and Formula
2.3 Limitations
3 DIoU
3.1 Paper
3.2 Concept and Formula
3.3 Limitations
4 CIoU
4.1 Paper
4.2 Concept and Formula
4.3 Limitations
5 EIoU
5.1 Paper
5.2 Concept and Formula
5.3 Code (covers IoU, GIoU, DIoU, CIoU, EIoU, and more)
6 MPDIoU
6.1 Paper
6.2 Concept and Formula
6.3 Code
7 NWDIoU
7.1 Paper
7.2 Concept and Formula
7.3 Code


I have recently been working on improving loss functions, so this post organizes the classic bounding box regression losses together with the ones I have studied recently.

1 IoU

1.1 Paper

UnitBox: An Advanced Object Detection Network

1.2 Concept and Formula

Figure 1: IoU illustration

IoU (Intersection over Union) is the ratio of the intersection to the union of the ground-truth (gt) box and the predicted box, as illustrated in Figure 1. It is computed as:

IoU=\frac{\left | A\cap B \right |}{\left | A\cup B \right |}

where A is the ground-truth box and B is the predicted box.
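
A minimal sketch of this computation in PyTorch (the helper name iou_xyxy is illustrative, not from the cited paper; boxes are given as (x1, y1, x2, y2)):

import torch

def iou_xyxy(a, b, eps=1e-7):
    # Width and height of the intersection, clamped at 0 for non-overlapping boxes
    inter_w = (torch.min(a[2], b[2]) - torch.max(a[0], b[0])).clamp(0)
    inter_h = (torch.min(a[3], b[3]) - torch.max(a[1], b[1])).clamp(0)
    inter = inter_w * inter_h
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + eps)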

1.3 Limitations

  • If the two boxes do not overlap, |A∩B| = 0 and therefore IoU = 0, which says nothing about how far apart they are.

2 GIoU

2.1 Paper

Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression

2.2 Concept and Formula

Figure 2: GIoU illustration

GIoU is illustrated in Figure 2 and is defined as:

GIoU=IoU-\frac{\left | C-A\cup B \right |}{\left | C \right |}

where A is the ground-truth box, B is the predicted box, and C is the smallest enclosing box of A and B. The added penalty term allows the metric to reflect how far apart the two boxes are even when A and B do not overlap.
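
Building on the iou_xyxy sketch above, the penalty term can be written as follows (illustrative only; the complete combined implementation is given in Section 5.3):

def giou_xyxy(a, b, eps=1e-7):
    iou = iou_xyxy(a, b, eps)
    # Smallest enclosing box C of a and b
    cw = torch.max(a[2], b[2]) - torch.min(a[0], b[0])
    ch = torch.max(a[3], b[3]) - torch.min(a[1], b[1])
    c_area = cw * ch + eps
    # Recompute the union for the |C - A∪B| / |C| penalty
    inter = (torch.min(a[2], b[2]) - torch.max(a[0], b[0])).clamp(0) * \
            (torch.min(a[3], b[3]) - torch.max(a[1], b[1])).clamp(0)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return iou - (c_area - union) / c_area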

2.3 Limitations

  • When one box completely encloses the other, GIoU degenerates to IoU (since |C| = |A∪B|).

3 DIoU

3.1 Paper

Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression

3.2 Concept and Formula

Figure 3: DIoU illustration

DIoU is illustrated in Figure 3 and is defined as:

DIoU=IoU-\frac{\rho ^{2}\left ( A,B \right )}{c^{2}}

where A is the ground-truth box, B is the predicted box, and C is the smallest enclosing box of A and B. ρ²(A, B) is the squared Euclidean distance between the center points of A and B, and c is the diagonal length of the enclosing box C.
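
The corresponding sketch for the center-distance penalty (illustrative; see Section 5.3 for the full implementation):

def diou_xyxy(a, b, eps=1e-7):
    iou = iou_xyxy(a, b, eps)
    # Squared distance between the two box centers
    rho2 = ((a[0] + a[2]) - (b[0] + b[2])) ** 2 / 4 + ((a[1] + a[3]) - (b[1] + b[3])) ** 2 / 4
    # Squared diagonal length of the smallest enclosing box
    cw = torch.max(a[2], b[2]) - torch.min(a[0], b[0])
    ch = torch.max(a[3], b[3]) - torch.min(a[1], b[1])
    c2 = cw ** 2 + ch ** 2 + eps
    return iou - rho2 / c2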

3.3 Limitations

  • When the center points of the two boxes coincide, DIoU degenerates to IoU.

  • It does not drive the width and height to converge (the losses above share this issue; it is mentioned here because CIoU and EIoU below are designed mainly to address it).

4 CIoU

4.1 Paper

The paper argues that the regression loss should account for three geometric factors: 1) overlap area; 2) center-point distance; 3) aspect ratio.

Enhancing Geometric Factors in Model Learning and Inference for Object Detection and Instance Segmentation

4.2 Concept and Formula

Figure 4: CIoU illustration

CIoU is illustrated in Figure 4 and is defined as:

CIoU=IoU-\frac{\rho ^{2}\left ( A,B \right )}{c^{2}}-\alpha V

V=\frac{4}{\pi ^{2}}\left ( \arctan \frac{w^{A}}{h^{A}} -\arctan \frac{w^{B}}{h^{B}} \right )^{2}

\alpha =\frac{V}{1-IoU+V}

where A is the ground-truth box, B is the predicted box, and C is the smallest enclosing box of A and B. ρ²(A, B) is the squared distance between the center points of A and B, c is the diagonal length of C, and w, h denote the widths and heights of the two boxes. V measures the consistency of their aspect ratios and is the term that drives the predicted width and height to converge. α is the trade-off weight of this term; it is further gated by the IoU:

\alpha =\begin{cases} 0 & \text{ if } IoU< 0.5 \\ \frac{V}{\left ( 1-IoU \right )+V} & \text{ if } IoU\geq 0.5 \end{cases}

When IoU < 0.5 the two boxes overlap little and the aspect-ratio constraint is less important; when IoU ≥ 0.5 the aspect-ratio term is added to the loss. The higher the overlap, the closer α gets to 1, i.e., the more weight the aspect-ratio term carries.
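
A small sketch of how V and α behave under these formulas (names and values are illustrative; the full loss implementation is in Section 5.3):

import math
import torch

def aspect_ratio_term(w_gt, h_gt, w_pred, h_pred):
    # V from the CIoU formula: 0 when the aspect ratios match, larger as they diverge
    return (4 / math.pi ** 2) * (torch.atan(w_gt / h_gt) - torch.atan(w_pred / h_pred)) ** 2

v_same = aspect_ratio_term(torch.tensor(3.), torch.tensor(2.),
                           torch.tensor(6.), torch.tensor(4.))  # both 3:2 -> V = 0
v_diff = aspect_ratio_term(torch.tensor(3.), torch.tensor(2.),
                           torch.tensor(2.), torch.tensor(3.))  # 3:2 vs 2:3 -> V ≈ 0.063
iou = torch.tensor(0.7)
alpha = v_diff / ((1 - iou) + v_diff)  # weight used when IoU >= 0.5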

4.3 Limitations

  • When the predicted box and the ground-truth box have the same aspect ratio, the aspect-ratio penalty is always 0 even if their sizes differ (e.g., when both boxes have a 3:2 aspect ratio, V = 0, as the sketch above also shows).

  • The gradients of V with respect to w and h have opposite signs, so w and h cannot both increase or both decrease at the same time.

5 EIoU

5.1 Paper

Focal and Efficient IOU Loss for Accurate Bounding Box Regression

5.2 Concept and Formula

Figure 5: EIoU illustration

EIoU is illustrated in Figure 5 and is defined as:

EIoU=IoU-\frac{\rho ^{2}\left ( A,B \right )}{c^{2}}-\frac{\rho ^{2}\left ( w^{A},w^{B} \right )}{\left ( w^{C} \right )^{2}}-\frac{\rho ^{2}\left ( h^{A},h^{B} \right )}{\left ( h^{C} \right )^{2}}

where A is the ground-truth box, B is the predicted box, and C is the smallest enclosing box of A and B. ρ²(A, B) is the squared distance between the center points of A and B, c is the diagonal length of C, and w, h denote the widths and heights of A, B and C. \rho ^{2}\left ( w^{A},w^{B} \right ) and \rho ^{2}\left ( h^{A},h^{B} \right ) are the squared difference of the widths and the squared difference of the heights, respectively.

5.3 Code (covers IoU, GIoU, DIoU, CIoU, EIoU, and more)

import math

import torch


def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, SIoU=False, EIoU=False, WIoU=False, Focal=False, alpha=1, gamma=0.5, scale=False, eps=1e-7):
    # Returns the IoU (or the selected IoU variant) of box1 to box2. box1 is 4, box2 is nx4
    box2 = box2.T

    # Get the coordinates of bounding boxes
    if x1y1x2y2:  # x1, y1, x2, y2 = box1
        b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
        b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
    else:  # transform from xywh to xyxy
        b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
        b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
        b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
        b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2

    # Intersection area
    inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
            (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)

    # Union Area
    w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
    w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
    union = w1 * h1 + w2 * h2 - inter + eps
    if scale:
        # WIoU_Scale is a helper class from the original Wise-IoU implementation (not shown
        # here); it tracks a running statistic of the WIoU loss for the focusing term.
        self = WIoU_Scale(1 - (inter / union))

    # IoU
    # iou = inter / union # ori iou
    iou = torch.pow(inter/(union + eps), alpha) # alpha iou
    if CIoU or DIoU or GIoU or EIoU or SIoU or WIoU:
        cw = b1_x2.maximum(b2_x2) - b1_x1.minimum(b2_x1)  # convex (smallest enclosing box) width
        ch = b1_y2.maximum(b2_y2) - b1_y1.minimum(b2_y1)  # convex height
        if CIoU or DIoU or EIoU or SIoU or WIoU:  # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
            c2 = (cw ** 2 + ch ** 2) ** alpha + eps  # convex diagonal squared
            rho2 = (((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4) ** alpha  # center dist ** 2
            if CIoU:  # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
                v = (4 / math.pi ** 2) * (torch.atan(w2 / h2) - torch.atan(w1 / h1)).pow(2)
                with torch.no_grad():
                    alpha_ciou = v / (v - iou + (1 + eps))
                if Focal:
                    return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha)), torch.pow(inter/(union + eps), gamma)  # Focal_CIoU
                else:
                    return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha))  # CIoU
            elif EIoU:
                rho_w2 = ((b2_x2 - b2_x1) - (b1_x2 - b1_x1)) ** 2
                rho_h2 = ((b2_y2 - b2_y1) - (b1_y2 - b1_y1)) ** 2
                cw2 = torch.pow(cw ** 2 + eps, alpha)
                ch2 = torch.pow(ch ** 2 + eps, alpha)
                if Focal:
                    return iou - (rho2 / c2 + rho_w2 / cw2 + rho_h2 / ch2), torch.pow(inter/(union + eps), gamma) # Focal_EIou
                else:
                    return iou - (rho2 / c2 + rho_w2 / cw2 + rho_h2 / ch2) # EIou
            elif SIoU:
                # SIoU Loss https://arxiv.org/pdf/2205.12740.pdf
                s_cw = (b2_x1 + b2_x2 - b1_x1 - b1_x2) * 0.5 + eps
                s_ch = (b2_y1 + b2_y2 - b1_y1 - b1_y2) * 0.5 + eps
                sigma = torch.pow(s_cw ** 2 + s_ch ** 2, 0.5)
                sin_alpha_1 = torch.abs(s_cw) / sigma
                sin_alpha_2 = torch.abs(s_ch) / sigma
                threshold = pow(2, 0.5) / 2
                sin_alpha = torch.where(sin_alpha_1 > threshold, sin_alpha_2, sin_alpha_1)
                angle_cost = torch.cos(torch.arcsin(sin_alpha) * 2 - math.pi / 2)
                rho_x = (s_cw / cw) ** 2
                rho_y = (s_ch / ch) ** 2
                gamma_siou = angle_cost - 2  # local factor; renamed to avoid shadowing the 'gamma' argument
                distance_cost = 2 - torch.exp(gamma_siou * rho_x) - torch.exp(gamma_siou * rho_y)
                omiga_w = torch.abs(w1 - w2) / torch.max(w1, w2)
                omiga_h = torch.abs(h1 - h2) / torch.max(h1, h2)
                shape_cost = torch.pow(1 - torch.exp(-1 * omiga_w), 4) + torch.pow(1 - torch.exp(-1 * omiga_h), 4)
                if Focal:
                    return iou - torch.pow(0.5 * (distance_cost + shape_cost) + eps, alpha), torch.pow(inter/(union + eps), gamma) # Focal_SIoU
                else:
                    return iou - torch.pow(0.5 * (distance_cost + shape_cost) + eps, alpha) # SIoU
            elif WIoU:
                if Focal:
                    raise RuntimeError("WIoU does not support Focal.")
                elif scale:
                    return getattr(WIoU_Scale, '_scaled_loss')(self), (1 - iou) * torch.exp((rho2 / c2)), iou # WIoU https://arxiv.org/abs/2301.10051
                else:
                    return iou, torch.exp((rho2 / c2)) # WIoU v1
            if Focal:
                return iou - rho2 / c2, torch.pow(inter/(union + eps), gamma)  # Focal_DIoU
            else:
                return iou - rho2 / c2  # DIoU
        c_area = cw * ch + eps  # convex area
        if Focal:
            return iou - torch.pow((c_area - union) / c_area + eps, alpha), torch.pow(inter/(union + eps), gamma)  # Focal_GIoU https://arxiv.org/pdf/1902.09630.pdf
        else:
            return iou - torch.pow((c_area - union) / c_area + eps, alpha)  # GIoU https://arxiv.org/pdf/1902.09630.pdf
    if Focal:
        return iou, torch.pow(inter/(union + eps), gamma)  # Focal_IoU
    else:
        return iou  # IoU
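
A minimal usage sketch (tensor shapes and values are illustrative; note that the Focal and WIoU branches return tuples, which the caller must unpack):

# Hypothetical usage: one predicted box against n ground-truth boxes, xyxy format
pred_box = torch.tensor([50., 50., 150., 150.])                # shape (4,)
gt_boxes = torch.tensor([[60., 60., 160., 160.],
                         [200., 200., 300., 300.]])            # shape (n, 4)
eiou = bbox_iou(pred_box, gt_boxes, x1y1x2y2=True, EIoU=True)  # shape (n,)
loss = (1.0 - eiou).mean()                                     # EIoU regression loss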

6 MPDIoU

6.1 Paper

MPDIoU: A Loss for Efficient and Accurate Bounding Box Regression

6.2 Concept and Formula

Figure 6: MPDIoU illustration

MPDIoU is illustrated in Figure 6 and is defined as:

MPDIoU=IoU-\frac{d_{1}^{2}}{w^{2}+h^{2}}-\frac{d_{2}^{2}}{w^{2}+h^{2}}

where A is the ground-truth box, B is the predicted box, w and h are the width and height of the input image, and d1 and d2 are the distances between the top-left corners and between the bottom-right corners of the two boxes, respectively.

6.3 Code

def bbox_mpdiou(box1, box2, x1y1x2y2=True, mpdiou_hw=None, grid=None, eps=1e-7):
    # Returns the MPDIoU of box1 to box2. box1 is 4, box2 is nx4
    # mpdiou_hw is the normalizer w**2 + h**2 of the input image (its squared diagonal);
    # grid holds offsets added to the first two box coordinates (e.g. YOLO grid-cell
    # offsets); pass 0 if the boxes are already in absolute coordinates.
    box2 = box2.T
    box1[:2] += grid
    box2[:2] += grid

    # Get the coordinates of bounding boxes
    if x1y1x2y2:  # x1, y1, x2, y2 = box1
        b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
        b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
    else:  # transform from xywh to xyxy
        b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
        b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
        b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
        b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
    
    # Intersection area
    inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
            (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)

    # Union Area
    w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
    w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
    union = w1 * h1 + w2 * h2 - inter + eps

    iou = inter / union
    d1 = (b2_x1 - b1_x1) ** 2 + (b2_y1 - b1_y1) ** 2  # squared distance between top-left corners
    d2 = (b2_x2 - b1_x2) ** 2 + (b2_y2 - b1_y2) ** 2  # squared distance between bottom-right corners
    return iou - d1 / mpdiou_hw - d2 / mpdiou_hw  # MPDIoU
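
A minimal usage sketch (illustrative values; the boxes are assumed to already be in absolute image coordinates, so grid=0, and mpdiou_hw is the squared image diagonal w² + h²):

img_w, img_h = 640, 640
pred_box = torch.tensor([50., 50., 150., 150.])
gt_boxes = torch.tensor([[60., 60., 160., 160.]])
mpdiou = bbox_mpdiou(pred_box, gt_boxes, x1y1x2y2=True,
                     mpdiou_hw=img_w ** 2 + img_h ** 2, grid=0)
loss = (1.0 - mpdiou).mean()  # MPDIoU regression loss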

7 NWDIoU

7.1 Paper

A Normalized Gaussian Wasserstein Distance for Tiny Object Detection

Detecting tiny objects in aerial images: A normalized Wasserstein distance and a new benchmark

(The second paper was published in the ISPRS Journal of Photogrammetry and Remote Sensing; the two papers describe the same method, but the second is more complete.)

7.2 Concept and Formula

This loss mainly targets tiny objects: a tiny object covers only a few pixels, so IoU is extremely sensitive to small positional offsets of its bounding box.

It models each bounding box as a 2-D Gaussian distribution and replaces the IoU-based similarity between the ground-truth box and the predicted box with a distance between the two Gaussian distributions.

(The idea is to take the inscribed ellipse of the box. A 2-D Gaussian is closely related to an ellipse, since its iso-density contours are ellipses; from the ellipse equation one obtains the Gaussian parameters and can then compute the Wasserstein distance between the two Gaussians.)
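
For reference, the formulas from the cited papers: each box A with center (cx^A, cy^A), width w^A and height h^A is modeled as a Gaussian \mathcal{N}_{A} with mean (cx^A, cy^A) and covariance diag((w^A/2)², (h^A/2)²); the squared second-order Wasserstein distance between two such Gaussians and the normalized similarity are:

W_{2}^{2}\left ( \mathcal{N}_{A},\mathcal{N}_{B} \right )=\left \| \left ( cx^{A},cy^{A},\frac{w^{A}}{2},\frac{h^{A}}{2} \right )^{T}-\left ( cx^{B},cy^{B},\frac{w^{B}}{2},\frac{h^{B}}{2} \right )^{T} \right \|_{2}^{2}

NWD\left ( \mathcal{N}_{A},\mathcal{N}_{B} \right )=\exp \left ( -\frac{\sqrt{W_{2}^{2}\left ( \mathcal{N}_{A},\mathcal{N}_{B} \right )}}{C} \right )

where C is a dataset-dependent normalization constant.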

7.3 Code

def wasserstein_loss(pred, target, eps=1e-7, constant=12.8, weight=2):
    r"""Normalized Gaussian Wasserstein Distance (NWD) between bounding boxes,
    following "A Normalized Gaussian Wasserstein Distance for Tiny Object Detection".
    Each box is modeled as a 2-D Gaussian; the function returns the similarity
    exp(-sqrt(W2) / constant), where W2 is the squared Wasserstein distance.
    Args:
        pred (Tensor): Predicted bboxes of format (x_center, y_center, w, h),
            shape (n, 4).
        target (Tensor): Corresponding gt bboxes, shape (n, 4).
        eps (float): Small value to avoid division by zero.
        constant (float): Normalization constant C (dataset dependent; 12.8 is a
            commonly used default).
        weight (float): Divisor applied to w and h (2 corresponds to the w/2, h/2
            terms of the Gaussian covariance).
    Return:
        Tensor: NWD similarity in (0, 1], shape (n,).
    """
    # Squared distance between the box centers
    center1 = pred[:, :2]
    center2 = target[:, :2]
    whs = center1 - center2
    center_distance = whs[:, 0] * whs[:, 0] + whs[:, 1] * whs[:, 1] + eps

    w1 = pred[:, 2] + eps
    h1 = pred[:, 3] + eps
    w2 = target[:, 2] + eps
    h2 = target[:, 3] + eps

    # Squared distance between the (w/2, h/2) terms of the two Gaussians
    wh_distance = ((w1 - w2) ** 2 + (h1 - h2) ** 2) / (weight ** 2)

    wasserstein_2 = center_distance + wh_distance
    return torch.exp(-torch.sqrt(wasserstein_2) / constant)
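
Note that the function returns the NWD similarity, not a loss; a minimal usage sketch (illustrative values) turns it into a regression loss as 1 - NWD:

# Hypothetical usage: pred/gt boxes in (x_center, y_center, w, h) format, shape (n, 4)
pred_boxes = torch.tensor([[100., 100., 20., 20.],
                           [300., 300., 10., 12.]])
gt_boxes = torch.tensor([[102., 101., 22., 18.],
                         [330., 300., 10., 12.]])
nwd = wasserstein_loss(pred_boxes, gt_boxes)  # similarity in (0, 1]
loss = (1.0 - nwd).mean()                     # NWD-based regression loss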