PaddleOCR Text Detection Source Code Study (6): Loss Functions (2)

2021SC@SDUSC
The EAST loss function
Code location: ppocr/losses/det_east_loss.py (note: the class reproduced below is SASTLoss, which is actually defined in ppocr/losses/det_sast_loss.py; its dice-style score loss plus scale-normalized smooth L1 geometry losses follow the same pattern as the EAST loss)

# copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import paddle
from paddle import nn
from .det_basic_loss import DiceLoss
import numpy as np


class SASTLoss(nn.Layer):
    """
    """

    def __init__(self, eps=1e-6, **kwargs):
        super(SASTLoss, self).__init__()
        self.dice_loss = DiceLoss(eps=eps)

    def forward(self, predicts, labels):
        """
        tcl_pos: N x 128 x 3
        tcl_mask: N x 128 x 1
        tcl_label: N x X list or LoDTensor
        """

        f_score = predicts['f_score']
        f_border = predicts['f_border']
        f_tvo = predicts['f_tvo']
        f_tco = predicts['f_tco']

        l_score, l_border, l_mask, l_tvo, l_tco = labels[1:]

        # score loss: dice-style loss on the text score map, computed only inside the training mask
        intersection = paddle.sum(f_score * l_score * l_mask)
        union = paddle.sum(f_score * l_mask) + paddle.sum(l_score * l_mask)
        score_loss = 1.0 - 2 * intersection / (union + 1e-5)

        # border loss: scale-normalized smooth L1 on the 4-channel border offset map
        l_border_split, l_border_norm = paddle.split(
            l_border, num_or_sections=[4, 1], axis=1)
        f_border_split = f_border
        border_ex_shape = l_border_norm.shape * np.array([1, 4, 1, 1])
        l_border_norm_split = paddle.expand(
            x=l_border_norm, shape=border_ex_shape)
        l_border_score = paddle.expand(x=l_score, shape=border_ex_shape)
        l_border_mask = paddle.expand(x=l_mask, shape=border_ex_shape)

        border_diff = l_border_split - f_border_split
        abs_border_diff = paddle.abs(border_diff)
        border_sign = abs_border_diff < 1.0
        border_sign = paddle.cast(border_sign, dtype='float32')
        border_sign.stop_gradient = True
        border_in_loss = 0.5 * abs_border_diff * abs_border_diff * border_sign + \
                    (abs_border_diff - 0.5) * (1.0 - border_sign)
        border_out_loss = l_border_norm_split * border_in_loss
        border_loss = paddle.sum(border_out_loss * l_border_score * l_border_mask) / \
                    (paddle.sum(l_border_score * l_border_mask) + 1e-5)

        # tvo loss: scale-normalized smooth L1 on the 8-channel text vertex offset (TVO) map
        l_tvo_split, l_tvo_norm = paddle.split(
            l_tvo, num_or_sections=[8, 1], axis=1)
        f_tvo_split = f_tvo
        tvo_ex_shape = l_tvo_norm.shape * np.array([1, 8, 1, 1])
        l_tvo_norm_split = paddle.expand(x=l_tvo_norm, shape=tvo_ex_shape)
        l_tvo_score = paddle.expand(x=l_score, shape=tvo_ex_shape)
        l_tvo_mask = paddle.expand(x=l_mask, shape=tvo_ex_shape)
        #
        tvo_geo_diff = l_tvo_split - f_tvo_split
        abs_tvo_geo_diff = paddle.abs(tvo_geo_diff)
        tvo_sign = abs_tvo_geo_diff < 1.0
        tvo_sign = paddle.cast(tvo_sign, dtype='float32')
        tvo_sign.stop_gradient = True
        tvo_in_loss = 0.5 * abs_tvo_geo_diff * abs_tvo_geo_diff * tvo_sign + \
                    (abs_tvo_geo_diff - 0.5) * (1.0 - tvo_sign)
        tvo_out_loss = l_tvo_norm_split * tvo_in_loss
        tvo_loss = paddle.sum(tvo_out_loss * l_tvo_score * l_tvo_mask) / \
                    (paddle.sum(l_tvo_score * l_tvo_mask) + 1e-5)

        # tco loss: scale-normalized smooth L1 on the 2-channel text center offset (TCO) map
        l_tco_split, l_tco_norm = paddle.split(
            l_tco, num_or_sections=[2, 1], axis=1)
        f_tco_split = f_tco
        tco_ex_shape = l_tco_norm.shape * np.array([1, 2, 1, 1])
        l_tco_norm_split = paddle.expand(x=l_tco_norm, shape=tco_ex_shape)
        l_tco_score = paddle.expand(x=l_score, shape=tco_ex_shape)
        l_tco_mask = paddle.expand(x=l_mask, shape=tco_ex_shape)

        tco_geo_diff = l_tco_split - f_tco_split
        abs_tco_geo_diff = paddle.abs(tco_geo_diff)
        tco_sign = abs_tco_geo_diff < 1.0
        tco_sign = paddle.cast(tco_sign, dtype='float32')
        tco_sign.stop_gradient = True
        tco_in_loss = 0.5 * abs_tco_geo_diff * abs_tco_geo_diff * tco_sign + \
                    (abs_tco_geo_diff - 0.5) * (1.0 - tco_sign)
        tco_out_loss = l_tco_norm_split * tco_in_loss
        tco_loss = paddle.sum(tco_out_loss * l_tco_score * l_tco_mask) / \
                    (paddle.sum(l_tco_score * l_tco_mask) + 1e-5)

        # total loss: weighted sum of the four components
        tvo_lw, tco_lw = 1.5, 1.5
        score_lw, border_lw = 1.0, 1.0
        total_loss = score_loss * score_lw + border_loss * border_lw + \
                    tvo_loss * tvo_lw + tco_loss * tco_lw

        losses = {'loss':total_loss, "score_loss":score_loss,\
            "border_loss":border_loss, 'tvo_loss':tvo_loss, 'tco_loss':tco_loss}
        return losses
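
To make the tensor shapes concrete, here is a minimal usage sketch. It is only a sketch: it assumes it is run inside a PaddleOCR checkout, all maps are random placeholders, and the channel counts are the ones implied by the splits in forward above.

import paddle
from ppocr.losses.det_sast_loss import SASTLoss  # assumes the PaddleOCR repo is on the path

N, H, W = 2, 32, 32  # toy batch size and map size

# predictions from the SAST head (channel counts match the code above)
predicts = {
    'f_score':  paddle.rand([N, 1, H, W]),
    'f_border': paddle.rand([N, 4, H, W]),
    'f_tvo':    paddle.rand([N, 8, H, W]),
    'f_tco':    paddle.rand([N, 2, H, W]),
}

# labels[0] is skipped by the slice labels[1:] (during training it holds the input image);
# the border / tvo / tco maps carry one extra norm channel each, hence 5 / 9 / 3 channels
labels = [
    paddle.zeros([N, 3, 128, 128]),                        # image placeholder, never read
    (paddle.rand([N, 1, H, W]) > 0.5).astype('float32'),   # l_score
    paddle.rand([N, 5, H, W]),                             # l_border (4 offsets + norm)
    paddle.ones([N, 1, H, W]),                             # l_mask
    paddle.rand([N, 9, H, W]),                             # l_tvo (8 offsets + norm)
    paddle.rand([N, 3, H, W]),                             # l_tco (2 offsets + norm)
]

loss_fn = SASTLoss()
print({k: float(v) for k, v in loss_fn(predicts, labels).items()})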

The overall loss is:

L = L_s + λ_g · L_g

Here L_s and L_g denote the losses for the score map and the geometry map respectively, and λ_g weighs the relative importance of the two (λ_g = 1 in the paper's experiments).
Most existing methods handle the imbalanced distribution of text targets by balanced sampling and hard negative mining on the training images, which may improve network performance. However, such techniques inevitably introduce an extra stage and more parameters to tune in the pipeline, which contradicts the paper's design principle. To simplify training, the paper instead uses a class-balanced cross-entropy (β = number of negative samples / total number of samples):

L_s = balanced-xent(Ŷ, Y*) = −β · Y* · log Ŷ − (1 − β) · (1 − Y*) · log(1 − Ŷ)

Here Ŷ = F_s is the predicted score map and Y* is the ground truth. The parameter β is the balancing factor between positive and negative samples:

β = 1 − |{y* ∈ Y* : y* = 1}| / |Y*|
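
A minimal numpy sketch of this balanced cross-entropy (the function name and the toy maps below are made up for illustration):

import numpy as np

def balanced_xent(y_pred, y_true, eps=1e-6):
    # beta = number of negative pixels / total pixels, so the rare positive class is up-weighted
    beta = 1.0 - np.mean(y_true)
    loss = -(beta * y_true * np.log(y_pred + eps)
             + (1.0 - beta) * (1.0 - y_true) * np.log(1.0 - y_pred + eps))
    return loss.mean()

# toy example: a single positive pixel in a 4x4 score map
y_true = np.zeros((4, 4)); y_true[1, 1] = 1.0
y_pred = np.full((4, 4), 0.1); y_pred[1, 1] = 0.8
print(balanced_xent(y_pred, y_true))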

One challenge in text detection is that text in natural scene images varies hugely in size; regressing the geometry directly with an L1 or L2 loss would bias the loss towards larger and longer text regions. Therefore, for RBOX regression the IoU loss is used on the AABB part, and for QUAD regression a scale-normalized smooth L1 loss is used.
RBOX loss:

L_AABB = −log IoU(R̂, R*) = −log( |R̂ ∩ R*| / |R̂ ∪ R*| )

RBOX uses the IoU loss for the AABB part because it is invariant to objects of different scales. Here R̂ is the predicted AABB geometry and R* is the corresponding ground-truth box. The width and height of the intersected rectangle are:

w_i = min(d̂_2, d*_2) + min(d̂_4, d*_4)
h_i = min(d̂_1, d*_1) + min(d̂_3, d*_3)

where d_1, d_2, d_3 and d_4 denote the distances from a pixel to the top, right, bottom and left boundary of its corresponding rectangle, respectively. The union area is:

|R̂ ∪ R*| = |R̂| + |R*| − |R̂ ∩ R*|

Hence the intersection and union areas, and therefore the IoU, can be computed easily. Next, the loss on the rotation angle is:

L_θ(θ̂, θ*) = 1 − cos(θ̂ − θ*)

where θ̂ is the predicted rotation angle and θ* is the ground truth. Finally, the overall geometry loss is the weighted sum of the AABB loss and the angle loss:

L_g = L_AABB + λ_θ · L_θ
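
Putting the RBOX pieces together, here is a small numpy sketch of the per-pixel geometry loss (the names are made up; λ_θ defaults to 10, the weight used in the EAST paper):

import numpy as np

def rbox_geometry_loss(d_pred, d_gt, theta_pred, theta_gt, lambda_theta=10.0, eps=1e-6):
    # d_pred / d_gt: (4, H, W) distances to the top (d1), right (d2),
    # bottom (d3) and left (d4) edges; theta_pred / theta_gt: (H, W) rotation angles
    area_pred = (d_pred[0] + d_pred[2]) * (d_pred[1] + d_pred[3])
    area_gt = (d_gt[0] + d_gt[2]) * (d_gt[1] + d_gt[3])
    w_inter = np.minimum(d_pred[1], d_gt[1]) + np.minimum(d_pred[3], d_gt[3])
    h_inter = np.minimum(d_pred[0], d_gt[0]) + np.minimum(d_pred[2], d_gt[2])
    inter = w_inter * h_inter
    union = area_pred + area_gt - inter
    l_aabb = -np.log((inter + eps) / (union + eps))   # IoU loss on the AABB part
    l_theta = 1.0 - np.cos(theta_pred - theta_gt)     # rotation angle loss
    return l_aabb + lambda_theta * l_theta

# sanity check: identical prediction and ground truth give (almost) zero loss
d = np.ones((4, 8, 8)); theta = np.zeros((8, 8))
print(rbox_geometry_loss(d, d, theta, theta).max())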

QUAD loss:

L_g = L_QUAD(Q̂, Q*) = min over Q̃ ∈ P_{Q*} of Σ_{c_i ∈ C_{Q̂}, c̃_i ∈ C_{Q̃}} smoothed-L1(c_i − c̃_i) / (8 · N_{Q*})

Here C_Q is the ordered set of the eight vertex coordinates of a quadrangle, N_{Q*} is the length of the shortest edge of the ground-truth quadrangle (the scale normalizer), and P_{Q*} is the set of all equivalent quadrangles of Q* with different vertex orderings.
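
A numpy sketch of this scale-normalized smooth L1 QUAD loss for a single box (names are illustrative; the minimum over cyclic vertex orderings stands in for the set P_{Q*} of equivalent quadrangles):

import numpy as np

def smooth_l1(x):
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * ax * ax, ax - 0.5)

def quad_loss(q_pred, q_gt):
    # q_pred / q_gt: (4, 2) arrays of quadrangle vertices in order
    edges = np.linalg.norm(q_gt - np.roll(q_gt, -1, axis=0), axis=1)
    n_q = max(edges.min(), 1e-6)              # short edge length N_Q*
    losses = []
    for shift in range(4):                    # equivalent orderings of the GT vertices
        q_shifted = np.roll(q_gt, shift, axis=0)
        losses.append(smooth_l1((q_pred - q_shifted).ravel()).sum() / (8.0 * n_q))
    return min(losses)

q_gt = np.array([[0, 0], [10, 0], [10, 4], [0, 4]], dtype=float)
q_pred = q_gt + 0.5
print(quad_loss(q_pred, q_gt))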
Since the network predicts thousands of geometry boxes, a naive NMS has O(n^2) time complexity, where n is the number of candidates, which is far too slow. The paper therefore merges geometries row by row, based on the assumption that geometries from nearby pixels tend to be highly correlated; while merging geometries within the same row, the currently encountered geometry is iteratively merged with the last merged one, as sketched below. The improved time complexity is O(n) in the best case.
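
A simplified Python sketch of this locality-aware merging (all names are made up; real implementations test polygon IoU, commonly via the lanms library, whereas this sketch uses axis-aligned IoU as a cheap stand-in, and a standard NMS is still run on the short merged list afterwards):

import numpy as np

def weighted_merge(g, p):
    # merge two quads (8 coords + score) by score-weighted averaging of the coordinates
    q = np.zeros(9)
    q[:8] = (g[8] * g[:8] + p[8] * p[:8]) / (g[8] + p[8])
    q[8] = g[8] + p[8]
    return q

def should_merge(g, p, thresh=0.3):
    # cheap stand-in for the polygon IoU test: compare axis-aligned bounding boxes
    def to_box(q):
        xs, ys = q[:8:2], q[1:8:2]
        return xs.min(), ys.min(), xs.max(), ys.max()
    ax1, ay1, ax2, ay2 = to_box(g)
    bx1, by1, bx2, by2 = to_box(p)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return union > 0 and inter / union > thresh

def locality_aware_merge(geometries):
    # single pass over geometries in row-major (pixel) order:
    # each new geometry is merged into the last merged one if they overlap enough
    merged, last = [], None
    for g in geometries:
        if last is not None and should_merge(last, g):
            last = weighted_merge(last, g)
        else:
            if last is not None:
                merged.append(last)
            last = g
    if last is not None:
        merged.append(last)
    return merged

quads = [np.array([0, 0, 10, 0, 10, 4, 0, 4, 0.9]),
         np.array([1, 0, 11, 0, 11, 4, 1, 4, 0.8])]
print(len(locality_aware_merge(quads)))   # -> 1, the two overlapping quads are merged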
