A Summary of How YOLO v3 Works

Paper: YOLOv3: An Incremental Improvement

The main improvements in YOLOv3 are: an adjusted network structure; object detection on multi-scale feature maps; and logistic classifiers replacing softmax for object classification.

The new network structure: Darknet-53

For basic image feature extraction, YOLOv3 uses a network called Darknet-53 (it contains 53 convolutional layers). Borrowing from residual networks, it places shortcut connections between some layers.
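As an illustration (a minimal sketch, not the author's code), one Darknet-53 residual unit can be written in PyTorch like this: a 1x1 convolution halves the channels, a 3x3 convolution restores them, and the shortcut adds the input back.

import torch
import torch.nn as nn

class DarknetResidual(nn.Module):
    """One Darknet-53 residual unit: 1x1 reduce, 3x3 expand, shortcut add."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels // 2),
            nn.LeakyReLU(0.1),
            nn.Conv2d(channels // 2, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return x + self.block(x)  # shortcut connection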

[Figure: YOLOv3 network architecture]

Detecting objects with multi-scale features

YOLOv2 used a passthrough layer to pick up fine-grained features; YOLOv3 goes further and detects objects on feature maps at 3 different scales.

Referring to the figure above: after layer 79, the network passes through several more convolutional layers (the yellow blocks in the lower part of the figure) to produce detections at the first scale. Relative to the input image, this detection feature map is downsampled 32x; for a 416*416 input it is 13*13. Because of the large downsampling factor, each cell of this feature map has a large receptive field, so it suits detecting large objects.

For finer-grained detection, the layer-79 feature map is upsampled and concatenated with the layer-61 feature map, giving a finer-grained feature map at layer 91. After several more convolutional layers this yields a detection map downsampled 16x relative to the input. Its medium receptive field suits detecting medium-sized objects.

Finally, the layer-91 feature map is upsampled again and concatenated with the layer-36 feature map, producing a detection map downsampled 8x relative to the input. It has the smallest receptive field and suits detecting small objects.
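A minimal sketch of this upsample-and-concatenate pattern (illustrative names, not the actual Darknet code): the deep feature map is reduced with a 1x1 convolution, upsampled 2x, and concatenated with an earlier, higher-resolution feature map.

import torch
import torch.nn as nn

class ScaleFusion(nn.Module):
    def __init__(self, deep_ch, skip_ch):
        super().__init__()
        self.reduce = nn.Conv2d(deep_ch, deep_ch // 2, kernel_size=1)
        self.upsample = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, deep, skip):
        # deep: [N, deep_ch, H, W]; skip: [N, skip_ch, 2H, 2W]
        x = self.upsample(self.reduce(deep))
        return torch.cat([x, skip], dim=1)  # [N, deep_ch//2 + skip_ch, 2H, 2W]

fusion = ScaleFusion(deep_ch=512, skip_ch=256)
deep = torch.randn(1, 512, 13, 13)   # 32x-downsampled feature map
skip = torch.randn(1, 256, 26, 26)   # 16x-downsampled map from an earlier layer
print(fusion(deep, skip).shape)      # torch.Size([1, 512, 26, 26])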


9 anchor box sizes

As the number and scale of the output feature maps change, the anchor box sizes need to be adjusted accordingly. YOLOv2 already used k-means clustering to obtain the anchor sizes; YOLOv3 keeps this approach, assigning 3 anchors to each downsampling scale, for 9 clustered sizes in total. On the COCO dataset the 9 anchors are: (10x13), (16x30), (33x23), (30x61), (62x45), (59x119), (116x90), (156x198), (373x326).
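A hedged sketch of how such anchors could be clustered (the idea, not the original YOLO script; wh is assumed to be an [n, 2] NumPy array of ground-truth box widths and heights in pixels):

import numpy as np

def kmeans_anchors(wh, k=9, iters=300, seed=0):
    # k-means over (w, h) pairs using 1 - IoU (with shared centers) as the distance.
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), k, replace=False)].astype(float)
    for _ in range(iters):
        inter = (np.minimum(wh[:, None, 0], anchors[None, :, 0])
                 * np.minimum(wh[:, None, 1], anchors[None, :, 1]))
        union = (wh[:, 0] * wh[:, 1])[:, None] + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
        assign = np.argmax(inter / union, axis=1)          # nearest anchor = highest IoU
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = wh[assign == j].mean(axis=0)  # recompute cluster centers
    return anchors[np.argsort(anchors.prod(axis=1))]       # sorted small -> large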

The assignment: the smallest 13*13 feature map (largest receptive field) uses the largest anchors (116x90), (156x198), (373x326), suited to large objects. The medium 26*26 feature map (medium receptive field) uses the medium anchors (30x61), (62x45), (59x119), suited to medium objects. The larger 52*52 feature map (smallest receptive field) uses the smallest anchors (10x13), (16x30), (33x23), suited to small objects.
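In code form (illustrative variable name), the assignment is just a lookup from feature-map size to its three anchors:

# COCO anchors (w, h) in pixels for a 416*416 input, grouped by detection scale
anchors_by_scale = {
    13: [(116, 90), (156, 198), (373, 326)],  # stride 32, large objects
    26: [(30, 61), (62, 45), (59, 119)],      # stride 16, medium objects
    52: [(10, 13), (16, 30), (33, 23)],       # stride 8, small objects
}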

To get a feel for the 9 anchor sizes: in the figure below, the blue boxes are the clustered anchors, the yellow box is the ground truth, and the red box is the grid cell containing the object's center point.

[Figure: clustered anchor boxes (blue), ground truth (yellow), and the object-center grid cell (red)]

Object classification: logistic instead of softmax

When predicting object classes, YOLOv3 does not use softmax; it predicts from independent logistic outputs instead. This supports multi-label objects (for example, one object can carry both the Woman and Person labels).
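A quick illustration of the difference:

import torch

logits = torch.tensor([2.0, 1.5, -3.0])  # raw scores for, say, [person, woman, car]

# Softmax forces the classes to compete: the probabilities sum to 1,
# so "person" and "woman" cannot both be close to 1.
print(torch.softmax(logits, dim=0))  # roughly tensor([0.6199, 0.3760, 0.0042])

# Independent logistic (sigmoid) outputs allow multiple labels at once.
print(torch.sigmoid(logits))         # roughly tensor([0.8808, 0.8176, 0.0474])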


Mapping input to output

Let's count how many predictions YOLOv3 makes in total. For a 416*416 input image, with 3 anchors per grid cell at each scale, there are 13*13*3 + 26*26*3 + 52*52*3 = 10647 predictions. Each prediction is a (4+1+80) = 85-dimensional vector: box coordinates (4 values), box confidence (1 value), and class probabilities (80 classes for the COCO dataset).
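A quick check of the arithmetic:

scales = [13, 26, 52]                   # grid sizes at strides 32, 16, 8
total = sum(s * s * 3 for s in scales)  # 507 + 2028 + 8112
print(total)                            # 10647 predictions, each an 85-dim vector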

For comparison, YOLOv2 makes 13*13*5 = 845 predictions. YOLOv3 attempts to predict more than ten times as many boxes, and does so at several resolutions, so both mAP and small-object detection improve to some extent.


Data processing

Train

import torch
import torch.nn as nn

class YOLOLayer(nn.Module):
    """Detection layer"""

    def __init__(self, anchors, num_classes, img_dim=416):
        super(YOLOLayer, self).__init__()
        self.anchors = anchors
        self.num_anchors = len(anchors)
        self.num_classes = num_classes
        self.ignore_thres = 0.5
        self.mse_loss = nn.MSELoss()
        self.bce_loss = nn.BCELoss()
        self.obj_scale = 1
        self.noobj_scale = 100
        self.metrics = {}
        self.img_dim = img_dim
        self.grid_size = 0  # grid size

    def compute_grid_offsets(self, grid_size, cuda=True):
        """
        Convert the anchor boxes to grid-cell units and compute how many
        pixels of the resized input image each grid cell covers.
        :param grid_size: width/height of the feature map, in cells
        :param cuda: whether to place the tensors on the GPU
        :return:
        """
        self.grid_size = grid_size
        g = self.grid_size
        FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
        self.stride = self.img_dim / self.grid_size # pixels of the resized image per grid cell
        # Calculate offsets for each grid
        self.grid_x = torch.arange(g).repeat(g, 1).view([1, 1, g, g]).type(FloatTensor) # [1, 1, grid_size, grid_size], x coordinate of each cell, used to recover box centers
        self.grid_y = torch.arange(g).repeat(g, 1).t().view([1, 1, g, g]).type(FloatTensor) # [1, 1, grid_size, grid_size], y coordinate of each cell, used to recover box centers
        self.scaled_anchors = FloatTensor([(a_w / self.stride, a_h / self.stride) for a_w, a_h in self.anchors]) # anchor sizes converted from pixels to grid-cell units, [n_anchorbox, 2]
        self.anchor_w = self.scaled_anchors[:, 0:1].view((1, self.num_anchors, 1, 1)) # anchor widths, [1, n_anchorbox, 1, 1]
        self.anchor_h = self.scaled_anchors[:, 1:2].view((1, self.num_anchors, 1, 1)) # anchor heights, [1, n_anchorbox, 1, 1]

    def forward(self, x, targets=None, img_dim=None):

        # Tensors for cuda support
        FloatTensor = torch.cuda.FloatTensor if x.is_cuda else torch.FloatTensor
        LongTensor = torch.cuda.LongTensor if x.is_cuda else torch.LongTensor
        ByteTensor = torch.cuda.ByteTensor if x.is_cuda else torch.ByteTensor

        if img_dim is not None: # keep the constructor's value when img_dim is not passed
            self.img_dim = img_dim
        num_samples = x.size(0) # batch_size
        grid_size = x.size(2) # the feature map has grid_size*grid_size cells

        prediction = ( # [batch_size, n_anchor_box, grid_size, grid_size, n_classes+5]
            x.view(num_samples, self.num_anchors, self.num_classes + 5, grid_size, grid_size)
            .permute(0, 1, 3, 4, 2)
            .contiguous()
        )

        # Raw network predictions: x, y, w, h, objectness, class scores
        x = torch.sigmoid(prediction[..., 0])  # x offset of the box center from its cell's top-left corner, in grid-cell units, [batch_size, n_anchorbox, grid_size, grid_size]
        y = torch.sigmoid(prediction[..., 1])  # y offset of the box center from its cell's top-left corner, in grid-cell units, [batch_size, n_anchorbox, grid_size, grid_size]
        w = prediction[..., 2]  # raw width term, log-scale relative to the anchor width, [batch_size, n_anchorbox, grid_size, grid_size]
        h = prediction[..., 3]  # raw height term, log-scale relative to the anchor height, [batch_size, n_anchorbox, grid_size, grid_size]
        pred_conf = torch.sigmoid(prediction[..., 4])  # objectness confidence, [batch_size, n_anchorbox, grid_size, grid_size]
        pred_cls = torch.sigmoid(prediction[..., 5:])  # per-class probabilities, [batch_size, n_anchorbox, grid_size, grid_size, n_classes]

        # If grid size does not match current we compute new offsets
        if grid_size != self.grid_size:
            self.compute_grid_offsets(grid_size, cuda=x.is_cuda)

        # Add offset and scale with anchors
        pred_boxes = FloatTensor(prediction[..., :4].shape) # [batch_size, n_anchorbox, grid_size, grid_size, 4], in grid-cell units
        pred_boxes[..., 0] = x.data + self.grid_x
        pred_boxes[..., 1] = y.data + self.grid_y
        pred_boxes[..., 2] = torch.exp(w.data) * self.anchor_w
        pred_boxes[..., 3] = torch.exp(h.data) * self.anchor_h

        output = torch.cat(  # converted back to pixel units, [batch_size, grid_size*grid_size*n_anchorbox, 5+n_classes]
            (
                pred_boxes.view(num_samples, -1, 4) * self.stride,
                pred_conf.view(num_samples, -1, 1),
                pred_cls.view(num_samples, -1, self.num_classes),
            ),
            -1,
        )

        if targets is None:
            return output, 0
        else:
            # iou_scores: [batch_size, n_anchorbox, grid_size, grid_size], IoU between predicted and target boxes
            # class_mask: [batch_size, n_anchorbox, grid_size, grid_size], true where the predicted class is correct
            # obj_mask:   [batch_size, n_anchorbox, grid_size, grid_size], true at the best anchor of each target's cell
            # noobj_mask: [batch_size, n_anchorbox, grid_size, grid_size], true where no object should be predicted
            # tx: [batch_size, n_anchorbox, grid_size, grid_size]
            # ty: [batch_size, n_anchorbox, grid_size, grid_size]
            # tw: [batch_size, n_anchorbox, grid_size, grid_size]
            # th: [batch_size, n_anchorbox, grid_size, grid_size]
            # tcls: [batch_size, n_anchorbox, grid_size, grid_size, n_classes]
            # tconf: [batch_size, n_anchorbox, grid_size, grid_size]
            iou_scores, class_mask, obj_mask, noobj_mask, tx, ty, tw, th, tcls, tconf = build_targets(
                pred_boxes=pred_boxes,
                pred_cls=pred_cls,
                target=targets,
                anchors=self.scaled_anchors,
                ignore_thres=self.ignore_thres,
            )

            # Loss : Mask outputs to ignore non-existing objects (except with conf. loss)
            loss_x = self.mse_loss(x[obj_mask], tx[obj_mask])
            loss_y = self.mse_loss(y[obj_mask], ty[obj_mask])
            loss_w = self.mse_loss(w[obj_mask], tw[obj_mask])
            loss_h = self.mse_loss(h[obj_mask], th[obj_mask])
            loss_conf_obj = self.bce_loss(pred_conf[obj_mask], tconf[obj_mask])
            loss_conf_noobj = self.bce_loss(pred_conf[noobj_mask], tconf[noobj_mask])
            loss_conf = self.obj_scale * loss_conf_obj + self.noobj_scale * loss_conf_noobj
            loss_cls = self.bce_loss(pred_cls[obj_mask], tcls[obj_mask])
            total_loss = loss_x + loss_y + loss_w + loss_h + loss_conf + loss_cls

            # Metrics
            cls_acc = 100 * class_mask[obj_mask].mean() # classification accuracy on object cells
            conf_obj = pred_conf[obj_mask].mean() # mean confidence where objects exist
            conf_noobj = pred_conf[noobj_mask].mean() # mean confidence where no object exists
            conf50 = (pred_conf > 0.5).float() # positions with confidence > 0.5, [batch_size, n_anchorbox, grid_size, grid_size]
            iou50 = (iou_scores > 0.5).float() # positions with IoU > 0.5, [batch_size, n_anchorbox, grid_size, grid_size]
            iou75 = (iou_scores > 0.75).float() # positions with IoU > 0.75, [batch_size, n_anchorbox, grid_size, grid_size]
            detected_mask = conf50 * class_mask * tconf # confidence > 0.5, class predicted correctly, and obj_mask is true
            precision = torch.sum(iou50 * detected_mask) / (conf50.sum() + 1e-16)
            recall50 = torch.sum(iou50 * detected_mask) / (obj_mask.sum() + 1e-16)
            recall75 = torch.sum(iou75 * detected_mask) / (obj_mask.sum() + 1e-16)

            self.metrics = {
                "loss": to_cpu(total_loss).item(),
                "x": to_cpu(loss_x).item(),
                "y": to_cpu(loss_y).item(),
                "w": to_cpu(loss_w).item(),
                "h": to_cpu(loss_h).item(),
                "conf": to_cpu(loss_conf).item(),
                "cls": to_cpu(loss_cls).item(),
                "cls_acc": to_cpu(cls_acc).item(),
                "recall50": to_cpu(recall50).item(),
                "recall75": to_cpu(recall75).item(),
                "precision": to_cpu(precision).item(),
                "conf_obj": to_cpu(conf_obj).item(),
                "conf_noobj": to_cpu(conf_noobj).item(),
                "grid_size": grid_size,
            }

            return output, total_loss

def build_targets(pred_boxes, pred_cls, target, anchors, ignore_thres):
    """

    :param pred_boxes: 预测的box,单位是格子数 [batch_size, n_anchorbox, grid_size, grid_size, 4]
    :param pred_cls: 类别概率, [batch_size, n_anchorbox, grid_size, grid_size, n_classes]
    :param target: [n_boxes, 6], 第二个维度有6个值,分别为: box所属的图片的index, 类别index, x, y, w, h
    :param anchors: [n_anchorbox, 2] ,第二个维度为anchor box的weight和hight
    :param ignore_thres:
    :return:
    """
    ByteTensor = torch.cuda.ByteTensor if pred_boxes.is_cuda else torch.ByteTensor
    FloatTensor = torch.cuda.FloatTensor if pred_boxes.is_cuda else torch.FloatTensor

    nB = pred_boxes.size(0) # batch_size
    nA = pred_boxes.size(1) # n_anchor_box
    nC = pred_cls.size(-1) # n_classes
    nG = pred_boxes.size(2) # grid_size

    # Output tensors
    obj_mask = ByteTensor(nB, nA, nG, nG).fill_(0) # [batch_size, n_anchor_box, grid_size, grid_size]
    noobj_mask = ByteTensor(nB, nA, nG, nG).fill_(1) # [batch_size, n_anchor_box, grid_size, grid_size]
    class_mask = FloatTensor(nB, nA, nG, nG).fill_(0) # [batch_size, n_anchor_box, grid_size, grid_size]
    iou_scores = FloatTensor(nB, nA, nG, nG).fill_(0) # [batch_size, n_anchor_box, grid_size, grid_size]
    tx = FloatTensor(nB, nA, nG, nG).fill_(0) # [batch_size, n_anchor_box, grid_size, grid_size]
    ty = FloatTensor(nB, nA, nG, nG).fill_(0) # [batch_size, n_anchor_box, grid_size, grid_size]
    tw = FloatTensor(nB, nA, nG, nG).fill_(0) # [batch_size, n_anchor_box, grid_size, grid_size]
    th = FloatTensor(nB, nA, nG, nG).fill_(0) # [batch_size, n_anchor_box, grid_size, grid_size]
    tcls = FloatTensor(nB, nA, nG, nG, nC).fill_(0) # [batch_size, n_anchor_box, grid_size, grid_size, n_classes]

    # Convert to position relative to box
    target_boxes = target[:, 2:6] * nG # [n_boxes, 4], convert the boxes' normalized coordinates to grid-cell units; n_boxes is the total number of boxes in the batch
    gxy = target_boxes[:, :2] # target box center coordinates, in grid-cell units, [n_target_box, 2]
    gwh = target_boxes[:, 2:] # target box width and height, in grid-cell units, [n_target_box, 2]
    # Find the anchor box with the highest IoU against each target box
    ious = torch.stack([bbox_wh_iou(anchor, gwh) for anchor in anchors]) # IoU between target boxes and anchor boxes, [n_anchorbox, n_target_box]
    best_ious, best_n = ious.max(0) # best IoU and the index of the best-matching anchor box, [n_target_box], [n_target_box]
    # Separate target values
    b, target_labels = target[:, :2].long().t() # image index of each target box within the batch, and its class index; [n_target_box], [n_target_box]
    gx, gy = gxy.t() # target box center coordinates, in grid-cell units, [n_target_box], [n_target_box]
    gw, gh = gwh.t() # target box width and height, in grid-cell units, [n_target_box], [n_target_box]
    gi, gj = gxy.long().t() # gi: cell index along x, gj: cell index along y, [n_target_box], [n_target_box]
    # Set masks
    obj_mask[b, best_n, gj, gi] = 1 # mark the best anchor at each target's cell as responsible for an object, [batch_size, n_anchorbox, grid_size, grid_size]
    noobj_mask[b, best_n, gj, gi] = 0 # and clear the same positions from the background mask, [batch_size, n_anchorbox, grid_size, grid_size]

    # Set noobj mask to zero where iou exceeds ignore threshold
    for i, anchor_ious in enumerate(ious.t()):
        noobj_mask[b[i], anchor_ious > ignore_thres, gj[i], gi[i]] = 0

    # Coordinates
    tx[b, best_n, gj, gi] = gx - gx.floor() # target x offset of the center within its cell, [batch_size, n_anchorbox, grid_size, grid_size]
    ty[b, best_n, gj, gi] = gy - gy.floor() # target y offset of the center within its cell, [batch_size, n_anchorbox, grid_size, grid_size]
    # Width and height
    tw[b, best_n, gj, gi] = torch.log(gw / anchors[best_n][:, 0] + 1e-16) # target width, log-scale relative to the anchor, [batch_size, n_anchorbox, grid_size, grid_size]
    th[b, best_n, gj, gi] = torch.log(gh / anchors[best_n][:, 1] + 1e-16) # target height, log-scale relative to the anchor, [batch_size, n_anchorbox, grid_size, grid_size]
    # One-hot encoding of label
    tcls[b, best_n, gj, gi, target_labels] = 1 # [batch_size, n_anchorbox, grid_size, grid_size, n_classes]
    # Compute label correctness and iou at best anchor
    class_mask[b, best_n, gj, gi] = (pred_cls[b, best_n, gj, gi].argmax(-1) == target_labels).float() # [batch_size, n_anchorbox, grid_size, grid_size]
    iou_scores[b, best_n, gj, gi] = bbox_iou(pred_boxes[b, best_n, gj, gi], target_boxes, x1y1x2y2=False) # [batch_size, n_anchorbox, grid_size, grid_size]

    tconf = obj_mask.float()
    return iou_scores, class_mask, obj_mask, noobj_mask, tx, ty, tw, th, tcls, tconf
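The code above also references three helpers that are not defined here (to_cpu, bbox_wh_iou, bbox_iou). Minimal sketches consistent with how they are used (the exact originals may differ):

def to_cpu(tensor):
    # Detach from the graph and move to CPU so .item() is safe to call.
    return tensor.detach().cpu()

def bbox_wh_iou(wh1, wh2):
    # IoU between an anchor (w, h) and target (w, h) pairs, assuming shared centers.
    # wh1: [2], wh2: [n, 2] -> [n]
    wh2 = wh2.t()
    w1, h1 = wh1[0], wh1[1]
    w2, h2 = wh2[0], wh2[1]
    inter_area = torch.min(w1, w2) * torch.min(h1, h2)
    union_area = (w1 * h1 + 1e-16) + w2 * h2 - inter_area
    return inter_area / union_area

def bbox_iou(box1, box2, x1y1x2y2=True):
    # IoU between corresponding rows of box1 and box2 (shapes broadcast).
    if not x1y1x2y2:
        # Convert from (cx, cy, w, h) to corner coordinates.
        b1_x1, b1_x2 = box1[:, 0] - box1[:, 2] / 2, box1[:, 0] + box1[:, 2] / 2
        b1_y1, b1_y2 = box1[:, 1] - box1[:, 3] / 2, box1[:, 1] + box1[:, 3] / 2
        b2_x1, b2_x2 = box2[:, 0] - box2[:, 2] / 2, box2[:, 0] + box2[:, 2] / 2
        b2_y1, b2_y2 = box2[:, 1] - box2[:, 3] / 2, box2[:, 1] + box2[:, 3] / 2
    else:
        b1_x1, b1_y1, b1_x2, b1_y2 = box1[:, 0], box1[:, 1], box1[:, 2], box1[:, 3]
        b2_x1, b2_y1, b2_x2, b2_y2 = box2[:, 0], box2[:, 1], box2[:, 2], box2[:, 3]
    inter_w = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(min=0)
    inter_h = (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(min=0)
    inter_area = inter_w * inter_h
    b1_area = (b1_x2 - b1_x1) * (b1_y2 - b1_y1)
    b2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1)
    return inter_area / (b1_area + b2_area - inter_area + 1e-16)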


The code above defines the loss function, which consists of three parts: the coordinate loss, the confidence loss, and the classification loss.
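Matching total_loss in the code above:

$$
\mathcal{L} = \underbrace{\mathcal{L}_x + \mathcal{L}_y + \mathcal{L}_w + \mathcal{L}_h}_{\text{coordinates (MSE)}}
+ \underbrace{\lambda_{\text{obj}}\,\mathcal{L}_{\text{conf}}^{\text{obj}} + \lambda_{\text{noobj}}\,\mathcal{L}_{\text{conf}}^{\text{noobj}}}_{\text{confidence (BCE)}}
+ \underbrace{\mathcal{L}_{\text{cls}}}_{\text{classification (BCE)}}
$$

where the coordinate terms are MSE losses evaluated only at obj_mask positions, and this implementation weights the confidence terms with λ_obj = 1 (obj_scale) and λ_noobj = 100 (noobj_scale).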

After the boxes are predicted, post-processing proceeds as follows:

1. Keep the boxes whose confidence exceeds a threshold.

2. Compute each box's score: score = confidence * class probability.

3. Non-maximum suppression (NMS), carried out as follows (a code sketch follows the list):

  • Sort all boxes by score in descending order;
  • Take the box with the highest score and compute its IoU with each remaining box;
  • If the IoU exceeds a threshold and the box's predicted label matches that of the highest-scoring box, mark it as invalid;
  • Remove the invalid boxes from the candidate list;
  • Using the invalid boxes' confidences as weights, refine the x, y, w, h of the highest-scoring box with a weighted average;
  • Append the refined box to the list of result boxes;
  • Repeat until no candidates remain.
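A sketch of these steps for a single image, assuming the boxes have already been converted to corner format (x1, y1, x2, y2) and reusing the bbox_iou helper defined above (illustrative, not the exact original):

def non_max_suppression(pred, conf_thres=0.5, nms_thres=0.4):
    # pred: [n, 5 + n_classes] rows of (x1, y1, x2, y2, objectness, class probabilities...)
    # 1. Keep boxes whose objectness confidence exceeds the threshold.
    pred = pred[pred[:, 4] >= conf_thres]
    if not pred.size(0):
        return pred
    # 2. score = objectness * best class probability; sort in descending order.
    score = pred[:, 4] * pred[:, 5:].max(1)[0]
    pred = pred[(-score).argsort()]
    class_confs, class_preds = pred[:, 5:].max(1, keepdim=True)
    detections = torch.cat((pred[:, :5], class_confs.float(), class_preds.float()), 1)
    # 3. Greedy NMS with confidence-weighted box merging.
    keep = []
    while detections.size(0):
        large_overlap = bbox_iou(detections[0:1, :4], detections[:, :4]) > nms_thres
        label_match = detections[0, -1] == detections[:, -1]
        invalid = large_overlap & label_match  # overlapping boxes with the same label
        weights = detections[invalid, 4:5]  # use their confidences as weights
        detections[0, :4] = (weights * detections[invalid, :4]).sum(0) / weights.sum()
        keep.append(detections[0])
        detections = detections[~invalid]  # drop the invalid boxes and repeat
    return torch.stack(keep)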
