DETR Paper Reading Notes

Abstract

A new paradigm for object detection. Previous approaches were indirect: they produced objects via anchors or centers, plus offsets regressed relative to those anchors/centers. DETR regresses the final boxes directly, removing these conversion steps.

The key is defining a bipartite matching loss that is used to train the network.

On the COCO dataset, DETR performs on par with Faster R-CNN: better on large objects but worse on small ones. It requires much longer training and relies on extra auxiliary losses.

Related Work

Set prediction: directly predicting a set. The loss must be invariant to the ordering of the predictions, and is usually built on the Hungarian matching algorithm. DETR borrows this idea; a quick illustration follows below.
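As a minimal sketch, SciPy's linear_sum_assignment solves exactly this assignment problem (the cost matrix below is made up for illustration):

import numpy as np
from scipy.optimize import linear_sum_assignment

# 3 predictions vs. 2 ground-truth objects; cost[i, j] is the cost of
# matching prediction i to ground truth j (lower is better).
cost = np.array([[0.9, 0.1],
                 [0.2, 0.8],
                 [0.5, 0.5]])

row_ind, col_ind = linear_sum_assignment(cost)
print(row_ind, col_ind)              # expected: [0 1] [1 0]
print(cost[row_ind, col_ind].sum())  # minimal total cost: 0.1 + 0.2 = 0.3

Because the matching minimizes a total cost over all pairings, permuting the predictions permutes the indices but leaves the matched loss unchanged, which is what makes the loss order-independent.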

Transformers: built on self-attention, they offer a global receptive field, longer-range memory, and parallel inference, and have already replaced RNNs in NLP. DETR borrows them as well.
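A tiny sketch of what self-attention buys, using PyTorch's nn.MultiheadAttention (shapes only, no task):

import torch
import torch.nn as nn

# Every token attends to every other token in a single step, so the
# receptive field is global and the computation is fully parallel.
attn = nn.MultiheadAttention(embed_dim=256, num_heads=8)
tokens = torch.randn(50, 1, 256)   # [seq_len, batch, embed_dim]
out, weights = attn(tokens, tokens, tokens)
print(out.shape)      # torch.Size([50, 1, 256])
print(weights.shape)  # torch.Size([1, 50, 50]): each token vs. every token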

There have already been attempts in this direction: set-based losses, learnable NMS, and RNN-based detectors.

The DETR Model

  1. Loss definition
    Design a cost function and use the Hungarian matching algorithm to obtain the optimal assignment, which gives the correspondence between predictions and ground truth.
    Based on this correspondence, compute the usual object detection losses: a classification loss and a box regression loss.
    The matcher code (HungarianMatcher.forward in the official repo, with imports added for completeness):
import torch
from scipy.optimize import linear_sum_assignment
from util.box_ops import box_cxcywh_to_xyxy, generalized_box_iou  # helpers from the DETR repo

@torch.no_grad()  # the matching itself is not differentiated through
def forward(self, outputs, targets):
    """ Performs the matching
    Params:
        outputs: This is a dict that contains at least these entries:
             "pred_logits": Tensor of dim [batch_size, num_queries, num_classes] with the classification logits
             "pred_boxes": Tensor of dim [batch_size, num_queries, 4] with the predicted box coordinates
        targets: This is a list of targets (len(targets) = batch_size), where each target is a dict containing:
             "labels": Tensor of dim [num_target_boxes] (where num_target_boxes is the number of ground-truth
                       objects in the target) containing the class labels
             "boxes": Tensor of dim [num_target_boxes, 4] containing the target box coordinates
    Returns:
        A list of size batch_size, containing tuples of (index_i, index_j) where:
            - index_i is the indices of the selected predictions (in order)
            - index_j is the indices of the corresponding selected targets (in order)
        For each batch element, it holds:
            len(index_i) = len(index_j) = min(num_queries, num_target_boxes)
    """
    bs, num_queries = outputs["pred_logits"].shape[:2]

    # We flatten to compute the cost matrices in a batch
    out_prob = outputs["pred_logits"].flatten(0, 1).softmax(-1)  # [batch_size * num_queries, num_classes]
    out_bbox = outputs["pred_boxes"].flatten(0, 1)  # [batch_size * num_queries, 4]

    # Also concat the target labels and boxes
    tgt_ids = torch.cat([v["labels"] for v in targets])
    tgt_bbox = torch.cat([v["boxes"] for v in targets])

    # Compute the classification cost. Contrary to the loss, we don't use the NLL,
    # but approximate it in 1 - proba[target class].
    # The 1 is a constant that doesn't change the matching, it can be omitted.
    cost_class = -out_prob[:, tgt_ids]

    # Compute the L1 cost between boxes
    cost_bbox = torch.cdist(out_bbox, tgt_bbox, p=1)

    # Compute the giou cost between boxes
    cost_giou = -generalized_box_iou(box_cxcywh_to_xyxy(out_bbox), box_cxcywh_to_xyxy(tgt_bbox))

    # Final cost matrix: weighted sum of the three costs (weights set in __init__)
    C = self.cost_bbox * cost_bbox + self.cost_class * cost_class + self.cost_giou * cost_giou
    C = C.view(bs, num_queries, -1).cpu()

    # Solve one assignment problem per image by splitting along the target axis
    sizes = [len(v["boxes"]) for v in targets]
    indices = [linear_sum_assignment(c[i]) for i, c in enumerate(C.split(sizes, -1))]
    return [(torch.as_tensor(i, dtype=torch.int64), torch.as_tensor(j, dtype=torch.int64)) for i, j in indices]
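
Given the matched indices, the losses themselves are straightforward. Below is a simplified sketch of the idea, not the official implementation: the real SetCriterion in the DETR repo additionally down-weights the no-object class and adds auxiliary losses from intermediate decoder layers. The 5.0/2.0 weights are DETR's default loss coefficients.

import torch
import torch.nn.functional as F
from util.box_ops import box_cxcywh_to_xyxy, generalized_box_iou  # helpers from the DETR repo

def simple_set_loss(outputs, targets, indices, no_object_class):
    """Illustrative loss: CE over all queries, L1 + GIoU on matched boxes."""
    losses = []
    for b, (src_idx, tgt_idx) in enumerate(indices):
        logits = outputs["pred_logits"][b]   # [num_queries, num_classes]
        boxes = outputs["pred_boxes"][b]     # [num_queries, 4]
        # Every query is classified; unmatched queries get the "no object" label.
        target_classes = torch.full((logits.shape[0],), no_object_class,
                                    dtype=torch.int64, device=logits.device)
        target_classes[src_idx] = targets[b]["labels"][tgt_idx]
        loss_ce = F.cross_entropy(logits, target_classes)
        # Box losses are computed only on the matched pairs.
        src_boxes = boxes[src_idx]
        tgt_boxes = targets[b]["boxes"][tgt_idx]
        loss_bbox = F.l1_loss(src_boxes, tgt_boxes, reduction="mean")
        loss_giou = (1 - torch.diag(generalized_box_iou(
            box_cxcywh_to_xyxy(src_boxes), box_cxcywh_to_xyxy(tgt_boxes)))).mean()
        losses.append(loss_ce + 5.0 * loss_bbox + 2.0 * loss_giou)
    return torch.stack(losses).mean()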

    For comparison, recall the loss design of CenterHead-style object detection:
    focal loss + L1 loss

    Focal loss balances the heavy imbalance between positive and negative samples (see the linked CSDN post 清晰易懂的Focal Loss原理解释_视学算法的博客 for a detailed explanation).
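A minimal sketch of the standard focal loss for the binary case, following the Lin et al. formulation rather than any particular repo (CenterHead-style heads actually use a penalty-reduced heatmap variant of the same idea):

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    logits and targets have the same shape; targets are in {0, 1}.
    The (1 - p_t)^gamma factor down-weights easy examples so training
    focuses on hard ones, countering the positive/negative imbalance.
    """
    prob = logits.sigmoid()
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = prob * targets + (1 - prob) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()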

  2. Network architecture

CNN backbone, encoder-decoder Transformer, and FFN heads for the final detections.
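A schematic sketch of that pipeline, assuming the usual DETR shapes (this toy module omits positional encodings and other details of the official implementation):

import torch
import torch.nn as nn
import torchvision

class MiniDETR(nn.Module):
    """Sketch: CNN backbone -> transformer encoder-decoder -> FFN heads."""
    def __init__(self, num_classes, num_queries=100, d_model=256):
        super().__init__()
        backbone = torchvision.models.resnet50()
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # drop pool/fc
        self.input_proj = nn.Conv2d(2048, d_model, kernel_size=1)
        self.transformer = nn.Transformer(d_model, nhead=8,
                                          num_encoder_layers=6, num_decoder_layers=6)
        self.query_embed = nn.Embedding(num_queries, d_model)  # learned object queries
        self.class_head = nn.Linear(d_model, num_classes + 1)  # +1 for "no object"
        self.bbox_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                       nn.Linear(d_model, 4))

    def forward(self, images):                         # images: [B, 3, H, W]
        feat = self.input_proj(self.backbone(images))  # [B, 256, H/32, W/32]
        B, C, H, W = feat.shape
        src = feat.flatten(2).permute(2, 0, 1)         # [H*W, B, 256] image tokens
        tgt = self.query_embed.weight.unsqueeze(1).repeat(1, B, 1)  # [Q, B, 256]
        hs = self.transformer(src, tgt)                # [Q, B, 256]
        return {"pred_logits": self.class_head(hs).transpose(0, 1),            # [B, Q, C+1]
                "pred_boxes": self.bbox_head(hs).transpose(0, 1).sigmoid()}     # [B, Q, 4]

Each of the num_queries decoder outputs is decoded independently into one (class, box) pair, which is what makes the prediction a set rather than an ordered list.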
