DETR

DETR is a new object detection method that treats detection as a set prediction problem, removing the need for hand-designed components such as NMS and anchors. The model is built on a Transformer architecture and is trained with a set-based loss and a bipartite matching strategy. DETR's code and pretrained models are publicly available; its performance on the COCO dataset is on par with a well-tuned Faster R-CNN, and it can be easily extended to panoptic segmentation. This article covers how DETR works, its code implementation, its remaining problems, and potential improvements.

1. Video tutorials:
Bilibili, NetEase Cloud Classroom, Tencent Classroom
2. Code:
Gitee
Github
3. Storage:
Google Cloud
Baidu Cloud:
Extraction code:

《End-to-End Object Detection with Transformers》
—DETR
Authors: Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko
Affiliation:
Venue and date:
Submission history
From: Nicolas Carion
[v1] Tue, 26 May 2020 17:06:38 UTC (6,968 KB)
[v2] Wed, 27 May 2020 13:57:30 UTC (6,968 KB)
[v3] Thu, 28 May 2020 17:37:23 UTC (6,968 KB)


  • Abstract
    We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines. Training code and pretrained models are available at this https URL.

paper

code


1. This paper treats object detection as a set prediction problem.
2. It simplifies the detection pipeline: no anchors and no hand-crafted post-processing such as NMS.
3. The method, named DETR, consists of a set-based loss and a Transformer architecture.
4. The model is conceptually simple and performs well.

DETR's spatial positional encoding is fixed (hard-coded), not learned.
The positional encoding is added at every encoder layer, not only at the first one.
Moreover, it is only added to Q and K; V is left untouched (see the sketch below).
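
A minimal sketch of what "added to Q and K at every layer, but not to V" looks like. This is a simplified stand-in for the encoder layers in the DETR repo, not the actual implementation:

import torch
from torch import nn

class EncoderSelfAttentionSketch(nn.Module):
    """Sketch of a DETR-style encoder self-attention step: the positional
    encoding is re-added to Q and K in every layer, while V stays the plain
    content feature."""
    def __init__(self, d_model=256, nhead=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead)

    def forward(self, src, pos):
        # src: (HW, B, d_model) flattened CNN features
        # pos: (HW, B, d_model) positional encoding, same shape as src
        q = k = src + pos                     # position injected into Q and K only
        out, _ = self.attn(q, k, value=src)   # V is the raw feature, no position added
        return out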

1 Principle Analysis


2 Code Implementation

All of these earlier methods can be grouped under the label dense prediction: we do not know where the objects are, so we first pre-localize candidates and then classify each one.
In two-stage detectors, the location information is made explicit as proposals, and each proposal is classified and refined.

In one-stage detectors, the location information comes from the relative positions on the feature map, and each pixel is classified and refined.

Under this paradigm, detection always needs post-processing (NMS), because several proposals (or several pixels) will end up corresponding to the same instance, and the redundant boxes have to be removed.

Using NMS relies on two prior assumptions (see the sketch after this list):
1. If two boxes are very close to each other, they very likely belong to the same instance.
2. Among boxes belonging to the same instance, the one with the higher classification score also has the better localization quality.
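
To make these assumptions concrete, here is a minimal, self-contained greedy NMS sketch, i.e., the post-processing step that DETR removes (written from scratch for illustration, not taken from any particular codebase):

import torch

def greedy_nms(boxes, scores, iou_thresh=0.5):
    """Minimal NMS sketch. boxes: (N, 4) in xyxy format, scores: (N,)."""
    order = scores.argsort(descending=True)   # assumption 2: trust the highest score
    keep = []
    while order.numel() > 0:
        i = order[0]
        keep.append(i.item())
        if order.numel() == 1:
            break
        rest = boxes[order[1:]]
        # IoU of the top box against the remaining boxes
        lt = torch.max(boxes[i, :2], rest[:, :2])
        rb = torch.min(boxes[i, 2:], rest[:, 2:])
        inter = (rb - lt).clamp(min=0).prod(dim=1)
        area_i = (boxes[i, 2:] - boxes[i, :2]).prod()
        area_rest = (rest[:, 2:] - rest[:, :2]).prod(dim=1)
        iou = inter / (area_i + area_rest - inter)
        # assumption 1: heavily overlapping boxes are the same instance, so drop them
        order = order[1:][iou <= iou_thresh]
    return keep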

Conceptually, this approach is unnatural: humans do not detect objects this way.
Practically, it has many drawbacks: how should the anchors be configured, and what happens when NMS fails?
Remove all of these components, and you get DETR.

Earlier detection algorithms carry a lot of hand-crafted design. With a set prediction loss and a Transformer, DETR achieves a clean end-to-end pipeline.

A set is unordered, so how do we assign targets and compute the loss?
Use Hungarian (bipartite) matching to find the assignment with the lowest cost, and take the loss under that assignment as the final loss. A minimal example is shown below.
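
A minimal, self-contained example of this idea: build a cost matrix between predictions and ground-truth objects, and let scipy's linear_sum_assignment find the assignment with the lowest total cost. The numbers here are made up; DETR's HungarianMatcher (shown later) fills the matrix with real class, L1, and GIoU costs:

import numpy as np
from scipy.optimize import linear_sum_assignment

# toy cost matrix: 4 predictions (rows) vs 2 ground-truth objects (columns);
# entry (i, j) is the cost of assigning prediction i to target j
cost = np.array([
    [0.9, 0.2],
    [0.1, 0.8],
    [0.7, 0.6],
    [0.3, 0.4],
])

pred_idx, tgt_idx = linear_sum_assignment(cost)  # minimizes the total cost
print(pred_idx, tgt_idx)              # [0 1] [1 0]: pred 0 -> gt 1, pred 1 -> gt 0
print(cost[pred_idx, tgt_idx].sum())  # 0.3, the lowest achievable total cost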

Object queries are a learnable tensor of shape (num, b, dim):
num is a hand-chosen value, much larger than the number of objects in any image (100 by default);
b is the batch size;
dim is the embedding dimension used inside the attention layers.
What the queries end up learning behaves much like anchors; see the shape sketch below.
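
A minimal sketch of the shapes involved, using the same names as the demo code later in this section; num_queries, hidden_dim, and batch_size are just example values:

import torch
from torch import nn

num_queries, hidden_dim, batch_size = 100, 256, 2

# learnable object queries, analogous to self.query_pos in the demo below
query_pos = nn.Parameter(torch.rand(num_queries, hidden_dim))

# broadcast over the batch -> shape (num, b, dim) expected by nn.Transformer's decoder
tgt = query_pos.unsqueeze(1).repeat(1, batch_size, 1)
print(tgt.shape)  # torch.Size([100, 2, 256])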

Reference: Vision Transformer , 通用 Vision Backbone 超详细解读 (原理分析+代码解读) (目录)

The model learns what each of its 100 prediction slots is responsible for, i.e., what kind of object each slot should predict.

Think of the object queries as 100 slots, each a 256-dimensional vector. After training, these slots have absorbed position and category information about different objects. For example, the 256-dimensional vector in the first slot might encode position information for the "car" category: statistics accumulated over the training images from car features appearing around certain locations, i.e., position-dependent global statistics about cars.

1. One-to-one assignment (each ground-truth object is matched to exactly one query)
2. Comparisons between pixels (attention lets every position interact with every other)

2.1 Model Construction

2.2 Model Training

2.3 Model Inference

# Forward pass
def forward(self, inputs):
    # CNN (ResNet-50 backbone) forward pass
    x = self.backbone.conv1(inputs)
    x = self.backbone.bn1(x)
    x = self.backbone.relu(x)
    x = self.backbone.maxpool(x)

    x = self.backbone.layer1(x)
    x = self.backbone.layer2(x)
    x = self.backbone.layer3(x)
    x = self.backbone.layer4(x)

    # project channels from 2048 to embed_dim
    h = self.conv(x)

    # positional embedding
    H, W = h.shape[-2:]
    pos = torch.cat([
        self.col_embed[:W].unsqueeze(0).repeat(H, 1, 1),
        self.row_embed[:H].unsqueeze(1).repeat(1, W, 1),
    ], dim=-1).flatten(0, 1).unsqueeze(1)

    # add the positional encoding, flatten, and feed into the transformer
    h = self.transformer(pos + 0.1 * h.flatten(2).permute(2, 0, 1),
                         self.query_pos.unsqueeze(1)).transpose(0, 1)

    # two output heads: class logits and (sigmoid) box coordinates
    return {'pred_logits': self.linear_class(h), 
            'pred_boxes': self.linear_bbox(h).sigmoid()}

When computing the loss, we take the best matching, i.e., the assignment with the lowest cost, and backpropagate the loss under that assignment (a loss sketch follows the matcher code below).

# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
Modules to compute the matching cost and solve the corresponding LSAP.
"""
import torch
from scipy.optimize import linear_sum_assignment
from torch import nn

from util.box_ops import box_cxcywh_to_xyxy, generalized_box_iou


class HungarianMatcher(nn.Module):
    """This class computes an assignment between the targets and the predictions of the network

    For efficiency reasons, the targets don't include the no_object. Because of this, in general,
    there are more predictions than targets. In this case, we do a 1-to-1 matching of the best predictions,
    while the others are un-matched (and thus treated as non-objects).
    """

    def __init__(self, cost_class: float = 1, cost_bbox: float = 1, cost_giou: float = 1):
        """Creates the matcher

        Params:
            cost_class: This is the relative weight of the classification error in the matching cost
            cost_bbox: This is the relative weight of the L1 error of the bounding box coordinates in the matching cost
            cost_giou: This is the relative weight of the giou loss of the bounding box in the matching cost
        """
        super().__init__()
        self.cost_class = cost_class
        self.cost_bbox = cost_bbox
        self.cost_giou = cost_giou
        assert cost_class != 0 or cost_bbox != 0 or cost_giou != 0, "all costs cant be 0"

    @torch.no_grad()
    def forward(self, outputs, targets):
        """ Performs the matching

        Params:
            outputs: This is a dict that contains at least these entries:
                 "pred_logits": Tensor of dim [batch_size, num_queries, num_classes] with the classification logits
                 "pred_boxes": Tensor of dim [batch_size, num_queries, 4] with the predicted box coordinates

            targets: This is a list of targets (len(targets) = batch_size), where each target is a dict containing:
                 "labels": Tensor of dim [num_target_boxes] (where num_target_boxes is the number of ground-truth
                           objects in the target) containing the class labels
                 "boxes": Tensor of dim [num_target_boxes, 4] containing the target box coordinates

        Returns:
            A list of size batch_size, containing tuples of (index_i, index_j) where:
                - index_i is the indices of the selected predictions (in order)
                - index_j is the indices of the corresponding selected targets (in order)
            For each batch element, it holds:
                len(index_i) = len(index_j) = min(num_queries, num_target_boxes)
        """
        bs, num_queries = outputs["pred_logits"].shape[:2]

        # We flatten to compute the cost matrices in a batch
        out_prob = outputs["pred_logits"].flatten(0, 1).softmax(-1)  # [batch_size * num_queries, num_classes]
        out_bbox = outputs["pred_boxes"].flatten(0, 1)  # [batch_size * num_queries, 4]

        # Also concat the target labels and boxes
        tgt_ids = torch.cat([v["labels"] for v in targets])
        tgt_bbox = torch.cat([v["boxes"] for v in targets])

        # Compute the classification cost. Contrary to the loss, we don't use the NLL,
        # but approximate it in 1 - proba[target class].
        # The 1 is a constant that doesn't change the matching, it can be ommitted.
        
        # For each prediction, take its probability for every class present in the GT, forming a cost matrix.
        cost_class = -out_prob[:, tgt_ids]

        # Compute the L1 distance between every predicted box and every GT box, returning a matrix.
        cost_bbox = torch.cdist(out_bbox, tgt_bbox, p=1)

        # Compute the GIoU between every predicted box and every GT box, returning a matrix.
        cost_giou = -generalized_box_iou(box_cxcywh_to_xyxy(out_bbox), box_cxcywh_to_xyxy(tgt_bbox))

        # Combine the terms into the final cost matrix
        C = self.cost_bbox * cost_bbox + self.cost_class * cost_class + self.cost_giou * cost_giou
        C = C.view(bs, num_queries, -1).cpu()
        
        # Per-image matching: split the cost matrix by the number of GT boxes in each image
        sizes = [len(v["boxes"]) for v in targets]
        # Hungarian matching via scipy's linear_sum_assignment, returning the optimal assignment per image
        indices = [linear_sum_assignment(c[i]) for i, c in enumerate(C.split(sizes, -1))]
        return [(torch.as_tensor(i, dtype=torch.int64), torch.as_tensor(j, dtype=torch.int64)) for i, j in indices]


def build_matcher(args):
    return HungarianMatcher(cost_class=args.set_cost_class, cost_bbox=args.set_cost_bbox, cost_giou=args.set_cost_giou)
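
The matcher above only returns the matched indices. Below is a simplified sketch of how those indices could then be turned into a loss for backpropagation; this is not the repo's SetCriterion (which also weights the terms and adds a GIoU loss), and simple_set_loss is a hypothetical helper for illustration:

import torch
import torch.nn.functional as F

def simple_set_loss(outputs, targets, indices, num_classes):
    # Simplified sketch: cross-entropy over all queries, with unmatched queries
    # assigned to the extra "no object" class, plus an L1 loss on matched boxes only.
    bs, num_queries = outputs["pred_logits"].shape[:2]
    device = outputs["pred_logits"].device

    # classification targets: default every query to the no-object class
    target_classes = torch.full((bs, num_queries), num_classes,
                                dtype=torch.int64, device=device)
    for b, (pred_idx, tgt_idx) in enumerate(indices):
        target_classes[b, pred_idx] = targets[b]["labels"][tgt_idx].to(device)
    loss_ce = F.cross_entropy(outputs["pred_logits"].transpose(1, 2), target_classes)

    # box regression: L1 loss on the matched (prediction, target) pairs only
    src_boxes = torch.cat([outputs["pred_boxes"][b, i] for b, (i, _) in enumerate(indices)])
    tgt_boxes = torch.cat([targets[b]["boxes"][j].to(device) for b, (_, j) in enumerate(indices)])
    loss_bbox = F.l1_loss(src_boxes, tgt_boxes)

    return loss_ce + loss_bbox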

from PIL import Image
import requests
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'

import torch
from torch import nn
from torchvision.models import resnet50
import torchvision.transforms as T
torch.set_grad_enabled(False);

class DETRdemo(nn.Module):
    """
    Demo DETR implementation.

    Demo implementation of DETR in minimal number of lines, with the
    following differences wrt DETR in the paper:
    * learned positional encoding (instead of sine)
    * positional encoding is passed at input (instead of attention)
    * fc bbox predictor (instead of MLP)
    The model achieves ~40 AP on COCO val5k and runs at ~28 FPS on Tesla V100.
    Only batch size 1 supported.
    """
    def __init__(self, num_classes, hidden_dim=256, nheads=8,
                 num_encoder_layers=6, num_decoder_layers=6):
        super().__init__()

        # create ResNet-50 backbone
        self.backbone = resnet50()
        del self.backbone.fc

        # create conversion layer
        self.conv = nn.Conv2d(2048, hidden_dim, 1)

        # create a default PyTorch transformer
        self.transformer = nn.Transformer(
            hidden_dim, nheads, num_encoder_layers, num_decoder_layers)

        # prediction heads, one extra class for predicting non-empty slots
        # note that in baseline DETR linear_bbox layer is 3-layer MLP
        self.linear_class = nn.Linear(hidden_dim, num_classes + 1)
        self.linear_bbox = nn.Linear(hidden_dim, 4)

        # output positional encodings (object queries)
        self.query_pos = nn.Parameter(torch.rand(100, hidden_dim))

        # spatial positional encodings
        # note that in baseline DETR we use sine positional encodings
        self.row_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))
        self.col_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))

    def forward(self, inputs):
        # propagate inputs through ResNet-50 up to avg-pool layer
        x = self.backbone.conv1(inputs)
        x = self.backbone.bn1(x)
        x = self.backbone.relu(x)
        x = self.backbone.maxpool(x)

        x = self.backbone.layer1(x)
        x = self.backbone.layer2(x)
        x = self.backbone.layer3(x)
        x = self.backbone.layer4(x)

        # convert from 2048 to 256 feature planes for the transformer
        h = self.conv(x)

        # construct positional encodings
        H, W = h.shape[-2:]
        pos = torch.cat([
            self.col_embed[:W].unsqueeze(0).repeat(H, 1, 1),
            self.row_embed[:H].unsqueeze(1).repeat(1, W, 1),
        ], dim=-1).flatten(0, 1).unsqueeze(1)

        # propagate through the transformer
        h = self.transformer(pos + 0.1 * h.flatten(2).permute(2, 0, 1),
                             self.query_pos.unsqueeze(1)).transpose(0, 1)
        
        # finally project transformer outputs to class labels and bounding boxes
        return {'pred_logits': self.linear_class(h), 
                'pred_boxes': self.linear_bbox(h).sigmoid()}
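
A hedged usage sketch for 2.3 Model Inference: run the demo model on a single image and keep the confident predictions. The checkpoint path 'detr_demo_weights.pth' and the image file 'demo.jpg' are placeholders; the normalization constants are the standard ImageNet ones:

# usage sketch; assumes a compatible checkpoint has been downloaded locally
detr = DETRdemo(num_classes=91)
state_dict = torch.load('detr_demo_weights.pth', map_location='cpu')  # placeholder path
detr.load_state_dict(state_dict)
detr.eval()

transform = T.Compose([
    T.Resize(800),
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

im = Image.open('demo.jpg').convert('RGB')   # placeholder image
img = transform(im).unsqueeze(0)             # batch size 1, as the demo requires

outputs = detr(img)
probas = outputs['pred_logits'].softmax(-1)[0, :, :-1]  # drop the no-object column
keep = probas.max(-1).values > 0.7                      # confidence threshold
boxes = outputs['pred_boxes'][0, keep]                  # normalized cxcywh boxes
print(probas[keep].argmax(-1), boxes)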

Building the targets

def prepare_targets(self, targets):
    new_targets = []
    for targets_per_image in targets:
        h, w = targets_per_image.image_size
        image_size_xyxy = torch.as_tensor([w, h, w, h], dtype=torch.float, device=self.device)
        gt_classes = targets_per_image.gt_classes
        gt_boxes = targets_per_image.gt_boxes.tensor / image_size_xyxy
        gt_boxes = box_xyxy_to_cxcywh(gt_boxes)
        new_targets.append({"labels": gt_classes, "boxes": gt_boxes})
        if self.mask_on and hasattr(targets_per_image, 'gt_masks'):
            gt_masks = targets_per_image.gt_masks
            gt_masks = convert_coco_poly_to_mask(gt_masks.polygons, h, w)
            new_targets[-1].update({'masks': gt_masks})
    return new_targets

The predicted box values all lie in [0, 1],
so the ground-truth box labels must also be scaled to [0, 1]. A small numeric sketch follows.
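
A small numeric example of this normalization, mirroring what prepare_targets does above (box_xyxy_to_cxcywh_demo re-implements util.box_ops.box_xyxy_to_cxcywh so the snippet is self-contained):

import torch

def box_xyxy_to_cxcywh_demo(box):
    # same conversion as util.box_ops.box_xyxy_to_cxcywh, written out for clarity
    x0, y0, x1, y1 = box.unbind(-1)
    return torch.stack([(x0 + x1) / 2, (y0 + y1) / 2, x1 - x0, y1 - y0], dim=-1)

h, w = 480, 640
gt_xyxy = torch.tensor([[64., 120., 320., 360.]])           # absolute pixel coordinates
image_size_xyxy = torch.tensor([w, h, w, h], dtype=torch.float)
gt_norm = gt_xyxy / image_size_xyxy                         # now everything is in [0, 1]
gt_cxcywh = box_xyxy_to_cxcywh_demo(gt_norm)
print(gt_cxcywh)  # tensor([[0.3000, 0.5000, 0.4000, 0.5000]])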

2.4 Model Validation

3 Open Questions

4 Possible Improvements

5 Additional Notes
