YOLOv10 Notes

1. Introduction

YOLOv10 was designed by a team from Tsinghua University.

In this work, we further optimize the performance-latency trade-off of the YOLO series from two angles: post-processing and model architecture. We first introduce consistent dual assignments for NMS-free end-to-end training of YOLOs, which preserves performance while greatly reducing inference latency. In addition, we apply efficiency- and accuracy-driven design strategies to the various components of YOLO, which substantially reduces computational redundancy and enhances model capability.

In the end, we obtain a new real-time end-to-end object detector, YOLOv10. Extensive experiments show that YOLOv10 achieves a state-of-the-art performance-efficiency trade-off across model scales. For example, YOLOv10-S is 1.8x faster than RT-DETR-R18 at similar AP on COCO, with 2.8x fewer parameters and FLOPs. Compared with YOLOv9-C, YOLOv10-B has 46% lower latency and 25% fewer parameters at the same performance.

2. Innovations

1. Consistent Dual Assignments

Unlike one-to-many assignment, one-to-one matching assigns exactly one prediction to each object, which removes the need for NMS post-processing. However, it provides weaker supervision, leading to suboptimal accuracy and slower convergence [19]. The one-to-many strategy, by contrast, introduces more positive samples and richer supervisory signals, which effectively compensates for these weaknesses [23]. We therefore introduce dual label assignments for YOLOs (see the figure in the paper, not reproduced here) to combine the strengths of both strategies; a minimal sketch of the consistent matching metric that ties the two heads together follows.
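
To make the two heads agree, the paper ranks predictions with the same matching metric m = s · p^alpha · IoU^beta for both assignments, so the single one-to-one positive is always among the one-to-many positives. Below is a minimal sketch of this ranking; match_metric is a hypothetical helper, alpha/beta follow the TAL defaults used in ultralytics, and the spatial prior s is taken as 1:

import torch

def match_metric(cls_score, iou, alpha=0.5, beta=6.0):
    # m = s * p**alpha * IoU**beta; the spatial prior s is taken as 1 here
    return cls_score.pow(alpha) * iou.pow(beta)

# hypothetical scores of 5 predictions for one ground-truth object
p   = torch.tensor([0.9, 0.8, 0.7, 0.6, 0.5])   # classification scores
iou = torch.tensor([0.6, 0.9, 0.8, 0.7, 0.5])   # IoU with the GT box
m = match_metric(p, iou)
topk = m.topk(3).indices   # one-to-many head: several positives per object
top1 = m.argmax()          # one-to-one head: a single positive...
assert top1 in topk        # ...which is always among the one-to-many positives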

2. Efficiency- and Accuracy-Driven Model Design

Efficiency-driven model design

(1) Lightweight classification head: the classification branch of the YOLO head carries more redundancy than the regression branch, so it is rebuilt from two depthwise separable convolutions (3x3 depthwise followed by 1x1 pointwise); this is visible in the cv3 definition of v10Detect below, and a parameter-count comparison follows.
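
To see the saving, compare the parameter count of a dense 3x3 convolution against its depthwise separable replacement; a minimal sketch, where the 256-channel width is an arbitrary example:

import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

c = 256
dense = nn.Conv2d(c, c, 3, padding=1)                         # standard 3x3 conv
sep = nn.Sequential(nn.Conv2d(c, c, 3, padding=1, groups=c),  # 3x3 depthwise
                    nn.Conv2d(c, c, 1))                       # 1x1 pointwise
print(n_params(dense), n_params(sep))  # 590080 vs 68352, roughly 8.6x fewer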

(2) Spatial-channel decoupled downsampling (SCDown): instead of one dense stride-2 convolution doing both jobs, a 1x1 pointwise conv first adjusts the channels and a stride-2 depthwise conv then reduces the resolution, cutting parameters and FLOPs; see the sketch below.
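
The repo's SCDown module follows this pattern (its Conv wrapper additionally applies BN and SiLU); below is a minimal self-contained sketch using plain nn.Conv2d, with SCDownSketch being a hypothetical name:

import torch
import torch.nn as nn

class SCDownSketch(nn.Module):
    def __init__(self, c1, c2, k=3, s=2):
        super().__init__()
        self.pw = nn.Conv2d(c1, c2, 1, 1)                      # channel change only
        self.dw = nn.Conv2d(c2, c2, k, s, k // 2, groups=c2)   # per-channel spatial downsample

    def forward(self, x):
        return self.dw(self.pw(x))

x = torch.randn(1, 256, 40, 40)
print(SCDownSketch(256, 512)(x).shape)  # torch.Size([1, 512, 20, 20])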

(3) Rank-guided block design: the numerical rank of each stage's layers is used as a redundancy measure, and the most redundant stages are switched to the compact inverted block (CIB), e.g. the C2fCIB modules in the yaml below; a rough sketch of the rank measurement follows.
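
The redundancy measurement amounts to counting singular values of a flattened conv kernel above a threshold tied to the largest one. A rough sketch of the idea only; numerical_rank is a hypothetical helper and lam an arbitrary stand-in for the paper's threshold:

import torch

def numerical_rank(conv_weight, lam=0.5):
    # Count singular values above lam * sigma_max of the flattened kernel;
    # stages whose convs have low rank are the most redundant candidates
    w = conv_weight.flatten(1)          # (out_ch, in_ch * k * k)
    s = torch.linalg.svdvals(w)
    return int((s > lam * s.max()).sum())

w = torch.randn(256, 128, 3, 3)         # a hypothetical 3x3 conv weight
print(numerical_rank(w), "of", min(w.flatten(1).shape))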

Accuracy-driven model design

(1) Large-kernel convolution: 7x7 depthwise convolutions are used inside the CIB blocks of deep stages (in the smaller model scales) to enlarge the receptive field at low cost.

(2) Partial self-attention (PSA): after the SPPF block, self-attention plus an FFN is applied to only half of the channels, adding global modeling capacity at a modest cost; see the sketch below.
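
A minimal sketch of the idea, not the repo's implementation (which uses its own Attention module); PSASketch and the head count are hypothetical:

import torch
import torch.nn as nn

class PSASketch(nn.Module):
    def __init__(self, c, num_heads=4):
        super().__init__()
        self.c = c // 2
        self.cv1 = nn.Conv2d(c, c, 1)   # mix, then split into two halves
        self.attn = nn.MultiheadAttention(self.c, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Conv2d(self.c, 2 * self.c, 1), nn.SiLU(),
                                 nn.Conv2d(2 * self.c, self.c, 1))
        self.cv2 = nn.Conv2d(c, c, 1)   # fuse the two halves back

    def forward(self, x):
        a, b = self.cv1(x).split(self.c, dim=1)   # only b gets attention
        n, _, h, w = b.shape
        t = b.flatten(2).transpose(1, 2)          # (N, HW, C/2) tokens
        b = b + self.attn(t, t, t)[0].transpose(1, 2).view(n, self.c, h, w)
        b = b + self.ffn(b)
        return self.cv2(torch.cat((a, b), dim=1))

x = torch.randn(1, 512, 20, 20)
print(PSASketch(512)(x).shape)  # torch.Size([1, 512, 20, 20])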

3. Code Walkthrough

1. Network Architecture

Take the yolov10s model as an example; its configuration yaml is shown below.

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  s: [0.33, 0.50, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, SCDown, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, SCDown, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2fCIB, [1024, True, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 1, PSA, [1024]] # 10

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 13

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 19 (P4/16-medium)

  - [-1, 1, SCDown, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2fCIB, [1024, True, True]] # 22 (P5/32-large)

  - [[16, 19, 22], 1, v10Detect, [nc]] # Detect(P3, P4, P5)
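
Assuming the THU-MIG/yolov10 repo is installed (it extends the ultralytics package with a YOLOv10 class), the yaml can be built and run roughly as follows; the image path is a placeholder:

from ultralytics import YOLOv10

# Build from the yaml above (random init); use YOLOv10("yolov10s.pt") for released weights
model = YOLOv10("yolov10s.yaml")
model.info()                        # layer/parameter summary; confirms the 's' scaling

results = model.predict("bus.jpg")  # NMS-free end-to-end inference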

Since the complete architecture diagram is too long, the original post only showed the head portion (diagram not reproduced here).

2. Code Analysis

import copy

import torch
import torch.nn as nn

from ultralytics.nn.modules import Conv, Detect
from ultralytics.utils import ops
from ultralytics.utils.tal import make_anchors


class v10Detect(Detect):

    max_det = -1  # must be set (e.g. 300) before export; -1 means unconfigured

    def __init__(self, nc=80, ch=()):
        super().__init__(nc, ch)
        c3 = max(ch[0], min(self.nc, 100))  # channels
        # Lightweight classification head: two depthwise separable convs
        # (3x3 depthwise with groups=channels, then 1x1 pointwise),
        # followed by a plain 1x1 classifier
        self.cv3 = nn.ModuleList(
            nn.Sequential(
                nn.Sequential(Conv(x, x, 3, g=x), Conv(x, c3, 1)),
                nn.Sequential(Conv(c3, c3, 3, g=c3), Conv(c3, c3, 1)),
                nn.Conv2d(c3, self.nc, 1),
            )
            for x in ch
        )
        # One-to-one branch starts as a copy of the one-to-many head
        self.one2one_cv2 = copy.deepcopy(self.cv2)
        self.one2one_cv3 = copy.deepcopy(self.cv3)
    
    def forward(self, x):
        # The one-to-one branch sees detached features, so its gradient does
        # not interfere with the one-to-many branch during training
        one2one = self.forward_feat([xi.detach() for xi in x], self.one2one_cv2, self.one2one_cv3)
        if not self.export:
            one2many = super().forward(x)

        if not self.training:
            one2one = self.inference(one2one)
            if not self.export:
                return {"one2many": one2many, "one2one": one2one}
            else:
                # Export path: NMS-free selection of the top max_det predictions,
                # returned as [batch, max_det, 6] = (box[4], score, label)
                assert self.max_det != -1
                boxes, scores, labels = ops.v10postprocess(one2one.permute(0, 2, 1), self.max_det, self.nc)
                return torch.cat([boxes, scores.unsqueeze(-1), labels.unsqueeze(-1)], dim=-1)
        else:
            # Training: both raw outputs feed the dual-assignment losses
            return {"one2many": one2many, "one2one": one2one}

    def inference(self, x):
        # Decode the three feature maps into boxes + class scores
        # (defined in the base Detect class; shown here for reference)
        shape = x[0].shape  # BCHW
        x_cat = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2)
        if self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape

        if self.export and self.format in ("saved_model", "pb", "tflite", "edgetpu", "tfjs"):  # avoid TF FlexSplitV ops
            box = x_cat[:, : self.reg_max * 4]
            cls = x_cat[:, self.reg_max * 4 :]
        else:
            box, cls = x_cat.split((self.reg_max * 4, self.nc), 1)

        if self.export and self.format in ("tflite", "edgetpu"):
            # Precompute normalization factor to increase numerical stability
            # See https://github.com/ultralytics/ultralytics/issues/7371
            grid_h = shape[2]
            grid_w = shape[3]
            grid_size = torch.tensor([grid_w, grid_h, grid_w, grid_h], device=box.device).reshape(1, 4, 1)
            norm = self.strides / (self.stride[0] * grid_size)
            dbox = self.decode_bboxes(self.dfl(box) * norm, self.anchors.unsqueeze(0) * norm[:, :2])
        else:
            dbox = self.decode_bboxes(self.dfl(box), self.anchors.unsqueeze(0)) * self.strides

        y = torch.cat((dbox, cls.sigmoid()), 1)
        return y if self.export else (y, x)

# Post-processing helper from ultralytics/utils/ops.py, called above as ops.v10postprocess
def v10postprocess(preds, max_det, nc=80):
    # preds: [1, 8400, 84] = [batch, anchors, 4 box coords + nc class scores]
    assert 4 + nc == preds.shape[-1]
    boxes, scores = preds.split([4, nc], dim=-1)  # [1,8400,4], [1,8400,80]
    max_scores = scores.amax(dim=-1)  # [1,8400] best class score per anchor
    max_scores, index = torch.topk(max_scores, max_det, axis=-1)  # [1,300] keep top max_det anchors
    index = index.unsqueeze(-1)  # [1,300,1]
    boxes = torch.gather(boxes, dim=1, index=index.repeat(1, 1, boxes.shape[-1]))     # [1,300,4]
    scores = torch.gather(scores, dim=1, index=index.repeat(1, 1, scores.shape[-1]))  # [1,300,80]
    scores, index = torch.topk(scores.flatten(1), max_det, axis=-1)  # [1,300] top (anchor, class) pairs
    labels = index % nc   # [1,300] class id
    index = index // nc   # [1,300] anchor id
    boxes = boxes.gather(dim=1, index=index.unsqueeze(-1).repeat(1, 1, boxes.shape[-1]))  # [1,300,4]
    return boxes, scores, labels

Shapes returned at each stage (batch size 1, 640x640 input):

*** one2one = self.forward_feat([xi.detach() for xi in x], self.one2one_cv2, self.one2one_cv3)  ===>
1*128*80*80 ==> 1*64*80*80 + 1*80*80*80 = 1*144*80*80
1*256*40*40 ==> 1*64*40*40 + 1*80*40*40 = 1*144*40*40
1*512*20*20 ==> 1*64*20*20 + 1*80*20*20 = 1*144*20*20

*** one2one = self.inference(one2one)  ===>  
1*144*6400 + 1*144*1600 + 1*144*400 = 1*144*8400
box: 1*64*8400  ==>  dbox: 1*4*8400
cls: 1*80*8400
output: 1*4*8400 + 1*80*8400 = 1*84*8400

*** boxes, scores, labels = ops.v10postprocess(one2one.permute(0, 2, 1), 300, 80)
1*84*8400 (permuted to 1*8400*84)  ==>  boxes: 1*300*4   scores: 1*300   labels: 1*300

*** torch.cat([boxes, scores.unsqueeze(-1), labels.unsqueeze(-1)], dim=-1) ===>
boxes:1*300*4   scores:1*300   labels:1*300  ==>  1*300*6
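
For completeness, here is a minimal sketch of consuming that [1, 300, 6] tensor; the random tensor stands in for a real exported-model output and the 0.25 confidence threshold is an arbitrary choice:

import torch

out = torch.randn(1, 300, 6)  # stand-in for the exported model's output
boxes, scores, labels = out[0, :, :4], out[0, :, 4], out[0, :, 5]
keep = scores > 0.25          # plain confidence filter; no NMS is needed
for box, score, label in zip(boxes[keep], scores[keep], labels[keep]):
    print(int(label.item()), f"{score.item():.2f}", box.tolist())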

4. References

1. Blog: YOLOv10: Real-Time End-to-End Object Detection (qq.com)

2. Paper: YOLOv10: Real-Time End-to-End Object Detection, arXiv:2405.14458 (https://arxiv.org/abs/2405.14458)

3. Code: THU-MIG/yolov10 (https://github.com/THU-MIG/yolov10)
