[Object Detection] 55. YOLOv8 | Ultralytics, the Team Behind YOLOv5, Strikes Again with Another SOTA


Paper: none yet

Official docs: https://docs.ultralytics.com/

Code: https://github.com/ultralytics/ultralytics

Source: 2023.01 | Ultralytics (the same team as YOLOv5)

1. A Quick Review of the YOLO Series

YOLO (You Only Look Once) is currently a very popular framework for object detection and image segmentation:

  • YOLOv1: proposed in 2015; its new anchor-free formulation and efficient architecture drew wide attention from researchers
  • YOLOv2: proposed in 2016; added BN, anchor boxes, dimension clusters, etc. on top of v1
  • YOLOv3: proposed in 2018; used a more efficient backbone, a feature pyramid, and focal loss
  • YOLOv4: proposed in 2020; introduced many BoF and BoS methods, such as Mosaic data augmentation, an anchor-free detection head, CSP, and PAN
  • YOLOv5: proposed in 2020; focused on improving model performance and added many new features to support tasks such as panoptic segmentation and object tracking
  • YOLOv6: proposed in 2022; aimed YOLO at industrial applications, introduced modules such as RepVGG, CSP, self-distillation, and TAL, and selected the best-performing structure as YOLOv6
  • YOLOv7: proposed in 2022; also designed specifically for real-time detection, it introduced many trainable BoF modules that improve accuracy without adding inference cost

2. Introduction to YOLOv8

YOLOv8 was released by Ultralytics. Its biggest selling point is extensibility: it is not tied to a single algorithm but is designed to be compatible with all previous YOLO versions, so you can switch the YOLO algorithm used for training simply by changing configuration parameters.

YOLOv8 supports the following 3 tasks:

  • object detection
  • instance segmentation
  • image classification

(figures: benchmark results on object detection, instance segmentation, and image classification)

2.1 Installation and Basic Usage

# Option 1: install directly
pip install ultralytics

# Option 2: install from a clone
git clone https://github.com/ultralytics/ultralytics
cd ultralytics
pip install -e '.[dev]'

The commands below use the detection task as an example; for instance segmentation and image classification you only need to change task and the corresponding model (see the examples right after the training commands below).

1. Basic usage:

#  python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

# Use the model
results = model.train(data="coco128.yaml", epochs=3)  # train the model
results = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
success = model.export(format="onnx")  # export the model to ONNX format
# CLI (command line interface)
yolo task=detect    mode=train    model=yolov8n.yaml      args...
          classify       predict        yolov8n-cls.yaml  args...
          segment        val            yolov8n-seg.yaml  args...
                         export         yolov8n.pt        format=onnx  args...
# train
yolo task=detect mode=train model=yolov8n.pt data=coco128.yaml device=0
yolo task=detect mode=train model=yolov8n.pt data=coco128.yaml device='0,1,2,3'
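
As noted above, switching tasks only swaps task, model, and data. A hedged sketch (coco128-seg.yaml and mnist160 are small sample datasets bundled with the package; substitute your own):

# segmentation and classification variants (illustrative)
yolo task=segment  mode=train model=yolov8n-seg.pt data=coco128-seg.yaml
yolo task=classify mode=train model=yolov8n-cls.pt data=mnist160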

2. Training:

# python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

# Train the model
results = model.train(data="coco128.yaml", epochs=100, imgsz=640)
# CLI
yolo task=detect mode=train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640

3. Validation:

# python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.pt")  # load an official model
model = YOLO("path/to/best.pt")  # load a custom model

# Validate the model
results = model.val()  # no arguments needed, dataset and settings remembered
# CLI
yolo task=detect mode=val model=yolov8n.pt  # val official model
yolo task=detect mode=val model=path/to/best.pt  # val custom model

4. Prediction and visualization:

# python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.pt")  # load an official model
model = YOLO("path/to/best.pt")  # load a custom model

# Predict with the model
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
# CLI
yolo task=detect mode=predict model=yolov8n.pt source="https://ultralytics.com/images/bus.jpg"  # predict with official model
yolo task=detect mode=predict model=path/to/best.pt source="https://ultralytics.com/images/bus.jpg"  # predict with custom model
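
To read the predictions back in Python, the returned Results objects expose a boxes attribute; a minimal sketch (attribute names follow the current API and may differ on older versions):

# python: reading predictions (illustrative)
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model("https://ultralytics.com/images/bus.jpg")
for r in results:
    # r.boxes.xyxy: (N, 4) boxes; r.boxes.conf: (N,) scores; r.boxes.cls: (N,) class ids
    print(r.boxes.xyxy, r.boxes.conf, r.boxes.cls)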

5. Model export:

# python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.pt")  # load an official model
model = YOLO("path/to/best.pt")  # load a custom trained

# Export the model
model.export(format="onnx")
# CLI
yolo mode=export model=yolov8n.pt format=onnx  # export official model
yolo mode=export model=path/to/best.pt format=onnx  # export custom trained model


2.2 Ultralytics HUB

A recap of the YOLOv5 data format:

- images
	- train
		- img1.jpg
	- val
		- img2.jpg
- labels
	- train
		- img1.txt
	- val
		- img2.txt

An example of the txt content under labels:

class     x_center       y_center        width          height

45		 0.479492 		0.688771 		0.955609 		0.5955
45		 0.736516 		0.247188 		0.498875 		0.476417
50		 0.637063 		0.732938 		0.494125 		0.510583
45		 0.339438 		0.418896 		0.678875 		0.7815
49		 0.646836 		0.132552 		0.118047 		0.0969375
49		 0.773148 		0.129802 		0.0907344 		0.0972292
49		 0.668297 		0.226906		0.131281 		0.146896
49		 0.642859 		0.0792187 		0.148063 		0.148062

The 5 columns above are the box class id (the COCO category id), the box center x, the box center y, the box width w, and the box height h.

How to convert box coordinates from the COCO format (x_min, y_min, w, h) to the format YOLO expects (x_center, y_center, w, h):

  • all coordinate parameters in YOLO are normalized to (0, 1)
  • x_center and width: x_center = (x_min + w_coco / 2) / img_width, width = w_coco / img_width
  • y_center and height: y_center = (y_min + h_coco / 2) / img_height, height = h_coco / img_height (see the sketch below)
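
A minimal per-box conversion sketch in plain Python (the function name is illustrative):

# python: single-box COCO -> YOLO conversion
def coco_to_yolo(bbox, img_w, img_h):
    # bbox is a COCO box [x_min, y_min, w, h] in pixels;
    # returns normalized [x_center, y_center, w, h]
    x_min, y_min, w, h = bbox
    return [(x_min + w / 2) / img_w,
            (y_min + h / 2) / img_h,
            w / img_w,
            h / img_h]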

2.2.1 Upload Dataset

Ultralytics HUB uses the same dataset organization and label format as YOLOv5.

When uploading your own dataset, first make sure your dataset YAML is in the dataset root, then zip the dataset and upload it to https://hub.ultralytics.com/

Taking ultralytics/hub/coco6.zip as an example: coco6.yaml goes inside the coco6 folder and is zipped together with it for upload; the layout inside the zip mirrors the tree above.

The YAML content follows the same format as YOLOv5:

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path:  # dataset root dir (leave empty for HUB)
train: images/train  # train images (relative to 'path') 8 images
val: images/val  # val images (relative to 'path') 8 images
test:  # test images (optional)

# Classes
names:
  0: person
  1: bicycle
  2: car
  3: motorcycle
  ...

A script to convert COCO-format JSON annotations into YOLOv5-format txt labels (note the center computation: COCO boxes store the top-left corner, so the center is x_min + w/2):

import os
import json
from pathlib import Path


def coco2yolov5(coco_json_path, yolo_txt_path):
    """Convert COCO-format annotations to one YOLOv5-format txt per image."""
    with open(coco_json_path, 'r') as f:
        info = json.load(f)
    coco_anno = info["annotations"]
    coco_images = info["images"]
    for img in coco_images:
        img_info = {
            "file_name": img["file_name"],
            "img_id": img["id"],
            "img_width": img["width"],
            "img_height": img["height"]
        }
        for anno in coco_anno:
            if anno["image_id"] != img_info["img_id"]:
                continue
            category_id = anno["category_id"]
            bbox = anno["bbox"]  # COCO box: [x_min, y_min, w, h] in pixels
            line = str(category_id - 1)  # assumes contiguous 1-based COCO category ids
            txt_name = Path(img_info["file_name"]).stem
            yolo_txt = yolo_txt_path + '{}.txt'.format(txt_name)
            with open(yolo_txt, 'a') as wf:
                # normalized [x_center, y_center, w, h]; the center is the
                # top-left corner plus half the width/height
                yolo_bbox = [
                    round((bbox[0] + bbox[2] / 2) / img_info["img_width"], 6),
                    round((bbox[1] + bbox[3] / 2) / img_info["img_height"], 6),
                    round(bbox[2] / img_info["img_width"], 6),
                    round(bbox[3] / img_info["img_height"], 6),
                ]
                for coord in yolo_bbox:
                    line += ' ' + str(coord)
                wf.write(line + '\n')


if __name__ == "__main__":
    coco_json_path = "part1_all_coco.json"
    yolo_txt_path = "val/"
    if not os.path.exists(yolo_txt_path):
        os.makedirs(yolo_txt_path)
    coco2yolov5(coco_json_path, yolo_txt_path)

Then log in to Ultralytics HUB and upload the dataset.

After training, you can download the Ultralytics App and deploy the model on mobile.

2.3 Main Changes in YOLOv8

Compared with YOLOv5 (anchor-based), YOLOv8 (anchor-free) changes the following:

  • Backbone: v8 keeps the CSP idea but replaces the C3 module of v5 with C2f; the number of blocks per stage changes from [3, 6, 9, 3] to [3, 6, 6, 3]; the SPPF module from v5 is retained
  • Neck: the convolutions in the top-down upsampling stage of v5's PAN-FPN are removed
  • Head: a decoupled head with three branches: cls, reg, and a projection conv (used by DFL)
  • Label assignment: anchor-free, using TAL dynamic matching (without an ATSS warm-up stage in early training); TAL starts from the task-alignment perspective, dynamically selects high-quality anchors as positives according to a designed metric, and folds that metric into the loss design
  • Loss: classification uses cross-entropy (a varifocal loss is also implemented but commented out); regression uses DFL + CIoU loss

Label Assignment:

To work well with NMS, anchor assignment should satisfy two rules:

  • an anchor that does well on both classification and regression should predict a high classification score and a precise location
  • an anchor whose classification and regression are misaligned should get a low ranking score and be suppressed

How TAL does it:

  • it introduces a metric for how well an anchor's classification and regression are aligned: the anchor alignment metric
  • how the metric is computed: $t = s^{\alpha} \times u^{\beta}$, where s is the classification score and u is the IoU; $\alpha$ and $\beta$ control the influence of the two tasks on the alignment metric, and t plays a key role in jointly optimizing the two tasks toward task alignment (a code sketch follows this list)
  • the theoretical advantage: the network dynamically focuses on high-quality anchors (those with higher task alignment) from a joint-optimization perspective
  • how the metric t assigns positives and negatives: for each gt, the m anchors with the largest t are taken as positives, and the remaining anchors as negatives
  • how t is embedded in the classification loss for dynamic assignment:
    • to raise the classification score of well-aligned anchors and lower that of poorly aligned ones, training replaces the anchor's binary label with a normalized $\hat{t}$; the normalization sets the maximum of $\hat{t}$ for each gt to the largest IoU (u) among that gt's anchors
    • the positive classification loss is $L_{cls\_pos} = \sum_{i=1}^{N_{pos}} BCE(s_i, \hat{t}_i)$
    • a focal-style form can further ease the positive/negative imbalance during training: $L_{cls} = \sum_{i=1}^{N_{pos}} |\hat{t}_i - s_i| \, BCE(s_i, \hat{t}_i) + \sum_{j=1}^{N_{neg}} s_j^{\gamma} \, BCE(s_j, 0)$
  • how t is embedded in the regression loss for dynamic assignment:
    • high-quality anchors (good at both classification and regression) benefit optimization, while low-quality anchors can hurt training, so it also matters to focus regression on anchors with large t; the regression loss is therefore reweighted per anchor by $\hat{t}$, redefining the GIoU loss as:
    • $L_{reg} = \sum_{i=1}^{N_{pos}} \hat{t}_i \, L_{GIoU}(b_i, \hat{b}_i)$
    • the total TAL training loss is $L_{cls} + L_{reg}$
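
A minimal PyTorch sketch of the alignment metric and its normalization, for one gt (the alpha/beta/topk defaults follow the TaskAlignedAssigner in the ultralytics code; treat the exact values as assumptions):

# python: alignment-metric sketch (illustrative, not the library implementation)
import torch

def alignment_metric(cls_scores, ious, alpha=0.5, beta=6.0, topk=10):
    # cls_scores: (num_anchors,) predicted probability of the gt class
    # ious: (num_anchors,) IoU of each anchor's predicted box with the gt
    t = cls_scores.pow(alpha) * ious.pow(beta)   # t = s^alpha * u^beta
    pos_idx = t.topk(topk).indices               # top-k aligned anchors become positives
    t_hat = t * ious.max() / (t.max() + 1e-9)    # normalize so max(t_hat) equals max IoU
    return t, t_hat, pos_idx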

Why did YOLOv6, PP-YOLOE, PicoDet, and YOLOv8 all choose TOOD-style assignment?

  • First, the misalignment between classification and localization in one-stage detectors has been raised by many earlier works, and GFL analyzes it at length: the classification and regression heads are trained separately, yet at inference the classification score alone ranks boxes for NMS, so the two tasks are never jointly considered
  • In FCOS, the centerness score is multiplied with the classification score to rank boxes in NMS, but centerness is computed from classification features rather than localization features, so it is not a good measure of localization accuracy
  • In ATSS, for each gt the N anchors whose centers are closest to the gt center are kept, their IoUs with the gt are computed, and anchors with IoU > mean + std are assigned as positives
  • OTA models assignment as an optimal-transport problem: gts are demanders, anchors are suppliers, and the loss is the cost of transporting an anchor to a gt; it computes the transport cost of every anchor for every gt, optimizes the transport matrix, and uses dynamic k to pick the k lowest-cost anchors as each gt's positives
  • SimOTA drops the optimization step of OTA and directly picks the k lowest-cost anchors per gt; dynamic k (how many anchors a gt needs) is computed from the IoUs between the anchors and the gt: a gt with many high-quality (high-IoU) anchors gets more positives, while a gt with few good anchors gets fewer, which also avoids pulling in harmful ones
  • In Varifocal, the authors run experiments on both the IoU and centerness branches and show that all the IoU-based methods above bring some gains, but multiplying an independent IoU estimate with the cls score is not optimal: the product of two imperfect predictions can make the NMS ranking worse. Varifocal therefore merges IoU directly into the classification branch, building IACS (IoU-Aware Classification Score) as the classification target (so the classification target is not a binary 0/1 value but a value in 0~1, the IACS), together with Varifocal Loss, which down-weights negatives. Varifocal Loss is $VFL(p, q) = -q\,(q \log p + (1-q) \log(1-p))$ for positives (q > 0, q being the target IoU) and $-\alpha p^{\gamma} \log(1-p)$ for negatives (q = 0); see the sketch after this list
  • In TOOD, the authors argue that the optimal prediction should not simply be the product of the two scores but should have good localization and good classification at the same time: anchors where the two heads agree matter more to the network. TOOD designs T-Head to align the two tasks at the head and a metric for their consistency, which is more comprehensive than all the methods above
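
A minimal sketch of Varifocal Loss as described above (the weighting follows the published formulation; the alpha/gamma defaults are assumptions):

# python: Varifocal Loss sketch
import torch
import torch.nn.functional as F

def varifocal_loss(pred_logits, target_score, alpha=0.75, gamma=2.0):
    # pred_logits: raw classification logits
    # target_score: IACS target (IoU for positives, 0 for negatives)
    p = pred_logits.sigmoid()
    weight = torch.where(target_score > 0,
                         target_score,          # positives: weighted by the target IoU
                         alpha * p.pow(gamma))  # negatives: focally down-weighted
    return (F.binary_cross_entropy_with_logits(
        pred_logits, target_score, reduction='none') * weight).sum()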

3. YOLOv8 in Detail

YOLOv8 comes from the same team as YOLOv5, so it can be viewed as an optimization of YOLOv5 with a very similar structure. Here is a comparison of the two:

| Module | YOLOv8 | YOLOv5 |
| --- | --- | --- |
| Input size | 640x640x3 | 640x640x3 |
| Backbone | C2f (richer gradient flow) | C3 |
| Neck | PA-FPN | PA-FPN |
| Head | decoupled cls/reg head | coupled head |
| Anchor | anchor-free | anchor-based |
| Label assign | TAL | positives/negatives decided by the grid quadrant the gt center falls in |
| Objectness | none | yes |
| Output | 20x20x(4+cls) / 40x40x(4+cls) / 80x80x(4+cls) | 20x20x(5+cls) / 40x40x(5+cls) / 80x80x(5+cls) |

(figure: YOLOv8 architecture)

YOLOv8 module details:

  • the main difference is that YOLOv8 uses the C2f module, whose richer gradient flow is inspired by the E-ELAN module of YOLOv7
  • YOLOv5 uses the C3 module, which splits the gradient flow only once

(figure: C2f vs C3 module structure)

(figure: YOLOv5 architecture)

(figure: YOLOv5 module details)

4. Training YOLOv8 on Your Own Dataset

1. Dataset setup

Create your own dataset yaml under ultralytics/datasets/, point it at your dataset's train, val, and test paths, and change names to your dataset's class names (see the example after the list below):

  • path is the dataset root directory
  • train/val/test are subpaths relative to that root
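
A hedged example of such a dataset file (paths and class names are placeholders for your own data):

# ultralytics/datasets/my_data.yaml (illustrative)
path: /data/my_dataset   # dataset root dir
train: images/train      # train images, relative to 'path'
val: images/val          # val images, relative to 'path'
test:                    # test images (optional)

# Classes
names:
  0: cat
  1: dog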

2. Model configuration

The base configuration file is ultralytics/yolo/cfg/default.yaml, which holds many default parameters. If you pass a higher-priority --cfg at training time, the yaml it points to takes precedence and overrides part of default.yaml.
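
For example, a hedged route for overriding the defaults (recent versions of the ultralytics CLI provide a copy-cfg helper and a cfg= argument; file names here are illustrative):

# copy default.yaml to default_copy.yaml in the current directory, edit it, then train with it
yolo copy-cfg
yolo cfg=default_copy.yaml task=detect mode=train data=coco128.yaml model=yolov8n.pt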

# Ultralytics YOLO 🚀, AGPL-3.0 license
# Default training settings and hyperparameters for medium-augmentation COCO training

task: detect  # YOLO task, i.e. detect, segment, classify, pose
mode: train  # YOLO mode, i.e. train, val, predict, export, track, benchmark

# Train settings -------------------------------------------------------------------------------------------------------
model:  # path to model file, i.e. yolov8n.pt, yolov8n.yaml
data:  # path to data file, i.e. coco128.yaml
epochs: 100  # number of epochs to train for
patience: 50  # epochs to wait for no observable improvement for early stopping of training
batch: 16  # number of images per batch (-1 for AutoBatch)
imgsz: 640  # size of input images as integer or w,h
save: True  # save train checkpoints and predict results
save_period: -1 # Save checkpoint every x epochs (disabled if < 1)
cache: False  # True/ram, disk or False. Use cache for data loading
device:  # device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu
workers: 8  # number of worker threads for data loading (per RANK if DDP)
project:  # project name
name:  # experiment name, results saved to 'project/name' directory
exist_ok: False  # whether to overwrite existing experiment
pretrained: False  # whether to use a pretrained model
optimizer: SGD  # optimizer to use, choices=['SGD', 'Adam', 'AdamW', 'RMSProp']
verbose: True  # whether to print verbose output
seed: 0  # random seed for reproducibility
deterministic: True  # whether to enable deterministic mode
single_cls: False  # train multi-class data as single-class
image_weights: False  # use weighted image selection for training
rect: False  # rectangular training if mode='train' or rectangular validation if mode='val'
cos_lr: False  # use cosine learning rate scheduler
close_mosaic: 0  # (int) disable mosaic augmentation for final epochs
resume: False  # resume training from last checkpoint
amp: True  # Automatic Mixed Precision (AMP) training, choices=[True, False], True runs AMP check
# Segmentation
overlap_mask: True  # masks should overlap during training (segment train only)
mask_ratio: 4  # mask downsample ratio (segment train only)
# Classification
dropout: 0.0  # use dropout regularization (classify train only)

# Val/Test settings ----------------------------------------------------------------------------------------------------
val: True  # validate/test during training
split: val  # dataset split to use for validation, i.e. 'val', 'test' or 'train'
save_json: False  # save results to JSON file
save_hybrid: False  # save hybrid version of labels (labels + additional predictions)
conf:  # object confidence threshold for detection (default 0.25 predict, 0.001 val)
iou: 0.7  # intersection over union (IoU) threshold for NMS
max_det: 300  # maximum number of detections per image
half: False  # use half precision (FP16)
dnn: False  # use OpenCV DNN for ONNX inference
plots: True  # save plots during train/val

# Prediction settings --------------------------------------------------------------------------------------------------
source:  # source directory for images or videos
show: False  # show results if possible
save_txt: False  # save results as .txt file
save_conf: False  # save results with confidence scores
save_crop: False  # save cropped images with results
show_labels: True  # show object labels in plots
show_conf: True  # show object confidence scores in plots
vid_stride: 1  # video frame-rate stride
line_thickness: 3  # bounding box thickness (pixels)
visualize: False  # visualize model features
augment: False  # apply image augmentation to prediction sources
agnostic_nms: False  # class-agnostic NMS
classes:  # filter results by class, i.e. class=0, or class=[0,2,3]
retina_masks: False  # use high-resolution segmentation masks
boxes: True  # Show boxes in segmentation predictions

# Export settings ------------------------------------------------------------------------------------------------------
format: torchscript  # format to export to
keras: False  # use Keras
optimize: False  # TorchScript: optimize for mobile
int8: False  # CoreML/TF INT8 quantization
dynamic: False  # ONNX/TF/TensorRT: dynamic axes
simplify: False  # ONNX: simplify model
opset:  # ONNX: opset version (optional)
workspace: 4  # TensorRT: workspace size (GB)
nms: False  # CoreML: add NMS

# Hyperparameters ------------------------------------------------------------------------------------------------------
lr0: 0.01  # initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
lrf: 0.01  # final learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 7.5  # box loss gain
cls: 0.5  # cls loss gain (scale with pixels)
dfl: 1.5  # dfl loss gain
pose: 12.0  # pose loss gain
kobj: 1.0  # keypoint obj loss gain
label_smoothing: 0.0  # label smoothing (fraction)
nbs: 64  # nominal batch size
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.5  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.0  # image mixup (probability)
copy_paste: 0.0  # segment copy-paste (probability)

# Custom config.yaml ---------------------------------------------------------------------------------------------------
cfg:  # for overriding defaults.yaml

# Debug, do not modify -------------------------------------------------------------------------------------------------
v5loader: False  # use legacy YOLOv5 dataloader

# Tracker settings ------------------------------------------------------------------------------------------------------
tracker: botsort.yaml  # tracker type, ['botsort.yaml', 'bytetrack.yaml']

5. Code

5.1 Training

The training command:

yolo task=detect mode=train data=ultralytics/datasets/data.yaml model=yolov8x.yaml

Or:

from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)

# Train the model with 2 GPUs
model.train(data='coco128.yaml', epochs=100, imgsz=640, device=[0, 1])

Training runs through ultralytics/ultralytics/yolo/engine/model.py (line 342), which calls into ultralytics/ultralytics/yolo/engine/trainer.py (lines 166 and 311):

for i, batch in pbar:
     self.run_callbacks('on_train_batch_start')
     # Warmup
     ni = i + nb * epoch
     if ni <= nw:
         xi = [0, nw]  # x interp
         self.accumulate = max(1, np.interp(ni, xi, [1, self.args.nbs / self.batch_size]).round())
         for j, x in enumerate(self.optimizer.param_groups):
             # Bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
             x['lr'] = np.interp(
                 ni, xi, [self.args.warmup_bias_lr if j == 0 else 0.0, x['initial_lr'] * self.lf(epoch)])
             if 'momentum' in x:
                 x['momentum'] = np.interp(ni, xi, [self.args.warmup_momentum, self.args.momentum])

     # Forward
     with torch.cuda.amp.autocast(self.amp):
         batch = self.preprocess_batch(batch)
         preds = self.model(batch['img'])
         self.loss, self.loss_items = self.criterion(preds, batch)
         if RANK != -1:
             self.loss *= world_size
         self.tloss = (self.tloss * i + self.loss_items) / (i + 1) if self.tloss is not None \
             else self.loss_items

     # Backward
     self.scaler.scale(self.loss).backward()

An example input batch with batch size 4:

{
    'img_file': ['1.jpg', '2.jpg', '3.jpg', '4.jpg'],
    'ori_shape': [[590, 353], [1280, 720], [266, 400], [720, 1280]],
    'resized_shape': [[640, 640], [640, 640], [640, 640], [640, 640]],
    'img': tensor(...),        # (4, 3, 640, 640) input images
    'cls': tensor(...),        # class id of each gt box
    'bboxes': tensor(...),     # normalized xywh gt boxes
    'batch_idx': tensor(...),  # which image in the batch each box belongs to
}
The predictions preds come from ultralytics/ultralytics/nn/tasks.py (line 209) forward(), which calls ultralytics/ultralytics/nn/tasks.py (line 46) _forward_once():

# ultralytics/ultralytics/nn/tasks.py(46)_forward_once()
def _forward_once(self, x, profile=False, visualize=False):
    """
    Perform a forward pass through the network.

    Args:
        x (torch.Tensor): The input tensor to the model
        profile (bool):  Print the computation time of each layer if True, defaults to False.
        visualize (bool): Save the feature maps of the model if True, defaults to False

    Returns:
        (torch.Tensor): The last output of the model.
    """
    y, dt = [], []  # outputs
    for m in self.model:
        if m.f != -1:  # if not from previous layer
            x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers
        if profile:
            self._profile_one_layer(m, x, dt)
        x = m(x)  # run each layer
        y.append(x if m.i in self.save else None)  # save output
        if visualize:
            feature_visualization(x, m.type, m.i, save_dir=visualize)
    return x

The output is three feature levels with sizes:

  • [4, 76, 80, 80]
  • [4, 76, 40, 40]
  • [4, 76, 20, 20]

Loss computation lives in ultralytics/ultralytics/yolo/v8/detect/train.py (line 218):

# cls loss
# loss[1] = self.varifocal_loss(pred_scores, target_scores, target_labels) / target_scores_sum  # VFL way
loss[1] = self.bce(pred_scores, target_scores.to(dtype)).sum() / target_scores_sum  # BCE

# bbox loss
if fg_mask.sum():
    target_bboxes /= stride_tensor
    loss[0], loss[2] = self.bbox_loss(pred_distri, pred_bboxes, anchor_points, target_bboxes, target_scores,
                                      target_scores_sum, fg_mask)

loss[0] *= self.hyp.box  # box gain
loss[1] *= self.hyp.cls  # cls gain
loss[2] *= self.hyp.dfl  # dfl gain
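
For reference, a minimal sketch of the DFL term inside the bbox loss above: the regression target is a continuous distance in [0, reg_max - 1], and DFL is a cross-entropy over the discretized distance distribution with soft weights on the two neighboring bins (the shapes and the reg_max default are assumptions):

# python: Distribution Focal Loss sketch (illustrative, one box side at a time)
import torch
import torch.nn.functional as F

def df_loss(pred_dist, target, reg_max=16):
    # pred_dist: (N, reg_max) logits over discretized distances
    # target: (N,) continuous target distance in [0, reg_max - 1]
    tl = target.long()                    # left (lower) bin index
    tr = (tl + 1).clamp(max=reg_max - 1)  # right (upper) bin index
    wl = tl.float() + 1 - target          # weight on the left bin
    wr = 1.0 - wl                         # weight on the right bin
    return (F.cross_entropy(pred_dist, tl, reduction='none') * wl +
            F.cross_entropy(pred_dist, tr, reduction='none') * wr).mean()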

Layer list and per-layer parameter counts for YOLOv8x:

                   from  n    params  module                                       arguments                     
  0                  -1  1      2320  ultralytics.nn.modules.conv.Conv             [3, 80, 3, 2]                 
  1                  -1  1    115520  ultralytics.nn.modules.conv.Conv             [80, 160, 3, 2]               
  2                  -1  3    436800  ultralytics.nn.modules.block.C2f             [160, 160, 3, True]           
  3                  -1  1    461440  ultralytics.nn.modules.conv.Conv             [160, 320, 3, 2]              
  4                  -1  6   3281920  ultralytics.nn.modules.block.C2f             [320, 320, 6, True]           
  5                  -1  1   1844480  ultralytics.nn.modules.conv.Conv             [320, 640, 3, 2]              
  6                  -1  6  13117440  ultralytics.nn.modules.block.C2f             [640, 640, 6, True]           
  7                  -1  1   3687680  ultralytics.nn.modules.conv.Conv             [640, 640, 3, 2]              
  8                  -1  3   6969600  ultralytics.nn.modules.block.C2f             [640, 640, 3, True]           
  9                  -1  1   1025920  ultralytics.nn.modules.block.SPPF            [640, 640, 5]                 
 10                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 11             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 12                  -1  3   7379200  ultralytics.nn.modules.block.C2f             [1280, 640, 3]                
 13                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 14             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 15                  -1  3   1948800  ultralytics.nn.modules.block.C2f             [960, 320, 3]                 
 16                  -1  1    922240  ultralytics.nn.modules.conv.Conv             [320, 320, 3, 2]              
 17            [-1, 12]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 18                  -1  3   7174400  ultralytics.nn.modules.block.C2f             [960, 640, 3]                 
 19                  -1  1   3687680  ultralytics.nn.modules.conv.Conv             [640, 640, 3, 2]              
 20             [-1, 9]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 21                  -1  3   7379200  ultralytics.nn.modules.block.C2f             [1280, 640, 3]                
 22        [15, 18, 21]  1   8755525  ultralytics.nn.modules.head.Detect           [39, [320, 640, 640]]         
YOLOv8x summary: 365 layers, 68190165 parameters, 68190149 gradients, 258.3 GFLOPs

Model structure:

DetectionModel(
  (model): Sequential(
    (0): Conv(
      (conv): Conv2d(3, 80, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(80, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (1): Conv(
      (conv): Conv2d(80, 160, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (2): C2f(
      (cv1): Conv(
        (conv): Conv2d(160, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (cv2): Conv(
        (conv): Conv2d(400, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): ModuleList(
        (0): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(80, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(80, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(80, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(80, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (1): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(80, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(80, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(80, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(80, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (2): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(80, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(80, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(80, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(80, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (3): Conv(
      (conv): Conv2d(160, 320, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (4): C2f(
      (cv1): Conv(
        (conv): Conv2d(320, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (cv2): Conv(
        (conv): Conv2d(1280, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): ModuleList(
        (0): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (1): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (2): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (3): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (4): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (5): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (5): Conv(
      (conv): Conv2d(320, 640, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (6): C2f(
      (cv1): Conv(
        (conv): Conv2d(640, 640, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (cv2): Conv(
        (conv): Conv2d(2560, 640, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): ModuleList(
        (0): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (1): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (2): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (3): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (4): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (5): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (7): Conv(
      (conv): Conv2d(640, 640, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (8): C2f(
      (cv1): Conv(
        (conv): Conv2d(640, 640, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (cv2): Conv(
        (conv): Conv2d(1600, 640, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): ModuleList(
        (0): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (1): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (2): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (9): SPPF(
      (cv1): Conv(
        (conv): Conv2d(640, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (cv2): Conv(
        (conv): Conv2d(1280, 640, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): MaxPool2d(kernel_size=5, stride=1, padding=2, dilation=1, ceil_mode=False)
    )
    (10): Upsample(scale_factor=2.0, mode=nearest)
    (11): Concat()
    (12): C2f(
      (cv1): Conv(
        (conv): Conv2d(1280, 640, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (cv2): Conv(
        (conv): Conv2d(1600, 640, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): ModuleList(
        (0): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (1): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (2): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (13): Upsample(scale_factor=2.0, mode=nearest)
    (14): Concat()
    (15): C2f(
      (cv1): Conv(
        (conv): Conv2d(960, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (cv2): Conv(
        (conv): Conv2d(800, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): ModuleList(
        (0): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (1): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (2): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (16): Conv(
      (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (17): Concat()
    (18): C2f(
      (cv1): Conv(
        (conv): Conv2d(960, 640, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (cv2): Conv(
        (conv): Conv2d(1600, 640, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): ModuleList(
        (0): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (1): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (2): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (19): Conv(
      (conv): Conv2d(640, 640, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (20): Concat()
    (21): C2f(
      (cv1): Conv(
        (conv): Conv2d(1280, 640, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (cv2): Conv(
        (conv): Conv2d(1600, 640, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(640, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): ModuleList(
        (0): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (1): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (2): Bottleneck(
          (cv1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (cv2): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (22): Detect(
      (cv2): ModuleList(
        (0): Sequential(
          (0): Conv(
            (conv): Conv2d(320, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(80, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (1): Conv(
            (conv): Conv2d(80, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(80, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (2): Conv2d(80, 64, kernel_size=(1, 1), stride=(1, 1))
        )
        (1): Sequential(
          (0): Conv(
            (conv): Conv2d(640, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(80, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (1): Conv(
            (conv): Conv2d(80, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(80, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (2): Conv2d(80, 64, kernel_size=(1, 1), stride=(1, 1))
        )
        (2): Sequential(
          (0): Conv(
            (conv): Conv2d(640, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(80, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (1): Conv(
            (conv): Conv2d(80, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(80, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (2): Conv2d(80, 64, kernel_size=(1, 1), stride=(1, 1))
        )
      )
      (cv3): ModuleList(
        (0): Sequential(
          (0): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (2): Conv2d(320, 39, kernel_size=(1, 1), stride=(1, 1))
        )
        (1): Sequential(
          (0): Conv(
            (conv): Conv2d(640, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (2): Conv2d(320, 39, kernel_size=(1, 1), stride=(1, 1))
        )
        (2): Sequential(
          (0): Conv(
            (conv): Conv2d(640, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (1): Conv(
            (conv): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (2): Conv2d(320, 39, kernel_size=(1, 1), stride=(1, 1))
        )
      )
      (dfl): DFL(
        (conv): Conv2d(16, 1, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
    )
  )
)