YOLOv5s + ShuffleNetV2 + VOC dataset + transfer learning

Preface: replace the YOLOv5 backbone with ShuffleNetV2 to make the network lightweight.

1. Test-run YOLOv5

Reference: the Bilibili (b站) tutorial by 推土机.
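
Independent of that tutorial, a quick way to confirm the environment works is to run the pretrained yolov5s once. A minimal sketch that pulls the model through Torch Hub (the sample image URL is the one used in the official README):

import torch

# downloads ultralytics/yolov5 and the pretrained yolov5s weights on first run
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
results = model('https://ultralytics.com/images/zidane.jpg')  # inference on a sample image
results.print()  # prints detected classes and inference speed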

2. VOC dataset preparation
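
The usual preparation is to convert the VOC XML annotations into YOLO's normalized txt labels, build train/val image lists, and point a 20-class data yaml at them. A minimal sketch of the XML-to-txt conversion for one annotation file (paths and directory layout are assumptions; adapt them to the local dataset):

import xml.etree.ElementTree as ET

# the standard 20 VOC classes; the order must match the names list in the data yaml
VOC_CLASSES = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat',
               'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person',
               'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']

def voc_xml_to_yolo_txt(xml_path, txt_path):
    root = ET.parse(xml_path).getroot()
    w = float(root.find('size/width').text)
    h = float(root.find('size/height').text)
    lines = []
    for obj in root.iter('object'):
        cls = obj.find('name').text
        if cls not in VOC_CLASSES:
            continue
        b = obj.find('bndbox')
        xmin, ymin = float(b.find('xmin').text), float(b.find('ymin').text)
        xmax, ymax = float(b.find('xmax').text), float(b.find('ymax').text)
        # YOLO label format: class x_center y_center width height, all normalized to [0, 1]
        xc, yc = (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h
        bw, bh = (xmax - xmin) / w, (ymax - ymin) / h
        lines.append('%d %.6f %.6f %.6f %.6f' % (VOC_CLASSES.index(cls), xc, yc, bw, bh))
    with open(txt_path, 'w') as f:
        f.write('\n'.join(lines))

# e.g. voc_xml_to_yolo_txt('VOCdevkit/VOC2007/Annotations/000001.xml', 'labels/000001.txt')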

3. Replace the backbone with the lightweight network

Reference: the 魔改yolov5 post.

3.1 Add the following code to the end of common.py

# Lightweight ShuffleNetV2 building blocks
# ---------------------------- ShuffleBlock start -------------------------------

# Channel shuffle: rearrange channels so information flows across groups
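# e.g. 4 channels [0, 1, 2, 3] with groups=2 come out in the order [0, 2, 1, 3]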
def channel_shuffle(x, groups):
    batchsize, num_channels, height, width = x.data.size()
    channels_per_group = num_channels // groups

    # reshape
    x = x.view(batchsize, groups,
               channels_per_group, height, width)

    x = torch.transpose(x, 1, 2).contiguous()

    # flatten
    x = x.view(batchsize, -1, height, width)

    return x


# Stem block used as layer 0 of the new backbone: a 3x3 stride-2 conv (BN + ReLU) followed
# by a 3x3 stride-2 max-pool, an overall downsample of 4 in place of the original Focus + Conv stem.
class conv_bn_relu_maxpool(nn.Module):
    def __init__(self, c1, c2):  # ch_in, ch_out
        super(conv_bn_relu_maxpool, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(c1, c2, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(c2),
            nn.ReLU(inplace=True),
        )
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)

    def forward(self, x):
        return self.maxpool(self.conv(x))


# ShuffleNetV2 basic unit. With stride 1 the channels are split in half, one half is transformed
# and re-concatenated with the other; with stride 2 both branches downsample and their
# concatenation yields oup channels. Both cases end with a channel shuffle.
class Shuffle_Block(nn.Module):
    def __init__(self, inp, oup, stride):
        super(Shuffle_Block, self).__init__()

        if not (1 <= stride <= 3):
            raise ValueError('illegal stride value')
        self.stride = stride

        branch_features = oup // 2
        assert (self.stride != 1) or (inp == branch_features << 1)

        if self.stride > 1:
            self.branch1 = nn.Sequential(
                self.depthwise_conv(inp, inp, kernel_size=3, stride=self.stride, padding=1),
                nn.BatchNorm2d(inp),
                nn.Conv2d(inp, branch_features, kernel_size=1, stride=1, padding=0, bias=False),
                nn.BatchNorm2d(branch_features),
                nn.ReLU(inplace=True),
            )

        self.branch2 = nn.Sequential(
            nn.Conv2d(inp if (self.stride > 1) else branch_features,
                      branch_features, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(branch_features),
            nn.ReLU(inplace=True),
            self.depthwise_conv(branch_features, branch_features, kernel_size=3, stride=self.stride, padding=1),
            nn.BatchNorm2d(branch_features),
            nn.Conv2d(branch_features, branch_features, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(branch_features),
            nn.ReLU(inplace=True),
        )

    @staticmethod
    def depthwise_conv(i, o, kernel_size, stride=1, padding=0, bias=False):
        return nn.Conv2d(i, o, kernel_size, stride, padding, bias=bias, groups=i)

    def forward(self, x):
        if self.stride == 1:
            x1, x2 = x.chunk(2, dim=1)  # split along dim=1 (the channel dimension)
            out = torch.cat((x1, self.branch2(x2)), dim=1)
        else:
            out = torch.cat((self.branch1(x), self.branch2(x)), dim=1)

        out = channel_shuffle(out, 2)

        return out


# ---------------------------- ShuffleBlock end --------------------------------
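
A quick shape check of the new block (not part of the code to paste into common.py; run from the repo root after step 3.1):

import torch
from models.common import Shuffle_Block  # the class added above

x = torch.randn(1, 64, 80, 80)
s1 = Shuffle_Block(64, 64, 1)    # stride 1: spatial size and channel count preserved
s2 = Shuffle_Block(64, 128, 2)   # stride 2: spatial size halved, channels set by oup
print(s1(x).shape)  # torch.Size([1, 64, 80, 80])
print(s2(x).shape)  # torch.Size([1, 128, 40, 40])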

3.2 Register the new modules in parse_model() in yolo.py

# newly added modules: conv_bn_relu_maxpool, Shuffle_Block

(The original post shows a screenshot of the modified parse_model here.)
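
Since the screenshot is not reproduced here, the essence of the change is to add the two new module names to the branch of parse_model() that computes input/output channels. The exact list of built-in modules differs between YOLOv5 versions, so treat the excerpt below as a sketch of where the names go rather than a literal diff:

# excerpt from parse_model() in models/yolo.py (list of existing modules shortened)
if m in [Conv, Bottleneck, SPP, DWConv, Focus, BottleneckCSP, C3,  # ... existing modules ...
         conv_bn_relu_maxpool, Shuffle_Block]:                     # <-- new modules registered here
    c1, c2 = ch[f], args[0]
    if c2 != no:  # not the detection output
        c2 = make_divisible(c2 * gw, 8)  # apply width_multiple
    args = [c1, c2, *args[1:]]           # e.g. [128, 2] -> [c1, 64, 2] for Shuffle_Block
    if m in [BottleneckCSP, C3]:
        args.insert(2, n)  # number of repeats
        n = 1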

3.3 Create yolov5.0s_shufflenetv2.yaml in the models directory

# parameters
nc: 20  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32
# YOLOv5  backbone
backbone:
  # [from, number, module, args]
  # Shuffle_Block: [out, stride]
  [[ -1, 1, conv_bn_relu_maxpool, [ 32 ] ], # 0-P2/4
   [ -1, 1, Shuffle_Block, [ 128, 2 ] ],  # 1-P3/8
   [ -1, 3, Shuffle_Block, [ 128, 1 ] ],  # 2
   [ -1, 1, Shuffle_Block, [ 256, 2 ] ],  # 3-P4/16
   [ -1, 7, Shuffle_Block, [ 256, 1 ] ],  # 4
   [ -1, 1, Shuffle_Block, [ 512, 2 ] ],  # 5-P5/32
   [ -1, 3, Shuffle_Block, [ 512, 1 ] ],  # 6
   [ -1, 1, Conv, [ 1024, 3, 2 ] ],  # 7-P5/32
   [ -1, 1, SPP, [ 1024, [ 5, 9, 13 ] ] ],# 8
   [ -1, 3, C3, [ 1024, False ] ],  # 9
  ]

# YOLOv5 v5.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]], # 10
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],# 11
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4 # 12
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]], # 14
   [-1, 1, nn.Upsample, [None, 2, 'nearest']], # 15
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3 # 16
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]], # 18
   [[-1, 14], 1, Concat, [1]],  # cat head P4 # 19
   [-1, 3, C3, [512, False]],  # 20(P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],# 21
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[13, 17, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
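
With steps 3.1-3.2 in place, the yaml can be sanity-checked by building the model from it directly. A sketch, run from the YOLOv5 v5.0 repo root; the cfg path assumes the file was saved under models/:

import torch
from models.yolo import Model  # YOLOv5 v5.0 repo

model = Model('models/yolov5.0s_shufflenetv2.yaml', ch=3, nc=20)  # parse the yaml and build the network
model.eval()
with torch.no_grad():
    pred = model(torch.zeros(1, 3, 640, 640))[0]  # inference output
print(pred.shape)  # last dimension is nc + 5 = 25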

3.4 Set the training parameters in train.py

(The original post shows a screenshot of the modified train.py arguments here.)
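
The screenshot is not reproduced here; the usual change is to the argparse defaults near the bottom of train.py. The values below are assumptions matching this run (official yolov5s.pt as the initial weights for transfer learning, the new cfg, a VOC data yaml, 100 epochs); batch size and paths depend on the local setup:

# excerpt from the argument parser in train.py (defaults shown are this run's assumed settings)
parser.add_argument('--weights', type=str, default='yolov5s.pt', help='initial weights path')
parser.add_argument('--cfg', type=str, default='models/yolov5.0s_shufflenetv2.yaml', help='model.yaml path')
parser.add_argument('--data', type=str, default='data/voc.yaml', help='data.yaml path')
parser.add_argument('--epochs', type=int, default=100)
parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')
parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')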

4. Train the modified network with transfer learning

The layer-freezing code of YOLOv5 lives in train.py:

freeze = []  # default in train.py: freeze nothing
freeze = ['model.%s.' % x for x in range(10, 24)]  # modified: parameter names of layers 10-23

for k, v in model.named_parameters():
    v.requires_grad = True  # train all layers
    if any(x in k for x in freeze):
        print('freezing %s' % k)
        v.requires_grad = False

Layer freezing in YOLOv5 is implemented by switching off gradient updates (requires_grad = False) for every parameter whose name matches an entry in the freeze list, so the optimizer never touches those weights.
This run freezes layers 10 through 23, i.e. the head up to (but not including) the final Detect layer, so only the backbone (layers 0-9) and the Detect layer are updated.
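
As a quick check that the intended layers really were frozen, the parameters can be counted right after that loop (a small sketch using train.py's model variable):

n_frozen = sum(1 for _, p in model.named_parameters() if not p.requires_grad)
n_total = sum(1 for _ in model.named_parameters())
print('frozen %d of %d parameter tensors' % (n_frozen, n_total))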

Validation results on the VOC test set (4,952 images) after 100 epochs of training:

           Class      Images      Labels           P           R      mAP@.5  mAP@.5:.95: 100%|██████████| 155/155 [00:56<00:00,  2.73it/s]
                 all        4952       12032       0.683       0.607       0.648       0.394
           aeroplane        4952         285       0.712       0.663       0.715       0.438
             bicycle        4952         337       0.853       0.706       0.779        0.49
                bird        4952         459       0.603       0.547       0.564       0.315
                boat        4952         263       0.599       0.556       0.582       0.275
              bottle        4952         469       0.524       0.301       0.322       0.156
                 bus        4952         213       0.783       0.657       0.736       0.578
                 car        4952        1201       0.777       0.749       0.805       0.549
                 cat        4952         358       0.734       0.703       0.754       0.513
               chair        4952         756         0.5       0.374       0.375       0.187
                 cow        4952         244       0.675        0.52       0.588       0.335
         diningtable        4952         206       0.703       0.544       0.624       0.404
                 dog        4952         489       0.672       0.578       0.646       0.398
               horse        4952         348       0.742       0.747       0.776        0.47
           motorbike        4952         325       0.821       0.734       0.804        0.49
              person        4952        4528       0.774       0.687       0.764       0.409
         pottedplant        4952         480       0.605       0.388       0.418       0.173
               sheep        4952         242       0.515       0.677       0.603        0.37
                sofa        4952         239       0.653       0.569       0.601       0.385
               train        4952         282       0.753       0.766       0.821       0.517
           tvmonitor        4952         308       0.672       0.672        0.69        0.42
100 epochs completed in 7.559 hours.
Optimizer stripped from runs\train\exp23\weights\last.pt, 12.3MB
Optimizer stripped from runs\train\exp23\weights\best.pt, 12.3MB

References
魔改yolov5
