365-Day Deep Learning Training Camp - Week P8: Implementing the YOLOv5 C3 Module

🍨 This post is a learning-record blog from the 🔗365-Day Deep Learning Training Camp
🍖 Original author: K同学啊 | tutoring and custom projects available


Tasks:

  • Read the YOLO series of articles
  • Think about:
    • What does the C3 module do?
    • Why is the C3 module designed this way?

I. Module Details

(Figure: the YOLOv5 C3 module structure)

1. The ConvBNSiLU Module

(Figure: the ConvBNSiLU block: Conv2d → BatchNorm2d → SiLU)

  • SiLU activation function: Sigmoid Linear Unit, defined as SiLU(x) = x · sigmoid(x)
    • Characteristics: compared with ReLU, SiLU has a smoother curve near zero; the sigmoid gate it multiplies the input by is bounded between 0 and 1.
    • Caveat: when inputs are very large or very small, the sigmoid gate saturates, which can contribute to vanishing or exploding gradients, so some adjustment such as normalizing the input data may be needed. ReLU is more robust in this respect and needs less special handling (a short numeric comparison follows this list).
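
A minimal numeric comparison of the two activations (a quick sketch; only standard PyTorch is assumed):

import torch
import torch.nn as nn

x = torch.tensor([-3.0, -1.0, 0.0, 1.0, 3.0])
print(nn.ReLU()(x))   # tensor([0., 0., 0., 1., 3.])
print(nn.SiLU()(x))   # x * sigmoid(x): smooth near zero, slightly negative for x < 0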

Code walkthrough:

class Conv(nn.Module):
    # Standard convolution: Conv2d -> BatchNorm2d -> activation (SiLU by default)
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):
        # ch_in, ch_out, kernel, stride, padding, groups
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

1) Automatic padding (autopad)

(Based on a Zhihu blogger's derivation; see the references at the end.)

def autopad(k, p=None): # kernel, padding
    # Pad to 'same'
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
    return p

Why auto-pad, and why divide by 2?

  • Answer:
    • Partly convenience and modularity, and partly so that the feature map keeps the same spatial size after passing through one or more standard convolutions.
    • Following the Zhihu blogger's derivation: the output size of a convolution is o = floor((i + 2p - k) / s) + 1, where the padding p appears doubled because zeros are added on both sides (top/bottom and left/right). With stride s = 1, requiring o = i gives p = (k - 1) / 2, which is k // 2 for the odd kernel sizes used here (a small check follows this list).
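
A minimal check of the padding rule (a sketch; assumes the autopad function above is in scope):

import torch
import torch.nn as nn

# With p = autopad(k) and stride 1, the spatial size is preserved for any odd kernel size.
x = torch.randn(1, 3, 224, 224)
for k in (1, 3, 5, 7):
    conv = nn.Conv2d(3, 8, k, stride=1, padding=autopad(k))
    print(k, autopad(k), conv(x).shape)  # spatial size stays 224x224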

2) The groups parameter of Conv2d


  • groups splits the channels into groups; the input channel count and the output channel count must both be divisible by groups.
  • For example, with 6 input channels and 6 output channels, groups=1 means 6 kernels each of size 6×K×K, i.e. every kernel spans all input channels. The multi-channel convolution process is illustrated below; see the referenced post 卷积过程详细讲解 for the full walkthrough (a short weight-shape comparison follows the figure).

(Figure: the multi-channel convolution process, from 卷积过程详细讲解)
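
A small illustration of the divisibility requirement and of how groups shrinks each kernel's depth (a sketch using plain nn.Conv2d):

import torch.nn as nn

# Conv2d weights have shape [out_channels, in_channels // groups, kH, kW]
full    = nn.Conv2d(6, 6, kernel_size=3, groups=1, bias=False)
grouped = nn.Conv2d(6, 6, kernel_size=3, groups=3, bias=False)
print(full.weight.shape)     # torch.Size([6, 6, 3, 3]) -> 6 kernels of 6x3x3
print(grouped.weight.shape)  # torch.Size([6, 2, 3, 3]) -> 3 groups, kernels of depth 2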

3) PyTorch nn.Identity()

In the PyTorch source, nn.Identity is a no-op module whose forward simply returns its input.

It is essentially equivalent to:

def f(x):
	return x
  • A placeholder operator that ignores how it is parameterized: the layer is designed purely as a placeholder, i.e. it does no work and simply passes its input through. In the Conv class above it serves as the activation when act is neither True nor an nn.Module, and in residual networks it is often placed on the skip connection so that branch is not left empty (see the short snippet below).
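
For example, a short check (assuming the Conv class above is defined) showing that passing act=False falls back to nn.Identity, i.e. no activation:

conv_no_act = Conv(3, 16, k=3, act=False)
print(conv_no_act.act)  # Identity()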

2. The Bottleneck Module

(Figure: Bottleneck module structure)

The code is as follows:

class Bottleneck(nn.Module):
    # Standard bottleneck
    def __init__(self, c1, c2, shortcut=True, g=1, e=0.5):
        # ch_in, ch_out, shortcut, groups, expansion
        super().__init__()
        c_ = int(c2 * e) # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_, c2, 3, 1, g=g)
        self.add = shortcut and c1 == c2
        
    def forward(self, x):
        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
  • Composed of two standard convolutions plus an optional residual connection.
  • The residual is added only when shortcut is True and the input and output channel counts are equal (c1 == c2); see the small check below.
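
A quick check of the residual condition (assuming the Bottleneck class above is defined):

import torch

x = torch.randn(1, 64, 56, 56)
print(Bottleneck(64, 64).add)       # True  -> output is x + cv2(cv1(x))
print(Bottleneck(64, 128).add)      # False -> output is just cv2(cv1(x))
print(Bottleneck(64, 64)(x).shape)  # torch.Size([1, 64, 56, 56])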

II. Code

1. Dataset

  • Data source: a weather-recognition image dataset
import pathlib

data_dir = pathlib.Path("./data/")
data_paths = list(data_dir.glob('*'))

# class names come from the sub-directory names ('./data/<class>/...');
# on Windows, split on os.sep instead of '/'
classNames = [str(path).split('/')[1] for path in data_paths]

from PIL import Image
cloudy = list(data_dir.glob('cloudy/*.jpg'))
Image.open(cloudy[100])


Split the dataset

from torchvision import transforms, datasets
train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
    )
])


total_data = datasets.ImageFolder("./data/", transform=train_transforms)
total_data


import torch

train_size = int(0.8 * len(total_data))
test_size = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])
train_dataset, test_dataset
batch_size = 32

train_dl = torch.utils.data.DataLoader(train_dataset, 
                                       batch_size=batch_size, 
                                       shuffle=True, 
                                       num_workers=1)

test_dl = torch.utils.data.DataLoader(test_dataset, 
                                      batch_size=batch_size, 
                                      shuffle=False,
                                      num_workers=1)
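
A quick peek at one batch to confirm the shapes fed to the model (an optional sanity check):

imgs, labels = next(iter(train_dl))
print(imgs.shape, labels.shape)  # expected: torch.Size([32, 3, 224, 224]) torch.Size([32])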

2. Building the Model

The YOLOv5 C3 module (see the structure figure in Section I).

import torch
import torch.nn as nn

def autopad(k, p=None): # kernel, padding
    # Pad to 'same'
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
    return p

class Conv(nn.Module):
    # Standard convolution
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):
        # ch_in, ch_out, kernel, stride, padding, groups
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())
        
    def forward(self, x):
        return self.act(self.bn(self.conv(x)))
    
class Bottleneck(nn.Module):
    # Standard bottleneck
    def __init__(self, c1, c2, shortcut=True, g=1, e=0.5):
        # ch_in, ch_out, shortcut, groups, expansion
        super().__init__()
        c_ = int(c2 * e) # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_, c2, 3, 1, g=g)
        self.add = shortcut and c1 == c2
        
    def forward(self, x):
        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
    
class C3(nn.Module):
    # CSP Bottleneck with 3 convolutions
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
        # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        c_ = int(c2 * e) # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1, g=g)
        self.cv3 = Conv(2 * c_, c2, 1) # act = FReLU(c2)
        self.main = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
        
    def forward(self, x):
        return self.cv3(torch.cat((self.main(self.cv1(x)), self.cv2(x)), dim=1))
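
A quick shape check of the C3 block (assuming the classes above are defined): with all stride-1 convolutions, C3 keeps the spatial size and only changes the channel count.

x = torch.randn(1, 32, 112, 112)
c3 = C3(32, 64, n=3)
print(c3(x).shape)  # torch.Size([1, 64, 112, 112])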

Network structure

  • C3 block + fully connected classifier
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        
        # Conv: stride-2 standard convolution, 224x224 -> 112x112, 3 -> 32 channels
        self.Conv = Conv(3, 32, 3, 2)
        
        # C3: three stacked Bottlenecks (n=3); spatial size unchanged, 32 -> 64 channels
        self.c3_1 = C3(32, 64, 3, 2)
        
        # fully connected classifier
        self.classifier = nn.Sequential(
            nn.Linear(in_features=802816, out_features=100),  # 64 * 112 * 112 = 802816
            nn.ReLU(),
            nn.Linear(in_features=100, out_features=len(classNames))
        )
        
    def forward(self, x):
        x = self.Conv(x)
        x = self.c3_1(x)
        x = torch.flatten(x, start_dim=1)
        x = self.classifier(x)
        
        return x
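
A brief sanity check (assuming the classes above are in scope) that the flattened feature size matches in_features=802816: the stride-2 Conv maps 224×224 to 112×112, and the C3 block keeps that spatial size while producing 64 channels, so 64 × 112 × 112 = 802816.

dummy = torch.randn(1, 3, 224, 224)
feat = C3(32, 64, 3)(Conv(3, 32, 3, 2)(dummy))
print(feat.shape, feat.flatten(1).shape)  # torch.Size([1, 64, 112, 112]) torch.Size([1, 802816])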

3. Training the Model

Set the training device

device = 'cuda' if torch.cuda.is_available() else 'cpu'
print("Using {} device".format(device))

Initialize the model

model = MyModel().to(device)
model


Model parameter count

import torchsummary as summary
summary.summary(model, (3, 224, 224))
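
Alternatively, the parameter count can be computed directly without torchsummary (a quick sketch):

total_params = sum(p.numel() for p in model.parameters())
print(f'Total parameters: {total_params:,}')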


Training and test functions

Training function

def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    
    train_loss, train_acc = 0, 0
    
    for X, y in dataloader:
        X, y = X.to(device), y.to(device)
        
        # loss
        pred = model(X)
        loss = loss_fn(pred, y)
        
        # back
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        
        # log
        train_loss += loss.item()
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        
    train_acc /= size
    train_loss /= num_batches
    
    return train_acc, train_loss

Test function

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    
    test_loss, test_acc = 0, 0
    
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)
            
            # loss
            target_pred = model(imgs)
            loss = loss_fn(target_pred, target)
            
            test_loss += loss.item()
            test_acc += (target_pred.argmax(1) == target).type(torch.float).sum().item()
            
    test_acc /= size
    test_loss /= num_batches
    
    return test_acc, test_loss

Main training loop

import copy

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
epochs = 20

train_loss, train_acc = [], []
test_loss, test_acc = [], []

best_acc = 0

for epoch in range(epochs):
    
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)
    # scheduler.step()
    
    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)
    
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        best_model = copy.deepcopy(model)
        
    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)
    
    lr = optimizer.state_dict()['param_groups'][0]['lr']
    
    template = "Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}"
    print(template.format(epoch+1, 
                          epoch_train_acc*100, epoch_train_loss, 
                          epoch_test_acc*100, epoch_test_loss, 
                          lr))
    
PATH = './best_model.pth'
torch.save(best_model.state_dict(), PATH)  # save the best checkpoint tracked during training

print('Done!!!')


  • I also tried increasing the number of C3 blocks, and accuracy dropped; the model may simply be too complex for this dataset.

4. Visualizing the Results

import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False
plt.rcParams['figure.dpi'] = 100

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Train Acc.')
plt.plot(epochs_range, test_acc, label='Test Acc.')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Train Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='lower right')
plt.title('Training and Validation Loss')
plt.show()

(Figure: training/validation accuracy and loss curves)


5. Prediction

from PIL import Image

classes = list(total_data.class_to_idx)

def predict_one_image(image_path, model, transform, classes):
    
    test_img = Image.open(image_path).convert('RGB')
    plt.imshow(test_img)
    
    test_img = transform(test_img)
    img = test_img.to(device).unsqueeze(0)
    
    model.eval()
    with torch.no_grad():  # inference only, no gradients needed
        output = model(img)
    
    _, pred = torch.max(output, 1)
    pred_class = classes[pred]
    print(f'Predict is: {pred_class}')

predict_one_image(image_path='./data/sunrise/sunrise13.jpg', 
                  model=model,
                  transform=train_transforms,
                  classes=classes
                 )



Summary:

  • The SiLU activation function
  • The standard convolution (ConvBNSiLU) module
  • Open questions:
    • What does the C3 module actually contribute?
    • Why is the C3 module designed this way?

References:
yolov5组件笔记
激活函数ReLU和SiLU的区别
图解卷积层stride,padding,kernel_size 和卷积前后特征图尺寸之间的关系
pytorch的函数中的group参数的作用
卷积过程详细讲解
