Week J2: ResNet50V2 in Practice and Analysis

Environment:
Language: Python 3.8.0
Editor: Jupyter Notebook
Deep learning framework: PyTorch

I. Preliminaries

1. Comparison of the ResNetV2 and ResNet structures

[Figure: (a) original ResNet residual unit vs. (b) proposed ResNetV2 residual unit]
Improvement: (a) "original" is the residual unit of the original ResNet; (b) "proposed" is the new ResNetV2 residual unit. The main difference is the ordering of operations: in (a), the convolution comes first, followed by BN and the activation, with a final ReLU applied after the addition; in (b), BN and the activation are applied before the convolution (pre-activation), and the ReLU that used to follow the addition is moved inside the residual branch. A minimal sketch of the two orderings follows.
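To make the ordering concrete, here is a minimal PyTorch sketch of the two residual units (a single 3x3 convolution per branch for brevity; the class names PostActUnit and PreActUnit are illustrative, not from the paper):

import torch
import torch.nn as nn
import torch.nn.functional as F

class PostActUnit(nn.Module):
    """(a) original: conv -> BN -> ReLU, with another ReLU after the addition."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn   = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn(self.conv(x)))
        return F.relu(x + out)               # activation applied after the addition

class PreActUnit(nn.Module):
    """(b) proposed: BN -> ReLU -> conv, identity shortcut left untouched."""
    def __init__(self, channels):
        super().__init__()
        self.bn   = nn.BatchNorm2d(channels)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv(F.relu(self.bn(x)))  # pre-activation inside the branch
        return x + out                       # clean addition, no ReLU afterwards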

Result: the authors tested the two structures on CIFAR-10 using a 1001-layer ResNet. As the figure shows, the (b) proposed structure achieves a clearly lower test error of 4.92%, compared with 7.61% for the (a) original structure.

2. Different variants of the shortcut connection

[Figure: residual-unit variants (a)-(f) with different shortcut connections]
In (b)-(f) the shortcut connection is obstructed by different components. To simplify the illustration, BN layers are not drawn; here every unit places a BN layer after each weight layer. Panels (a)-(f) are the authors' different attempts at modifying the shortcut part of the residual unit, summarized in the table below:
[Table: CIFAR-10 error rates for the different shortcut variants]
The authors tested ResNet-110 with each shortcut variant on CIFAR-10 and found that the original (a) structure, i.e. the identity-mapping shortcut, works best.

3. Different placements of the activation

[Figure/Table: activation-placement variants (a)-(e) and their CIFAR-10 error rates]
The best result comes from (e) full pre-activation, followed by (a) original.

II. Model Reproduction

1. Set up the GPU

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device

2. Load the data

import matplotlib.pyplot as plt
import os, PIL, pathlib
import numpy as np

data_dir = './bird_photos'
data_dir = pathlib.Path(data_dir)
data_paths = list(data_dir.glob('*'))
classeNames = [str(path).split('/')[1] for path in data_paths]
classeNames

3. Data preprocessing

import torchvision
from torchvision import transforms, datasets

train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
        )
])

total_data = datasets.ImageFolder('./bird_photos', transform=train_transforms)

# Split the dataset into training and test sets
train_size = int(0.8 * len(total_data))
test_size = len(total_data) - train_size

train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])

batch_size = 8
train_dl = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_dl = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
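A quick sanity check on the pipeline (class_to_idx is provided by ImageFolder; the expected batch shape follows from the transforms and batch_size above):

print(total_data.class_to_idx)      # dict mapping folder name -> label index

imgs, labels = next(iter(train_dl))
print(imgs.shape, labels.shape)     # torch.Size([8, 3, 224, 224]) torch.Size([8])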

4. Build the model

Note: ResNet50V2, ResNet101V2 and ResNet152V2 are built in exactly the same way; the only difference is how many residual blocks are stacked in each stage, as the sketch below shows.
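For reference, the per-stage block counts (conv2_x through conv5_x) for the three variants; the dictionary name is just illustrative:

# Number of residual blocks per stage (conv2_x ... conv5_x)
RESNET_V2_CONFIGS = {
    'ResNet50V2':  [3, 4, 6, 3],
    'ResNet101V2': [3, 4, 23, 3],
    'ResNet152V2': [3, 8, 36, 3],
}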

1. Residual Block

import torch
import torch.nn as nn

class Block2(nn.Module):
    """Pre-activation bottleneck residual block (ResNetV2 style)."""
    def __init__(self, in_channels, filters, kernel_size=3, stride=1, conv_shortcut=False):
        super(Block2, self).__init__()
        self.conv_shortcut = conv_shortcut

        # Pre-activation: BN + ReLU are applied to the block input
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)

        if conv_shortcut:
            # Projection shortcut that matches the block's output channels (4 * filters)
            self.shortcut = nn.Conv2d(in_channels, 4 * filters, kernel_size=1, stride=stride, bias=False)
        else:
            if stride > 1:
                # Identity shortcut with spatial downsampling
                self.shortcut = nn.MaxPool2d(kernel_size=1, stride=stride)
            else:
                self.shortcut = nn.Identity()

        self.conv1 = nn.Conv2d(in_channels, filters, kernel_size=1, stride=1, bias=False)
        self.bn2 = nn.BatchNorm2d(filters)
        self.relu2 = nn.ReLU(inplace=True)

        self.padding = nn.ZeroPad2d(1)  # explicit padding for the 3x3 convolution below

        self.conv2 = nn.Conv2d(filters, filters, kernel_size=kernel_size, stride=stride, padding=0, bias=False)
        self.bn3 = nn.BatchNorm2d(filters)
        self.relu3 = nn.ReLU(inplace=True)

        self.conv3 = nn.Conv2d(filters, 4 * filters, kernel_size=1, bias=False)

    def forward(self, x):
        preact = self.bn1(x)
        preact = self.relu(preact)

        if self.conv_shortcut:
            shortcut = self.shortcut(preact)
        else:
            shortcut = self.shortcut(x)

        out = self.conv1(preact)
        out = self.bn2(out)
        out = self.relu2(out)
        out = self.padding(out)
        out = self.conv2(out)
        out = self.bn3(out)
        out = self.relu3(out)

        out = self.conv3(out)

        out += shortcut

        return out
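A quick shape check for the block as revised above (using the added in_channels argument): the output should have 4 * filters channels, with the spatial size halved when stride=2.

block = Block2(in_channels=64, filters=64, stride=2, conv_shortcut=True)
x = torch.randn(1, 64, 56, 56)
print(block(x).shape)   # expected: torch.Size([1, 256, 28, 28])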

2. Stacking Residual Blocks

import torch.nn as nn

class Stack2(nn.Module):
    """A stack of Block2 units: projection shortcut on the first block, downsampling on the last."""
    def __init__(self, block, in_channels, filters, blocks, stride1=2):
        super(Stack2, self).__init__()
        self.layers = nn.ModuleList()
        # First block projects from in_channels to 4 * filters channels
        self.layers.append(block(in_channels, filters, conv_shortcut=True))

        # Middle blocks keep the channel count at 4 * filters
        for i in range(2, blocks):
            self.layers.append(block(4 * filters, filters))

        # Last block downsamples with stride1
        self.layers.append(block(4 * filters, filters, stride=stride1))

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x
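A matching sanity check for Stack2, mirroring the conv2_x stage of ResNet50V2 (3 blocks, 64 filters), under the revised signatures above:

stage = Stack2(Block2, in_channels=64, filters=64, blocks=3, stride1=2)
x = torch.randn(1, 64, 56, 56)
print(stage(x).shape)   # expected: torch.Size([1, 256, 28, 28])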

3. Reproducing the ResNet50V2 architecture

[Figure: ResNet50V2 overall architecture]

The code is as follows (note that this version stacks the post-activation Bottleneck block defined at the end of the listing, rather than the pre-activation Block2 above):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResNet50V2(nn.Module):
    def __init__(self, num_classes=1000, include_top=True, preact=False, pooling='avg'):
        super(ResNet50V2, self).__init__()
        self.include_top = include_top
        self.preact = preact

        # conv1: explicit zero padding followed by a 'valid' 7x7 convolution (Keras style)
        self.conv1_pad = nn.ZeroPad2d((3, 3, 3, 3))
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=0, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)

        # conv2_x: explicit zero padding followed by a 'valid' 3x3 max pooling
        self.pool1_pad = nn.ZeroPad2d((1, 1, 1, 1))
        self.pool1 = nn.MaxPool2d(kernel_size=3, stride=2, padding=0)

        # Residual Blocks (stack layers)
        self.layer1 = self._make_stack_layer(64, 64, 3, stride=1, name='conv2')
        self.layer2 = self._make_stack_layer(64*4, 128, 4, stride=2, name='conv3')
        self.layer3 = self._make_stack_layer(128*4, 256, 6, stride=2, name='conv4')
        self.layer4 = self._make_stack_layer(256*4, 512, 3, stride=2, name='conv5')

        # BatchNorm and relu for post-processing
        self.post_bn = nn.BatchNorm2d(512 * 4)
        self.post_relu = nn.ReLU(inplace=True)

        # Pooling and Fully Connected Layer
        if include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
            self.fc = nn.Linear(512 * 4, num_classes)
        else:
            if pooling == 'avg':
                self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
            elif pooling == 'max':
                self.avgpool = nn.AdaptiveMaxPool2d((1, 1))

    def _make_stack_layer(self, in_planes, planes, blocks, stride=1, name=None):
        layers = []
        # First block with shortcut
        layers.append(Bottleneck(in_planes, planes, stride, conv_shortcut=True))

        # Remaining blocks
        for _ in range(1, blocks):
            layers.append(Bottleneck(planes * 4, planes))
        return nn.Sequential(*layers)

    def forward(self, x):
        # Initial layers (conv1)
        x = self.conv1_pad(x)
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)

        # MaxPool layer (conv2_x)
        x = self.pool1_pad(x)
        x = self.pool1(x)

        # Residual blocks (stack layers)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        # Optional post-bn and relu if preact is True
        if self.preact:
            x = self.post_bn(x)
            x = self.post_relu(x)

        # Pooling layer
        x = self.avgpool(x)
        x = torch.flatten(x, 1)

        if self.include_top:
            x = self.fc(x)

        return x

# Bottleneck Block used in ResNet
class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, in_planes, planes, stride=1, conv_shortcut=False):
        super(Bottleneck, self).__init__()
        self.conv_shortcut = conv_shortcut

        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)

        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)

        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)

        if self.conv_shortcut:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, planes * 4, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(planes * 4)
            )
        else:
            self.shortcut = nn.Identity()

    def forward(self, x):
        shortcut = self.shortcut(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = F.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = F.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        out += shortcut
        out = F.relu(out)

        return out

# Instantiate the model
def ResNet50V2_instance(include_top=True, num_classes=1000, preact=False, pooling='avg'):
    return ResNet50V2(num_classes=num_classes, include_top=include_top, preact=preact, pooling=pooling)

# Match the classifier head to the dataset and move the model to the selected device
model = ResNet50V2_instance(num_classes=len(classeNames)).to(device)
print(model)
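A minimal forward-pass check on a dummy batch to confirm the output shape:

x = torch.randn(2, 3, 224, 224).to(device)
print(model(x).shape)   # expected: torch.Size([2, len(classeNames)])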

4. Training and test functions

# Training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)  # size of the training set
    num_batches = len(dataloader)   # number of batches (ceil(size / batch_size))

    train_loss, train_acc = 0, 0  # initialize running loss and accuracy

    for X, y in dataloader:  # fetch images and their labels
        X, y = X.to(device), y.to(device)

        # Compute the prediction error
        pred = model(X)          # network output
        loss = loss_fn(pred, y)  # loss between the network output and the ground-truth labels

        # Backpropagation
        optimizer.zero_grad()  # reset the gradients
        loss.backward()        # backpropagate
        optimizer.step()       # update the parameters

        # Accumulate accuracy and loss
        train_acc  += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()

    train_acc  /= size
    train_loss /= num_batches

    return train_acc, train_loss


def test(dataloader, model, loss_fn):
    size        = len(dataloader.dataset)  # size of the test set
    num_batches = len(dataloader)          # number of batches (ceil(size / batch_size))
    test_loss, test_acc = 0, 0

    # No gradients are needed during evaluation, which saves memory and compute
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)

            # Compute the loss
            target_pred = model(imgs)
            loss        = loss_fn(target_pred, target)

            test_loss += loss.item()
            test_acc  += (target_pred.argmax(1) == target).type(torch.float).sum().item()

    test_acc  /= size
    test_loss /= num_batches

    return test_acc, test_loss

5. Model training

import copy

optimizer  = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn    = nn.CrossEntropyLoss()  # define the loss function

epochs     = 10

train_loss = []
train_acc  = []
test_loss  = []
test_acc   = []

for epoch in range(epochs):

    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    # Get the current learning rate
    lr = optimizer.state_dict()['param_groups'][0]['lr']

    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}')
    print(template.format(epoch+1, epoch_train_acc*100, epoch_train_loss,
                          epoch_test_acc*100, epoch_test_loss, lr))

Epoch: 1, Train_acc:51.1%, Train_loss:1.920, Test_acc:61.9%, Test_loss:0.954, Lr:1.00E-04
Epoch: 2, Train_acc:69.5%, Train_loss:0.829, Test_acc:73.5%, Test_loss:1.099, Lr:1.00E-04
Epoch: 3, Train_acc:75.9%, Train_loss:0.638, Test_acc:62.8%, Test_loss:1.229, Lr:1.00E-04
Epoch: 4, Train_acc:81.2%, Train_loss:0.476, Test_acc:77.9%, Test_loss:0.494, Lr:1.00E-04
Epoch: 5, Train_acc:89.2%, Train_loss:0.363, Test_acc:78.8%, Test_loss:0.605, Lr:1.00E-04
Epoch: 6, Train_acc:87.4%, Train_loss:0.373, Test_acc:84.1%, Test_loss:0.495, Lr:1.00E-04
Epoch: 7, Train_acc:90.0%, Train_loss:0.318, Test_acc:78.8%, Test_loss:0.885, Lr:1.00E-04
Epoch: 8, Train_acc:92.7%, Train_loss:0.215, Test_acc:84.1%, Test_loss:0.475, Lr:1.00E-04
Epoch: 9, Train_acc:91.4%, Train_loss:0.248, Test_acc:87.6%, Test_loss:0.643, Lr:1.00E-04
Epoch:10, Train_acc:89.6%, Train_loss:0.282, Test_acc:78.8%, Test_loss:0.553, Lr:1.00E-04
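The copy module imported above is typically used to keep a snapshot of the best-performing weights; here is a minimal sketch of that pattern (not part of the run logged above; best_acc, best_model and the file name are illustrative):

best_acc   = 0.0
best_model = None

for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    # Keep a deep copy of the model whenever test accuracy improves
    if epoch_test_acc > best_acc:
        best_acc   = epoch_test_acc
        best_model = copy.deepcopy(model)

# Save the best weights to disk
torch.save(best_model.state_dict(), 'best_resnet50v2.pth')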

6. Visualizing the results

# coding=utf-8
import matplotlib.pyplot as plt
# Suppress warnings
import warnings
warnings.filterwarnings("ignore")               # ignore warning messages
plt.rcParams['figure.dpi']         = 100        # figure resolution (dpi)

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

[Figure: training and validation accuracy/loss curves]

III. Summary

This week I studied the structural difference between ResNetV2 and ResNet, namely the move from post-activation to pre-activation residual units, and reproduced ResNet50V2 in PyTorch.
