ResNet: Reading Notes and Implementation

1. What is ResNet

        ResNet was proposed in 2015 by Kaiming He, Jian Sun, and others, primarily to address the difficulty of training very deep networks. Vanishing and exploding gradients (largely handled by normalized initialization and intermediate normalization layers, which allow networks with tens of layers to converge under backpropagation-based gradient descent) are not the whole story: when networks become much deeper, a degradation problem is exposed. As depth increases, accuracy saturates and then degrades rapidly. This degradation is not caused by overfitting, since it shows up as higher training error.

        The drop in training accuracy indicates that not all systems are equally easy to optimize. Consider a shallower architecture and its deeper counterpart obtained by adding more layers on top of it. A solution to the deeper model exists by construction: the added layers are identity mappings, and the remaining layers are copied from the learned shallower model. The existence of this constructed solution implies that the deeper model should produce no higher training error than the shallower one. In practice, however, current solvers are unable to find solutions that are comparably good or better.

        The authors address the degradation problem with a deep residual learning framework: instead of hoping that each stack of layers directly fits the desired underlying mapping, they explicitly let those layers fit a residual mapping. Denoting the desired underlying mapping as H(x), the stacked nonlinear layers fit another mapping F(x) := H(x) - x, so the original mapping becomes F(x) + x. This can be realized by feedforward networks with "shortcut connections", as illustrated by the building-block figure in the original paper (Fig. 2).

         As discussed above, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that solvers may have difficulty approximating identity mappings with multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solver can simply drive the weights of the multiple nonlinear layers toward zero to approach an identity mapping. And if the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find perturbations with reference to the identity mapping than to learn the function as a new one.

        Residual learning is applied to every few stacked layers. Formally, a building block (Fig. 2 in the paper) is defined as:

y = F(x, {Wi}) + x

or

y = F(x, {Wi}) + Ws·x

where the linear projection Ws is used by the shortcut to match dimensions. The full network architectures are shown in the original paper.
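
These two formulas map directly onto code: the residual branch F(x, {Wi}) is a small stack of layers, and the shortcut is either the identity or a 1×1 projection Ws when the dimensions differ. The following minimal sketch (the layer sizes are illustrative, not the paper's exact configuration) prefigures the full implementation in the next section:

import torch
import torch.nn as nn

class ResidualBlockSketch(nn.Module):
    # y = F(x, {Wi}) + x, or y = F(x, {Wi}) + Ws*x when shapes differ
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # F(x, {Wi}): two 3x3 convolutions with batch normalization and a ReLU in between
        self.F = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
        )
        # Ws: identity when shapes already match, otherwise a 1x1 projection
        if stride == 1 and in_channels == out_channels:
            self.Ws = nn.Identity()
        else:
            self.Ws = nn.Conv2d(in_channels, out_channels, 1, stride=stride, bias=False)

    def forward(self, x):
        return torch.relu(self.F(x) + self.Ws(x))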


        For more details, please refer to the original paper.

Implementation in PyTorch

1. PyTorch implementation of the ResNet network

import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    expansion = 1
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        #residual function
        self.residual_function = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=(3,3), stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels*BasicBlock.expansion, kernel_size=(3,3), padding=1, bias=False),
            nn.BatchNorm2d(out_channels*BasicBlock.expansion)
        )
        #shortcut
        self.shortcut = nn.Sequential()
        # the shortcut output dimension is not the same with residual function
        # use 1*1 convolution to match the dimension
        if stride != 1 or in_channels != BasicBlock.expansion * out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels*BasicBlock.expansion, kernel_size=(1,1), stride=stride, bias=False),
                nn.BatchNorm2d(out_channels*BasicBlock.expansion)
            )

    def forward(self,x):
        return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))  # inplace=True modifies the tensor in place instead of allocating a new one

class BottleNeck(nn.Module):
    expansion=4
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.residual_function = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=(1,1), bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, stride=stride, kernel_size=(3,3), padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels*BottleNeck.expansion, kernel_size=(1,1), bias=False),
            nn.BatchNorm2d(out_channels*BottleNeck.expansion)
        )

        self.shortcut = nn.Sequential()  # identity shortcut: output equals input

        if stride != 1 or in_channels != BottleNeck.expansion * out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels * BottleNeck.expansion, kernel_size=(1,1), stride=stride, bias=False),
                nn.BatchNorm2d(out_channels*BottleNeck.expansion)
            )
        #print("shortcut", BottleNeck.expansion * out_channels)

    def forward(self,x):
        #print(self.residual_function(x).size())
        #print(self.shortcut(x).size())
        return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))

class ResNet(nn.Module):
    def __init__(self, block, num_block, num_classes=10):
        super().__init__()
        self.in_channels = 16
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=(3,3), padding=1, bias=False),
            nn.BatchNorm2d(16),
            nn.ReLU(inplace=True)
        )
        self.conv2_x = self._make_layer(block, 16, num_block[0], 1)
        self.conv3_x = self._make_layer(block, 32, num_block[1], 2)
        self.conv4_x = self._make_layer(block, 64, num_block[2], 2)
        #self.conv5_x = self._make_layer(block, 512, num_block[3], 2)
        self.avg_pool = nn.AdaptiveAvgPool2d((1,1))
        self.fc = nn.Linear(64*block.expansion, num_classes)

    def _make_layer(self, block, out_channels, num_blocks, stride):
        strides = [stride] + [1]*(num_blocks-1) #[stride, 1, 1, 1, ......]
        layers = []
        for stride in strides:
            layers.append(block(self.in_channels, out_channels, stride))
            self.in_channels = out_channels*block.expansion

        return nn.Sequential(*layers)  # unpack the list of blocks into nn.Sequential as positional arguments

    def forward(self, x):
        output = self.conv1(x)
        output = self.conv2_x(output)
        output = self.conv3_x(output)
        output = self.conv4_x(output)
        #output = self.conv5_x(output)
        output = self.avg_pool(output)
        output = output.view(output.size(0),-1)
        output = self.fc(output)

        return output

def ResNet_cifar10():
    return ResNet(BottleNeck, [2, 2, 2])  # 9*2 + 2 = 20 weighted layers; the paper's CIFAR networks use BasicBlock, giving 6n+2 layers
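
As a quick sanity check (purely illustrative, not part of the original script), the model can be instantiated and fed a random CIFAR-sized batch to confirm the output shape and parameter count:

if __name__ == '__main__':
    net = ResNet_cifar10()
    x = torch.rand(4, 3, 32, 32)   # a fake batch of four 32x32 RGB images
    print(net(x).shape)            # expected: torch.Size([4, 10])
    print(sum(p.numel() for p in net.parameters()), "parameters")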

2. Training function

def train(model, criterion,  optimizer, epochs):

    start = time.time()
    best_acc = 0.0
    for epoch in range(epochs):
        start_epoch = time.time()
        print("Epoch {}/{}".format(epoch, epochs-1))
        print("-" * 16)
        model.train()
        running_loss = 0.0
        running_correct = 0.0
        for index, data in enumerate(trainloader):
            inputs, labels = data
            inputs = inputs.to(device)
            labels = labels.to(device)
            optimizer.zero_grad()
            outputs = model(inputs)
            _, preds = torch.max(outputs, 1)  # torch.max returns the largest value in each row and its index; preds holds the indices
            loss = criterion(outputs, labels)  # mean loss over the batch
            loss.backward()
            optimizer.step()
            running_loss += loss.item()*inputs.size(0)
            running_correct += torch.sum(preds == labels).item()
        epoch_loss = running_loss / len(trainset)
        epoch_acc = running_correct / len(trainset)

        writer.add_scalar('loss', epoch_loss, epoch)
        writer.add_scalar('acc', epoch_acc, epoch)

        if epoch_acc > best_acc:
            best_acc = epoch_acc

        print('train Loss: {:.4f} Acc: {:.4f}'.format(epoch_loss, epoch_acc))
        time_epoch = time.time() - start_epoch
        print('This epoch took {:.0f}m {:.0f}s'.format(
            time_epoch // 60, time_epoch % 60))

    print('##' * 12)

    time_elapsed = time.time() - start
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best train Acc: {:.4f}'.format(best_acc))

    return model
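
The main script below also prepares a test loader that is never used. For completeness, a minimal evaluation sketch in the same style as train() (assuming the same module-level globals testloader, testset, and device; not part of the original code) could look like this:

def evaluate(model, criterion):
    model.eval()                                  # switch BatchNorm to evaluation mode
    test_loss = 0.0
    test_correct = 0
    with torch.no_grad():                         # no gradients needed during evaluation
        for inputs, labels in testloader:
            inputs = inputs.to(device)
            labels = labels.to(device)
            outputs = model(inputs)
            test_loss += criterion(outputs, labels).item() * inputs.size(0)
            _, preds = torch.max(outputs, 1)
            test_correct += torch.sum(preds == labels).item()
    print('test Loss: {:.4f} Acc: {:.4f}'.format(
        test_loss / len(testset), test_correct / len(testset)))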

3. Main script

import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt
import numpy as np
import torchvision
import torch.utils.data
import torchvision.transforms as transforms
import time
from tensorboardX import SummaryWriter
import MyResNet


def imshow(img):
    mean = torch.as_tensor([0.4914, 0.4822, 0.4465])
    std = torch.as_tensor([0.2023, 0.1994, 0.2010])
    mean = mean.view(-1, 1, 1)
    std = std.view(-1, 1, 1)
    #print(mean)
    img.mul_(std).add_(mean)
    npimg = img.numpy()
    plt.imshow((np.transpose(npimg, (1, 2, 0))*255).astype('uint8'))
    plt.show()


if __name__ == '__main__':
    transform_train = transforms.Compose(
        [transforms.RandomHorizontalFlip(),
         transforms.ToTensor(),
         transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))])

    transform_test = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))])

    trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=False, transform=transform_train)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=64,
                                              shuffle=True, num_workers=1)

    testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                           download=False, transform=transform_test)
    testloader = torch.utils.data.DataLoader(testset, batch_size=64,
                                             shuffle=False, num_workers=1)

    classes = ('plane', 'car', 'bird', 'cat',
               'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

    # # get some random training images
    # dataiter = iter(trainloader)
    # images, labels = dataiter.next()
    # # show images
    # imshow(torchvision.utils.make_grid(images))
    print("Training set size: {}".format(len(trainset)))
    print("Test set size: {}".format(len(testset)))

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print(device)

    writer = SummaryWriter('D:/MY_pyCharm_project/pythonProject/resnetForCifar-100/runs')

    net = MyResNet.ResNet_cifar10()
    net.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
    model = train(net, criterion, optimizer, epochs=3)

    input = torch.rand(32, 3, 32, 32)  # example input for visualizing the model graph
    input = input.to(device)

    writer.add_graph(model, (input,))
    writer.close()

4. Program description

        We use the CIFAR-10 dataset (downloaded from the official source; set download=True on the first run), classify it with ResNet, and visualize the training process and the model graph with TensorBoard. Cross-entropy loss is used as the loss function, together with an SGD optimizer: learning rate 0.1, momentum 0.9, weight decay 0.0001, trained for three epochs. All of these hyperparameters can be adjusted freely (the values above were chosen casually, with running speed and testing the code in mind).
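
For longer training runs, a step-wise learning-rate schedule in the spirit of the paper's CIFAR experiments could be layered on top of the SGD optimizer from the main script. The milestones below are illustrative assumptions, not the paper's exact values, and scheduler.step() would be called once at the end of each epoch inside train():

from torch.optim.lr_scheduler import MultiStepLR

# Illustrative schedule: multiply the learning rate by 0.1 at epochs 100 and 150.
# `optimizer` is the SGD optimizer created in the main script.
scheduler = MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)

# In train(), at the end of each epoch:
#     scheduler.step()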

        We do not use the exact network structure from the CIFAR-10 classification experiments in Deep Residual Learning for Image Recognition; simple modifications were made here. If you are interested, please refer to the original paper.

        Runtime environment: torch.__version__ == '1.10.0'

References

Deep Residual Learning for Image Recognition

https://github.com/weiaicunzai/pytorch-cifar100

Final notes

Everyone is welcome to get in touch to learn and exchange ideas so we can improve together. QQ: 2634110636
