Implementation and Data Testing of the Residual Network (ResNet)


Why ResNet Was Proposed

As convolutional neural networks have developed and spread, network depth and architecture have become central research questions, and the general trend has been to make networks deeper and wider. A natural question follows: if a network is deep enough, will it simply perform better? Researchers found that as networks get deeper, some classic problems appear, such as vanishing and exploding gradients. Here is a quick primer on what vanishing and exploding gradients are, in the explanations commonly given:

  • Vanishing gradients: when derivatives are computed by backpropagation, the magnitude of the gradients propagated backward (from the output layer toward the first few layers) shrinks rapidly as the network gets deeper. As a result, the derivative of the overall loss with respect to the weights of the first few layers becomes very small. With gradient descent, those early weights then change extremely slowly and cannot learn effectively from the data. This is usually called the vanishing-gradient problem (a small sketch right after this list illustrates the effect).
  • Exploding gradients: during training, gradients determine the direction and magnitude of the parameter updates, so that the parameters move in the right direction by a suitable amount. In deep or recurrent networks, error gradients can accumulate into very large values during the updates; such gradients cause huge parameter changes and make the network unstable. In extreme cases the weights become so large that they overflow (NaN or infinite values). When gradients explode, values larger than 1.0 are multiplied repeatedly across layers, so the gradient grows exponentially.
  • These two problems have long troubled the designers and users of convolutional neural networks and have become something one cannot avoid thinking about. Once the problem was identified, researchers kept asking how to solve it. Happily, ResNet, which I have been studying recently, solves it very effectively: after it was proposed in 2015 it immediately won first place in that year's ImageNet competition, with remarkably high accuracy. Impressive!
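
To make the vanishing-gradient effect concrete, here is a minimal PyTorch sketch (a toy 20-layer stack of Linear + Sigmoid layers, chosen purely for illustration) that compares the gradient magnitudes reaching the first and last layers after a single backward pass:

import torch
from torch import nn

# toy illustration only: a deep "plain" stack of Linear + Sigmoid layers
torch.manual_seed(0)
layers = []
for _ in range(20):
    layers += [nn.Linear(32, 32), nn.Sigmoid()]
mlp = nn.Sequential(*layers)

x = torch.randn(8, 32)
mlp(x).sum().backward()

# the gradient reaching the first layer is typically orders of magnitude smaller
print('mean |grad|, first layer: %.3e' % mlp[0].weight.grad.abs().mean().item())
print('mean |grad|, last layer : %.3e' % mlp[-2].weight.grad.abs().mean().item())

Since the derivative of Sigmoid is at most 0.25, each additional layer tends to shrink the backward signal, which is exactly the vanishing behavior described above.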

ResNet was not originally proposed to solve vanishing gradients; that it mitigates them so effectively is a mix of luck and good design. So what was the original problem?

  • People noticed that as the number of layers increases, the error on the training set should in principle keep improving (with overfitting as the worst case), yet the figure below shows a rather strange behavior: the deeper network actually ends up worse. This clearly cannot be explained by overfitting.
    [Figure: error curves of plain networks of different depths]
    With later work, people offered some explanations.
  • The problem was found to come mainly from two sources: the first is the difficulty of learning an identity function, and the second is exploding and vanishing gradients.
  • That a deeper network should be at least as good as a shallower one is something essentially everyone agrees on. But as the network gets deeper, some layers are simply unnecessary; once the useful parameters have been trained, the perturbations introduced by the later layers act like white noise and push those parameters away from their good values.
  • This led to the idea of an identity function: make the extra layers compute `F(x) = x` directly, so that they simply pass their input through. The problem is that such an identity mapping is not easy for a stack of layers to learn, especially with nonlinearities such as Sigmoid in the way. Residual blocks solve this neatly, which finally brings us to the residual block itself; let us see how the classic residual network is built.
    Below is an illustration of a residual block.
    [Figure: structure of a residual block]
  • The output function becomes `H(x)`, with `H(x) = F(x) + x`. To realize an identity mapping it is now enough to drive `F(x)` toward zero, in which case the output is approximately `H(x) = x`. In other words, if the optimization target is close to an identity mapping rather than a zero mapping, learning a small perturbation on top of the identity is much easier than learning a whole new mapping from scratch (a short derivation right after this list shows why the shortcut also helps gradients flow).
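
One way to see why the shortcut also helps gradients flow (a quick back-of-the-envelope step using the notation above): since `H(x) = F(x) + x`, the chain rule gives `∂L/∂x = ∂L/∂H · (∂F/∂x + I)`, so even when `∂F/∂x` is very small there is still the identity term `I`, and the gradient does not vanish as it passes backward through many stacked blocks.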

ResNet Implementation Walkthrough

1 Import Modules

%matplotlib inline
import torchvision
import torchvision.transforms as transforms
import torch
import time
from torch import nn, optim
import torch.nn.functional as F
from matplotlib import pyplot as plt 
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

2 Download the Dataset

transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
# download=True fetches the CIFAR-10 files automatically on the first run
trainset = torchvision.datasets.CIFAR10(root='./data_cifar10', train=True, download=True, transform=transform_train)
train_iter = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=0)

testset = torchvision.datasets.CIFAR10(root='./data_cifar10', train=False, download=True, transform=transform_test)
test_iter = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False, num_workers=2)
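
Before defining the network, it is worth a quick sanity check on the loaders (a small sketch, assuming the CIFAR-10 files are already under ./data_cifar10):

images, labels = next(iter(train_iter))   # fetch one mini-batch
print(images.shape)    # expected: torch.Size([128, 3, 32, 32])
print(labels.shape)    # expected: torch.Size([128])
print(images.min().item(), images.max().item())   # normalized values, so they fall outside [0, 1]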

3 Define the Network

3.1 Define Block 1 (Basic Block)

class basic_1(nn.Module):  # building block 1: two 3x3 convolutions
    expansion = 1
    def __init__(self, in_planes, planes, stride=1):
        super(basic_1, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)

        self.shortcut = nn.Sequential()  # if the block's input and output shapes do not match, add a 1x1 convolution on the shortcut to match the channels (and stride)
        if stride != 1 or in_planes != self.expansion*planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion*planes)
            )
    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out += self.shortcut(x)
        out = F.relu(out)
        return out
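
A quick shape check for this block (an illustrative sketch, not part of the training pipeline): with stride 2 and a different channel count, the 1x1 shortcut is used and the spatial size is halved.

blk = basic_1(64, 128, stride=2)   # channels 64 -> 128, spatial size halved
x = torch.randn(1, 64, 32, 32)
print(blk(x).shape)                # expected: torch.Size([1, 128, 16, 16])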

3.2 Define Block 2 (Bottleneck Block)

class basic_2(nn.Module):  # building block 2: 1x1 -> 3x3 -> 1x1 bottleneck (three convolutions)
    expansion = 4

    def __init__(self, in_planes, planes, stride=1):
        super(basic_2, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, self.expansion*planes, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(self.expansion*planes)

        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != self.expansion*planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion*planes)
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out += self.shortcut(x)
        out = F.relu(out)
        return out
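
The bottleneck block multiplies its output channels by expansion = 4, which a similar sketch (again just for illustration) makes visible:

blk = basic_2(64, 64, stride=1)    # output channels = expansion * planes = 256
x = torch.randn(1, 64, 32, 32)
print(blk(x).shape)                # expected: torch.Size([1, 256, 32, 32])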

3.3 Define the Residual Network

The ResNet is built through the _resnet class below. As usual, it inherits from PyTorch's base network class torch.nn.Module, and the main work is overriding the __init__ initializer and the forward method.
__init__ mainly defines the layers and their parameters, while forward defines how data flows between the layers, i.e. the order in which they are connected.

Other private methods can be defined in the class to modularize common operations. Here, _make_layer builds the four stages of the ResNet. Its first argument, block, is one of the block classes above (basic_1 or basic_2); the second is the stage's output channel count; the third is how many residual sub-blocks the stage contains, so for resnet50 the num_blocks list is [3, 4, 6, 3].

The two most important lines in _make_layer are:

  • layers.append(block(self.in_planes, planes, stride)) — this adds each residual block of the stage to the layers list;
  • self.in_planes = planes * block.expansion — this updates the number of input channels seen by the next block.

class _resnet(nn.Module):
    def __init__(self, block, num_blocks, num_classes=10):
        super(_resnet, self).__init__()
        self.in_planes = 64  # the initial number of input channels is 64
        
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)  # the first stage keeps stride 1
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)  # the remaining three stages start with stride 2
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
        self.linear = nn.Linear(512*block.expansion, num_classes)

    def _make_layer(self, block, planes, num_blocks, stride):
        strides = [stride] + [1]*(num_blocks-1)
        layers = []
        for stride in strides:  # the first block of the stage uses the given stride, the remaining blocks use stride 1
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = F.avg_pool2d(out, 4)
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out
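
As a quick check that the stages fit together on CIFAR-10-sized inputs (an illustrative sketch using the [2, 2, 2, 2] configuration that resnet18() adopts below):

net = _resnet(basic_1, [2, 2, 2, 2])   # the ResNet-18 configuration
x = torch.randn(2, 3, 32, 32)          # two CIFAR-10-sized images
print(net(x).shape)                    # expected: torch.Size([2, 10])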

4 Define the Five ResNet Variants

def resnet18():
    return _resnet(basic_1, [2,2,2,2])

def resnet34():
    return _resnet(basic_1, [3,4,6,3])

def resnet50():
    return _resnet(basic_2, [3,4,6,3])

def resnet101():
    return _resnet(basic_2, [3,4,23,3])

def resnet152():
    return _resnet(basic_2, [3,8,36,3])
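
To get a rough feel for how much larger the deeper variants are (a small sketch; the counts simply follow from the definitions above):

# compare the parameter counts of the five variants defined above
for name, builder in [('resnet18', resnet18), ('resnet34', resnet34),
                      ('resnet50', resnet50), ('resnet101', resnet101),
                      ('resnet152', resnet152)]:
    n_params = sum(p.numel() for p in builder().parameters())
    print('%s: %.1fM parameters' % (name, n_params / 1e6))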

5 Parameter Settings

net18 = resnet18()  # instantiate the networks
net34 = resnet34()
net50 = resnet50()
net101 = resnet101()
net152 = resnet152()
batch_size = 156  # unused here: the DataLoaders above already fix the batch sizes
lr, num_epochs = 0.001, 25  # learning rate and number of epochs
# each network needs its own Adam optimizer; reusing a single name would
# overwrite the earlier ones, and only the last network would ever be updated
optimizer18 = torch.optim.Adam(net18.parameters(), lr=lr)
optimizer34 = torch.optim.Adam(net34.parameters(), lr=lr)
optimizer50 = torch.optim.Adam(net50.parameters(), lr=lr)
optimizer101 = torch.optim.Adam(net101.parameters(), lr=lr)
optimizer152 = torch.optim.Adam(net152.parameters(), lr=lr)

6 Training

def evaluate_accuracy(data_iter, net, device=None):  # accuracy on the test set
    if device is None and isinstance(net, torch.nn.Module):
        # if no device is specified, use the device of net's parameters
        device = list(net.parameters())[0].device
    acc_sum, n = 0.0, 0
    with torch.no_grad():
        for X, y in data_iter:
            if isinstance(net, torch.nn.Module):
                net.eval()  # evaluation mode: this disables dropout
                acc_sum += (net(X.to(device)).argmax(dim=1) == y.to(device)).float().sum().cpu().item()
                net.train()  # switch back to training mode
            else:  # a custom model
                if('is_training' in net.__code__.co_varnames):  # if the model takes an is_training argument
                    # call it with is_training=False
                    acc_sum += (net(X, is_training=False).argmax(dim=1) == y).float().sum().item()
                else:
                    acc_sum += (net(X).argmax(dim=1) == y).float().sum().item()
            n += y.shape[0]
    return acc_sum / n
def train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs):
    net = net.to(device)
    print("training on ", device)
    loss = torch.nn.CrossEntropyLoss()
    train_ls = []  # per-epoch average training loss, used for the plot below
    for epoch in range(num_epochs):
        # reset the accumulators (including batch_count) so the printed loss is the average over this epoch only
        train_l_sum, train_acc_sum, n, batch_count, start = 0.0, 0.0, 0, 0, time.time()
        for X, y in train_iter:
            X = X.to(device)
            y = y.to(device)
            y_hat = net(X)
            l = loss(y_hat, y)
            optimizer.zero_grad()
            l.backward()
            optimizer.step()
            train_l_sum += l.cpu().item()
            train_acc_sum += (y_hat.argmax(dim=1) == y).sum().cpu().item()
            n += y.shape[0]
            batch_count += 1
        test_acc = evaluate_accuracy(test_iter, net)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f, time %.1f sec'
              % (epoch + 1, train_l_sum / batch_count, train_acc_sum / n, test_acc, time.time() - start))
        train_ls.append(train_l_sum / batch_count)
    # plot the per-epoch training loss curve
    plt.plot(range(1, num_epochs+1), train_ls, label="train loss", color="#F08080")
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.legend()
    plt.show()

Training ResNet18

train_ch5(net18, train_iter, test_iter, batch_size, optimizer18, device, num_epochs)
training on  cuda
epoch 1, loss 2.3803, train acc 0.098, test acc 0.099, time 60.3 sec
epoch 2, loss 1.1902, train acc 0.098, test acc 0.099, time 60.6 sec
epoch 3, loss 0.7937, train acc 0.098, test acc 0.099, time 60.4 sec
epoch 4, loss 0.5953, train acc 0.099, test acc 0.100, time 60.4 sec
epoch 5, loss 0.4764, train acc 0.099, test acc 0.099, time 60.5 sec
epoch 6, loss 0.3968, train acc 0.099, test acc 0.099, time 60.4 sec
epoch 7, loss 0.3402, train acc 0.098, test acc 0.099, time 60.4 sec
epoch 8, loss 0.2977, train acc 0.098, test acc 0.099, time 60.5 sec
epoch 9, loss 0.2645, train acc 0.099, test acc 0.099, time 60.5 sec
epoch 10, loss 0.2380, train acc 0.099, test acc 0.099, time 60.3 sec
epoch 11, loss 0.2164, train acc 0.099, test acc 0.099, time 60.4 sec
epoch 12, loss 0.1984, train acc 0.098, test acc 0.099, time 60.4 sec
epoch 13, loss 0.1832, train acc 0.098, test acc 0.099, time 60.3 sec
epoch 14, loss 0.1700, train acc 0.098, test acc 0.099, time 60.5 sec
epoch 15, loss 0.1587, train acc 0.098, test acc 0.099, time 60.5 sec
epoch 16, loss 0.1488, train acc 0.099, test acc 0.099, time 60.5 sec
epoch 17, loss 0.1401, train acc 0.098, test acc 0.099, time 60.4 sec
epoch 18, loss 0.1323, train acc 0.099, test acc 0.099, time 60.4 sec
epoch 19, loss 0.1253, train acc 0.098, test acc 0.099, time 60.4 sec
epoch 20, loss 0.1191, train acc 0.098, test acc 0.099, time 60.4 sec
epoch 21, loss 0.1134, train acc 0.098, test acc 0.099, time 60.4 sec
epoch 22, loss 0.1082, train acc 0.098, test acc 0.099, time 60.4 sec
epoch 23, loss 0.1035, train acc 0.098, test acc 0.099, time 60.4 sec
epoch 24, loss 0.0992, train acc 0.098, test acc 0.099, time 60.4 sec
epoch 25, loss 0.0952, train acc 0.098, test acc 0.099, time 60.3 sec

[Figure: ResNet18 training loss curve]

Training ResNet34

train_ch5(net34, train_iter, test_iter, batch_size, optimizer34, device, num_epochs)
training on  cuda
epoch 1, loss 2.4469, train acc 0.120, test acc 0.106, time 98.3 sec
epoch 2, loss 1.2233, train acc 0.120, test acc 0.107, time 98.3 sec
epoch 3, loss 0.8156, train acc 0.119, test acc 0.107, time 98.3 sec
epoch 4, loss 0.6114, train acc 0.121, test acc 0.108, time 98.3 sec
epoch 5, loss 0.4891, train acc 0.120, test acc 0.108, time 98.3 sec
epoch 6, loss 0.4077, train acc 0.122, test acc 0.109, time 98.3 sec
epoch 7, loss 0.3496, train acc 0.120, test acc 0.109, time 98.3 sec
epoch 8, loss 0.3059, train acc 0.120, test acc 0.109, time 98.4 sec
epoch 9, loss 0.2718, train acc 0.120, test acc 0.110, time 98.3 sec
epoch 10, loss 0.2445, train acc 0.119, test acc 0.106, time 98.3 sec
epoch 11, loss 0.2224, train acc 0.120, test acc 0.109, time 98.2 sec
epoch 12, loss 0.2039, train acc 0.121, test acc 0.110, time 98.3 sec
epoch 13, loss 0.1881, train acc 0.121, test acc 0.107, time 98.2 sec
epoch 14, loss 0.1748, train acc 0.119, test acc 0.107, time 98.3 sec
epoch 15, loss 0.1631, train acc 0.121, test acc 0.107, time 98.3 sec
epoch 16, loss 0.1529, train acc 0.121, test acc 0.107, time 98.3 sec
epoch 17, loss 0.1439, train acc 0.121, test acc 0.107, time 98.3 sec
epoch 18, loss 0.1359, train acc 0.121, test acc 0.109, time 98.3 sec
epoch 19, loss 0.1287, train acc 0.121, test acc 0.108, time 98.4 sec
epoch 20, loss 0.1223, train acc 0.120, test acc 0.107, time 98.3 sec
epoch 21, loss 0.1165, train acc 0.122, test acc 0.107, time 98.2 sec
epoch 22, loss 0.1111, train acc 0.123, test acc 0.106, time 98.2 sec
epoch 23, loss 0.1064, train acc 0.120, test acc 0.109, time 98.3 sec
epoch 24, loss 0.1019, train acc 0.120, test acc 0.107, time 98.3 sec
epoch 25, loss 0.0978, train acc 0.120, test acc 0.107, time 98.3 sec

Training ResNet50

train_ch5(net50, train_iter, test_iter, batch_size, optimizer50, device, num_epochs)
training on  cuda
epoch 1, loss 2.5254, train acc 0.101, test acc 0.100, time 164.6 sec
epoch 2, loss 1.2632, train acc 0.100, test acc 0.101, time 164.6 sec
epoch 3, loss 0.8422, train acc 0.099, test acc 0.100, time 164.7 sec
epoch 4, loss 0.6319, train acc 0.100, test acc 0.099, time 164.8 sec
epoch 5, loss 0.5052, train acc 0.101, test acc 0.098, time 164.7 sec

[Figure: ResNet50 training loss curve]

Training ResNet101 and ResNet152

train_ch5(net101, train_iter, test_iter, batch_size, optimizer101, device, num_epochs)
train_ch5(net152, train_iter, test_iter, batch_size, optimizer152, device, num_epochs)

Reflections

  • After working through this classic ResNet code, I feel that its coding conventions and tricks are well worth learning. ResNet is also encapsulated very cleanly: changing only a few parameters is enough to get a different variant. Still, what each line of code means is worth savoring carefully, and that understanding has to be accumulated step by step.
  • I have recently been studying gradient-descent optimization algorithms. The deeper details behind the vanishing and exploding gradients mentioned above still remain to be learned, and I believe these gradient issues will deepen my understanding of those optimization algorithms.
  • While running the code I wanted to plot the loss curve, but ended up plotting the training-set and test-set errors instead and wasted some time. I was not focused enough while coding; a reminder to pay more attention next time.