Deep Learning Day 11: Advanced Convolutional Neural Networks
Reference video for this post
Reference blog for this post
Preface
The lecture introduces AlexNet, VGGNet, GoogLeNet, ResNet, and DenseNet.
This post covers only GoogLeNet and ResNet.
1. GoogLeNet (Inception)
- A simple CNN connects its layers serially (e.g., one convolutional layer producing four 24x24 feature maps).
- A complex CNN is not purely serial; classes and functions can be used to reduce code duplication. The repeated module here is called an Inception module: after the input passes along its four parallel branches, the width and height of the feature maps must stay the same so the branch outputs can be concatenated at the end. In (b, c, w, h) terms, only the channel count c changes. Motivation: choosing kernel sizes by hand is hard, so the module offers several kernel sizes in parallel and lets training find the best combination (by adjusting the weights).
- Information fusion: combining, at each spatial position, the information from all channels into one value (like summing a student's per-subject scores into a single gaokao total).
- A 1x1 convolution can change the number of channels and cut the computational cost (the "network in network" idea); a worked count follows this list.
- In the 5x5 branch below, the first convolutional layer's arguments are (input channels, 16 output channels, 1x1 kernel), and the second layer's arguments are (16 input channels, 24 output channels, 5x5 kernel). To keep the width and height unchanged, zero padding of kernel_size // 2 = 2 is used:
self.branch5x5_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
self.branch5x5_2 = nn.Conv2d(16, 24, kernel_size=5, padding=2)
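To see why the 1x1 reduction saves work, here is a back-of-the-envelope multiplication count (the 192-channel 28x28 input is an assumed example for illustration, not a layer from this post's network):

# One output value of a kxk convolution costs k*k*c_in multiplications,
# and there are w*h*c_out output values in total.
def conv_muls(c_in, c_out, k, w, h):
    return k * k * c_in * c_out * w * h

direct = conv_muls(192, 32, 5, 28, 28)                          # 5x5 applied directly
reduced = conv_muls(192, 16, 1, 28, 28) + conv_muls(16, 32, 5, 28, 28)  # 1x1 first
print(direct, reduced)  # 120422400 vs 12443648: roughly a 10x saving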
Code
- Wrap the Inception module in a class, keeping the number of input channels as a parameter (in_channels).
- When designing the network, the input and output channel counts and the size of the fully connected layer's feature vector all have to be worked out in advance. The value 1408 can be read off by calling x.shape after x = x.view(in_size, -1), which flattens each sample's stack of feature maps into a single vector; the trace after this list shows where it comes from.
- The result is about 1% better than before. Note that the best result does not necessarily come from training more epochs: whenever the network reaches its best performance, save the current parameters to disk (a checkpointing sketch follows the GoogLeNet source code below).
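Where the 1408 comes from, traced by hand for a 28x28 MNIST input (layer names refer to the Net class in the source code below): conv1 (5x5, no padding) then max pooling gives 10 channels of 12x12, and incep1 raises that to 88 channels; conv2 (5x5, no padding) then max pooling gives 20 channels of 4x4, and incep2 raises that to 88 channels again. The flattened vector therefore has 88 x 4 x 4 = 1408 features. A minimal check:

import torch

x = torch.zeros(1, 88, 4, 4)   # shape after incep2 for one 28x28 input
print(x.view(1, -1).shape)     # torch.Size([1, 1408])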
2. ResNet (Residual Network)
- If we keep stacking 3x3 convolutional layers, does performance keep improving? No: on the CIFAR-10 dataset, a 20-layer plain network actually outperforms a 56-layer one.
- One cause of this is the vanishing gradient problem. Backpropagation uses the chain rule to multiply a sequence of local gradients together (the update is w := w - α·∂Loss/∂w). If every factor in the chain is less than 1, the product shrinks toward 0 as it propagates backward; once the gradient is near 0, the weights stop updating, so the layers close to the input never get trained properly.
- An older remedy for vanishing gradients was greedy layer-wise training (freezing the other layers while training one). This cannot work for deep networks: there are simply too many layers.
- Each residual block instead uses a skip connection: the input x is added to the block's output z, so when gradients flow backward, ∂(z + x)/∂x stays close to 1 instead of shrinking toward 0. This prevents vanishing gradients and lets the weights near the input train properly; see the sketch after this list.
- Within a residual block, the input and output must have the same number of channels and the same width and height.
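A tiny autograd check of why the skip connection helps (the 0.01 factor here is a stand-in for a residual branch whose local derivative is small):

import torch

x = torch.tensor(2.0, requires_grad=True)
f = 0.01 * x   # the branch alone: df/dx = 0.01, so the gradient shrinks
y = f + x      # residual form: dy/dx = 0.01 + 1 = 1.01
y.backward()
print(x.grad)  # tensor(1.0100) -- the "+ x" keeps the gradient near 1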
Code
- First build a ResidualBlock class, then design the network around it. While training, debug the code as needed, e.g., comment out some steps in forward and check that the output tensor sizes match what you expect.
- With this residual network, prediction accuracy on the MNIST digit dataset reaches 99%.
- Two follow-up exercises: the paper "He K, Zhang X, Ren S, et al. Identity Mappings in Deep Residual Networks" proposes many residual block variants, and "Huang G, Liu Z, Laurens V D M, et al. Densely Connected Convolutional Networks. 2016: 2261-2269" uses a different form of skip connection. Implement a few of them and test them (e.g., in your miniconda environment); one sketch follows this list.
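For the first exercise, a sketch of the "pre-activation" block from the Identity Mappings paper (BN -> ReLU -> conv, twice, with no activation after the addition). The layer sizes here are illustrative, not the paper's exact configuration:

import torch.nn as nn
import torch.nn.functional as F

class PreActBlock(nn.Module):
    def __init__(self, channels):
        super(PreActBlock, self).__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        y = self.conv1(F.relu(self.bn1(x)))
        y = self.conv2(F.relu(self.bn2(y)))
        return x + y  # identity path left untouched; no ReLU after the add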
Extension: DenseNet
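A minimal sketch of DenseNet's core idea (the second exercise above): instead of adding the input back as ResNet does, each layer concatenates its input with its output along the channel dimension, so later layers see all earlier feature maps. This is simplified (the paper also uses batch norm and bottleneck layers), and growth_rate is an illustrative choice:

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseLayer(nn.Module):
    def __init__(self, in_channels, growth_rate=12):
        super(DenseLayer, self).__init__()
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1)

    def forward(self, x):
        y = self.conv(F.relu(x))
        return torch.cat([x, y], dim=1)  # channels grow by growth_rate each layer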
The road ahead
- Understand deep learning models more deeply: read the "flower book" (Goodfellow et al., Deep Learning).
- Read the PyTorch documentation, at least once from end to end.
- Reproduce classic work. Downloading the code and getting it to run only proves you can configure an environment; instead, read the code, study the overall system architecture, training architecture, testing architecture, and data-loading architecture, and try writing the code yourself.
- Broaden your horizons: pick a specific area, read papers, and look for something new.
Source Code
GoogLeNet code notes:
1. First wrap the Inception module in a class.
2. The network is: a convolutional layer (conv, max pooling, ReLU), then an InceptionA module (output channels: 24 + 16 + 24 + 24 = 88), then another convolutional layer (conv, mp, ReLU), another InceptionA module, and finally a fully connected layer (fc).
3. The value 1408 can be read off by calling x.shape after x = x.view(in_size, -1).
import torch
import torch.nn as nn
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim
# prepare dataset
batch_size = 64
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])  # normalize with the MNIST mean and standard deviation
train_dataset = datasets.MNIST(root='../dataset/mnist/', train=True, download=True, transform=transform)
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
test_dataset = datasets.MNIST(root='../dataset/mnist/', train=False, download=True, transform=transform)
test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)
# design model using class
class InceptionA(nn.Module):
    def __init__(self, in_channels):
        super(InceptionA, self).__init__()
        self.branch1x1 = nn.Conv2d(in_channels, 16, kernel_size=1)

        self.branch5x5_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch5x5_2 = nn.Conv2d(16, 24, kernel_size=5, padding=2)

        self.branch3x3_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch3x3_2 = nn.Conv2d(16, 24, kernel_size=3, padding=1)
        self.branch3x3_3 = nn.Conv2d(24, 24, kernel_size=3, padding=1)

        self.branch_pool = nn.Conv2d(in_channels, 24, kernel_size=1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)

        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)

        branch3x3 = self.branch3x3_1(x)
        branch3x3 = self.branch3x3_2(branch3x3)
        branch3x3 = self.branch3x3_3(branch3x3)

        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        outputs = [branch1x1, branch5x5, branch3x3, branch_pool]
        return torch.cat(outputs, dim=1)  # (b, c, w, h): channels are dim=1
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(88, 20, kernel_size=5)  # 88 = 24x3 + 16

        self.incep1 = InceptionA(in_channels=10)  # matches conv1's 10 output channels
        self.incep2 = InceptionA(in_channels=20)  # matches conv2's 20 output channels

        self.mp = nn.MaxPool2d(2)
        self.fc = nn.Linear(1408, 10)

    def forward(self, x):
        in_size = x.size(0)
        x = F.relu(self.mp(self.conv1(x)))
        x = self.incep1(x)
        x = F.relu(self.mp(self.conv2(x)))
        x = self.incep2(x)
        x = x.view(in_size, -1)
        x = self.fc(x)
        return x
model = Net()
# construct loss and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
# training cycle forward, backward, update
def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0
def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('accuracy on test set: %d %% ' % (100 * correct / total))
if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()
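As noted earlier, the best epoch is not necessarily the last one. A minimal checkpointing sketch (the file name is illustrative, and it assumes test() is changed to return the accuracy it computes, e.g., by ending with "return correct / total"):

best_acc = 0.0
for epoch in range(10):
    train(epoch)
    acc = test()  # assumed to return the accuracy
    if acc > best_acc:
        best_acc = acc
        torch.save(model.state_dict(), 'googlenet_mnist_best.pth')  # illustrative path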
ResNet code notes:
1. Problem to solve: vanishing gradients.
2. Skip connection: H(x) = F(x) + x. The tensor dimensions of F(x) and x must match, and the activation is applied after the addition. Do not pool inside the block, or the tensor dimensions would change.
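Per the debugging tip in the notes above, a quick way to verify that the tensor entering a residual block has the dimensions the block expects is to push a dummy batch through the first few layers (a minimal sketch using the same layer shapes as the code below):

import torch
import torch.nn as nn
import torch.nn.functional as F

conv1 = nn.Conv2d(1, 16, kernel_size=5)
mp = nn.MaxPool2d(2)
x = torch.zeros(1, 1, 28, 28)    # dummy MNIST-sized batch
x = mp(F.relu(conv1(x)))
print(x.shape)  # torch.Size([1, 16, 12, 12]) -> ResidualBlock(16) keeps this shape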
import torch
import torch.nn as nn
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim
# prepare dataset
batch_size = 64
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])  # normalize with the MNIST mean and standard deviation
train_dataset = datasets.MNIST(root='../dataset/mnist/', train=True, download=True, transform=transform)
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
test_dataset = datasets.MNIST(root='../dataset/mnist/', train=False, download=True, transform=transform)
test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)
# design model using class
class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.channels = channels
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = self.conv2(y)
        return F.relu(x + y)  # add the input first, then activate
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=5)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5)

        self.rblock1 = ResidualBlock(16)
        self.rblock2 = ResidualBlock(32)

        self.mp = nn.MaxPool2d(2)
        self.fc = nn.Linear(512, 10)  # 512 = 32 x 4 x 4 (32 channels of 4x4 maps after the second pooling)

    def forward(self, x):
        in_size = x.size(0)
        x = self.mp(F.relu(self.conv1(x)))
        x = self.rblock1(x)
        x = self.mp(F.relu(self.conv2(x)))
        x = self.rblock2(x)
        x = x.view(in_size, -1)
        x = self.fc(x)
        return x
model = Net()
# construct loss and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
# training cycle forward, backward, update
def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0
def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('accuracy on test set: %d %% ' % (100 * correct / total))
if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()