As convolutional neural networks grow deeper, the first problem that appears is:
vanishing/exploding gradients
Earlier remedies for this problem: normalization
- normalized initialization (applied to the input data)
- intermediate normalization layers (a sketch of both follows this list)
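As a rough illustration of these two remedies on a hypothetical plain conv stack (Paddle's `KaimingNormal` initializer stands in for "normalized initialization" here, and the `BatchNorm2D` layers play the role of intermediate normalization):

```python
import paddle
import paddle.nn as nn

plain = nn.Sequential(
    nn.Conv2D(3, 64, 3, padding=1,
              weight_attr=paddle.ParamAttr(
                  initializer=nn.initializer.KaimingNormal())),
    nn.BatchNorm2D(64),  # normalize activations between layers
    nn.ReLU(),
    nn.Conv2D(64, 64, 3, padding=1,
              weight_attr=paddle.ParamAttr(
                  initializer=nn.initializer.KaimingNormal())),
    nn.BatchNorm2D(64),
    nn.ReLU(),
)
```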
Note also that a deeper network ending up worse on the test set than a shallower one is not caused by overfitting: overfitting means high training accuracy with low test accuracy, whereas deeper plain networks see training accuracy and test accuracy degrade together as depth grows.
In theory, this degradation problem (the deeper the network, the worse the accuracy) should not occur, because:
For example, take the 20-layer network in the figure above and add extra layers to deepen it. In theory the added layers could learn an identity mapping, so that each of them outputs exactly what it receives; the deeper network should then at least match the shallower one's accuracy, rather than getting progressively worse. The demo below makes this concrete.
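A small demo (channel and image sizes chosen arbitrarily) showing that a 3x3 convolution can represent the identity mapping exactly, so an added layer could in principle pass its input through unchanged:

```python
import paddle
import paddle.nn as nn

conv = nn.Conv2D(8, 8, 3, padding=1, bias_attr=False)
with paddle.no_grad():
    w = paddle.zeros_like(conv.weight)   # shape [out_c, in_c, 3, 3]
    for c in range(8):
        w[c, c, 1, 1] = 1.0              # center tap maps channel c to itself
    conv.weight.set_value(w)

x = paddle.randn([1, 8, 16, 16])
print(float((conv(x) - x).abs().max()))  # ~0.0: the layer is an identity map
```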
The solution proposed by Kaiming He's team:
The mapping we want the network to learn is H(x).
The problem is recast as learning the network's residual mapping F(x), where F(x) = H(x) - x; the original mapping is then recovered as H(x) = F(x) + x.
Residual: the difference between an observed value and an estimated value.
Here H(x) plays the role of the observed value and x the estimated value. A minimal sketch of this reformulation follows.
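A minimal sketch of the reformulation (`f` is a hypothetical submodule; x and F(x) are assumed to have the same shape so the addition is valid):

```python
import paddle.nn as nn

class ResidualSketch(nn.Layer):
    # The submodule f (any small stack of layers) learns the residual F(x);
    # the skip connection then recovers H(x) = F(x) + x.
    def __init__(self, f):
        super().__init__()
        self.f = f

    def forward(self, x):
        return self.f(x) + x  # the layers fit F(x); the block outputs H(x)
```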
How is the final addition performed when the input and output sizes differ? Two options:
- extra zero entries padded for increasing dimensions (zero padding; sketched below)
- projection (implemented with a 1x1 convolution; this is what the implementation below uses)
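The implementation below uses only the projection option; for completeness, a sketch of the zero-padding option might look like this (`zero_pad_shortcut` is a hypothetical helper, not part of the code below):

```python
import paddle
import paddle.nn.functional as F

def zero_pad_shortcut(x, out_dim, stride):
    # Parameter-free shortcut: subsample spatially when stride > 1, then
    # pad new channels with zeros so the result can be added to the
    # residual branch's output.
    if stride > 1:
        x = F.avg_pool2d(x, kernel_size=1, stride=stride)  # plain subsampling
    pad_c = out_dim - x.shape[1]
    if pad_c > 0:
        zeros = paddle.zeros([x.shape[0], pad_c, x.shape[2], x.shape[3]],
                             dtype=x.dtype)
        x = paddle.concat([x, zeros], axis=1)  # zero entries for extra dims
    return x
```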
Network implementation
```python
# PaddlePaddle reimplementation of ResNet-18
import paddle
import paddle.nn as nn


class Identity(nn.Layer):
    # Pass-through shortcut used when no downsampling is needed.
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x


class Block(nn.Layer):
    # Basic residual block: two 3x3 convs (the residual branch F(x))
    # plus a shortcut, followed by ReLU(F(x) + x).
    def __init__(self, in_dim, out_dim, stride):
        super().__init__()
        self.conv1 = nn.Conv2D(in_dim, out_dim, 3, stride=stride,
                               padding=1, bias_attr=False)
        self.bn1 = nn.BatchNorm2D(out_dim)
        self.conv2 = nn.Conv2D(out_dim, out_dim, 3, stride=1,
                               padding=1, bias_attr=False)
        self.bn2 = nn.BatchNorm2D(out_dim)
        self.relu = nn.ReLU()
        if stride == 2 or in_dim != out_dim:
            # Projection shortcut: a 1x1 conv matches the spatial size and
            # channel count of the residual branch (bias is redundant before
            # BatchNorm, so it is disabled).
            self.downsample = nn.Sequential(
                nn.Conv2D(in_dim, out_dim, 1, stride=stride,
                          bias_attr=False),
                nn.BatchNorm2D(out_dim),
            )
        else:
            self.downsample = Identity()

    def forward(self, x):
        h = x
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.conv2(x)
        x = self.bn2(x)
        identity = self.downsample(h)
        x = x + identity  # H(x) = F(x) + x
        x = self.relu(x)
        return x


class ResNet(nn.Layer):
    def __init__(self, in_dim=64, num_classes=10):
        super().__init__()
        self.in_dim = in_dim
        # stem layers
        self.conv1 = nn.Conv2D(in_channels=3,
                               out_channels=in_dim,
                               kernel_size=3,
                               stride=1,
                               padding=1,
                               bias_attr=False)
        self.bn1 = nn.BatchNorm2D(in_dim)
        self.relu = nn.ReLU()
        # blocks: four stages of two basic blocks each
        # (16 convs + stem conv + fc = 18 weighted layers)
        self.layers1 = self._make_layer(dim=64, n_blocks=2, stride=1)
        self.layers2 = self._make_layer(dim=128, n_blocks=2, stride=2)
        self.layers3 = self._make_layer(dim=256, n_blocks=2, stride=2)
        self.layers4 = self._make_layer(dim=512, n_blocks=2, stride=2)
        # classifier head
        self.avgpool = nn.AdaptiveAvgPool2D(1)
        self.classifier = nn.Linear(512, num_classes)

    def _make_layer(self, dim, n_blocks, stride):
        # Only the first block of a stage downsamples; the rest use stride 1.
        layer_list = [Block(self.in_dim, dim, stride=stride)]
        self.in_dim = dim
        for _ in range(1, n_blocks):
            layer_list.append(Block(self.in_dim, dim, stride=1))
        return nn.Sequential(*layer_list)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.layers1(x)
        x = self.layers2(x)
        x = self.layers3(x)
        x = self.layers4(x)
        x = self.avgpool(x)
        x = x.flatten(1)
        x = self.classifier(x)
        return x
```
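A quick sanity check of the model (assuming CIFAR-10-sized 32x32 RGB inputs, which match the 3x3/stride-1 stem above; the original ImageNet ResNet-18 instead uses a 7x7/stride-2 stem plus max pooling):

```python
import paddle

model = ResNet(num_classes=10)
x = paddle.randn([4, 3, 32, 32])  # a batch of 4 fake 32x32 RGB images
logits = model(x)
print(logits.shape)               # [4, 10]
```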