ResNet Explained

PyTorch's resnet module lives in torchvision.models.

The resnet variants you can choose from are: resnet18, resnet34, resnet50, resnet101, resnet152, resnext50_32x4d and resnext101_32x8d.

Each resnet in the __all__ list has a corresponding constructor function:


 
 
    def resnet18(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNet-18 model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress,
                       **kwargs)

    def resnet34(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNet-34 model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress,
                       **kwargs)

    def resnet50(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNet-50 model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress,
                       **kwargs)

    def resnet101(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNet-101 model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        return _resnet('resnet101', Bottleneck, [3, 4, 23, 3], pretrained, progress,
                       **kwargs)

    def resnet152(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNet-152 model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        return _resnet('resnet152', Bottleneck, [3, 8, 36, 3], pretrained, progress,
                       **kwargs)

    def resnext50_32x4d(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNeXt-50 32x4d model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        kwargs['groups'] = 32
        kwargs['width_per_group'] = 4
        return _resnet('resnext50_32x4d', Bottleneck, [3, 4, 6, 3],
                       pretrained, progress, **kwargs)

    def resnext101_32x8d(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNeXt-101 32x8d model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        kwargs['groups'] = 32
        kwargs['width_per_group'] = 8
        return _resnet('resnext101_32x8d', Bottleneck, [3, 4, 23, 3],
                       pretrained, progress, **kwargs)

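As a quick sanity check (a minimal sketch; nothing is downloaded since pretrained defaults to False), we can instantiate two of these constructors and confirm which block type each one uses:

    from torchvision.models.resnet import resnet18, resnet50

    m18, m50 = resnet18(), resnet50()        # pretrained=False: no download
    print(type(m18.layer1[0]).__name__)      # BasicBlock
    print(type(m50.layer1[0]).__name__)      # Bottleneck
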
I. A brief introduction to residual networks (ResNet)

A resnet is built by stacking many copies of a single residual structure.

When this structure contains 1x1 convolutions, we call it a Bottleneck; when it does not, we call it a BasicBlock. A residual network is, by and large, composed of just these two structures.

The overall structure of a residual network, taking resnet18 as an example:

[Figure: resnet18 structure diagram. Dashed shortcut curves mark connections across different dimensions; solid curves mark connections with matching dimensions.]

From the figure above we can see several key characteristics of resnet:

1. resnet18 is built entirely from BasicBlocks; the table also shows that only resnets with 50 layers or more (50 included) are built from Bottlenecks.

2. In every resnet variant, the channel count of every convolution (input and output alike) is a multiple of 64.

3. Every resnet variant uses only two kernel sizes: 3x3 and 1x1.

4. Apart from the shared stem (conv1), every resnet consists of four big stages (conv2_x, conv3_x, conv4_x, conv5_x), and the starting channel counts of these stages are 64, 128, 256 and 512. This point is very important; let's call these the "base channel counts".

Keeping these facts in mind makes the resnet source much easier to follow.

II. Walking through the code

1. Two key building blocks: BasicBlock and Bottleneck

As mentioned, the resnet module lives in torchvision.models. Since every kernel in resnet is either 1x1 or 3x3, the source begins by defining these two convolution helpers:


 
 
    def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
        """3x3 convolution with padding"""
        return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                         padding=dilation, groups=groups, bias=False, dilation=dilation)

    def conv1x1(in_planes, out_planes, stride=1):
        """1x1 convolution"""
        return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)

The main attractions, BasicBlock and Bottleneck, are both written as classes:

BasicBlock:


 
 
    class BasicBlock(nn.Module):
        expansion = 1  # expansion is one of the core differences between BasicBlock and Bottleneck

        def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
                     base_width=64, dilation=1, norm_layer=None):
            super(BasicBlock, self).__init__()
            if norm_layer is None:
                norm_layer = nn.BatchNorm2d
            if groups != 1 or base_width != 64:
                raise ValueError('BasicBlock only supports groups=1 and base_width=64')
            if dilation > 1:
                raise NotImplementedError("Dilation > 1 not supported in BasicBlock")
            # Both self.conv1 and self.downsample layers downsample the input when stride != 1
            self.conv1 = conv3x3(inplanes, planes, stride)
            self.bn1 = norm_layer(planes)
            self.relu = nn.ReLU(inplace=True)
            self.conv2 = conv3x3(planes, planes)
            self.bn2 = norm_layer(planes)
            self.downsample = downsample
            self.stride = stride

        def forward(self, x):
            identity = x

            out = self.conv1(x)
            out = self.bn1(out)
            out = self.relu(out)

            out = self.conv2(out)
            out = self.bn2(out)

            if self.downsample is not None:
                identity = self.downsample(x)

            out += identity
            out = self.relu(out)

            return out

Note the line self.downsample = downsample. By default downsample=None, meaning no downsampling is done. But there is one case where it is needed: when the shortcut branch x is to be added to the block's output and the two have different channel counts, x must be downsampled first. Spoiler: in resnet this downsample is a 1x1 convolution that maps x to the desired channel count. Why is this necessary? Because x must be added to the output at the end, and tensors with mismatched channel counts cannot be added. So downsample exists purely to change the channel count of x.
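
As a preview (this mirrors what _make_layer() builds later in the file), a typical downsample branch is just a strided 1x1 convolution followed by batch norm, reshaping x so the addition works:

    import torch
    import torch.nn as nn
    from torchvision.models.resnet import BasicBlock, conv1x1

    # shortcut branch: the 1x1 conv changes 64 -> 128 channels and strides by 2,
    # so identity ends up with the same shape as the main branch
    downsample = nn.Sequential(conv1x1(64, 128, stride=2), nn.BatchNorm2d(128))
    block = BasicBlock(64, 128, stride=2, downsample=downsample)
    print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 128, 28, 28])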

Next, let's analyze how the dimensions of a feature map change as it passes through a BasicBlock:

Notice that although BasicBlock applies two 3x3 convolutions, the first one takes an explicit stride (spoiler: stride 2 when the stage downsamples), while the second uses the default stride (spoiler: the default is 1). No more spoilers; look again at the conv3x3 definition given above:

When padding is not specified, it defaults to 1. The output size follows the standard convolution formula: W_out = floor((W - F + 2P) / S) + 1, where W is the height (or width) of the feature map, F the kernel size, P the padding and S the stride. So the output size hinges on the stride: with a 3x3 kernel we have F = 3 and P = 1, so stride S = 1 leaves W unchanged, while S = 2 halves W. The Bottleneck below follows the same principle, so we won't repeat it.
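
A tiny sketch of that arithmetic (the floor division matches how Conv2d computes its output size):

    def conv_out(W, F=3, P=1, S=1):
        """Output size of a convolution: floor((W - F + 2*P) / S) + 1."""
        return (W - F + 2 * P) // S + 1

    print(conv_out(56, S=1))  # 56 -> 56: stride 1 preserves the size
    print(conv_out(56, S=2))  # 56 -> 28: stride 2 halves it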

In diagram form, BasicBlock is: [Figure: two 3x3 convolutions plus a shortcut connection]

Next comes Bottleneck:


 
 
    class Bottleneck(nn.Module):
        expansion = 4  # expansion is one of the core differences between BasicBlock and Bottleneck

        def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
                     base_width=64, dilation=1, norm_layer=None):
            super(Bottleneck, self).__init__()
            if norm_layer is None:
                norm_layer = nn.BatchNorm2d
            width = int(planes * (base_width / 64.)) * groups
            # Both self.conv2 and self.downsample layers downsample the input when stride != 1
            self.conv1 = conv1x1(inplanes, width)
            self.bn1 = norm_layer(width)
            self.conv2 = conv3x3(width, width, stride, groups, dilation)
            self.bn2 = norm_layer(width)
            self.conv3 = conv1x1(width, planes * self.expansion)
            self.bn3 = norm_layer(planes * self.expansion)
            self.relu = nn.ReLU(inplace=True)
            self.downsample = downsample
            self.stride = stride

        def forward(self, x):
            identity = x

            out = self.conv1(x)
            out = self.bn1(out)
            out = self.relu(out)

            out = self.conv2(out)
            out = self.bn2(out)
            out = self.relu(out)

            out = self.conv3(out)
            out = self.bn3(out)

            if self.downsample is not None:
                identity = self.downsample(x)

            out += identity
            out = self.relu(out)

            return out

In diagram form, Bottleneck is: [Figure: 1x1, 3x3 and 1x1 convolutions plus a shortcut connection]

The two core differences between BasicBlock and Bottleneck:

1. BasicBlock uses two 3x3 kernels; Bottleneck uses three kernels: a 1x1, a 3x3 and another 1x1.

2. BasicBlock's expansion is 1: a block's output channel count equals its planes argument. Bottleneck's expansion is 4: its output channel count is 4 times planes.

Keeping these in mind makes the code easier to understand.
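
A small sketch of the expansion difference in action (note the Bottleneck needs a downsample here purely to widen the shortcut from 64 to 256 channels):

    import torch
    import torch.nn as nn
    from torchvision.models.resnet import BasicBlock, Bottleneck, conv1x1

    x = torch.randn(1, 64, 56, 56)
    basic = BasicBlock(64, 64)                  # expansion = 1 -> 64 channels out
    print(basic(x).shape)                       # torch.Size([1, 64, 56, 56])

    ds = nn.Sequential(conv1x1(64, 256), nn.BatchNorm2d(256))
    bottle = Bottleneck(64, 64, downsample=ds)  # expansion = 4 -> 256 channels out
    print(bottle(x).shape)                      # torch.Size([1, 256, 56, 56])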

About downsample:

Whether BasicBlock or Bottleneck, the forward pass ends by checking whether x needs downsampling, because x's channel count must be brought in line with the main branch's output before the two can be added.

2. The body of resnet: the ResNet class

Depending on the arguments it receives, the ResNet class in the source becomes resnet18, 34, 50, 101 and so on.

This ResNet class is very important; it is dissected in detail further below.

3. Calling resnet through the pytorch source:


 
 
    from torchvision.models.resnet import resnet50, Bottleneck
    resnet = resnet50(pretrained=True)

Two short lines are all it takes to get a resnet. Now let's follow the source and trace how this call executes:
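
To confirm the model actually runs, here is a quick forward pass on a dummy ImageNet-sized input (a sketch; the 1000 outputs are the ImageNet class logits):

    import torch

    resnet.eval()                    # inference mode: freezes batch-norm statistics
    with torch.no_grad():
        logits = resnet(torch.randn(1, 3, 224, 224))
    print(logits.shape)              # torch.Size([1, 1000])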

From resnet50(pretrained=True) we see that the resnet50 network is being requested. The definition of resnet50() can be found in the torchvision.models.resnet module:


 
 
    def resnet50(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNet-50 model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress,
                       **kwargs)

As you can see, resnet50() does exactly one thing: it calls _resnet() with a specific set of arguments.

    _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress, **kwargs)

The first argument needs no explanation. Seeing Bottleneck there tells us the second parameter of _resnet is the network's building block, i.e. either BasicBlock or Bottleneck. As for what [3, 4, 6, 3] means, we have to read the definition of _resnet():


 
 
    def _resnet(arch, block, layers, pretrained, progress, **kwargs):
        model = ResNet(block, layers, **kwargs)
        if pretrained:
            state_dict = load_state_dict_from_url(model_urls[arch],
                                                  progress=progress)
            model.load_state_dict(state_dict)
        return model

We can see that the position [3, 4, 6, 3] occupies in _resnet() is the layers parameter, and that it is passed straight on to ResNet, the body class of the residual network.

Next it checks whether pretrained weights were requested (pretrained is True): if so, it loads the weights into the model and returns it; if not, it returns the freshly built model directly.

The block parameter of _resnet() is where BasicBlock or Bottleneck goes; that is, block stands for either BasicBlock or Bottleneck.

In one sentence, _resnet() first instantiates a bare ResNet from the ResNet class; if pretraining is requested it loads the weights into it, otherwise it returns the bare ResNet as is.
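
Since _resnet() merely forwards block and layers to ResNet, nothing stops us from building non-standard depths ourselves. A hypothetical "resnet10", as a sketch:

    from torchvision.models.resnet import ResNet, BasicBlock

    # hypothetical "resnet10": one BasicBlock per stage, 10 classes, no pretrained weights
    tiny = ResNet(BasicBlock, [1, 1, 1, 1], num_classes=10)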

Taking resnet50 as the example: _resnet()'s arch argument is 'resnet50', and this key is used to look up the matching weight file in the model_urls dictionary for download.

That dictionary is as follows:

    model_urls = {
        'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
        'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
        'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
        'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
        'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
        'resnext50_32x4d': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth',
        'resnext101_32x8d': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth',
    }

With that, it's time to look carefully at the ResNet class itself:

ResNet:


 
 
    class ResNet(nn.Module):

        def __init__(self, block, layers, num_classes=1000, zero_init_residual=False,
                     groups=1, width_per_group=64, replace_stride_with_dilation=None,
                     norm_layer=None):
            super(ResNet, self).__init__()
            if norm_layer is None:
                norm_layer = nn.BatchNorm2d
            self._norm_layer = norm_layer
            self.inplanes = 64
            self.dilation = 1
            if replace_stride_with_dilation is None:
                # each element in the tuple indicates if we should replace
                # the 2x2 stride with a dilated convolution instead
                replace_stride_with_dilation = [False, False, False]
            if len(replace_stride_with_dilation) != 3:
                raise ValueError("replace_stride_with_dilation should be None "
                                 "or a 3-element tuple, got {}".format(replace_stride_with_dilation))
            self.groups = groups
            self.base_width = width_per_group
            self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3,
                                   bias=False)
            self.bn1 = norm_layer(self.inplanes)
            self.relu = nn.ReLU(inplace=True)
            self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
            self.layer1 = self._make_layer(block, 64, layers[0])
            self.layer2 = self._make_layer(block, 128, layers[1], stride=2,
                                           dilate=replace_stride_with_dilation[0])
            self.layer3 = self._make_layer(block, 256, layers[2], stride=2,
                                           dilate=replace_stride_with_dilation[1])
            self.layer4 = self._make_layer(block, 512, layers[3], stride=2,
                                           dilate=replace_stride_with_dilation[2])
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
            self.fc = nn.Linear(512 * block.expansion, num_classes)

            for m in self.modules():
                if isinstance(m, nn.Conv2d):
                    nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                    nn.init.constant_(m.weight, 1)
                    nn.init.constant_(m.bias, 0)

            # Zero-initialize the last BN in each residual branch,
            # so that the residual branch starts with zeros, and each residual block behaves like an identity.
            # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
            if zero_init_residual:
                for m in self.modules():
                    if isinstance(m, Bottleneck):
                        nn.init.constant_(m.bn3.weight, 0)
                    elif isinstance(m, BasicBlock):
                        nn.init.constant_(m.bn2.weight, 0)

        def _make_layer(self, block, planes, blocks, stride=1, dilate=False):
            norm_layer = self._norm_layer
            downsample = None
            previous_dilation = self.dilation
            if dilate:
                self.dilation *= stride
                stride = 1
            if stride != 1 or self.inplanes != planes * block.expansion:
                downsample = nn.Sequential(
                    conv1x1(self.inplanes, planes * block.expansion, stride),
                    norm_layer(planes * block.expansion),
                )

            layers = []
            layers.append(block(self.inplanes, planes, stride, downsample, self.groups,
                                self.base_width, previous_dilation, norm_layer))
            self.inplanes = planes * block.expansion
            for _ in range(1, blocks):
                layers.append(block(self.inplanes, planes, groups=self.groups,
                                    base_width=self.base_width, dilation=self.dilation,
                                    norm_layer=norm_layer))

            return nn.Sequential(*layers)

        def forward(self, x):
            x = self.conv1(x)
            x = self.bn1(x)
            x = self.relu(x)
            x = self.maxpool(x)

            x = self.layer1(x)
            x = self.layer2(x)
            x = self.layer3(x)
            x = self.layer4(x)

            x = self.avgpool(x)
            x = x.reshape(x.size(0), -1)
            x = self.fc(x)

            return x

At first glance it's hard to know where to start, but for any pytorch network, the way to learn its execution order is to read its forward method.

From ResNet's forward we see the input first passes through conv1, bn1, relu and maxpool. As the resnet table showed earlier, every variant (resnet18, resnet34, resnet50, resnet101 and so on) must start with exactly these layers. This part is static.

Only after that do the four layer stages come in, and these are the dynamic part that distinguishes resnet18, resnet34, resnet50, resnet101 and the rest.

From this dynamic part we can see that every resnet has 4 such stages. To find out how layer1 through layer4 are defined, we look back at the constructor:

It turns out that layer1 through layer4 are all produced by _make_layer() with different arguments. Looking at the calls, layers[0..3] is exactly the [3, 4, 6, 3] passed in earlier: layers[0] is 3, layers[1] is 4, layers[2] is 6 and layers[3] is 3. To learn what these numbers mean, we follow the definition of _make_layer():


 
 
    def _make_layer(self, block, planes, blocks, stride=1, dilate=False):
        norm_layer = self._norm_layer
        downsample = None
        previous_dilation = self.dilation
        if dilate:
            self.dilation *= stride
            stride = 1
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                conv1x1(self.inplanes, planes * block.expansion, stride),
                norm_layer(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample, self.groups,
                            self.base_width, previous_dilation, norm_layer))
        self.inplanes = planes * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.inplanes, planes, groups=self.groups,
                                base_width=self.base_width, dilation=self.dilation,
                                norm_layer=norm_layer))

        return nn.Sequential(*layers)

(Note: the planes parameter of _make_layer() is the "base channel count", NOT the output channel count!)
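
This is easy to see on a real model (layer1 of resnet50 is built with planes=64, yet its blocks output planes * expansion = 256 channels):

    from torchvision.models import resnet50

    m = resnet50()
    # layer1 was built with planes=64, but Bottleneck.expansion == 4
    print(m.layer1[0].conv3.out_channels)  # 256, not 64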

Focus on the third parameter of _make_layer() (not counting self), blocks. The places where blocks is used inside _make_layer() are:


 
 
    layers = []
    layers.append(block(self.inplanes, planes, stride, downsample, self.groups,
                        self.base_width, previous_dilation, norm_layer))
    self.inplanes = planes * block.expansion
    for _ in range(1, blocks):
        layers.append(block(self.inplanes, planes, groups=self.groups,
                            base_width=self.base_width, dilation=self.dilation,
                            norm_layer=norm_layer))

So the integer blocks simply tells _make_layer() how many blocks to generate. Since resnet50 passes Bottleneck as the block, blocks here counts Bottlenecks. [3, 4, 6, 3] therefore means: generate, in order, 3 Bottlenecks, 4 Bottlenecks, 6 Bottlenecks and 3 Bottlenecks, which matches the resnet50 column of the table. So layer1 holds 3 Bottlenecks, layer2 holds 4, layer3 holds 6 and layer4 holds 3.
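
This is easy to verify on an instantiated model (a quick sketch counting the blocks in each stage):

    from torchvision.models import resnet50

    m = resnet50()
    print([len(getattr(m, 'layer{}'.format(i))) for i in range(1, 5)])  # [3, 4, 6, 3]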

In summary: _make_layer(block, planes, blocks) stacks blocks copies of block at base channel count planes. The first copy absorbs any stride or channel change (with the help of downsample); the remaining copies leave the shape untouched.

Next, a few finer points of resnet worth analyzing:

1. First, the output size:

Looking at the output size column of the table, the spatial size halves from stage to stage: 56-28-14-7. You might wonder: with so many convolutions inside a stage, why does the size only halve once? The answer lies in the _make_layer() method of the ResNet class:

Looking at the _make_layer() source, have you ever wondered why the blocks are created in two separate steps even though both just append block(...)? It is because the first append passes an explicit stride (2 for the downsampling stages), while the loop below it uses the default stride of 1. Combined with the kernel analysis above: stride 2 halves the feature map, stride 1 leaves it unchanged.
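
We can confirm this on resnet18: each block stores its stride, and only the first block of a downsampling stage carries stride 2 (a sketch):

    from torchvision.models import resnet18

    m = resnet18()
    print([b.stride for b in m.layer2])  # [2, 1]: only the first block downsamples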

 

Some take-aways:

1. Whatever the variant, every resnet has 4 layers, and before entering them the input image has already been shrunk 4x (a stride-2 convolution and a stride-2 max-pool, each halving it). layer1 does not shrink the feature map; the other three layers each halve it.
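
Tracing a 224x224 input through the stem and the four layers shows exactly this (a sketch replaying forward() step by step):

    import torch
    from torchvision.models import resnet18

    m = resnet18().eval()
    x = torch.randn(1, 3, 224, 224)
    x = m.maxpool(m.relu(m.bn1(m.conv1(x))))  # stem: 224 -> 112 -> 56
    print('stem', tuple(x.shape))             # (1, 64, 56, 56)
    for name in ('layer1', 'layer2', 'layer3', 'layer4'):
        x = getattr(m, name)(x)
        print(name, tuple(x.shape))           # 56 -> 56 -> 28 -> 14 -> 7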


Finally, here is the complete torchvision.models.resnet for reference:


 
 
    import torch.nn as nn
    from .utils import load_state_dict_from_url

    __all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
               'resnet152', 'resnext50_32x4d', 'resnext101_32x8d']

    model_urls = {
        'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
        'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
        'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
        'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
        'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
        'resnext50_32x4d': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth',
        'resnext101_32x8d': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth',
    }

    def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
        """3x3 convolution with padding"""
        return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                         padding=dilation, groups=groups, bias=False, dilation=dilation)

    def conv1x1(in_planes, out_planes, stride=1):
        """1x1 convolution"""
        return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)

    class BasicBlock(nn.Module):
        expansion = 1

        def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
                     base_width=64, dilation=1, norm_layer=None):
            super(BasicBlock, self).__init__()
            if norm_layer is None:
                norm_layer = nn.BatchNorm2d
            if groups != 1 or base_width != 64:
                raise ValueError('BasicBlock only supports groups=1 and base_width=64')
            if dilation > 1:
                raise NotImplementedError("Dilation > 1 not supported in BasicBlock")
            # Both self.conv1 and self.downsample layers downsample the input when stride != 1
            self.conv1 = conv3x3(inplanes, planes, stride)
            self.bn1 = norm_layer(planes)
            self.relu = nn.ReLU(inplace=True)
            self.conv2 = conv3x3(planes, planes)
            self.bn2 = norm_layer(planes)
            self.downsample = downsample
            self.stride = stride

        def forward(self, x):
            identity = x

            out = self.conv1(x)
            out = self.bn1(out)
            out = self.relu(out)

            out = self.conv2(out)
            out = self.bn2(out)

            if self.downsample is not None:
                identity = self.downsample(x)

            out += identity
            out = self.relu(out)

            return out

    class Bottleneck(nn.Module):
        expansion = 4

        def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
                     base_width=64, dilation=1, norm_layer=None):
            super(Bottleneck, self).__init__()
            if norm_layer is None:
                norm_layer = nn.BatchNorm2d
            width = int(planes * (base_width / 64.)) * groups
            # Both self.conv2 and self.downsample layers downsample the input when stride != 1
            self.conv1 = conv1x1(inplanes, width)
            self.bn1 = norm_layer(width)
            self.conv2 = conv3x3(width, width, stride, groups, dilation)
            self.bn2 = norm_layer(width)
            self.conv3 = conv1x1(width, planes * self.expansion)
            self.bn3 = norm_layer(planes * self.expansion)
            self.relu = nn.ReLU(inplace=True)
            self.downsample = downsample
            self.stride = stride

        def forward(self, x):
            identity = x

            out = self.conv1(x)
            out = self.bn1(out)
            out = self.relu(out)

            out = self.conv2(out)
            out = self.bn2(out)
            out = self.relu(out)

            out = self.conv3(out)
            out = self.bn3(out)

            if self.downsample is not None:
                identity = self.downsample(x)

            out += identity
            out = self.relu(out)

            return out

    class ResNet(nn.Module):

        def __init__(self, block, layers, num_classes=1000, zero_init_residual=False,
                     groups=1, width_per_group=64, replace_stride_with_dilation=None,
                     norm_layer=None):
            super(ResNet, self).__init__()
            if norm_layer is None:
                norm_layer = nn.BatchNorm2d
            self._norm_layer = norm_layer
            self.inplanes = 64
            self.dilation = 1
            if replace_stride_with_dilation is None:
                # each element in the tuple indicates if we should replace
                # the 2x2 stride with a dilated convolution instead
                replace_stride_with_dilation = [False, False, False]
            if len(replace_stride_with_dilation) != 3:
                raise ValueError("replace_stride_with_dilation should be None "
                                 "or a 3-element tuple, got {}".format(replace_stride_with_dilation))
            self.groups = groups
            self.base_width = width_per_group
            self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3,
                                   bias=False)
            self.bn1 = norm_layer(self.inplanes)
            self.relu = nn.ReLU(inplace=True)
            self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
            self.layer1 = self._make_layer(block, 64, layers[0])
            self.layer2 = self._make_layer(block, 128, layers[1], stride=2,
                                           dilate=replace_stride_with_dilation[0])
            self.layer3 = self._make_layer(block, 256, layers[2], stride=2,
                                           dilate=replace_stride_with_dilation[1])
            self.layer4 = self._make_layer(block, 512, layers[3], stride=2,
                                           dilate=replace_stride_with_dilation[2])
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
            self.fc = nn.Linear(512 * block.expansion, num_classes)

            for m in self.modules():
                if isinstance(m, nn.Conv2d):
                    nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                    nn.init.constant_(m.weight, 1)
                    nn.init.constant_(m.bias, 0)

            # Zero-initialize the last BN in each residual branch,
            # so that the residual branch starts with zeros, and each residual block behaves like an identity.
            # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
            if zero_init_residual:
                for m in self.modules():
                    if isinstance(m, Bottleneck):
                        nn.init.constant_(m.bn3.weight, 0)
                    elif isinstance(m, BasicBlock):
                        nn.init.constant_(m.bn2.weight, 0)

        def _make_layer(self, block, planes, blocks, stride=1, dilate=False):
            norm_layer = self._norm_layer
            downsample = None
            previous_dilation = self.dilation
            if dilate:
                self.dilation *= stride
                stride = 1
            if stride != 1 or self.inplanes != planes * block.expansion:
                downsample = nn.Sequential(
                    conv1x1(self.inplanes, planes * block.expansion, stride),
                    norm_layer(planes * block.expansion),
                )

            layers = []
            layers.append(block(self.inplanes, planes, stride, downsample, self.groups,
                                self.base_width, previous_dilation, norm_layer))
            self.inplanes = planes * block.expansion
            for _ in range(1, blocks):
                layers.append(block(self.inplanes, planes, groups=self.groups,
                                    base_width=self.base_width, dilation=self.dilation,
                                    norm_layer=norm_layer))

            return nn.Sequential(*layers)

        def forward(self, x):
            x = self.conv1(x)
            x = self.bn1(x)
            x = self.relu(x)
            x = self.maxpool(x)

            x = self.layer1(x)
            x = self.layer2(x)
            x = self.layer3(x)
            x = self.layer4(x)

            x = self.avgpool(x)
            x = x.reshape(x.size(0), -1)
            x = self.fc(x)

            return x

    def _resnet(arch, block, layers, pretrained, progress, **kwargs):
        model = ResNet(block, layers, **kwargs)
        if pretrained:
            state_dict = load_state_dict_from_url(model_urls[arch],
                                                  progress=progress)
            model.load_state_dict(state_dict)
        return model

    def resnet18(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNet-18 model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress,
                       **kwargs)

    def resnet34(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNet-34 model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress,
                       **kwargs)

    def resnet50(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNet-50 model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress,
                       **kwargs)

    def resnet101(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNet-101 model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        return _resnet('resnet101', Bottleneck, [3, 4, 23, 3], pretrained, progress,
                       **kwargs)

    def resnet152(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNet-152 model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        return _resnet('resnet152', Bottleneck, [3, 8, 36, 3], pretrained, progress,
                       **kwargs)

    def resnext50_32x4d(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNeXt-50 32x4d model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        kwargs['groups'] = 32
        kwargs['width_per_group'] = 4
        return _resnet('resnext50_32x4d', Bottleneck, [3, 4, 6, 3],
                       pretrained, progress, **kwargs)

    def resnext101_32x8d(pretrained=False, progress=True, **kwargs):
        """Constructs a ResNeXt-101 32x8d model.

        Args:
            pretrained (bool): If True, returns a model pre-trained on ImageNet
            progress (bool): If True, displays a progress bar of the download to stderr
        """
        kwargs['groups'] = 32
        kwargs['width_per_group'] = 8
        return _resnet('resnext101_32x8d', Bottleneck, [3, 4, 23, 3],
                       pretrained, progress, **kwargs)

 

 
