Copyright notice: This is an original article by the author. Reposting is welcome; please credit the source. Contact: [email protected]
The ResNet-34 residual network from the previous post reached only about 80% accuracy after training.
Here we make a small modification to the network: the initial convolution layer uses a smaller (3*3) kernel and no longer shrinks the feature map, and correspondingly the kernel of the final average pooling is changed to 4*4.
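Why a 3*3, stride-1, padding-1 stem keeps the 32*32 CIFAR-10 input intact follows directly from the standard convolution output-size formula, (W - K + 2P) / S + 1. A small sketch (the helper name `conv_out` is mine, not from the post):

```python
def conv_out(size, kernel, stride, padding):
    """Conv/pool output size: (W - K + 2P) // S + 1."""
    return (size - kernel + 2 * padding) // stride + 1

# the modified stem: 3*3 kernel, stride 1, padding 1 -> size preserved
print(conv_out(32, 3, 1, 1))  # 32

# the original ImageNet-style 7*7/stride-2 stem would have halved it
print(conv_out(32, 7, 2, 3))  # 16
```

This is why the downsampling is left entirely to layer2-layer4, which still halve the map at each stage.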
The specific changes are as follows:
class ResNet34(nn.Module):
    def __init__(self, block):
        super(ResNet34, self).__init__()

        # initial convolution and pooling layers
        self.first = nn.Sequential(
            # conv layer 1: 3*3 kernel, stride 1, padding 1
            # out: (32 - 3 + 2*1) / 1 + 1 = 32, i.e. 32*32
            nn.Conv2d(3, 64, 3, 1, 1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),

            # max pooling: 3*3 kernel, stride 1 (keeps the size), padding 1
            # out: (32 - 3 + 2*1) / 1 + 1 = 32, i.e. 32*32
            nn.MaxPool2d(3, 1, 1)
        )

        # layer 1: channel count unchanged
        self.layer1 = self.make_layer(block, 64, 64, 3, 1)

        # layers 2-4: channels doubled, feature map halved
        self.layer2 = self.make_layer(block, 64, 128, 4, 2)   # output 16*16
        self.layer3 = self.make_layer(block, 128, 256, 6, 2)  # output 8*8
        self.layer4 = self.make_layer(block, 256, 512, 3, 2)  # output 4*4

        self.avg_pool = nn.AvgPool2d(4)  # output 512*1*1
        self.fc = nn.Linear(512, 10)
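The constructor above calls `self.make_layer(block, ...)`, but neither `block` nor `make_layer` is shown in this excerpt (the post defines `ResBlock` elsewhere, and `make_layer` as a method). As a sketch only, assuming `block` is the standard two-convolution basic residual block with a 1*1 projection shortcut when the shape changes, they might look like this (written here as module-level definitions for brevity):

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Assumed basic residual block: conv3*3 -> BN -> ReLU -> conv3*3 -> BN, plus skip."""
    def __init__(self, in_ch, out_ch, stride=1):
        super(ResBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # 1*1 projection so the skip connection matches when channels/size change
        self.shortcut = None
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        identity = x if self.shortcut is None else self.shortcut(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)

def make_layer(block, in_ch, out_ch, num_blocks, stride):
    """Stack num_blocks blocks; only the first one downsamples/changes channels."""
    layers = [block(in_ch, out_ch, stride)]
    for _ in range(num_blocks - 1):
        layers.append(block(out_ch, out_ch, 1))
    return nn.Sequential(*layers)
```

With these definitions, `make_layer(ResBlock, 64, 128, 4, 2)` maps a 64*32*32 input to 128*16*16, matching the `layer2` comment above.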
Running results:
Files already downloaded and verified
ResNet34(
(first): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
)
(layer1): Sequential(
(0): ResBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine