Deep Learning --- VGG (Networks Using Repeating Elements)

A VGG block follows a simple pattern: several consecutive 3×3 convolutional layers with padding 1, followed by a single 2×2 max-pooling layer with stride 2. The convolutional layers keep the input height and width unchanged, while the pooling layer halves both.
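This shape arithmetic is easy to verify in isolation. A minimal sketch (the layer sizes here are illustrative and not yet part of the network):

import torch
from torch import nn

x = torch.randn(1, 1, 224, 224)
conv = nn.Conv2d(1, 64, kernel_size=3, padding=1)  # (224 + 2*1 - 3)/1 + 1 = 224: H and W unchanged
pool = nn.MaxPool2d(kernel_size=2, stride=2)       # (224 - 2)/2 + 1 = 112: H and W halved
print(conv(x).shape)        # torch.Size([1, 64, 224, 224])
print(pool(conv(x)).shape)  # torch.Size([1, 64, 112, 112])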

A VGG network consists of a convolutional module followed by a fully connected module. The convolutional module chains several vgg_blocks, whose hyperparameters are defined by the variable conv_arch: for each VGG block it specifies the number of convolutional layers and the input and output channel counts. The fully connected module is the same as in AlexNet.

VGG-11 contains 5 convolutional blocks: the first 2 use a single convolutional layer each, while the last 3 use two convolutional layers each. The first block has 64 output channels, and each subsequent block doubles that number until it reaches 512. Since the network uses 8 convolutional layers and 3 fully connected layers in total, it is commonly called VGG-11.
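Written out as code, the block hyperparameters described above look as follows (a sketch; the single input channel matches the 1×224×224 test tensor used below):

# (num_convs, in_channels, out_channels) for each of the 5 blocks
conv_arch = ((1, 1, 64), (1, 64, 128), (2, 128, 256), (2, 256, 512), (2, 512, 512))
# 1 + 1 + 2 + 2 + 2 = 8 convolutional layers; together with the 3 fully connected
# layers, that makes the 11 weight layers behind the name VGG-11.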

import torch
from torch import nn

def vgg_block(num_convs, in_channels, out_channels):
    # num_convs 3x3 convolutions (padding 1 keeps height and width unchanged),
    # each followed by ReLU, then one 2x2 max-pooling layer that halves H and W.
    layers = []
    for _ in range(num_convs):
        layers += [nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
        in_channels = out_channels  # only the first conv changes the channel count
    layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
    return nn.Sequential(*layers)
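A quick sanity check of a single block (the toy sizes here are hypothetical, chosen only to show the shape behavior):

blk = vgg_block(2, 3, 16)
y = blk(torch.randn(1, 3, 32, 32))
print(y.shape)  # torch.Size([1, 16, 16, 16]): channels set by the convs, H and W halved by the pool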

def vgg(conv_arch):
    # Convolutional part: one vgg_block per (num_convs, in_channels, out_channels) triple.
    # list() flattens each block's layers so the per-layer shape printout below
    # shows every convolution, activation, and pooling step.
    layers = []
    for (num_convs, in_channels, out_channels) in conv_arch:
        layers += list(vgg_block(num_convs, in_channels, out_channels))
    # Fully connected part: the same as in AlexNet.
    layers += [nn.Flatten()]
    layers += [nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5)]
    layers += [nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5)]
    layers += [nn.Linear(4096, 10)]
    return nn.Sequential(*layers)

net1 = vgg(conv_arch)
x = torch.randn(1, 1, 224, 224)
for blk in net1:
    x = blk(x)
    print('output shape:\t', x.shape)

output shape:	 torch.Size([1, 64, 224, 224])
output shape:	 torch.Size([1, 64, 224, 224])
output shape:	 torch.Size([1, 64, 112, 112])
output shape:	 torch.Size([1, 128, 112, 112])
output shape:	 torch.Size([1, 128, 112, 112])
output shape:	 torch.Size([1, 128, 56, 56])
output shape:	 torch.Size([1, 256, 56, 56])
output shape:	 torch.Size([1, 256, 56, 56])
output shape:	 torch.Size([1, 256, 56, 56])
output shape:	 torch.Size([1, 256, 56, 56])
output shape:	 torch.Size([1, 256, 28, 28])
output shape:	 torch.Size([1, 512, 28, 28])
output shape:	 torch.Size([1, 512, 28, 28])
output shape:	 torch.Size([1, 512, 28, 28])
output shape:	 torch.Size([1, 512, 28, 28])
output shape:	 torch.Size([1, 512, 14, 14])
output shape:	 torch.Size([1, 512, 14, 14])
output shape:	 torch.Size([1, 512, 14, 14])
output shape:	 torch.Size([1, 512, 14, 14])
output shape:	 torch.Size([1, 512, 14, 14])
output shape:	 torch.Size([1, 512, 7, 7])
output shape:	 torch.Size([1, 25088])
output shape:	 torch.Size([1, 4096])
output shape:	 torch.Size([1, 4096])
output shape:	 torch.Size([1, 4096])
output shape:	 torch.Size([1, 4096])
output shape:	 torch.Size([1, 4096])
output shape:	 torch.Size([1, 4096])
output shape:	 torch.Size([1, 10])
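For a rough sense of model size, the parameters of net1 can be tallied in one line (a small sketch; the estimate assumes the single-channel, 10-class configuration above):

num_params = sum(p.numel() for p in net1.parameters())
print('total parameters:', num_params)  # roughly 1.3e8 here; the first two Linear layers dominate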
