VGG model: converting the convolution kernels from 2D to 3D

VGG model source code (I found this VGG model online a while back and have been using it as my reference model ever since; this copy is based on the source as read through PyCharm):

from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten

def vgg():
    model = Sequential()
    model.add(Conv2D(64, (3, 3),
           activation='relu',
           padding='same',
           name='block1_conv1',
           data_format='channels_last',  # Keras 2 equivalent of the legacy dim_ordering='tf'
           input_shape=(255, 255, 3)))
    model.add(Conv2D(64, (3, 3),
           activation='relu',
           padding='same',
           name='block1_conv2'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool'))

    model.add(Conv2D(128, (3, 3),
                      activation='relu',
                      padding='same',
                      name='block2_conv1'))
    model.add(Conv2D(128, (3, 3),
           activation='relu',
           padding='same',
           name='block2_conv2'))
    print(model.output.shape)
    model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool'))
    model.add(Conv2D(256, (3, 3),
                      activation='relu',
                      padding='same',
                      name='block3_conv1'))
    model.add(Conv2D(256, (3, 3),
                      activation='relu',
                      padding='same',
                      name='block3_conv2'))
    model.add(Conv2D(256, (3, 3),
                      activation='relu',
                      padding='same',
                      name='block3_conv3'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool'))
    model.add(Conv2D(512, (3, 3),
                      activation='relu',
                      padding='same',
                      name='block4_conv1'))
    model.add(Conv2D(512, (3, 3),
                      activation='relu',
                      padding='same',
                      name='block4_conv2'))
    model.add(Conv2D(512, (3, 3),
                      activation='relu',
                      padding='same',
                      name='block4_conv3'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool'))
    model.add(Conv2D(512, (3, 3),
                      activation='relu',
                      padding='same',
                      name='block5_conv1'))
    model.add(Conv2D(512, (3, 3),
                      activation='relu',
                      padding='same',
                      name='block5_conv2'))

    model.add(Conv2D(512, (3, 3),
                      activation='relu',
                      padding='same',
                      name='block5_conv3'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool'))

    model.add(Flatten(name='flatten'))
    model.add(Dense(4096, activation='relu',name='fc1'))

    model.add(Dense(4096, activation='relu', name='fc2'))

    model.add(Dense(2, activation='sigmoid', name='predictions'))
    print(model.summary())
    return model

if __name__ == '__main__':
    vgg()
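As a side note (a standalone sketch, not part of the original post): since every Conv2D above uses padding='same', only the five max-pools (default padding='valid') change the spatial size, so the shape entering Flatten can be checked by hand without running Keras:

```python
# Trace the spatial size through the five 'valid' max-pooling layers
# of the 2D model above. Conv layers use padding='same', so only the
# pools shrink the feature map.
def pooled_valid(n, pool=2, stride=2):
    """Output length of 1-D 'valid' pooling: floor((n - pool) / stride) + 1."""
    return (n - pool) // stride + 1

size = 255  # from input_shape=(255, 255, 3)
for block in range(1, 6):
    size = pooled_valid(size)
    print(f"after block{block}_pool: {size}x{size}")
# 127, 63, 31, 15, 7

print("flatten size:", size * size * 512)  # 7 * 7 * 512 = 25088, fed into Dense(4096)
```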

Because I need to train on four-dimensional images, the convolution kernels and pooling layers all have to be changed to 3D. Here is the fixed code first; the pitfalls come after:

from keras.models import Sequential
from keras.layers import Dense, Conv3D, MaxPooling3D, Flatten


def vgg():
    model = Sequential()
    model.add(Conv3D(64, (3, 3, 3),
           activation='relu',
           padding='same',
           name='block1_conv1',
           data_format='channels_last',  # Keras 2 equivalent of the legacy dim_ordering='tf'
           input_shape=(255, 255, 3, 4)))
    model.add(Conv3D(64, (3, 3, 3),
           activation='relu',
           padding='same',
           name='block1_conv2'))
    model.add(MaxPooling3D(
            pool_size=(2, 2, 2),
            strides=(2, 2, 2),
            name='block1_pool',
            padding='same'))
    model.add(Conv3D(128, (3, 3, 3),
                      activation='relu',
                      padding='same',
                      name='block2_conv1'))
    model.add(Conv3D(128, (3, 3, 3),
           activation='relu',
           padding='same',
           name='block2_conv2'))

    model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block2_pool'))

    model.add(Conv3D(256, (3, 3, 3),
                      activation='relu',
                      padding='same',
                      name='block3_conv1'))
    model.add(Conv3D(256, (3, 3, 3),
                      activation='relu',
                      padding='same',
                      name='block3_conv2'))
    model.add(Conv3D(256, (3, 3, 3),
                      activation='relu',
                      padding='same',
                      name='block3_conv3'))
    model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block3_pool'))
    model.add(Conv3D(512, (3, 3, 3),
                      activation='relu',
                      padding='same',
                      name='block4_conv1'))
    model.add(Conv3D(512, (3, 3, 3),
                      activation='relu',
                      padding='same',
                      name='block4_conv2'))
    model.add(Conv3D(512, (3, 3, 3),
                      activation='relu',
                      padding='same',
                      name='block4_conv3'))
    model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block4_pool'))
    model.add(Conv3D(512, (3, 3, 3),
                      activation='relu',
                      padding='same',
                      name='block5_conv1'))
    model.add(Conv3D(512, (3, 3, 3),
                      activation='relu',
                      padding='same',
                      name='block5_conv2'))

    model.add(Conv3D(512, (3, 3, 3),
                      activation='relu',
                      padding='same',
                      name='block5_conv3'))
    model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block5_pool'))

    model.add(Flatten(name='flatten'))
    model.add(Dense(4096, activation='relu',name='fc1'))

    model.add(Dense(4096, activation='relu',name='fc2'))

    model.add(Dense(2, activation='sigmoid', name='predictions'))
    print(model.summary())
    return model

if __name__ == '__main__':
    vgg()

Starting from the 2D source, the change looks trivial: swap 2D for 3D and make every kernel and pooling window three-dimensional. It proved the old saying: the ideal is rosy, the reality is brutal.

Let's look at the error it throws:

ValueError: Negative dimension size caused by subtracting 2 from 1 for 'block2_pool/MaxPool3D'

The error occurs at this line: model.add(MaxPooling3D((2,2, 2), strides=(2,2, 2), #padding='same', name='block2_pool'))

The syntax looks fine. Searching online only turned up answers about the tf vs. th dim-ordering issue, which does not apply to my code. So I had no choice but to print the output shape layer by layer (comparing against a throwaway 3D model I had written earlier), and sure enough I found the culprit: padding='same'.

First, a look at what the padding parameter means; for reference: https://blog.csdn.net/wuzqchom/article/details/74785643

In the first MaxPooling3D layer I had not set the padding parameter, so it defaulted to padding='valid'. Here is the resulting network structure (first few layers only):

Layer (type)                 Output Shape              Param #   
=================================================================
block1_conv1 (Conv3D)        (None, 255, 255, 3, 64)   6976      
_________________________________________________________________
block1_conv2 (Conv3D)        (None, 255, 255, 3, 64)   110656    
_________________________________________________________________
block1_pool (MaxPooling3D)   (None, 127, 127, 1, 64)   0         
_________________________________________________________________
block2_conv1 (Conv3D)        (None, 127, 127, 1, 128)  221312    
_________________________________________________________________
block2_conv2 (Conv3D)        (None, 127, 127, 1, 128)  442496    
=================================================================
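The depth axis going 3 → 1 at block1_pool is exactly what the 'valid' output-size formula predicts, and it is also why the next pool fails. A minimal pure-Python sketch (no Keras required):

```python
# 'valid' pooling output length: floor((n - pool) / stride) + 1.
# The input must be at least `pool` wide, or no window fits.
def pooled_valid(n, pool=2, stride=2):
    return (n - pool) // stride + 1

depth = 3
depth = pooled_valid(depth)   # block1_pool: floor((3 - 2) / 2) + 1 = 1
print(depth)                  # 1

# block2_pool then has to subtract pool=2 from depth=1, which is what
# TensorFlow reports as "Negative dimension size caused by subtracting 2 from 1".
print(pooled_valid(depth))    # 0 -- no valid window fits
```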

After adding padding='same', the network structure becomes (again, first few layers only):

Layer (type)                 Output Shape              Param #   
=================================================================
block1_conv1 (Conv3D)        (None, 255, 255, 3, 64)   6976      
_________________________________________________________________
block1_conv2 (Conv3D)        (None, 255, 255, 3, 64)   110656    
_________________________________________________________________
block1_pool (MaxPooling3D)   (None, 128, 128, 2, 64)   0         
_________________________________________________________________
block2_conv1 (Conv3D)        (None, 128, 128, 2, 128)  221312    
_________________________________________________________________
block2_conv2 (Conv3D)        (None, 128, 128, 2, 128)  442496    

This is exactly where the problem lies. I could not explain it at first, but the output-size formulas make it clear: with padding='valid' the pooled length is ⌊(n − pool)/stride⌋ + 1, so the depth axis of 3 shrinks to ⌊(3 − 2)/2⌋ + 1 = 1 after block1_pool; block2_pool then cannot fit a window of size 2 into a depth of 1, which is the "subtracting 2 from 1" in the error. With padding='same' the pooled length is ⌈n/stride⌉, which never drops below 1.

So when converting 2D to 3D, add padding='same' to every pooling layer and the whole 3D model builds fine.
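Why the fix works: 'same' pooling computes ⌈n/stride⌉, which saturates at 1 instead of going negative. Tracing all five pools of the 3D model this way (a standalone check, assuming the 2×2×2 pools with stride 2 used above):

```python
import math

def pooled_same(n, stride=2):
    """'same' pooling output length: ceil(n / stride)."""
    return math.ceil(n / stride)

h = w = 255   # spatial axes from input_shape=(255, 255, 3, 4)
d = 3         # the depth axis that 'valid' pooling destroyed
for block in range(1, 6):
    h, w, d = pooled_same(h), pooled_same(w), pooled_same(d)
    print(f"block{block}_pool: {h} x {w} x {d}")
# depth goes 3 -> 2 -> 1 -> 1 -> 1 -> 1: it never drops below 1, so all five pools succeed

print("flatten size:", h * w * d * 512)  # 8 * 8 * 1 * 512 = 32768
```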
