Understanding the VGG16 architecture through Keras

VGG16 training

Above I posted a Keras example of training and testing with VGG16. I have also tried training VGG16 and testing it on my own examples; the results were only so-so. Here we analyze the VGG16 network structure.

The Keras code is as follows:

# Keras 1.x style imports; the model below uses the old Convolution2D API
# and Theano "channels first" dim ordering, i.e. input shape (3, 224, 224).
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D


def VGG_16(weights_path=None):
    model = Sequential()
    model.add(ZeroPadding2D((1,1),input_shape=(3,224,224)))# input layer: specifies the input image size (channels first)
    model.add(Convolution2D(64, 3, 3, activation='relu'))# 64 3x3 kernels, output 64*224*224, ReLU activation
    model.add(ZeroPadding2D((1,1)))# zero-pad so the spatial size is unchanged after the 3x3 convolution (equivalently, border_mode='same')
    model.add(Convolution2D(64, 3, 3, activation='relu'))# another convolution, output 64*224*224
    model.add(MaxPooling2D((2,2), strides=(2,2)))# max pooling, output becomes 64*112*112

    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))# 128*56*56

    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))# 256*28*28

    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))# 512*14*14

    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))  # by now the feature map is 512*7*7

    model.add(Flatten())# flatten the 512*7*7 tensor into a 25088-dimensional vector
    model.add(Dense(4096, activation='relu'))# fully connected layer with 4096 units; weight count is 4096*25088
    model.add(Dropout(0.5))# drop connections with probability 0.5
    model.add(Dense(4096, activation='relu'))# another fully connected layer
    model.add(Dropout(0.5))
    model.add(Dense(1000, activation='softmax'))

    if weights_path:
        model.load_weights(weights_path)

    return model
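
As a quick sanity check, here is a minimal sketch of how this function might be used; it assumes Keras 1.x with Theano dim ordering, and 'vgg16_weights.h5' is just a placeholder path for a pre-trained weights file, not something shipped with this post:

```python
from keras.optimizers import SGD

# Build the network; pass a weights file to load pre-trained parameters.
model = VGG_16('vgg16_weights.h5')

# Classic SGD setup; any optimizer works if you only want a forward-pass test.
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy')

# Print each layer's output shape and parameter count, which can be compared
# against the table in the next section (summary() also counts biases).
model.summary()
```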


VGG16 details

Below are the detailed memory and weight (parameter) counts for each layer:

INPUT: [224x224x3]        memory:  224*224*3=150K   weights: 0
CONV3-64: [224x224x64]  memory:  224*224*64=3.2M   weights: (3*3*3)*64 = 1,728
CONV3-64: [224x224x64]  memory:  224*224*64=3.2M   weights: (3*3*64)*64 = 36,864
POOL2: [112x112x64]  memory:  112*112*64=800K   weights: 0
CONV3-128: [112x112x128]  memory:  112*112*128=1.6M   weights: (3*3*64)*128 = 73,728
CONV3-128: [112x112x128]  memory:  112*112*128=1.6M   weights: (3*3*128)*128 = 147,456
POOL2: [56x56x128]  memory:  56*56*128=400K   weights: 0
CONV3-256: [56x56x256]  memory:  56*56*256=800K   weights: (3*3*128)*256 = 294,912
CONV3-256: [56x56x256]  memory:  56*56*256=800K   weights: (3*3*256)*256 = 589,824
CONV3-256: [56x56x256]  memory:  56*56*256=800K   weights: (3*3*256)*256 = 589,824
POOL2: [28x28x256]  memory:  28*28*256=200K   weights: 0
CONV3-512: [28x28x512]  memory:  28*28*512=400K   weights: (3*3*256)*512 = 1,179,648
CONV3-512: [28x28x512]  memory:  28*28*512=400K   weights: (3*3*512)*512 = 2,359,296
CONV3-512: [28x28x512]  memory:  28*28*512=400K   weights: (3*3*512)*512 = 2,359,296
POOL2: [14x14x512]  memory:  14*14*512=100K   weights: 0
CONV3-512: [14x14x512]  memory:  14*14*512=100K   weights: (3*3*512)*512 = 2,359,296
CONV3-512: [14x14x512]  memory:  14*14*512=100K   weights: (3*3*512)*512 = 2,359,296
CONV3-512: [14x14x512]  memory:  14*14*512=100K   weights: (3*3*512)*512 = 2,359,296
POOL2: [7x7x512]  memory:  7*7*512=25K  weights: 0
FC: [1x1x4096]  memory:  4096  weights: 7*7*512*4096 = 102,760,448
FC: [1x1x4096]  memory:  4096  weights: 4096*4096 = 16,777,216
FC: [1x1x1000]  memory:  1000 weights: 4096*1000 = 4,096,000

TOTAL memory: 24M * 4 bytes ~= 93MB / image (only forward! ~*2 for bwd)
TOTAL params: 138M parameters
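
To double-check these totals, here is a small sketch in plain Python (no Keras needed) that recomputes the per-layer weight counts. Biases are ignored, as in the table above, which is why the result lands slightly below the commonly quoted 138,357,544:

```python
# (in_channels, out_channels) for the 13 convolutional layers of VGG16
conv_layers = [
    (3, 64), (64, 64),                    # block 1
    (64, 128), (128, 128),                # block 2
    (128, 256), (256, 256), (256, 256),   # block 3
    (256, 512), (512, 512), (512, 512),   # block 4
    (512, 512), (512, 512), (512, 512),   # block 5
]
conv_params = sum(3 * 3 * c_in * c_out for c_in, c_out in conv_layers)
fc_params = 7 * 7 * 512 * 4096 + 4096 * 4096 + 4096 * 1000
print(conv_params)              # 14,710,464
print(fc_params)                # 123,633,664
print(conv_params + fc_params)  # 138,344,128 ~= 138M (excluding biases)
```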




Using VGG16 for cat vs. dog classification

For cat vs. dog image classification, VGG16 is a commonly used convolutional neural network. It consists of 13 convolutional layers and 3 fully connected layers (16 weight layers in total), and its structure is relatively simple and easy to understand.

First, prepare a dataset of cat and dog images and split it into a training set and a test set. Then use the VGG16 model to extract features from the images and classify them.

You can implement this with the Keras library. First, import the required modules:

```python
from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.preprocessing.image import ImageDataGenerator
```

Next, load the pre-trained VGG16 model (without its top classifier):

```python
vgg16 = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
```

Then create a new Sequential model and add the VGG16 model as its first layer:

```python
model = Sequential()
model.add(vgg16)
```

After VGG16, add some fully connected layers for classification:

```python
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
```

Next, compile the model, specifying the optimizer, loss function, and evaluation metric:

```python
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```

Then use ImageDataGenerator to produce batches of image data and train the model:

```python
train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory('train_directory', target_size=(224, 224), batch_size=32, class_mode='binary')
test_generator = test_datagen.flow_from_directory('test_directory', target_size=(224, 224), batch_size=32, class_mode='binary')

model.fit(train_generator, steps_per_epoch=len(train_generator), epochs=10, validation_data=test_generator, validation_steps=len(test_generator))
```

Finally, you can use the trained model to classify new cat and dog images:

```python
# test_images: a batch of preprocessed images with shape (N, 224, 224, 3)
predictions = model.predict(test_images)
```

This is how you can classify cat and dog images with VGG16.
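
One detail the walkthrough above leaves out: when reusing a pre-trained base this way, you usually want to freeze its convolutional layers so that only the new classifier head is trained at first. A minimal sketch, to be run before compiling the model:

```python
# Freeze every layer of the pre-trained VGG16 base so its ImageNet weights
# are not updated while the new Dense head is being trained.
for layer in vgg16.layers:
    layer.trainable = False
```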
