Implementing the VGG16 Convolutional Neural Network in Python with Keras

Model summary with a 48×48×3 input; the 14,714,688 non-trainable parameters are the frozen VGG16 convolutional base, while the dense classifier head is trained from scratch:

```
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_9 (InputLayer)         (None, 48, 48, 3)         0
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 48, 48, 64)        1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 48, 48, 64)        36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 24, 24, 64)        0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 24, 24, 128)       73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 24, 24, 128)       147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 12, 12, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 12, 12, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 12, 12, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 12, 12, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 6, 6, 256)         0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 6, 6, 512)         1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 6, 6, 512)         2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 6, 6, 512)         2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 3, 3, 512)         0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 3, 3, 512)         2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 3, 3, 512)         2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 3, 3, 512)         2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 1, 1, 512)         0
_________________________________________________________________
flatten (Flatten)            (None, 512)               0
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              2101248
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312
_________________________________________________________________
dropout_9 (Dropout)          (None, 4096)              0
_________________________________________________________________
dense_5 (Dense)              (None, 10)                40970
=================================================================
Total params: 33,638,218
Trainable params: 18,923,530
Non-trainable params: 14,714,688
_________________________________________________________________
```
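The per-layer parameter counts in the summary above can be reproduced with a little arithmetic: a 3×3 convolution with C_in input channels and C_out filters has (3·3·C_in + 1)·C_out parameters (kernels plus one bias per filter), and a dense layer mapping n inputs to m outputs has (n + 1)·m. A quick check in plain Python, independent of Keras:

```python
# Verify the parameter counts of the 48x48 VGG16 model above.

def conv_params(c_in, c_out, k=3):
    """k x k convolution: one k*k*c_in kernel plus a bias per output filter."""
    return (k * k * c_in + 1) * c_out

def dense_params(n_in, n_out):
    """Fully connected layer: a weight per input plus a bias, per output unit."""
    return (n_in + 1) * n_out

# VGG16 convolutional base: (in_channels, out_channels) for each conv layer.
conv_layers = [(3, 64), (64, 64),                    # block1
               (64, 128), (128, 128),                # block2
               (128, 256), (256, 256), (256, 256),   # block3
               (256, 512), (512, 512), (512, 512),   # block4
               (512, 512), (512, 512), (512, 512)]   # block5

base = sum(conv_params(i, o) for i, o in conv_layers)

# Classifier head: with a 48x48 input, five 2x2 poolings leave a 1x1x512
# feature map, so the flattened vector has 512 entries.
head = dense_params(512, 4096) + dense_params(4096, 4096) + dense_params(4096, 10)

print(base)         # 14714688  (frozen, non-trainable)
print(head)         # 18923530  (trainable)
print(base + head)  # 33638218  (total)
```

The three printed values match the `Non-trainable`, `Trainable`, and `Total params` lines of the summary exactly.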

The same architecture with the standard 224×224×3 VGG16 input; the convolutional base is identical, but the larger flattened feature map makes `fc1` far bigger:

```
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_10 (InputLayer)        (None, 224, 224, 3)       0
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
_________________________________________________________________
flatten_5 (Flatten)          (None, 25088)             0
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312
_________________________________________________________________
dropout_10 (Dropout)         (None, 4096)              0
_________________________________________________________________
prediction (Dense)           (None, 10)                40970
=================================================================
Total params: 134,301,514
Trainable params: 119,586,826
Non-trainable params: 14,714,688
_________________________________________________________________
```
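Almost all of the extra weight of the 224×224 model sits in `fc1`: after five 2×2 poolings a 224×224 input leaves a 7×7×512 feature map (25,088 values) instead of 1×1×512, and the first dense layer scales with that flattened size. A quick check in plain Python:

```python
def feature_side(side, pools=5):
    # Each 2x2, stride-2 max pooling floors the spatial side to half.
    for _ in range(pools):
        side //= 2
    return side

def fc1_params(side, channels=512, units=4096):
    # Parameters of the first dense layer after flattening the conv output.
    flat = feature_side(side) ** 2 * channels
    return (flat + 1) * units

print(fc1_params(48))   # 2101248    (flatten: 1*1*512 = 512)
print(fc1_params(224))  # 102764544  (flatten: 7*7*512 = 25088)
```

This is exactly the difference between the two summaries: `fc1` grows from about 2.1M to about 102.8M parameters, while the frozen base stays at 14,714,688.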

Shapes of the training and validation data, followed by the training log (1,000 samples each, 20 epochs):

```
(1000, 48, 48, 3)
(1000, 48, 48, 3)
Train on 1000 samples, validate on 1000 samples
Epoch 1/20
1000/1000 [==============================] - 175s 175ms/step - loss: 2.1289 - acc: 0.2350 - val_loss: 1.9100 - val_acc: 0.4230
Epoch 2/20
1000/1000 [==============================] - 190s 190ms/step - loss: 1.7685 - acc: 0.4420 - val_loss: 1.6503 - val_acc: 0.4930
Epoch 3/20
1000/1000 [==============================] - 265s 265ms/step - loss: 1.5582 - acc: 0.5140 - val_loss: 1.5005 - val_acc: 0.5440
Epoch 4/20
1000/1000 [==============================] - 373s 373ms/step - loss: 1.4210 - acc: 0.5710 - val_loss: 1.3019 - val_acc: 0.6160
Epoch 5/20
1000/1000 [==============================] - 295s 295ms/step - loss: 1.1946 - acc: 0.6490 - val_loss: 1.1182 - val_acc: 0.7280
Epoch 6/20
1000/1000 [==============================] - 277s 277ms/step - loss: 1.0291 - acc: 0.7330 - val_loss: 1.0279 - val_acc: 0.7430
Epoch 7/20
1000/1000 [==============================] - 177s 177ms/step - loss: 1.0065 - acc: 0.7060 - val_loss: 0.9229 - val_acc: 0.7690
Epoch 8/20
1000/1000 [==============================] - 169s 169ms/step - loss: 0.8438 - acc: 0.7810 - val_loss: 0.9716 - val_acc: 0.6670
Epoch 9/20
1000/1000 [==============================] - 169s 169ms/step - loss: 0.8898 - acc: 0.7230 - val_loss: 0.9710 - val_acc: 0.6660
Epoch 10/20
1000/1000 [==============================] - 166s 166ms/step - loss: 0.8258 - acc: 0.7460 - val_loss: 0.9026 - val_acc: 0.7130
Epoch 11/20
1000/1000 [==============================] - 169s 169ms/step - loss: 0.7592 - acc: 0.7640 - val_loss: 0.9691 - val_acc: 0.6730
Epoch 12/20
1000/1000 [==============================] - 165s 165ms/step - loss: 0.7793 - acc: 0.7520 - val_loss: 0.8350 - val_acc: 0.6800
Epoch 13/20
1000/1000 [==============================] - 164s 164ms/step - loss: 0.6677 - acc: 0.7780 - val_loss: 0.7203 - val_acc: 0.7730
Epoch 14/20
1000/1000 [==============================] - 164s 164ms/step - loss: 0.7018 - acc: 0.7630 - val_loss: 0.6947 - val_acc: 0.7760
Epoch 15/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.6129 - acc: 0.8100 - val_loss: 0.7025 - val_acc: 0.7610
Epoch 16/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.6104 - acc: 0.8190 - val_loss: 0.6385 - val_acc: 0.8220
Epoch 17/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.5507 - acc: 0.8320 - val_loss: 0.6273 - val_acc: 0.8290
Epoch 18/20
1000/1000 [==============================] - 164s 164ms/step - loss: 0.5205 - acc: 0.8360 - val_loss: 0.8740 - val_acc: 0.6750
Epoch 19/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.5852 - acc: 0.8150 - val_loss: 0.6614 - val_acc: 0.7890
Epoch 20/20
1000/1000 [==============================] - 166s 166ms/step - loss: 0.5310 - acc: 0.8340 - val_loss: 0.5718 - val_acc: 0.8250
```
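The summaries and training log above are consistent with the standard Keras transfer-learning recipe: take `keras.applications.VGG16` without its top, freeze it, and train a small classifier head on top. A minimal sketch, assuming that setup (layer names `fc1`/`fc2`, the dropout, and the 10-class head are taken from the first summary; the optimizer, dropout rate, and batch size are assumptions, since the log does not show them):

```python
import tensorflow as tf
from tensorflow.keras import layers, models, applications

# Pretrained convolutional base. weights='imagenet' is what a real transfer-
# learning run would use; weights=None builds the same architecture offline.
base = applications.VGG16(include_top=False, weights=None,
                          input_shape=(48, 48, 3))
base.trainable = False  # freeze: these 14,714,688 params become non-trainable

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(4096, activation='relu', name='fc1'),
    layers.Dense(4096, activation='relu', name='fc2'),
    layers.Dropout(0.5),                      # assumed rate
    layers.Dense(10, activation='softmax'),
])

# Assumed training setup; only the per-epoch loss/acc appear in the log.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=20, batch_size=32,
#           validation_data=(x_val, y_val))
```

Built this way, `model.count_params()` gives 33,638,218 with 18,923,530 trainable, matching the first summary.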

The following is a VGG16-style convolutional network built with TensorFlow (Keras) for handwritten digit recognition on MNIST:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, datasets, utils

# Load the MNIST dataset
(train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()

# Preprocess: add a channel axis and scale pixel values to [0, 1]
train_images = train_images.reshape((60000, 28, 28, 1)) / 255.0
test_images = test_images.reshape((10000, 28, 28, 1)) / 255.0

# One-hot encode the labels
train_labels = utils.to_categorical(train_labels)
test_labels = utils.to_categorical(test_labels)

# Build the VGG16-style model. padding='same' on the pooling layers is
# required for a 28x28 input: with the default 'valid' padding, the feature
# map shrinks to 1x1 after four poolings and the fifth pooling layer fails.
model = models.Sequential()
model.add(layers.Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=(28, 28, 1)))
model.add(layers.Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(layers.MaxPooling2D((2, 2), padding='same'))
model.add(layers.Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(layers.Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(layers.MaxPooling2D((2, 2), padding='same'))
model.add(layers.Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(layers.Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(layers.Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(layers.MaxPooling2D((2, 2), padding='same'))
model.add(layers.Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(layers.Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(layers.Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(layers.MaxPooling2D((2, 2), padding='same'))
model.add(layers.Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(layers.Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(layers.Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(layers.MaxPooling2D((2, 2), padding='same'))
model.add(layers.Flatten())
model.add(layers.Dense(4096, activation='relu'))
model.add(layers.Dense(4096, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(train_images, train_labels, epochs=5, batch_size=64,
          validation_data=(test_images, test_labels))

# Evaluate the model
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
```

Note that training this model takes a long time; a cloud environment such as Colab is a good place to run it.