keras_Instantiating a small convolutional neural network

keras_Deep learning for computer vision

Reference:
https://blog.csdn.net/xiewenrui1996/article/details/104009618

import keras
keras.__version__
from keras import layers
from keras import models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

# Importantly, a convnet takes as input tensors of shape (image_height, image_width,
# image_channels), not including the batch dimension. In this case we configure the
# convnet to process inputs of size (28, 28, 1), which is the format of MNIST images.
# We do this by passing the argument input_shape=(28, 28, 1) to the first layer.
# Let's display the architecture of the convnet so far:

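# A quick sanity check of this convention (a minimal sketch, not from the original
# listing): Keras exposes the shapes directly, and None stands for the batch dimension,
# which is never part of input_shape.
print(model.input_shape)   # (None, 28, 28, 1)
print(model.output_shape)  # (None, 3, 3, 64) after the three Conv2D/MaxPooling2D blocks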

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 13, 13, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 11, 11, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 5, 5, 64)          0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 3, 3, 64)          36928     
=================================================================
Total params: 55,744
Trainable params: 55,744
Non-trainable params: 0

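# The Param # column above can be verified by hand: a Conv2D layer has
# kernel_height * kernel_width * input_channels * filters weights, plus one bias per
# filter. A minimal check (sketch):
def conv2d_params(kernel_h, kernel_w, in_channels, filters):
    # One kernel_h x kernel_w x in_channels kernel per filter, plus one bias per filter.
    return kernel_h * kernel_w * in_channels * filters + filters

print(conv2d_params(3, 3, 1, 32))    # 320    (conv2d_1)
print(conv2d_params(3, 3, 32, 64))   # 18496  (conv2d_2)
print(conv2d_params(3, 3, 64, 64))   # 36928  (conv2d_3)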
# You can see that the output of every Conv2D and MaxPooling2D layer is a 3D tensor of
# shape (height, width, channels). The width and height dimensions tend to shrink as the
# network gets deeper. The number of channels is controlled by the first argument passed
# to the Conv2D layers (32 or 64).
# The next step is to feed the last output tensor (of shape (3, 3, 64)) into a densely
# connected classifier network like those you are already familiar with: a stack of
# Dense layers. These classifiers process vectors, which are 1D, whereas the current
# output is a 3D tensor. First we have to flatten the 3D output to 1D, and then add a
# few Dense layers on top.


model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

# As you can see, the (3, 3, 64) output is flattened into a vector of shape (576,)
# before going through the two Dense layers.

The network now looks like this:

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 13, 13, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 11, 11, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 5, 5, 64)          0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 3, 3, 64)          36928     
_________________________________________________________________
flatten_1 (Flatten)          (None, 576)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 64)                36928     
_________________________________________________________________
dense_2 (Dense)              (None, 10)                650       
=================================================================
Total params: 93,322
Trainable params: 93,322
Non-trainable params: 0

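# The new rows follow the same logic: Flatten reshapes the 3 * 3 * 64 = 576 values into
# a vector at no parameter cost, and a Dense layer has inputs * units weights plus one
# bias per unit. A minimal check (sketch):
def dense_params(inputs, units):
    # A full inputs-by-units weight matrix, plus one bias per unit.
    return inputs * units + units

print(3 * 3 * 64)             # 576    (length of the flattened vector)
print(dense_params(576, 64))  # 36928  (dense_1)
print(dense_params(64, 10))   # 650    (dense_2)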


# Now let's train the convnet on the MNIST digits.
from keras.datasets import mnist
from keras.utils import to_categorical

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255

test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5, batch_size=64)
Epoch 1/5
60000/60000 [==============================] - 8s - loss: 0.1766 - acc: 0.9440     
Epoch 2/5
60000/60000 [==============================] - 7s - loss: 0.0462 - acc: 0.9855     
Epoch 3/5
60000/60000 [==============================] - 7s - loss: 0.0322 - acc: 0.9902     
Epoch 4/5
60000/60000 [==============================] - 7s - loss: 0.0241 - acc: 0.9926     
Epoch 5/5
60000/60000 [==============================] - 7s - loss: 0.0187 - acc: 0.9943     
<keras.callbacks.History at 0x7fbd9c4cd828>
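# The last line above is the History object returned by fit; it records the per-epoch
# metrics printed during training. A minimal sketch of how it can be used, assuming the
# fit call above is assigned to a variable (newer Keras versions use the key 'accuracy'
# instead of 'acc'):
history = model.fit(train_images, train_labels, epochs=5, batch_size=64)
print(history.history['loss'])  # list of per-epoch training losses
print(history.history['acc'])   # list of per-epoch training accuracies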
Let's evaluate the model on the test data:

test_loss, test_acc = model.evaluate(test_images, test_labels)
 9536/10000 [===========================>..] - ETA: 0s
test_acc
0.99129999999999996

# The densely connected network from chapter 2 had a test accuracy of 97.8%, while this
# basic convnet reaches a test accuracy of about 99.3%: we decreased the error rate by
# 68% (relative). Not bad!
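# The 68% figure refers to the relative drop in the error rate, not in accuracy.
# A minimal check (sketch; the 97.8% baseline is the chapter 2 densely connected network):
dense_error = 1 - 0.978   # error rate of the densely connected baseline
conv_error = 1 - 0.993    # error rate of this small convnet
print((dense_error - conv_error) / dense_error)  # ~0.68, i.e. a 68% relative reduction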


