TensorFlow (Keras) Introductory Course -- 04 Convolutional Neural Networks

Contents

  • 1 Introduction
  • 2 Improving Computer Vision Accuracy with Convolutions
  • 3 Visualizing the Convolutions and Pooling

1 Introduction

In this section we will learn how to use a convolutional neural network to improve an image classification model.

2 Improving Computer Vision Accuracy with Convolutions

In the previous lab we used a three-layer deep neural network for fashion-image classification: an input layer (matching the shape of the input data), an output layer (matching the shape of the desired output), and one hidden layer in between.

For convenience, first run the DNN code again and print its test accuracy.

import tensorflow as tf

# Load the Fashion-MNIST dataset
fashion_mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Scale pixel values to the [0, 1] range
training_images = training_images / 255.0
test_images = test_images / 255.0

# A simple DNN: flatten the 28x28 image, one hidden layer, softmax output over 10 classes
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax")
])
model.compile(optimizer="adam", loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)

test_loss, test_accuracy = model.evaluate(test_images, test_labels)
print('Test loss: {}, Test accuracy: {}'.format(test_loss, test_accuracy * 100))
Epoch 1/5
60000/60000 [==============================] - 4s 72us/sample - loss: 0.4982 - acc: 0.8257
Epoch 2/5
60000/60000 [==============================] - 4s 74us/sample - loss: 0.3746 - acc: 0.8649
Epoch 3/5
60000/60000 [==============================] - 5s 77us/sample - loss: 0.3388 - acc: 0.8765
Epoch 4/5
60000/60000 [==============================] - 4s 74us/sample - loss: 0.3133 - acc: 0.8858
Epoch 5/5
60000/60000 [==============================] - 4s 73us/sample - loss: 0.2991 - acc: 0.8905
10000/10000 [==============================] - 0s 28us/sample - loss: 0.3888 - acc: 0.8607
Test loss: 0.38882760289907453, Test accuracy: 86.0700011253357

The DNN reaches about 86% accuracy on the test set.
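
Before switching to the convolutional model, it helps to see what a single convolution actually does. The following minimal NumPy sketch (an illustrative addition, not part of the original lab) slides a hand-picked 3x3 edge-detection filter over one of the training images loaded above; a Conv2D layer learns many such filters from the data instead of using a fixed one.

import numpy as np

image = training_images[0]          # one 28x28 grayscale image, already scaled to [0, 1]
kernel = np.array([[-1, -1, -1],    # a hand-picked 3x3 horizontal-edge filter
                   [ 0,  0,  0],
                   [ 1,  1,  1]])

# "Valid" convolution: the 28x28 input shrinks to 26x26, just like the first Conv2D layer below
output = np.zeros((26, 26))
for i in range(26):
    for j in range(26):
        patch = image[i:i + 3, j:j + 3]
        output[i, j] = np.sum(patch * kernel)

print(output.shape)  # (26, 26)

The model below replaces this single fixed filter with two Conv2D + MaxPooling2D stages whose 64 filters each are learned during training.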

import tensorflow as tf
print(tf.__version__)

fashion_mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Conv2D expects a channel dimension: reshape to (num_images, 28, 28, 1) and scale to [0, 1]
training_images = training_images.reshape(60000, 28, 28, 1)
training_images = training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images = test_images / 255.0

model = tf.keras.models.Sequential([
    # 64 filters of size 3x3; output shape 26x26x64
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    # 2x2 max pooling halves the spatial size: 13x13x64
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax")
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=5)

test_loss, test_accuracy = model.evaluate(test_images, test_labels)
print('Test loss: {}, Test accuracy: {}'.format(test_loss, test_accuracy * 100))
1.13.1
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_4 (Conv2D)            (None, 26, 26, 64)        640       
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 13, 13, 64)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 11, 11, 64)        36928     
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 5, 5, 64)          0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 1600)              0         
_________________________________________________________________
dense_4 (Dense)              (None, 128)               204928    
_________________________________________________________________
dense_5 (Dense)              (None, 10)                1290      
=================================================================
Total params: 243,786
Trainable params: 243,786
Non-trainable params: 0
_________________________________________________________________
Epoch 1/5
60000/60000 [==============================] - 75s 1ms/sample - loss: 13.4033 - acc: 0.1681
Epoch 2/5
60000/60000 [==============================] - 74s 1ms/sample - loss: 14.5063 - acc: 0.1000
Epoch 3/5
60000/60000 [==============================] - 72s 1ms/sample - loss: 14.5063 - acc: 0.1000
Epoch 4/5
60000/60000 [==============================] - 73s 1ms/sample - loss: 14.5063 - acc: 0.1000
Epoch 5/5
60000/60000 [==============================] - 73s 1ms/sample - loss: 14.5063 - acc: 0.1000
10000/10000 [==============================] - 4s 364us/sample - loss: 8.4485 - acc: 0.1000
Test loss: 8.44854741897583, Test accuracy: 10.000000149011612

The log above comes from a run in which the training images were never actually scaled to [0, 1], so the network trained on raw 0-255 pixel values and accuracy collapsed to 10%, i.e. chance level for 10 classes. With the inputs normalized as in the code above, this convolutional model typically trains to a test accuracy well above the DNN's 86%.
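
As a quick sanity check on the summary above, the parameter counts can be recomputed by hand: a Conv2D layer has (kernel_height x kernel_width x input_channels + 1) x filters parameters, and a Dense layer has (inputs + 1) x units.

# Recomputing the Param # column of model.summary() by hand
conv1 = (3 * 3 * 1 + 1) * 64     # 640:    3x3 kernels over 1 input channel, 64 filters (+ biases)
conv2 = (3 * 3 * 64 + 1) * 64    # 36928:  3x3 kernels over 64 channels, 64 filters
dense1 = (5 * 5 * 64 + 1) * 128  # 204928: the flattened 5x5x64 = 1600 features feeding 128 units
dense2 = (128 + 1) * 10          # 1290:   128 features feeding 10 output classes
print(conv1, conv2, dense1, dense2, conv1 + conv2 + dense1 + dense2)  # total 243786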

3 Visualizing the Convolutions and Pooling

The code below builds a second model that returns the output of every layer of the trained network, then plots the activation of a single convolution filter (CONVOLUTION_NUMBER) for three test images across the first four layers, so you can see which features each convolution and pooling stage responds to.

import matplotlib.pyplot as plt
from tensorflow.keras import models

# Indices of the three test images to compare, and which convolution filter to display
FIRST_IMAGE = 0
SECOND_IMAGE = 23
THIRD_IMAGE = 28
CONVOLUTION_NUMBER = 6

# A model that returns the output of every layer of the trained network
layer_outputs = [layer.output for layer in model.layers]
activation_model = models.Model(inputs=model.input, outputs=layer_outputs)

f, axarr = plt.subplots(3, 4)
for x in range(0, 4):  # the first four layers: Conv2D, MaxPooling2D, Conv2D, MaxPooling2D
    f1 = activation_model.predict(test_images[FIRST_IMAGE].reshape(1, 28, 28, 1))[x]
    axarr[0, x].imshow(f1[0, :, :, CONVOLUTION_NUMBER], cmap='inferno')
    axarr[0, x].grid(False)
    f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(1, 28, 28, 1))[x]
    axarr[1, x].imshow(f2[0, :, :, CONVOLUTION_NUMBER], cmap='inferno')
    axarr[1, x].grid(False)
    f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(1, 28, 28, 1))[x]
    axarr[2, x].imshow(f3[0, :, :, CONVOLUTION_NUMBER], cmap='inferno')
    axarr[2, x].grid(False)
plt.show()

The resulting figure shows, for each of the three test images (rows), the output of convolution filter 6 after each of the first four layers (columns).
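
To judge whether the highlighted features are consistent within a class, it can help to check which items the three selected images actually are. The short snippet below (an illustrative addition, reusing the constants defined above) prints their ground-truth labels and the trained model's predictions.

import numpy as np

# Ground-truth labels of the three images being visualized
print(test_labels[FIRST_IMAGE], test_labels[SECOND_IMAGE], test_labels[THIRD_IMAGE])

# The trained model's predicted classes for the same three images
preds = model.predict(test_images[[FIRST_IMAGE, SECOND_IMAGE, THIRD_IMAGE]])
print(np.argmax(preds, axis=1))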

