DL/CNN: Multi-class prediction on the MNIST handwritten-digit dataset with a convolutional neural network (2→2, using the Keras Functional or Sequential API)



Contents

Multi-class prediction on MNIST (handwritten-digit images) with a CNN (2→2, Keras Functional API)

Output

Design approach

Core code

Multi-class prediction on MNIST (handwritten-digit images) with a CNN (2→2, Keras Sequential API)

Output

Design approach

Core code


Multi-class prediction on MNIST (handwritten-digit images) with a CNN (2→2, Keras Functional API)

Output

Cross-checking the two outputs below shows that the digit 0 was correctly recognized 965 times!

1.10.0
Size of:
- Training-set:		55000
- Validation-set:	5000
- Test-set:		10000
Epoch 1/1

  128/55000 [..............................] - ETA: 14:24 - loss: 2.3439 - acc: 0.0938
  256/55000 [..............................] - ETA: 14:05 - loss: 2.2695 - acc: 0.1016
……
54912/55000 [============================>.] - ETA: 1s - loss: 0.2082 - acc: 0.9337
55000/55000 [==============================] - 837s 15ms/step - loss: 0.2080 - acc: 0.9338

   32/10000 [..............................] - ETA: 21s
……
10000/10000 [==============================] - 4s 449us/step
loss 0.05686537345089018
acc 0.982
acc: 98.20%
[[ 965    0    4    0    0    0    4    1    2    4]
 [   0 1128    3    0    0    0    0    1    3    0]
 [   0    0 1028    0    0    0    0    1    3    0]
 [   0    0   10  991    0    2    0    2    3    2]
 [   0    0    3    0  967    0    1    1    1    9]
 [   2    0    1    7    1  863    5    1    4    8]
 [   2    3    0    0    3    2  946    0    2    0]
 [   0    1   17    1    1    0    0  987    2   19]
 [   2    0    9    2    0    1    0    1  955    4]
 [   1    4    3    2    8    0    0    0    1  990]]
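Confusion matrices like the one above count, for each true class i, how often the model predicted class j; the diagonal holds the correct predictions (965 for digit 0 here). The `plot_confusion_matrix` helper used later in this post is not shown, but the underlying computation can be sketched with numpy alone:

```python
import numpy as np

def confusion_matrix(cls_true, cls_pred, n_classes=10):
    """Count how often true class i was predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(cls_true, cls_pred):
        cm[t, p] += 1
    return cm

# Tiny 3-class example: three correct predictions, two errors
cls_true = np.array([0, 0, 1, 2, 2])
cls_pred = np.array([0, 2, 1, 2, 0])
cm = confusion_matrix(cls_true, cls_pred, n_classes=3)
print(cm)  # diagonal entries are the correct predictions
```

Dividing each diagonal entry by its row sum gives the per-class recall mentioned in the text.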


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 784)               0         
_________________________________________________________________
reshape (Reshape)            (None, 28, 28, 1)         0         
_________________________________________________________________
layer_conv1 (Conv2D)         (None, 28, 28, 16)        416       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 16)        0         
_________________________________________________________________
layer_conv2 (Conv2D)         (None, 14, 14, 36)        14436     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 36)          0         
_________________________________________________________________
flatten (Flatten)            (None, 1764)              0         
_________________________________________________________________
dense (Dense)                (None, 128)               225920    
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1290      
=================================================================
Total params: 242,062
Trainable params: 242,062
Non-trainable params: 0
_________________________________________________________________
(5, 5, 1, 16)
(1, 28, 28, 16)

Design approach

Core code

To be updated……
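The core code has not been posted yet, but the `model.summary()` output above pins down the architecture (5×5 kernels, 16 and 36 filters, a 128-unit dense layer, 242,062 parameters in total). The following is a hedged reconstruction of the Functional-API model: layer names and shapes come from the summary, while the activations, optimizer, and loss are assumptions, since the log does not show them.

```python
from tensorflow.keras.layers import (Input, Reshape, Conv2D,
                                     MaxPooling2D, Flatten, Dense)
from tensorflow.keras.models import Model

# Rebuild the architecture printed by model.summary() above
inputs = Input(shape=(784,))
net = Reshape((28, 28, 1))(inputs)
net = Conv2D(16, 5, padding='same', activation='relu', name='layer_conv1')(net)
net = MaxPooling2D(pool_size=2)(net)
net = Conv2D(36, 5, padding='same', activation='relu', name='layer_conv2')(net)
net = MaxPooling2D(pool_size=2)(net)
net = Flatten()(net)
net = Dense(128, activation='relu')(net)
outputs = Dense(10, activation='softmax')(net)

model = Model(inputs=inputs, outputs=outputs)
# Assumed training setup -- not shown in the log
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
print(model.count_params())  # 242062, matching the summary above
```

The parameter count works out layer by layer: conv1 is 5*5*1*16+16 = 416, conv2 is 5*5*16*36+36 = 14436, the first dense layer is 1764*128+128 = 225920, and the output layer is 128*10+10 = 1290.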

# Save the whole Functional model, then reload it
path_model = 'Functional_model.h5'      # HDF5 format (requires h5py)
model.save(path_model)

from tensorflow.keras.models import load_model
model2_1 = load_model(path_model)

# Save and reload only the weights
model_weights_path = 'Functional_model_weights.h5'
model2_1.save_weights(model_weights_path)
model2_1.load_weights(model_weights_path, by_name=True)  # match layers by name
model2_1.load_weights(model_weights_path)                # match by topology


# Evaluate on the test set
result = model.evaluate(x=data.x_test,
                        y=data.y_test)

for name, value in zip(model.metrics_names, result):
    print(name, value)
print("{0}: {1:.2%}".format(model.metrics_names[1], result[1]))


# Predict on the test set, then visualize errors and the confusion matrix
y_pred = model.predict(x=data.x_test)
cls_pred = np.argmax(y_pred, axis=1)
plot_example_errors(cls_pred)
plot_confusion_matrix(cls_pred)
 
 

# Plot the first 9 test images with true and predicted labels
images = data.x_test[0:9]
cls_true = data.y_test_cls[0:9]
y_pred = model.predict(x=images)
cls_pred = np.argmax(y_pred, axis=1)
title = 'MNIST (Functional Model): predicted examples, real vs. predicted'
plot_images(title, images=images,
            cls_true=cls_true,
            cls_pred=cls_pred)

Multi-class prediction on MNIST (handwritten-digit images) with a CNN (2→2, Keras Sequential API)

Output

1.10.0
Size of:
- Training-set:		55000
- Validation-set:	5000
- Test-set:		10000
Epoch 1/1
  128/55000 [..............................] - ETA: 15:39 - loss: 2.3021 - acc: 0.0703
  256/55000 [..............................] - ETA: 13:40 - loss: 2.2876 - acc: 0.1172
……
54912/55000 [============================>.] - ETA: 1s - loss: 0.2180 - acc: 0.9355
55000/55000 [==============================] - 863s 16ms/step - loss: 0.2177 - acc: 0.9356

   32/10000 [..............................] - ETA: 22s
……
10000/10000 [==============================] - 5s 489us/step
loss 0.060937872195523234
acc 0.9803
acc: 98.03%
[[ 963    0    0    1    1    0    4    1    4    6]
 [   0 1128    0    2    0    1    2    0    2    0]
 [   2    9 1006    1    1    0    0    3   10    0]
 [   1    0    2  995    0    3    0    5    2    2]
 [   0    1    0    0  977    0    0    1    0    3]
 [   2    0    0    7    0  874    3    1    1    4]
 [   2    3    0    0    6    1  943    0    3    0]
 [   0    5    7    3    1    1    0  990    1   20]
 [   4    1    3    3    2    1    7    2  944    7]
 [   4    6    0    4    9    1    0    1    1  983]]

Design approach

To be updated……

Core code

To be updated……
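The Sequential version of the core code is likewise not posted yet. Assuming it uses the same architecture as the Functional model (the training logs and accuracy are nearly identical), a hedged Sequential-API sketch would look like this; the activations, optimizer, and loss are again assumptions:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (InputLayer, Reshape, Conv2D,
                                     MaxPooling2D, Flatten, Dense)

# Same layer stack as the Functional model, declared as a Sequential pipeline
model = Sequential([
    InputLayer(input_shape=(784,)),
    Reshape((28, 28, 1)),
    Conv2D(16, 5, padding='same', activation='relu', name='layer_conv1'),
    MaxPooling2D(pool_size=2),
    Conv2D(36, 5, padding='same', activation='relu', name='layer_conv2'),
    MaxPooling2D(pool_size=2),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),
])
# Assumed training setup -- not shown in the log
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
print(model.count_params())  # 242062, same as the Functional model
```

The Sequential API trades the explicit tensor wiring of the Functional API for a flat list of layers, which is why the two runs produce the same parameter count.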

# Evaluate on the test set
result = model.evaluate(x=data.x_test,
                        y=data.y_test)

for name, value in zip(model.metrics_names, result):
    print(name, value)
print("{0}: {1:.2%}".format(model.metrics_names[1], result[1]))


# Predict on the test set, then visualize errors and the confusion matrix
y_pred = model.predict(x=data.x_test)
cls_pred = np.argmax(y_pred, axis=1)
plot_example_errors(cls_pred)
plot_confusion_matrix(cls_pred)
 
 

# Plot the first 9 test images with true and predicted labels
images = data.x_test[0:9]
cls_true = data.y_test_cls[0:9]
y_pred = model.predict(x=images)
cls_pred = np.argmax(y_pred, axis=1)
title = 'MNIST (Sequential Model): predicted examples, real vs. predicted'
plot_images(title, images=images,
            cls_true=cls_true,
            cls_pred=cls_pred)

Implement a convolutional neural network in TensorFlow to classify the MNIST handwritten-digit images.

# Import numpy
import numpy as np
# Import tensorflow; the program uses TensorFlow to implement the CNN
import tensorflow as tf
# Download the MNIST dataset and read it from the mnist_data directory
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('mnist_data', one_hot=True)

(1) Here 'mnist_data' is a folder in the same directory as the current file. Create the folder by hand, then download the four required files from https://yann.lecun.com/exdb/mnist/ (the four files listed after "Four files are available on this site:") and place them in that folder.
(2) MNIST is a dataset of handwritten digit characters; every sample is a 28*28-pixel grayscale image of a digit.
(3) one_hot=True enables one-hot encoding. Without it, a class label in a classification dataset is just a symbol, for example 9. With it, each class is represented as a list of 10 values, exactly one of which is 1 and the rest 0; for example, "9" can be encoded as [0 0 0 0 0 0 0 0 0 1].

# Define the shapes of the input x and output y. tf.placeholder declares an input;
# think of it as reserving a slot with a placeholder.
# None stands for the number of samples, which cannot be fixed in advance;
# the number of features per sample is fixed at 28*28.
input_x = tf.placeholder(tf.float32, [None, 28*28]) / 255   # each pixel takes values in 0~255
output_y = tf.placeholder(tf.int32, [None, 10])             # 10 classes

# Reshape the flat input into 4-D data; the first dimension is the number of images
input_x_images = tf.reshape(input_x, [-1, 28, 28, 1])

test_x = mnist.test.images[:3000]   # features of the first 3000 test images
test_y = mnist.test.labels[:3000]   # labels of those 3000 images