Handwritten digit recognition with Keras


Basic exercise 1: handwritten digit recognition with Keras

The development environment is Jupyter Notebook.

1. Import the required packages

import numpy as np
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers.core import Dense, Activation, Dropout
from keras.utils import np_utils
import matplotlib.pyplot as plt
import matplotlib.image as processimage

2. Download the dataset

(x_train,y_train),(x_test,y_test)=mnist.load_data()

print(x_train.shape,y_train.shape)
print(x_test.shape,y_test.shape)

Output:

(60000, 28, 28) (60000,)
(10000, 28, 28) (10000,)
# (60000, 28, 28): 60000 training samples, each 28*28*1 (grayscale has one channel, so the *1 is omitted)
# (60000,):        the label corresponding to each training sample
# (10000, 28, 28): 10000 test samples, each 28*28*1 (grayscale has one channel, so the *1 is omitted)
# (10000,):        the label corresponding to each test sample

3. Preprocess the data for the network

x_train = x_train.reshape(60000, 784)  # flatten to 60000 rows; each row of 784 values is one image
x_test = x_test.reshape(10000, 784)    # flatten to 10000 rows; each row of 784 values is one image
# print(x_train)
# convert the data type to floating point
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
# print(x_train)
# normalize the pixel values
x_train /= 255  # pixel values range from 0 to 255
# print(x_train[999])
x_test /= 255
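The reshape-and-normalize step above can be sketched on a synthetic array (a minimal sketch using only numpy; the real code operates on the MNIST arrays):

```python
import numpy as np

# stand-in for a batch of grayscale images: 5 images of 28x28 uint8 pixels
fake_images = np.random.randint(0, 256, size=(5, 28, 28), dtype=np.uint8)

# flatten each image into a 784-value row, as the Dense input layer expects
flat = fake_images.reshape(5, 784)

# convert to float and scale pixel values into [0, 1]
flat = flat.astype('float32') / 255

print(flat.shape)  # (5, 784)
print(flat.min() >= 0.0, flat.max() <= 1.0)  # True True
```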

4. Configure basic network parameters

# number of images fed to the network at a time
bach_size = 1024
# number of classes: we recognize the digits 0-9, so 10 classes
nb_class = 10
# number of training epochs
nb_epochs = 20
# convert the labels to one-hot class vectors of length 10, e.g. 6 -> [0,0,0,0,0,0,1,0,0,0]
# print(y_test[9999])
# print('-'*80)
print(y_test[0])
print('-'*50)
# before conversion, y holds decimal digits
y_test = np_utils.to_categorical(y_test, nb_class)  # labels; nb_class = 10
# after conversion, each y is a list of ten binary values
print(y_test[0])
y_train = np_utils.to_categorical(y_train, nb_class)  # nb_class = 10
# print(y_test[9999])  # run the whole script to see the effect
# print(y_test.shape)  # 10000 samples, 10 classes
# print(y_test)  # 10000 rows; each row holds ten numbers, one per class

Output:

7
--------------------------------------------------
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
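What `to_categorical` does can be reproduced with plain numpy (a sketch of the same conversion, not Keras's implementation):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Return a (len(labels), num_classes) array with a 1 at each label's index."""
    return np.eye(num_classes, dtype='float32')[labels]

y = np.array([7, 2, 6])  # decimal labels, as stored before conversion
one_hot = to_one_hot(y, 10)

print(one_hot[0])  # label 7 -> [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
```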

5. Define the network architecture

model = Sequential()
# first layer (input layer)
model.add(Dense(512, input_dim=784))  # 512 units output to the next layer; input_shape=(784,) also works (the comma matters) - input_shape/input_dim specify the input shape
model.add(Activation('relu'))  # activation function
model.add(Dropout(0.2))  # randomly drop some activations to reduce overfitting
# second layer (hidden layer)
model.add(Dense(256))
model.add(Activation('relu'))  # activation function
model.add(Dropout(0.2))  # randomly drop some activations to reduce overfitting
# third layer (output layer)
model.add(Dense(10))
model.add(Activation('softmax'))  # output layer
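The layer sizes above determine the number of trainable weights; a quick check of the parameter count (pure arithmetic, no Keras needed - each Dense layer has inputs×units weights plus units biases):

```python
# (inputs, units) for each Dense layer of the 784-512-256-10 network above
layers = [(784, 512), (512, 256), (256, 10)]

total = 0
for n_in, n_out in layers:
    params = n_in * n_out + n_out  # weight matrix plus bias vector
    print(f'{n_in} -> {n_out}: {params} parameters')
    total += params

print('total:', total)  # 535818, which model.summary() would also report
```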

Machine learning typically involves defining a model, defining an optimization objective, feeding in data, training the model, and finally evaluating the model on test data. Building a Sequential model in Keras follows these same steps. For a detailed explanation of Sequential, see this post:

Understanding the Sequential model in Keras

6. Compile: set the loss function, optimizer, and metrics

model.compile(
    # with categorical_crossentropy the labels must be in multi-class (one-hot) form: for 10 classes,
    # each label is a 10-dimensional vector with a 1 at the class index and 0 elsewhere.
    # This loss is meant for multi-class tasks and is the usual choice paired with a softmax output.
    loss='categorical_crossentropy',
    optimizer='rmsprop',  # SGD is another option
    metrics=['accuracy']  # evaluation metric
)
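The categorical_crossentropy loss set above is just the negative log-probability the model assigns to the true class; a minimal numpy sketch (not the Keras implementation):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    """Cross-entropy between a one-hot label and a predicted distribution."""
    y_pred = np.clip(y_pred, eps, 1.0)  # avoid log(0)
    return -np.sum(y_true * np.log(y_pred))

y_true = np.array([0, 0, 0, 0, 0, 0, 1, 0, 0, 0])  # one-hot label for the digit 6
y_pred = np.array([0.01]*6 + [0.91] + [0.01]*3)    # a confident, mostly correct prediction

print(categorical_crossentropy(y_true, y_pred))  # -log(0.91), about 0.094
```

Only the true class's predicted probability contributes, which is why a confident correct prediction yields a loss near zero.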

For more detail on Keras configuration, see this post:

Keras series (1): parameter settings

7. Train the network

Trainning = model.fit(
    x_train, y_train,
    batch_size=bach_size,
    epochs=nb_epochs,
    validation_data=(x_test, y_test),
    verbose=1  # progress display style
)

Output:

Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 3s 43us/step - loss: 0.5243 - accuracy: 0.8381 - val_loss: 0.2300 - val_accuracy: 0.9284
Epoch 2/20
60000/60000 [==============================] - 2s 41us/step - loss: 0.2129 - accuracy: 0.9354 - val_loss: 0.1599 - val_accuracy: 0.9515
Epoch 3/20
60000/60000 [==============================] - 3s 42us/step - loss: 0.1444 - accuracy: 0.9563 - val_loss: 0.1143 - val_accuracy: 0.9644
Epoch 4/20
60000/60000 [==============================] - 3s 42us/step - loss: 0.1070 - accuracy: 0.9671 - val_loss: 0.0842 - val_accuracy: 0.9718
Epoch 5/20
60000/60000 [==============================] - 3s 43us/step - loss: 0.0836 - accuracy: 0.9750 - val_loss: 0.0956 - val_accuracy: 0.9703
Epoch 6/20
60000/60000 [==============================] - 2s 40us/step - loss: 0.0706 - accuracy: 0.9779 - val_loss: 0.0757 - val_accuracy: 0.9744
Epoch 7/20
60000/60000 [==============================] - 2s 40us/step - loss: 0.0560 - accuracy: 0.9827 - val_loss: 0.0696 - val_accuracy: 0.9788
Epoch 8/20
60000/60000 [==============================] - 2s 40us/step - loss: 0.0480 - accuracy: 0.9844 - val_loss: 0.0750 - val_accuracy: 0.9760
Epoch 9/20
60000/60000 [==============================] - 2s 40us/step - loss: 0.0404 - accuracy: 0.9870 - val_loss: 0.0613 - val_accuracy: 0.9817
Epoch 10/20
60000/60000 [==============================] - 2s 40us/step - loss: 0.0343 - accuracy: 0.9894 - val_loss: 0.0648 - val_accuracy: 0.9804
Epoch 11/20
60000/60000 [==============================] - 3s 42us/step - loss: 0.0305 - accuracy: 0.9897 - val_loss: 0.0592 - val_accuracy: 0.9824
Epoch 12/20
60000/60000 [==============================] - 2s 39us/step - loss: 0.0248 - accuracy: 0.9924 - val_loss: 0.0668 - val_accuracy: 0.9795
Epoch 13/20
60000/60000 [==============================] - 2s 39us/step - loss: 0.0230 - accuracy: 0.9924 - val_loss: 0.0766 - val_accuracy: 0.9789
Epoch 14/20
60000/60000 [==============================] - 2s 40us/step - loss: 0.0203 - accuracy: 0.9937 - val_loss: 0.0648 - val_accuracy: 0.9818
Epoch 15/20
60000/60000 [==============================] - 2s 41us/step - loss: 0.0196 - accuracy: 0.9936 - val_loss: 0.0565 - val_accuracy: 0.9835
Epoch 16/20
60000/60000 [==============================] - 2s 40us/step - loss: 0.0147 - accuracy: 0.9952 - val_loss: 0.0605 - val_accuracy: 0.9842
Epoch 17/20
60000/60000 [==============================] - 2s 40us/step - loss: 0.0140 - accuracy: 0.9955 - val_loss: 0.0612 - val_accuracy: 0.9833
Epoch 18/20
60000/60000 [==============================] - 2s 40us/step - loss: 0.0127 - accuracy: 0.9960 - val_loss: 0.0601 - val_accuracy: 0.9837
Epoch 19/20
60000/60000 [==============================] - 2s 41us/step - loss: 0.0115 - accuracy: 0.9962 - val_loss: 0.0641 - val_accuracy: 0.9838
Epoch 20/20
60000/60000 [==============================] - 2s 40us/step - loss: 0.0102 - accuracy: 0.9968 - val_loss: 0.0737 - val_accuracy: 0.9828
Trainning.history  # view the training history

Output:

{'val_loss': [0.22997916753292083,
  0.15992866771221162,
  0.11426119041442871,
  0.08417677365541458,
  0.09561246933937073,
  0.07569937653541566,
  0.06964178583621979,
  0.07499785485267639,
  0.0613335898399353,
  0.0647744504570961,
  0.05919458869695664,
  0.06679673582315444,
  0.07660992671251297,
  0.06481298291683198,
  0.056510058510303496,
  0.06049015622735023,
  0.0612234160900116,
  0.060136277318000794,
  0.06409240868091583,
  0.07370031311511993],
 'val_accuracy': [0.9283999800682068,
  0.9514999985694885,
  0.9643999934196472,
  0.9718000292778015,
  0.970300018787384,
  0.974399983882904,
  0.9787999987602234,
  0.9760000109672546,
  0.9817000031471252,
  0.980400025844574,
  0.9824000000953674,
  0.9794999957084656,
  0.9789000153541565,
  0.9818000197410583,
  0.9835000038146973,
  0.9842000007629395,
  0.983299970626831,
  0.9836999773979187,
  0.9837999939918518,
  0.9828000068664551],
 'loss': [0.5243418522755305,
  0.21287762469450633,
  0.14441904651323953,
  0.106958281036218,
  0.08355111027558644,
  0.070580098036925,
  0.05602771078944206,
  0.04795542423526446,
  0.04044479195078214,
  0.034297494173049926,
  0.030499845439195632,
  0.02480206112364928,
  0.022994316766659417,
  0.02029480942885081,
  0.019580822547276815,
  0.014703940067688624,
  0.013984730113049348,
  0.012680219892660776,
  0.011547798774639765,
  0.010245305199424426],
 'accuracy': [0.83811665,
  0.9353833,
  0.95625,
  0.96708333,
  0.97503334,
  0.97791666,
  0.9827167,
  0.98438334,
  0.98705,
  0.98936665,
  0.9897,
  0.9924167,
  0.9924333,
  0.99373335,
  0.99355,
  0.99523336,
  0.99548334,
  0.996,
  0.99618334,
  0.9967833]}
Trainning.params  # view the training configuration

Output:

{'batch_size': 1024,
 'epochs': 20,
 'steps': None,
 'samples': 60000,
 'verbose': 1,
 'do_validation': True,
 'metrics': ['loss', 'accuracy', 'val_loss', 'val_accuracy']}
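The history dict above can be mined directly, e.g. to find which epoch had the best validation accuracy (a sketch using the rounded values reported above):

```python
# validation accuracy per epoch, copied (rounded) from Trainning.history above
val_accuracy = [0.9284, 0.9515, 0.9644, 0.9718, 0.9703, 0.9744, 0.9788,
                0.9760, 0.9817, 0.9804, 0.9824, 0.9795, 0.9789, 0.9818,
                0.9835, 0.9842, 0.9833, 0.9837, 0.9838, 0.9828]

# epochs are 1-based, list indices are 0-based
best_epoch = max(range(len(val_accuracy)), key=val_accuracy.__getitem__) + 1
print('best epoch:', best_epoch, 'val_accuracy:', val_accuracy[best_epoch - 1])
# best epoch: 16 val_accuracy: 0.9842
```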

8. Take an image from x_test and look at it

testrun = x_test[9999].reshape(1, 784)  # take the last test image and reshape it into the network's input format
print('Label:', y_test[9999])  # its corresponding label
######## the next two lines only style the figure and can be commented out without affecting the program ########
plt.figure(figsize=(15, 10))
plt.tick_params(colors='white')  # tick color
#################################################################################################################
plt.imshow(testrun.reshape([28, 28]))  # view the digit; the prediction is correct if it matches this

Output:

Label: [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]

9. Predict on x_test data 😍

pred = model.predict(testrun)
# print(testrun)
print('Label:', y_test[9999])
print('Prediction:', pred)
print('Prediction:', pred.argmax())  # the index of the largest probability is the predicted digit

Output:

Label: [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
Prediction: [[1.1697168e-14 6.3868352e-18 1.1747726e-17 1.5051377e-18 2.1559784e-17
  2.3793245e-13 1.0000000e+00 5.7173407e-22 4.1866145e-15 1.6837463e-18]]
Prediction: 6

The prediction is 6, matching the digit in the image taken from the test set, so the prediction is correct.
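model.predict returns the softmax probability per class, so argmax recovers the digit; this can be checked on the very vector printed above:

```python
import numpy as np

# the probability vector printed by model.predict above
pred = np.array([1.1697168e-14, 6.3868352e-18, 1.1747726e-17, 1.5051377e-18,
                 2.1559784e-17, 2.3793245e-13, 1.0000000e+00, 5.7173407e-22,
                 4.1866145e-15, 1.6837463e-18])

print('predicted digit:', pred.argmax())  # 6
print('probabilities sum to ~1:', np.isclose(pred.sum(), 1.0))  # True, as softmax guarantees
```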

10. Test with a hand-drawn image

Since Photoshop is not installed on this machine, the test image was drawn with Windows' built-in Paint, configured as shown below:

However, an image drawn this way has three RGB channels, so it must first be converted into a grayscale image like MNIST. For the conversion methods, see these two posts:

Image format conversion with PIL (1)

Image format conversion with PIL (2)

10.1 Convert to grayscale

from PIL import Image, ImageOps

# path to the hand-drawn image
picture = Image.open(r'D:/LearnPython/JupyterNotebook/Python3/First_deep_Learning/number3.png')
# print(picture)
# print(picture.mode)
# print(picture.getpixel((15,10)))
picture_L = picture.convert('L')
# print(picture_L)
# print(picture_L.size)
# print(picture_L.getpixel((15,10)))
# print(picture_L.mode)
inverted_image = ImageOps.invert(picture_L)  # Paint draws black on white, the opposite of MNIST, so invert
# plt.figure(figsize=(15,15))
# plt.tick_params(colors='white')  # tick color
###### when this PNG is read back with matplotlib's imread, the pixel values are already scaled to 0-1
inverted_image.save(r'D:\LearnPython\number3_gray.png')
plt.figure(figsize=(15, 10))
plt.tick_params(colors='white')  # tick color
plt.imshow(inverted_image)  # show the image to predict
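What convert('L') and ImageOps.invert do can be approximated with numpy. PIL's 'L' mode uses the ITU-R 601 luminance weights; this is a sketch on a synthetic pixel grid, not PIL's exact rounding:

```python
import numpy as np

# a synthetic 2x2 RGB image: a white column and a black column, as Paint would produce
rgb = np.array([[[255, 255, 255], [0, 0, 0]],
                [[255, 255, 255], [0, 0, 0]]], dtype=np.uint8)

# convert('L') weights the channels as L = 0.299*R + 0.587*G + 0.114*B
weights = np.array([0.299, 0.587, 0.114])
gray = np.round(rgb @ weights).astype(np.uint8)  # white -> 255, black -> 0

# ImageOps.invert flips each value: white background -> black, matching MNIST
inverted = 255 - gray  # white -> 0, black -> 255

print(gray)
print(inverted)
```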

Output:

10.2 Predict on the hand-drawn image 😘

target_img = processimage.imread(r'D:/LearnPython/number3_gray.png')
print('Before reshape:', target_img.shape)
# print(target_img)
plt.figure(figsize=(15, 10))
plt.tick_params(colors='white')  # tick color
target_img = target_img.reshape(1, 784)
print('After reshape:', target_img.shape)
target_img = np.array(target_img)
target_img = target_img.astype('float32')
## target_img /= 255  # no need to normalize again: matplotlib's imread already returns PNG values scaled to 0-1
# print(target_img)
mypred = model.predict(target_img)
print(mypred)
print('Prediction:', mypred.argmax())


Output:

Before reshape: (28, 28)
After reshape: (1, 784)
[[9.6096153e-17 1.3537466e-09 2.9311008e-05 9.9996686e-01 3.6989629e-24
  3.7747996e-06 1.4655872e-09 2.5000689e-11 1.2039729e-10 1.2898114e-13]]
Prediction: 3

After several tests, the predictions are sometimes inaccurate. A convolutional neural network would give better results. To be continued...
