Getting Started with Keras: MNIST Handwritten Digit Recognition with a CNN and a Perceptron

I've recently started getting into deep learning through Keras. The first thing I touched was the field's "hello world": handwritten digit recognition, i.e. training and classifying on the MNIST dataset.

I'm using the Keras bundled inside TensorFlow (tf.keras). With everything wrapped into high-level APIs it's genuinely comfortable to use, and the API style is quite reminiscent of sklearn's.

First comes reading the data, which I think is a step worth studying carefully: most examples load a built-in, already-processed dataset, but in real applications you will certainly have to prepare the data yourself. Since this is just a first pass, I'll follow the official example for now.

The official example lives in the Keras GitHub repository.

Note the call's shape: the training and test sets are unpacked as tuples. On first use the dataset has to be downloaded, so be patient.
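A minimal sketch of the loading step (the same call appears in the notebook further down):

```python
from tensorflow.keras.datasets import mnist

# Downloaded on first use, cached locally afterwards.
# Each split comes back as an (images, labels) tuple.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)  # (60000, 28, 28)
```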

With the raw data in hand, a little processing is needed. The most important part is normalization. The data also has to be reshaped to four dimensions; there are two layouts here, channels_first and channels_last, i.e. whether the RGB channel axis sits before or after the spatial axes, with channels_last being the default. Finally the labels are converted to a binary matrix (one-hot encoding).
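A sketch of those three steps, continuing from the snippet above and matching the preprocessing cell in the notebook below:

```python
from tensorflow import keras

# channels_last layout: (samples, height, width, channels)
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1).astype('float32') / 255
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1).astype('float32') / 255

# One-hot encoding: digit 3 becomes [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
```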

Then comes assembling the network. Create a keras.Sequential() and keep appending layers to it with add(); I rather like this direct style.

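A sketch of the add() style, building the same CNN layer stack as the notebook below:

```python
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential()
model.add(keras.layers.Conv2D(32, kernel_size=(3, 3), activation='relu',
                              padding='same', input_shape=(28, 28, 1)))
model.add(keras.layers.Conv2D(64, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(keras.layers.Dropout(0.25))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(128, activation=tf.nn.relu))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(10, activation='softmax'))
```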
You can of course also write it the way the official example does, as shown in the short sketch below.
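In that style the layer list goes straight into the constructor; a trimmed-down sketch (reusing the keras import above, full version in the CNN notebook below):

```python
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28, 1)),
    keras.layers.Dense(10, activation='softmax'),
])
```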
Next, give the freshly built network a few settings: the loss function, the optimizer, the evaluation metrics, and so on.
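This is the same compile call used for the CNN in the notebook below:

```python
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
```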
Finally, feed it the data and you're done.
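The fit call from the CNN run below; validation_data reports test-set metrics after every epoch:

```python
model.fit(x_train, y_train, batch_size=32, epochs=20,
          verbose=1, validation_data=(x_test, y_test))
```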

The result it produces isn't great... I first blamed my batch_size being set too small, though a likelier cause is that tf.keras's Adadelta defaults to a much lower learning rate than classic Keras did, so 20 epochs simply isn't enough to converge.
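If the optimizer's step size is indeed the culprit, one untested fix is to pass the classic Keras default explicitly:

```python
# Assumption: tf.keras 2.x, where Adadelta defaults to learning_rate=0.001;
# standalone Keras used 1.0, which converges far faster on MNIST.
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(learning_rate=1.0),
              metrics=['accuracy'])
```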

For comparison: a plain perceptron reaches about 95% training accuracy after just 10 epochs (the full run is in the notebook below).

Still, on this dataset one should be pushing toward 100% recognition, and more training should squeeze out further gains.

One small tip. Expecting heavier training workloads later, I set up the GPU build. But when I used Jupyter, running a second piece of code would fail with a cuDNN initialization error, and I had to shut everything down cleanly before it would run again. It got tiresome, so for small jobs these two lines force TensorFlow onto the CPU:

```python
import os
# Hide every GPU from TensorFlow so it falls back to the CPU.
# This must run before TensorFlow initializes the CUDA context.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
```

The notebook itself, exported from Jupyter to Markdown, follows.

Perceptron

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # force CPU (see the note above)

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist
from tensorflow.keras.optimizers import SGD

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Scale pixel values from [0, 255] to [0, 1]
x_train = x_train / 255
x_test = x_test / 255
x_train.shape
```

```
(60000, 28, 28)
```
```python
# Multilayer perceptron: flatten the image, one hidden ReLU layer, softmax output
model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation=tf.nn.relu),
    layers.Dense(10, activation=tf.nn.softmax)
])
# sparse_categorical_crossentropy takes integer labels, so no one-hot needed here
model.compile(optimizer=SGD(),
              loss=keras.losses.sparse_categorical_crossentropy,
              metrics=['accuracy'])
model.summary()
```
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense (Dense)                (None, 128)               100480    
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1290      
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
```python
model.fit(x_train, y_train, epochs=10, batch_size=32)
```
```
Epoch 1/10
1875/1875 [==============================] - 2s 959us/step - loss: 0.6418 - accuracy: 0.8366
Epoch 2/10
1875/1875 [==============================] - 2s 905us/step - loss: 0.3375 - accuracy: 0.9047
Epoch 3/10
1875/1875 [==============================] - 2s 914us/step - loss: 0.2888 - accuracy: 0.9180
Epoch 4/10
1875/1875 [==============================] - 2s 894us/step - loss: 0.2581 - accuracy: 0.9267
Epoch 5/10
1875/1875 [==============================] - 2s 868us/step - loss: 0.2350 - accuracy: 0.9335
Epoch 6/10
1875/1875 [==============================] - 2s 864us/step - loss: 0.2161 - accuracy: 0.9387
Epoch 7/10
1875/1875 [==============================] - 2s 859us/step - loss: 0.2005 - accuracy: 0.9428
Epoch 8/10
1875/1875 [==============================] - 2s 849us/step - loss: 0.1870 - accuracy: 0.9474
Epoch 9/10
1875/1875 [==============================] - 2s 852us/step - loss: 0.1753 - accuracy: 0.9506
Epoch 10/10
1875/1875 [==============================] - 2s 835us/step - loss: 0.1650 - accuracy: 0.9534

<tensorflow.python.keras.callbacks.History at 0x256640d4a48>
```
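The run above only reports training accuracy; a hypothetical follow-up cell (not in the original notebook) would check the held-out test set:

```python
# Evaluate generalization on the 10,000 test images
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print(test_acc)
```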

Convolutional Neural Network

```python
import os
# os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # left commented out: this run uses the GPU

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)
```

```
(60000, 28, 28)
```
```python
# Reshape to 4-D, channels_last: (samples, height, width, channels)
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
# One-hot encode the labels to match categorical_crossentropy
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
x_train.shape
```

```
(60000, 28, 28, 1)
```
```python
# Two conv layers, max pooling, dropout, then a small dense classifier
model = keras.Sequential([
    keras.layers.Conv2D(32, kernel_size=(3, 3), activation='relu',
                        padding='same', input_shape=(28, 28, 1)),
    keras.layers.Conv2D(64, kernel_size=(3, 3), activation='relu', padding='same'),
    keras.layers.MaxPooling2D(pool_size=(2, 2)),
    keras.layers.Dropout(0.25),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(10, activation='softmax')
])
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
model.summary()
```
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 28, 28, 32)        320       
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 28, 28, 64)        18496     
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 64)        0         
_________________________________________________________________
dropout (Dropout)            (None, 14, 14, 64)        0         
_________________________________________________________________
flatten (Flatten)            (None, 12544)             0         
_________________________________________________________________
dense (Dense)                (None, 128)               1605760   
_________________________________________________________________
dropout_1 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1290      
=================================================================
Total params: 1,625,866
Trainable params: 1,625,866
Non-trainable params: 0
_________________________________________________________________
```python
model.fit(x_train, y_train, batch_size=32, epochs=20,
          verbose=1, validation_data=(x_test, y_test))
```
```
Epoch 1/20
1875/1875 [==============================] - 16s 9ms/step - loss: 2.2189 - accuracy: 0.2555 - val_loss: 2.0866 - val_accuracy: 0.5861
Epoch 2/20
1875/1875 [==============================] - 17s 9ms/step - loss: 1.9392 - accuracy: 0.5121 - val_loss: 1.6835 - val_accuracy: 0.7145
Epoch 3/20
1875/1875 [==============================] - 17s 9ms/step - loss: 1.5294 - accuracy: 0.6248 - val_loss: 1.1962 - val_accuracy: 0.7845
Epoch 4/20
1875/1875 [==============================] - 17s 9ms/step - loss: 1.1665 - accuracy: 0.6884 - val_loss: 0.8554 - val_accuracy: 0.8186
Epoch 5/20
1875/1875 [==============================] - 16s 9ms/step - loss: 0.9460 - accuracy: 0.7285 - val_loss: 0.6701 - val_accuracy: 0.8401
Epoch 6/20
1875/1875 [==============================] - 16s 9ms/step - loss: 0.8144 - accuracy: 0.7556 - val_loss: 0.5681 - val_accuracy: 0.8589
Epoch 7/20
1875/1875 [==============================] - 16s 9ms/step - loss: 0.7345 - accuracy: 0.7774 - val_loss: 0.5058 - val_accuracy: 0.8686
Epoch 8/20
1875/1875 [==============================] - 18s 9ms/step - loss: 0.6799 - accuracy: 0.7933 - val_loss: 0.4633 - val_accuracy: 0.8778
Epoch 9/20
1875/1875 [==============================] - 17s 9ms/step - loss: 0.6351 - accuracy: 0.8047 - val_loss: 0.4325 - val_accuracy: 0.8835
Epoch 10/20
1875/1875 [==============================] - 17s 9ms/step - loss: 0.6029 - accuracy: 0.8150 - val_loss: 0.4090 - val_accuracy: 0.8893
Epoch 11/20
1875/1875 [==============================] - 17s 9ms/step - loss: 0.5780 - accuracy: 0.8239 - val_loss: 0.3899 - val_accuracy: 0.8929
Epoch 12/20
1875/1875 [==============================] - 17s 9ms/step - loss: 0.5555 - accuracy: 0.8311 - val_loss: 0.3748 - val_accuracy: 0.8965
Epoch 13/20
1875/1875 [==============================] - 17s 9ms/step - loss: 0.5378 - accuracy: 0.8353 - val_loss: 0.3621 - val_accuracy: 0.8988
Epoch 14/20
1875/1875 [==============================] - 16s 8ms/step - loss: 0.5184 - accuracy: 0.8425 - val_loss: 0.3499 - val_accuracy: 0.9016
Epoch 15/20
1875/1875 [==============================] - 16s 9ms/step - loss: 0.5065 - accuracy: 0.8458 - val_loss: 0.3401 - val_accuracy: 0.9037
Epoch 16/20
1875/1875 [==============================] - 16s 8ms/step - loss: 0.4921 - accuracy: 0.8504 - val_loss: 0.3311 - val_accuracy: 0.9045
Epoch 17/20
1875/1875 [==============================] - 15s 8ms/step - loss: 0.4843 - accuracy: 0.8539 - val_loss: 0.3238 - val_accuracy: 0.9072
Epoch 18/20
1875/1875 [==============================] - 15s 8ms/step - loss: 0.4768 - accuracy: 0.8551 - val_loss: 0.3164 - val_accuracy: 0.9090
Epoch 19/20
1875/1875 [==============================] - 15s 8ms/step - loss: 0.4620 - accuracy: 0.8600 - val_loss: 0.3099 - val_accuracy: 0.9105
Epoch 20/20
1875/1875 [==============================] - 15s 8ms/step - loss: 0.4540 - accuracy: 0.8607 - val_loss: 0.3038 - val_accuracy: 0.9117

<tensorflow.python.keras.callbacks.History at 0x1f5d91b4448>
```