Hello hello~ Following my earlier posts:
Keras deep learning primer, appendix 1: how many ways are there to get a built model running? (fit and train_on_batch)
https://blog.csdn.net/timcanby/article/details/103644089
Learning Keras from code examples 1: classifying the MNIST handwritten digits with LeNet
https://blog.csdn.net/timcanby/article/details/103620371
I'd like to put together the second primer in this beginner series: let's classify MNIST with a single fully connected layer.
(If you take the code, please leave a star on GitHub!)
As you probably know, it is the fully connected layer that injects class information into the flattened data. What does that imply? That with a few more epochs, a single FC layer is enough to play the MNIST classification game. The data-loading code is covered in the earlier post (Learning Keras from code examples 1: classifying MNIST with LeNet); a quick sketch of it is below, followed by the model code.
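Since that post has the full version, what follows is only a minimal loading sketch, assuming the standard keras.datasets.mnist loader and to_categorical for one-hot labels (X_train/Y_train are the names the fit call later expects):

from keras.datasets import mnist
from keras.utils import to_categorical

# Load MNIST: 60,000 training and 10,000 test images, each 28x28 grayscale
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()

# Add a channel axis and scale pixel values to [0, 1]
X_train = X_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
X_test = X_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0

# One-hot encode labels, as required by categorical_crossentropy
Y_train = to_categorical(Y_train, 10)
Y_test = to_categorical(Y_test, 10)

The model itself: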
import keras
from keras.models import Sequential
from keras.layers import Flatten, Dense

model = Sequential()
model.add(Flatten(name='flatten', input_shape=(28, 28, 1)))  # 28x28x1 image -> 784-dim vector
model.add(Dense(10, activation='softmax', name='fc1'))       # 10 class probabilities
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'])
完整代码:https://github.com/timcanby/Kares__StudyNotes/blob/master/oneFCTest.py
The network structure (as printed by model.summary()) is:

Layer (type)                 Output Shape              Param #
=================================================================
flatten (Flatten)            (None, 784)               0
_________________________________________________________________
fc1 (Dense)                  (None, 10)                7850
=================================================================
Total params: 7,850
Trainable params: 7,850
Non-trainable params: 0
_________________________________________________________________

Training is then a single fit call (the original nb_epoch argument is the old Keras 1 spelling; current Keras uses epochs; batch_size and n_epochs are set in the full code linked above):

history = model.fit(x=X_train, y=Y_train, batch_size=batch_size, epochs=n_epochs)
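A quick sanity check on that 7,850: the Flatten layer has no weights at all, and the Dense layer connects each of the 784 inputs to each of the 10 outputs, plus one bias per output:

784 × 10 + 10 = 7,850

so every parameter of the model lives in that single fully connected layer.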
Simple and crude. In other words, the data is read in, flattened into a one-dimensional vector, and fed straight into a fully connected layer whose softmax output gives ten class probabilities. Of course, training for just a single pass won't cut it; look at epoch 1:
Epoch 1/100
100/60000 [..............................] - ETA: 1:06 - loss: 14.8443 - acc: 0.0700
4200/60000 [=>............................] - ETA: 2s - loss: 13.9142 - acc: 0.1276
9200/60000 [===>..........................] - ETA: 1s - loss: 11.9051 - acc: 0.2503
14500/60000 [======>.......................] - ETA: 0s - loss: 10.2923 - acc: 0.3507
20700/60000 [=========>....................] - ETA: 0s - loss: 8.9224 - acc: 0.4362
26400/60000 [============>.................] - ETA: 0s - loss: 8.0683 - acc: 0.4899
31100/60000 [==============>...............] - ETA: 0s - loss: 7.5861 - acc: 0.5201
37200/60000 [=================>............] - ETA: 0s - loss: 6.9932 - acc: 0.5568
43300/60000 [====================>.........] - ETA: 0s - loss: 6.5184 - acc: 0.5865
48900/60000 [=======================>......] - ETA: 0s - loss: 6.1924 - acc: 0.6069
54000/60000 [==========================>...] - ETA: 0s - loss: 5.9157 - acc: 0.6243
59900/60000 [============================>.] - ETA: 0s - loss: 5.6578 - acc: 0.6403
60000/60000 [==============================] - 1s 11us/step - loss: 5.6542 - acc: 0.6406
Why isn't that enough? Think about it: if your mind isn't on studying, can you remember something after seeing it only once?? Hahaha. But even without looking carefully, we can still remember something after seeing it many times, and the same goes for the model. So after 100 epochs we can see:
100/60000 [.....................
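Once training finishes, it's worth checking how the model does on digits it has never seen. Here is a minimal evaluation sketch, assuming the X_test/Y_test arrays prepared in the loading sketch above:

# Evaluate on the held-out 10,000 test images
test_loss, test_acc = model.evaluate(X_test, Y_test, batch_size=100)
print('test loss: %.4f, test accuracy: %.4f' % (test_loss, test_acc))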