Implementing handwritten digit recognition with Sequential() in TensorFlow
Code example:
# Step 1: imports
import tensorflow as tf
import numpy as np

# Step 2: prepare training and test data
# mnist = tf.keras.datasets.mnist
# (x_train, y_train), (x_test, y_test) = mnist.load_data()
path = './mnist.npz'                                 # path to the dataset file
f = np.load(path)                                    # load the data
x_train, y_train = f['x_train'], f['y_train']        # training inputs and labels
x_test, y_test = f['x_test'], f['y_test']            # test inputs and labels
f.close()
x_train, x_test = x_train / 255.0, x_test / 255.0    # normalize inputs to [0, 1] so the network trains more easily

# Step 3: build the network with tf.keras.models.Sequential()
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),                       # flatten each 28x28 image into a 1-D vector
    tf.keras.layers.Dense(128, activation="relu"),   # fully connected layer: 128 neurons, ReLU activation
    tf.keras.layers.Dense(10, activation="softmax")  # output layer: 10 neurons, softmax activation
])

# Step 4: configure the training method with model.compile()
model.compile(
    optimizer="adam",                                # use the Adam optimizer
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),  # from_logits=False: the model outputs a probability distribution
    metrics=['sparse_categorical_accuracy']          # evaluation metric
)

# Step 5: run training with model.fit()
model.fit(
    x_train,                                         # training inputs
    y_train,                                         # training labels
    batch_size=32,                                   # 32 samples per batch
    epochs=5,                                        # 5 training epochs
    validation_data=(x_test, y_test),                # test inputs and labels for validation
    validation_freq=1                                # validate after every epoch
)

# Step 6: print the network's parameters with model.summary()
model.summary()
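With batch_size = 32, model.fit() splits the 60,000 training samples into mini-batches every epoch. The plain-Python arithmetic below (using the values from the code above) shows how many gradient-update steps one epoch takes:

```python
import math

num_samples = 60000  # size of the MNIST training set
batch_size = 32      # batch_size passed to model.fit() above

# Steps per epoch: the last batch may be smaller, so round up.
steps_per_epoch = math.ceil(num_samples / batch_size)
print(steps_per_epoch)  # → 1875
```

Over epochs = 5, that is 5 × 1875 = 9375 weight updates in total.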
The output is:
......(earlier training output omitted)......
46816/60000 [======================>.......] - ETA: 1s - loss: 0.0462 - sparse_categorical_accuracy: 0.9859
......(intermediate progress lines omitted)......
59936/60000 [============================>.] - ETA: 0s - loss: 0.0452 - sparse_categorical_accuracy: 0.9861
60000/60000 [==============================] - 9s 148us/sample - loss: 0.0452 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.0795 - val_sparse_categorical_accuracy: 0.9753
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
flatten (Flatten)            multiple                  0
_________________________________________________________________
dense (Dense)                multiple                  100480
_________________________________________________________________
dense_1 (Dense)              multiple                  1290
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
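The Param # column above can be checked by hand: a Dense layer has (inputs × units) weights plus one bias per unit, and Flatten adds no parameters. A quick sketch using the layer sizes from the model:

```python
# Flatten turns each 28x28 image into a 784-element vector (no parameters).
flatten_out = 28 * 28                   # 784 inputs to the first Dense layer

# dense: 784 inputs -> 128 neurons
dense_params = flatten_out * 128 + 128  # weights + biases = 100480

# dense_1: 128 inputs -> 10 neurons
dense_1_params = 128 * 10 + 10          # weights + biases = 1290

total = dense_params + dense_1_params
print(dense_params, dense_1_params, total)  # → 100480 1290 101770
```

This matches the 101,770 total (all trainable) reported by model.summary().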