Loading the MNIST Dataset in NumPy Array Format with TensorFlow 2 to Build a Neural Network

In this example, the MNIST dataset's .npz file is loaded as NumPy arrays into a tf.data.Dataset, fed to a neural network, and used to complete the modeling process.

1. Import the required libraries

import tensorflow as tf
import numpy as np
import tensorflow_datasets as tfds

for i in [tf, np, tfds]:
    print(f"{i.__name__}: {i.__version__}")

Output:

tensorflow: 2.2.0
numpy: 1.17.4
tensorflow_datasets: 3.1.0

2. Download and load the dataset

2.1 Download the dataset

data_url = "https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz"
download_path = tf.keras.utils.get_file("mnist.npz", data_url)

with np.load(download_path) as data:
    trainImages = data["x_train"]
    trainLabels = data["y_train"]
    testImages = data["x_test"]
    testLabels = data["y_test"]

for i in [trainImages, trainLabels, testImages, testLabels]:
    print(type(i), i.shape)

Output:

<class 'numpy.ndarray'> (60000, 28, 28)
<class 'numpy.ndarray'> (60000,)
<class 'numpy.ndarray'> (10000, 28, 28)
<class 'numpy.ndarray'> (10000,)

2.2 Load the NumPy arrays with tf.data.Dataset

trainDataset = tf.data.Dataset.from_tensor_slices((trainImages, trainLabels))
testDataset = tf.data.Dataset.from_tensor_slices((testImages, testLabels))
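At this point the datasets yield the raw uint8 pixel values (0-255), and the model below is trained directly on them. A common variation, not used in the rest of this example, is to rescale the pixels to [0, 1] with Dataset.map. The sketch below is illustrative only; the function name normalize and the dataset names normalizedTrainDataset/normalizedTestDataset are made up for this illustration.

# Optional sketch (not part of the pipeline above): rescale pixels to [0, 1]
def normalize(image, label):
    # cast uint8 pixels to float32 and divide by the maximum pixel value
    return tf.cast(image, tf.float32) / 255.0, label

normalizedTrainDataset = trainDataset.map(normalize)
normalizedTestDataset = testDataset.map(normalize)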

3. Build the model

batchSize = 64
shuffleBufferSize = 100

# Shuffle and batch the data
trainDataset = trainDataset.shuffle(shuffleBufferSize).batch(batchSize)
testDataset = testDataset.batch(batchSize)

# Define the model architecture
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10)
])

# Compile the model
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["sparse_categorical_accuracy"])

model.summary()

Output:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense (Dense)                (None, 128)               100480    
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1290      
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
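The parameter counts in the summary can be checked by hand: the first Dense layer has 28 × 28 × 128 weights plus 128 biases, and the output layer has 128 × 10 weights plus 10 biases.

# Quick sanity check of the parameter counts reported by model.summary()
hidden_params = 28 * 28 * 128 + 128   # 100480: weights + biases of the 128-unit layer
output_params = 128 * 10 + 10         # 1290: weights + biases of the 10-unit layer
print(hidden_params, output_params, hidden_params + output_params)  # 100480 1290 101770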
# Train the model
model.fit(trainDataset, epochs=10)

Output:

Epoch 1/10
938/938 [==============================] - 1s 1ms/step - loss: 3.1737 - sparse_categorical_accuracy: 0.8685
Epoch 2/10
938/938 [==============================] - 1s 1ms/step - loss: 0.4865 - sparse_categorical_accuracy: 0.9283
Epoch 3/10
938/938 [==============================] - 1s 1ms/step - loss: 0.3662 - sparse_categorical_accuracy: 0.9443
Epoch 4/10
938/938 [==============================] - 1s 1ms/step - loss: 0.3150 - sparse_categorical_accuracy: 0.9531
Epoch 5/10
938/938 [==============================] - 1s 1ms/step - loss: 0.2934 - sparse_categorical_accuracy: 0.9594
Epoch 6/10
938/938 [==============================] - 1s 1ms/step - loss: 0.2617 - sparse_categorical_accuracy: 0.9633
Epoch 7/10
938/938 [==============================] - 1s 1ms/step - loss: 0.2403 - sparse_categorical_accuracy: 0.9665
Epoch 8/10
938/938 [==============================] - 1s 1ms/step - loss: 0.2185 - sparse_categorical_accuracy: 0.9680
Epoch 9/10
938/938 [==============================] - 1s 1ms/step - loss: 0.2148 - sparse_categorical_accuracy: 0.9715
Epoch 10/10
938/938 [==============================] - 1s 1ms/step - loss: 0.2032 - sparse_categorical_accuracy: 0.9733
Out[22]:
<tensorflow.python.keras.callbacks.History at 0x24316d68>
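The History object printed above is the return value of model.fit. If the training cell assigns this return value to a variable (the name history below is illustrative), the per-epoch metrics can be read back from its history attribute:

# Illustrative: assign the return value of model.fit to inspect per-epoch metrics
history = model.fit(trainDataset, epochs=10)
print(history.history.keys())
# dict_keys(['loss', 'sparse_categorical_accuracy'])
print(history.history["sparse_categorical_accuracy"][-1])  # accuracy of the last epoch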
# Evaluate the model
model.evaluate(testDataset)

Output:

157/157 [==============================] - 0s 847us/step - loss: 0.7498 - sparse_categorical_accuracy: 0.9542
Out[23]:
[0.7497776746749878, 0.954200029373169]
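Because the last Dense layer returns logits (the loss was built with from_logits=True), predictions need a softmax before they can be read as class probabilities. Below is a minimal sketch of predicting on a single test batch; the variable names are illustrative.

# Illustrative: predict on one batch of test images and convert logits to probabilities
for images, labels in testDataset.take(1):
    logits = model(tf.cast(images, tf.float32))  # raw logits, shape (batch, 10)
    probs = tf.nn.softmax(logits, axis=-1)       # class probabilities
    predicted = tf.argmax(probs, axis=-1)        # predicted digit for each image
    print(predicted[:10].numpy())
    print(labels[:10].numpy())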
