Training a model with the MNIST dataset as an example
# -*- coding: UTF-8 -*-
"""
Author: LGD
FileName: fashion_mnist_tfdataset
DateTime: 2020/11/26 09:04
SoftWare: PyCharm
"""
import tensorflow as tf

print('Tensorflow version: {}'.format(tf.__version__))

(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()

# Normalize the pixel values to [0, 1]
train_images = train_images / 255
test_images = test_images / 255

# Build a Dataset from train_images
ds_train_img = tf.data.Dataset.from_tensor_slices(train_images)
print(ds_train_img)
ds_train_label = tf.data.Dataset.from_tensor_slices(train_labels)
print(ds_train_label)

# Use zip to combine images and labels into one Dataset
ds_train = tf.data.Dataset.zip((ds_train_img, ds_train_label))
print(ds_train)

# Transform the data: shuffle with a 10000-element buffer, repeat endlessly,
# and group into batches of 64
ds_train = ds_train.shuffle(10000).repeat().batch(64)

# Build the model
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

# Build the Dataset for test_images
ds_test = tf.data.Dataset.from_tensor_slices((test_images, test_labels))
ds_test = ds_test.batch(64)

# Train
steps_per_epochs = train_images.shape[0] // 64  # 64 images per step, so this many steps per epoch
model.fit(
    ds_train,
    epochs=5,
    steps_per_epoch=steps_per_epochs,
    validation_data=ds_test,
    # Because the dataset repeats endlessly, Keras needs validation_steps
    # to know when to stop and report the validation accuracy.
    validation_steps=10000 // 64
)
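The `shuffle(10000).repeat().batch(64)` chain and the `steps_per_epoch` arithmetic can be illustrated with a tiny plain-Python sketch (no TensorFlow needed; the `batches` helper here is hypothetical, mimicking how `batch()` groups consecutive elements):

```python
import random

def batches(data, batch_size):
    """Group consecutive elements into lists of batch_size (the last may be shorter)."""
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

samples = list(range(10))
random.seed(0)
random.shuffle(samples)          # roughly what shuffle(buffer_size) does with a full buffer
epoch = batches(samples, 4)      # roughly what batch(4) does
print([len(b) for b in epoch])   # [4, 4, 2]

# With repeat(), the stream never ends, so Keras needs steps_per_epoch
# to know where one epoch stops: 60000 training images // 64 per batch.
print(60000 // 64)               # 937
```

Note that in the real pipeline `repeat()` makes the last partial batch of one pass blend into the next pass, which is exactly why `model.fit` is given an explicit `steps_per_epoch`.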
The MNIST dataset
tf.keras.datasets.mnist.load_data()
With the line above, Keras downloads the dataset automatically; as long as your network connection is normal, the download is fairly quick.
The downloaded files end up in C drive --> Users --> (your username) --> .keras, inside its datasets folder.
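If you want to print that cache location programmatically, the default path can be built with the standard library alone (a sketch; the exact file layout may vary across Keras versions):

```python
import os

# Default Keras cache location: ~/.keras/datasets
# (on Windows, "~" expands to C:\Users\<your username>)
cache_dir = os.path.join(os.path.expanduser("~"), ".keras", "datasets")
print(cache_dir)

# The MNIST download is typically stored there as a single .npz archive;
# this only reports True once the dataset has actually been downloaded.
print(os.path.isdir(cache_dir))
```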
WeChat account: AI与计算机视觉