Record | 100 Deep Learning Examples - Convolutional Neural Network (CNN) MNIST Digit Classification | Day 1

1. MNIST 0-9 digit classification results

The dataset is shown below:
[Figure: sample MNIST images with their labels]

The classification and prediction results are shown below: the predicted label matches the true label, i.e. the prediction is correct.
[Figure: predicted label vs. true label for a test image]

The training Loss/Accuracy curves are shown below:
[Figure: training loss and accuracy curves]

Source code

# 100 Deep Learning Examples - CNN for MNIST handwritten digit recognition | Day 1
# USAGE
# python img_digit1.py

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import datasets, layers, models

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]  # If there are multiple GPUs, use only the first one
    tf.config.experimental.set_memory_growth(gpu0, True)  # Allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")

# Load the data
(train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()

# Normalize pixel values to the range [0, 1].
train_images, test_images = train_images / 255.0, test_images / 255.0

print(train_images.shape, test_images.shape, train_labels.shape, test_labels.shape)

# Reshape the data into the format the network expects: (samples, height, width, channels)
train_images = train_images.reshape((60000, 28, 28, 1))
test_images = test_images.reshape((10000, 28, 28, 1))
print(train_images.shape, test_images.shape, train_labels.shape, test_labels.shape)

# Visualize the first 20 training images
plt.figure(figsize=(20, 10))
for i in range(20):
    plt.subplot(5, 10, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    # imshow expects (H, W) or (H, W, 3); drop the trailing channel dimension
    plt.imshow(train_images[i].squeeze(), cmap=plt.cm.binary)
    plt.xlabel(train_labels[i])
plt.show()

# Build the network
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),  # Convolutional layer 1, 3x3 kernels
    layers.MaxPooling2D((2, 2)),  # Pooling layer 1, 2x2 downsampling
    layers.Conv2D(64, (3, 3), activation='relu'),  # Convolutional layer 2, 3x3 kernels
    layers.MaxPooling2D((2, 2)),  # Pooling layer 2, 2x2 downsampling

    layers.Flatten(),  # Flatten layer, connects the convolutional layers to the fully connected layers
    layers.Dense(64, activation='relu'),  # Fully connected layer, further feature extraction
    layers.Dense(10)  # Output layer, one logit per digit class
])

# Print the network structure
model.summary()
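
# A quick sanity check on the architecture (added note, not part of the original script):
# with 28x28x1 inputs and 'valid' padding, the layer output shapes should be roughly
#   Conv2D(32, 3x3)   -> (26, 26, 32)
#   MaxPooling2D(2x2) -> (13, 13, 32)
#   Conv2D(64, 3x3)   -> (11, 11, 64)
#   MaxPooling2D(2x2) -> (5, 5, 64)
#   Flatten           -> 1600
#   Dense(64) -> 64, Dense(10) -> 10
# for roughly 122k trainable parameters; model.summary() above should report shapes consistent with this.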

# Compile the model
"""
Set the optimizer, loss function, and metrics.
For details on these three, see: https://blog.csdn.net/qq_38251616/category_10258234.html
"""
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
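
# Added explanation (not in the original script): the final Dense layer has no softmax
# activation, so the model outputs raw logits; from_logits=True tells the loss to apply
# softmax internally. To turn predictions into probabilities later, one can wrap the
# logits with tf.nn.softmax, e.g. probs = tf.nn.softmax(pre).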

# Train the model
"""
Pass in the training data (images and labels), the validation data (images and labels),
and the number of epochs.
For details on model.fit(), see: https://blog.csdn.net/qq_38251616/category_10258234.html
"""
history = model.fit(train_images, train_labels, epochs=8,
                    validation_data=(test_images, test_labels))

pre = model.predict(test_images)
print('pre: ' + str(np.argmax(pre[2])) + ' real: ' + str(test_labels[2]))

plt.imshow(test_images[2].squeeze())  # drop the channel dimension for imshow
plt.xticks([])
plt.yticks([])
plt.xlabel('pre: ' + str(np.argmax(pre[2])) + ' real: ' + str(test_labels[2]))
plt.show()
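
# Optional check (added sketch, not in the original script): compute the overall test
# accuracy directly from the predicted logits; this should agree with model.evaluate() below.
pred_labels = np.argmax(pre, axis=1)
print('test accuracy from argmax:', (pred_labels == test_labels).mean())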

plt.plot(history.history["loss"], label="train_loss")
plt.plot(history.history["val_loss"], label="val_loss")
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.title("Training Loss and Accuracy (Simple NN)")
plt.xlabel('Epoch')
plt.ylabel('Loss/Accuracy')
# plt.ylim([0.5, 1])
plt.legend(loc='lower right')
plt.show()

test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)
