2.2 Representing Neural Networks
Code Listing 2-1
# Work around the "Using TensorFlow backend" error message by pinning the backend
import os
os.environ['KERAS_BACKEND']='tensorflow'
from keras.datasets import mnist
(train_images,train_labels),(test_images,test_labels) = mnist.load_data()
# Size of the training set
train_images.shape
len(train_labels)
# Training set labels
train_labels
# Test set
test_images.shape
len(test_labels)
# Test set labels
test_labels
array([7, 2, 1, ..., 4, 5, 6], dtype=uint8)
Code Listings 2-2 and 2-3
from keras import models
from keras import layers
import numpy as np
# Network architecture
network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28*28,)))
network.add(layers.Dense(10, activation='softmax'))
# Compilation step
network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
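As a quick sanity check on the architecture above, we can count its trainable parameters by hand. This is not code from the book, just a back-of-the-envelope sketch assuming the 784-512-10 layout of the listing: each Dense layer holds a weight matrix of shape (inputs, units) plus one bias per unit.

```python
# Parameter count of a Dense layer: one weight per (input, unit) pair,
# plus one bias per unit.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

hidden = dense_params(28 * 28, 512)   # first Dense layer: 784 -> 512
output = dense_params(512, 10)        # softmax output layer: 512 -> 10
print(hidden, output, hidden + output)
```

Calling `network.summary()` after building the model should report the same total.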
Code Listing 2-4
# Prepare the image data
train_images = train_images.reshape((60000, 28*28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28*28))
test_images = test_images.astype('float32') / 255
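The preprocessing above does two things: it flattens each 28x28 image into a 784-dimensional vector, and it rescales the uint8 pixel values from [0, 255] to float32 values in [0, 1]. A minimal sketch of the same transformation on synthetic data (so it runs without downloading MNIST):

```python
import numpy as np

# Four fake 28x28 uint8 "images" standing in for MNIST samples
images = np.random.randint(0, 256, size=(4, 28, 28), dtype=np.uint8)

# Flatten to vectors and rescale to [0, 1], exactly as in the listing
flat = images.reshape((4, 28 * 28)).astype('float32') / 255

print(flat.shape)                          # (4, 784)
print(flat.min() >= 0.0, flat.max() <= 1.0)
```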
Code Listing 2-5-1
# Prepare the labels
# from keras.utils import to_categorical  # import path for older Keras versions
from tensorflow.keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# Train the network
# loss: loss on the training data; accuracy: accuracy on the training data
network.fit(train_images, train_labels, epochs=5, batch_size=128)
Epoch 1/5
60000/60000 [==============================] - 3s 43us/step - loss: 0.0283 - accuracy: 0.9917
Epoch 2/5
60000/60000 [==============================] - 3s 44us/step - loss: 0.0216 - accuracy: 0.9937
Epoch 3/5
60000/60000 [==============================] - 3s 42us/step - loss: 0.0173 - accuracy: 0.9951
Epoch 4/5
60000/60000 [==============================] - 3s 44us/step - loss: 0.0127 - accuracy: 0.9965
Epoch 5/5
60000/60000 [==============================] - 3s 48us/step - loss: 0.0099 - accuracy: 0.9972
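What `to_categorical` does to the labels can be sketched with plain NumPy: each integer label 0-9 becomes a 10-dimensional one-hot row vector, which is the target format `categorical_crossentropy` expects. This is an illustrative reimplementation, not the library's actual code:

```python
import numpy as np

labels = np.array([5, 0, 3])                # three example digit labels

# One-hot encode: a zero matrix with a single 1.0 per row,
# placed at the column given by that row's label
one_hot = np.zeros((len(labels), 10), dtype='float32')
one_hot[np.arange(len(labels)), labels] = 1.0

print(one_hot[0])   # 1.0 at index 5, zeros elsewhere
```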
Code Listing 2-5-2
test_loss, test_acc = network.evaluate(test_images, test_labels)
print('test_acc:', test_acc)
10000/10000 [==============================] - 0s 28us/step
test_acc: 0.9797000288963318
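Beyond the aggregate accuracy, the trained network can classify a single image: `network.predict` returns one softmax vector of 10 probabilities per input, and the predicted digit is the index of the largest entry. Sketched here with a hand-made probability vector rather than real model output:

```python
import numpy as np

# A stand-in for one row of network.predict(test_images):
# ten class probabilities summing to 1
probs = np.array([0.01, 0.02, 0.01, 0.05, 0.02,
                  0.03, 0.01, 0.80, 0.03, 0.02])

predicted_digit = probs.argmax()   # index of the most probable class
print(predicted_digit)             # 7
```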
Code Listing 2-6
Let us take a closer look at the MNIST dataset, which is commonly used for handwritten digit recognition with plain feedforward networks, and examine it using the tensor concepts introduced in this section.
# Load the dataset
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Number of axes (rank) of the tensor
print(train_images.ndim)
# Shape of the tensor
print(train_images.shape)
# Data type of the tensor
print(train_images.dtype)
3
(60000, 28, 28)
uint8
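The three attributes inspected above are not specific to MNIST: `ndim`, `shape`, and `dtype` exist on any NumPy array. A quick sketch on a small hand-built 3D tensor:

```python
import numpy as np

# A rank-3 tensor: 2 "samples", each a 3x4 matrix of uint8 values
x = np.zeros((2, 3, 4), dtype='uint8')

print(x.ndim)    # 3  -> number of axes (rank)
print(x.shape)   # (2, 3, 4)
print(x.dtype)   # uint8
```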
Next, let us look at individual samples in the MNIST dataset.
Taking the digit at index 4 as an example, we display it:
digit = train_images[4]
import matplotlib.pyplot as plt
plt.imshow(digit, cmap=plt.cm.binary)
plt.show()
Tensor slicing
Selecting a specific subset of a tensor's elements with the colon notation is called tensor slicing.
# For a 3D tensor, the same slice can be written in three equivalent ways
# Writing 1
my_slice = train_images[10:100]
print(my_slice.shape)
# Writing 2
my_slice2 = train_images[10:100, :, :]
print(my_slice2.shape)
# Writing 3
my_slice3 = train_images[10:100, 0:28, 0:28]
print(my_slice3.shape)
(90, 28, 28)
(90, 28, 28)
(90, 28, 28)
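Slicing also works along the image axes themselves, not just the sample axis. For example, negative indices count from the end of an axis, so cropping the central 14x14 patch of every image takes 7 pixels off each side. A sketch on synthetic data standing in for MNIST:

```python
import numpy as np

# 100 fake 28x28 images standing in for train_images
images = np.zeros((100, 28, 28), dtype='uint8')

# Keep all images, drop 7 pixels from each border of each image:
# 28 - 7 - 7 = 14 pixels remain per axis
center = images[:, 7:-7, 7:-7]
print(center.shape)   # (100, 14, 14)

# The three writings in the listing above are equivalent
print(images[10:100].shape == images[10:100, :, :].shape)   # True
```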
Closing remarks
Note: the code in this article comes from Deep Learning with Python, uploaded here as electronic notes for study and reference only. The author has run all of it successfully; if anything has been left out, please contact the author.
My knowledge is limited; if there are any errors, corrections are sincerely welcome.
This article is intended solely for learning and exchange among fellow readers, not for any commercial use. If any copyright issue is involved, please contact the author promptly.