Version: TensorFlow 2.0.0rc1
GitHub repo: https://github.com/yang-ze-kang/LeNet5
The MNIST dataset
About MNIST
- handwritten digits 0–9
- 60,000 training images and 10,000 test images
- image size: 28*28
- grayscale (single channel)
Loading MNIST
Method 1: download via the Keras API
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
Method 2: download the data manually and load it from disk
Download URL: https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
import numpy as np

f = np.load("path")  # path to the downloaded mnist.npz
x_train, y_train = f['x_train'], f['y_train']
x_test, y_test = f['x_test'], f['y_test']
f.close()
Check that the import succeeded
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
Result: (60000, 28, 28) (60000,) and (10000, 28, 28) (10000,)
Visualizing MNIST
Required third-party library
import matplotlib.pyplot as plt
Show an image and its label
image_index = 123
print(y_train[image_index])  # label of the selected image
plt.imshow(x_train[image_index], cmap='Greys')  # display the image
plt.show()
Converting the data format
x_train = np.pad(x_train, ((0, 0), (2, 2), (2, 2)), 'constant', constant_values=0)  # pad images from 28*28 to 32*32
x_train = x_train.astype('float32')  # convert the data type
x_train /= 255  # normalize pixel values to [0, 1]
x_train = x_train.reshape(x_train.shape[0], 32, 32, 1)  # add the channel dimension
print(x_train.shape)
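The evaluation section later calls a DataFormat helper that is not defined in this post; a minimal sketch, assuming it simply bundles the padding/scaling/reshaping steps above into one function (the name matches the later call, the implementation is my reconstruction):

```python
import numpy as np

def DataFormat(images):
    """Pad 28x28 grayscale images to 32x32, scale to [0, 1],
    and add a channel dimension -> shape (N, 32, 32, 1)."""
    x = np.pad(images, ((0, 0), (2, 2), (2, 2)), 'constant', constant_values=0)
    x = x.astype('float32') / 255
    return x.reshape(x.shape[0], 32, 32, 1)

# quick check on dummy data shaped like MNIST
dummy = np.zeros((5, 28, 28), dtype='uint8')
print(DataFormat(dummy).shape)  # (5, 32, 32, 1)
```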
TensorFlow model classes
Model
- instantiation
- main methods:
summary: print the model structure
compile: set the optimizer, loss function and metrics
fit: train the model
save: save the trained model
evaluate: evaluate the model
Sequential
- instantiation takes a single argument, layers
- inherits from Model
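Since Sequential inherits from Model, it exposes the same methods listed above. A tiny sketch of the full workflow on dummy data (the layer sizes here are arbitrary, chosen only for illustration):

```python
import numpy as np
import tensorflow as tf

# a Sequential model is itself a Model, so it inherits summary/compile/fit/evaluate
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(units=4, activation='relu', input_shape=(3,)),
    tf.keras.layers.Dense(units=2, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()  # prints layer-by-layer output shapes and parameter counts

x = np.random.rand(8, 3).astype('float32')   # 8 dummy samples
y = np.random.randint(0, 2, size=(8,))       # dummy integer labels
model.fit(x, y, epochs=1, verbose=0)         # one quick training pass
print(model.evaluate(x, y, verbose=0))       # [loss, accuracy]
```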
The convolution layer Conv2D
filters,  # number of convolution kernels
kernel_size,  # kernel size
strides=(1, 1),  # stride
padding='valid',  # 'valid' or 'same'
data_format=None,  # defaults to channels_last
activation,  # activation function
The pooling layer AveragePooling2D
pool_size,  # must be set
strides
padding
data_format
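The spatial output size implied by kernel_size, strides and padding can be checked with simple arithmetic. A small sketch following the Keras rules ('valid': floor((n - k)/s) + 1; 'same': ceil(n/s)), traced through the LeNet layer sizes used below:

```python
import math

def conv_out(size, kernel, stride=1, padding='valid'):
    """Spatial output size of a conv/pool layer for the two Keras padding modes."""
    if padding == 'valid':
        return math.floor((size - kernel) / stride) + 1
    return math.ceil(size / stride)  # 'same'

# LeNet on a 32x32 input: conv 5x5 valid -> pool 2x2 -> conv 5x5 valid -> pool 2x2
s = conv_out(32, 5)             # 28
s = conv_out(s, 2, 2, 'same')   # 14
s = conv_out(s, 5)              # 10
s = conv_out(s, 2, 2, 'same')   # 5
print(s)
```

Note that pooling layers default their strides to pool_size, which is why the 2x2 pools above use stride 2.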
The LeNet model
Model structure
Building the model
Using the Model subclass
class LeNet(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # layers
        self.conv_layer_1 = tf.keras.layers.Conv2D(
            filters=6,
            kernel_size=(5, 5),
            padding='valid',
            activation=tf.nn.relu)
        self.pool_layer_1 = tf.keras.layers.MaxPool2D(
            pool_size=(2, 2),
            padding='same')
        self.conv_layer_2 = tf.keras.layers.Conv2D(
            filters=16,
            kernel_size=(5, 5),
            padding='valid',
            activation=tf.nn.relu)
        self.pool_layer_2 = tf.keras.layers.MaxPool2D(
            pool_size=(2, 2),
            padding='same')
        self.flatten = tf.keras.layers.Flatten()
        self.fc_layer_1 = tf.keras.layers.Dense(
            units=120,
            activation=tf.nn.relu)
        self.fc_layer_2 = tf.keras.layers.Dense(
            units=84,
            activation=tf.nn.relu)
        self.output_layer = tf.keras.layers.Dense(
            units=10,
            activation=tf.nn.softmax)  # softmax (not relu): the loss used later expects class probabilities

    def call(self, inputs):
        x = self.conv_layer_1(inputs)
        x = self.pool_layer_1(x)
        x = self.conv_layer_2(x)
        x = self.pool_layer_2(x)
        x = self.flatten(x)
        x = self.fc_layer_1(x)
        x = self.fc_layer_2(x)
        return self.output_layer(x)
Using Sequential
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters=6, kernel_size=(5, 5), padding='valid', activation=tf.nn.relu, input_shape=(32, 32, 1)),
    tf.keras.layers.MaxPool2D(pool_size=(2, 2), padding='same'),
    tf.keras.layers.Conv2D(filters=16, kernel_size=(5, 5), padding='valid', activation=tf.nn.relu),  # 16 kernels, matching the subclass version
    tf.keras.layers.MaxPool2D(pool_size=(2, 2), padding='same'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(units=120, activation=tf.nn.relu),
    tf.keras.layers.Dense(units=84, activation=tf.nn.relu),
    tf.keras.layers.Dense(units=10, activation=tf.nn.softmax)  # softmax output for class probabilities
])
Model summary output
Training and prediction
Training
Hyperparameters
epochs -> number of passes over the training data
batch_size -> mini-batch size for mini-batch gradient descent
learning_rate -> learning rate
Optimizer: Adam
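Adam adapts the step size per parameter from running estimates of the gradient's first and second moments. A minimal NumPy sketch of a single Adam update (default betas as in Keras; for illustration only, not how the optimizer is implemented internally):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-7):
    """One Adam update: moment estimates, bias correction, parameter step."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (mean of squared gradients)
    m_hat = m / (1 - beta1 ** t)              # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

p, m, v = adam_step(1.0, grad=2.0, m=0.0, v=0.0, t=1)
print(p)  # ~0.99: the very first step has magnitude ~lr regardless of gradient scale
```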
Source code
#------------------------------ Training ---------------------------------
import datetime

# hyperparameters
num_epochs = 10
batch_size = 64
learning_rate = 0.01
# optimizer
adam_optimizer = tf.keras.optimizers.Adam(learning_rate)
# compile
model.compile(optimizer=adam_optimizer,
              loss=tf.keras.losses.sparse_categorical_crossentropy,
              metrics=['accuracy'])
# train
start_time = datetime.datetime.now()
model.fit(x=x_train,
          y=y_train,
          batch_size=batch_size,
          epochs=num_epochs)
end_time = datetime.datetime.now()
time_cost = end_time - start_time
print('time_cost = ', time_cost)
Saving/loading the model
# save / load the model
model.save('lenet.h5')
# model = tf.keras.models.load_model('lenet.h5')
Evaluation
#------------------------------ Evaluation ---------------------------------
x_test = DataFormat(x_test)  # apply the same 32*32 padding/normalization used for x_train
print(x_test.shape)
print(model.evaluate(x_test, y_test))
预测
#------------------------------【预测】---------------------------------
image_index = 2333
print(x_test[image_index].shape)
plt.imshow(x_test[image_index].reshape(32, 32), cmap='Greys')
plt.show()
pred = model.predict(x_test[image_index].reshape(1, 32, 32, 1))
print(pred.argmax())