A CNN model for MNIST handwritten digit recognition

Layers and input/output of the CNN model

After a few epochs of training, the recognition accuracy on the test set reaches about 99%.

Complete code
(1) Input layer: unlike TensorFlow 1, there is no need to specify a batch_size.
(2) Convolutional layers: TensorFlow 2 ships a Conv2D layer class that performs the convolution; each layer is created by specifying the number of filters, the kernel size, the padding mode, and the activation function.
(3) MaxPool and BatchNormalization layers: max pooling downsamples the feature maps, while batch normalization normalizes the activations; together they help reduce overfitting and improve the model's generalization.
(4) Training and evaluation use the compile, fit, and evaluate methods of the Keras API.
(5) Reading the dataset uses feature, parser_example_(x), dataset.shuffle, etc., to parse the TFRecord files into a tf.data dataset.
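The per-record transform that parser_example_ applies can be sketched outside of TensorFlow: reshape the flat 784-value feature to 28x28x1 and rescale pixels from [0, 255] to [0, 1]. A minimal NumPy mirror of that step (the helper name `preprocess` is illustrative, not part of the script below):

```python
import numpy as np

def preprocess(flat_pixels):
    """Mirror of parser_example_: reshape a flat 784-vector to 28x28x1
    and scale pixel values from [0, 255] down to [0, 1]."""
    img = np.asarray(flat_pixels, dtype=np.float32).reshape(28, 28, 1)
    return img / 255.0

# A fake record standing in for one parsed 'xtrain' feature
record = np.arange(784, dtype=np.float32) % 256
img = preprocess(record)
print(img.shape)   # (28, 28, 1)
```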
import tensorflow as tf

# Parse the training and test sets from TFRecord files
feature = {
    'xtrain': tf.io.FixedLenFeature([784], tf.float32),
    'ytrain': tf.io.FixedLenFeature([10], tf.float32),
}

def parser_example_(x):
    # Parse one serialized example, reshape the flat image to 28x28x1,
    # and scale pixel values to [0, 1]
    x = tf.io.parse_single_example(x, feature)
    x['xtrain'] = tf.reshape(x['xtrain'], [28, 28, 1])
    return x['xtrain'] / 255, x['ytrain']

feature1 = {
    'xtest': tf.io.FixedLenFeature([784], tf.float32),
    'ytest': tf.io.FixedLenFeature([10], tf.float32),
}

def parser_example_1(x):
    x = tf.io.parse_single_example(x, feature1)
    x['xtest'] = tf.reshape(x['xtest'], [28, 28, 1])
    return x['xtest'] / 255, x['ytest']

dataset = tf.data.TFRecordDataset("mnist_train.tfrecords").map(parser_example_)
data_set = tf.data.TFRecordDataset("mnist_test.tfrecords").map(parser_example_1)
dataset = dataset.shuffle(buffer_size=60000).batch(250)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
data_set = data_set.shuffle(buffer_size=10000).batch(250)
num_epoch = 3

# Build the model with the Keras functional API
inputs = tf.keras.Input(shape=(28, 28, 1), name='input')
conv = tf.keras.layers.Conv2D(32, 3, padding="same", activation='relu', name='conv1')(inputs)
conv = tf.keras.layers.BatchNormalization()(conv)
conv = tf.keras.layers.Conv2D(64, 3, padding="same", activation='relu', name='conv2')(conv)
conv = tf.keras.layers.MaxPool2D(strides=[1, 1])(conv)  # default 2x2 pool, stride 1
conv = tf.keras.layers.Conv2D(128, 3, padding="same", activation='relu', name='conv3')(conv)
flat = tf.keras.layers.Flatten()(conv)
dense = tf.keras.layers.Dense(512, activation='relu', name='dense')(flat)
out = tf.keras.layers.Dense(10, activation='softmax', name='dense2')(dense)
model = tf.keras.Model(inputs=inputs, outputs=out)
model.summary()  # summary() prints itself; wrapping it in print() would add a stray "None"

# Compile, train, and evaluate the model
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss=tf.keras.losses.categorical_crossentropy,
    metrics=[tf.keras.metrics.categorical_accuracy],
)
model.fit(dataset, epochs=num_epoch)
print(model.evaluate(data_set))
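As a sanity check on the shapes reported by model.summary(): with strides=[1,1] the default 2x2 max pool shrinks each spatial dimension by only one, so Flatten sees 27x27x128 = 93312 values. A small sketch of that arithmetic (the helper names are illustrative, not part of the script above):

```python
def conv_out(size, kernel, stride=1, same=True):
    """Output spatial size of a conv layer; 'same' padding preserves the size."""
    return size if same else (size - kernel) // stride + 1

def pool_out(size, pool=2, stride=1):
    """Output spatial size of max pooling with the given window and stride."""
    return (size - pool) // stride + 1

s = 28                 # input is 28x28
s = conv_out(s, 3)     # conv1, 'same' padding -> 28
s = conv_out(s, 3)     # conv2 -> 28
s = pool_out(s, 2, 1)  # MaxPool2D(strides=[1, 1]), default 2x2 window -> 27
s = conv_out(s, 3)     # conv3 -> 27
flat = s * s * 128     # Flatten over 128 channels
print(flat)            # 93312
```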