Learning the LeNet-5 Network on MNIST: A TensorFlow 2.0 Hands-On Walkthrough

Contents

1. Import the required libraries
2. Load and preprocess the data
3. Build the LeNet-5 network model
4. Compile the model
5. Train the model
6. Evaluate on the test set
7. Complete code


1. Import the required libraries

import tensorflow as tf
from tensorflow import keras

2. Load and preprocess the data

MNIST is loaded via tf.keras.datasets, pixel values are scaled to [0, 1], labels are one-hot encoded, and both splits are wrapped in tf.data pipelines that shuffle, batch, and map the preprocess function.

batch = 128  # batch size used for both training and evaluation

def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.   # scale pixels from [0, 255] to [0, 1]
    x = tf.reshape(x, [-1, 28, 28, 1])        # add a channel dimension: (N, 28, 28) -> (N, 28, 28, 1)
    y = tf.one_hot(y, depth=10)               # one-hot encode labels for CategoricalCrossentropy
    return x, y

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# training pipeline: shuffle, batch, then preprocess each batch
train_db = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_db = train_db.shuffle(10000)
train_db = train_db.batch(batch)
train_db = train_db.map(preprocess)

# test pipeline: same preprocessing (shuffling the test set is optional)
test_db = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_db = test_db.shuffle(10000)
test_db = test_db.batch(batch)
test_db = test_db.map(preprocess)
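
A quick sanity check (not in the original post, but handy) is to pull one batch from the pipeline and confirm that its shapes and value range match what the model expects:

# Optional sanity check: inspect one preprocessed batch.
xb, yb = next(iter(train_db))
print(xb.shape, yb.shape)                                   # expected: (128, 28, 28, 1) (128, 10)
print(float(tf.reduce_min(xb)), float(tf.reduce_max(xb)))   # pixel range should be ~[0, 1]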

3. Build the LeNet-5 network model

The model follows the classic LeNet-5 layout: two convolution/pooling stages followed by fully connected layers of 120, 84, and 10 units, here with 3x3 kernels, ReLU activations, and a softmax output.

model = tf.keras.Sequential([
    keras.layers.Conv2D(6, 3),                        # 6 filters, 3x3 kernel, 'valid' padding
    keras.layers.MaxPool2D(pool_size=2, strides=2),
    keras.layers.ReLU(),
    keras.layers.Conv2D(16, 3),                       # 16 filters, 3x3 kernel
    keras.layers.MaxPool2D(pool_size=2, strides=2),
    keras.layers.ReLU(),
    keras.layers.Flatten(),
    keras.layers.Dense(120, activation='relu'),
    keras.layers.Dense(84, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),     # 10-class output
])
model.build(input_shape=(batch, 28, 28, 1))
model.summary()
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_4 (Conv2D)            multiple                  60        
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 multiple                  0         
_________________________________________________________________
re_lu_4 (ReLU)               multiple                  0         
_________________________________________________________________
conv2d_5 (Conv2D)            multiple                  880       
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 multiple                  0         
_________________________________________________________________
re_lu_5 (ReLU)               multiple                  0         
_________________________________________________________________
flatten_2 (Flatten)          multiple                  0         
_________________________________________________________________
dense_6 (Dense)              multiple                  48120     
_________________________________________________________________
dense_7 (Dense)              multiple                  10164     
_________________________________________________________________
dense_8 (Dense)              multiple                  850       
=================================================================
Total params: 60,074
Trainable params: 60,074
Non-trainable params: 0
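
As a cross-check on the summary above, the parameter counts can be reproduced by hand. The arithmetic below is an illustrative sketch and assumes the default 'valid' padding of Conv2D, so the feature maps shrink 28 → 26 → 13 → 11 → 5:

# Reproduce the parameter counts reported by model.summary().
conv1 = (3 * 3 * 1 + 1) * 6       # 60    : 3x3 kernel, 1 input channel, 6 filters (+ bias)
conv2 = (3 * 3 * 6 + 1) * 16      # 880   : 3x3 kernel, 6 input channels, 16 filters (+ bias)
flat  = 5 * 5 * 16                # 400 features reach the first Dense layer
fc1   = (flat + 1) * 120          # 48120
fc2   = (120 + 1) * 84            # 10164
fc3   = (84 + 1) * 10             # 850
print(conv1 + conv2 + fc1 + fc2 + fc3)   # 60074, matching "Total params"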

4. Compile the model

The model is compiled with the Adam optimizer, categorical cross-entropy as the loss (the labels were one-hot encoded in preprocess), and accuracy as the reported metric.

model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])
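
CategoricalCrossentropy matches the one-hot labels produced in preprocess. As a side note (an alternative, not what the original code does), Keras also provides a sparse variant that works directly with integer labels, in which case the tf.one_hot call could be dropped:

# Alternative sketch: keep integer labels and use the sparse loss instead.
model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])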

5. Train the model

Training runs for 5 epochs; with a batch size of 128 that is 469 steps per epoch over the 60,000 training images.

model.fit(train_db, epochs=5)
Epoch 1/5
469/469 [==============================] - 8s 17ms/step - loss: 0.2936 - accuracy: 0.9150
Epoch 2/5
469/469 [==============================] - 7s 14ms/step - loss: 0.0865 - accuracy: 0.9738
Epoch 3/5
469/469 [==============================] - 7s 14ms/step - loss: 0.0636 - accuracy: 0.9801
Epoch 4/5
469/469 [==============================] - 7s 15ms/step - loss: 0.0502 - accuracy: 0.9843
Epoch 5/5
469/469 [==============================] - 7s 14ms/step - loss: 0.0412 - accuracy: 0.9870
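
The run above only reports training metrics. If you want to track test-set performance after every epoch, fit also accepts a validation dataset; this is an optional variation, not part of the original code:

# Optional variation: monitor the test set at the end of each epoch.
history = model.fit(train_db, epochs=5, validation_data=test_db)
print(history.history['val_accuracy'])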

6. Evaluate on the test set

After five epochs the network reaches about 98.8% accuracy on the 10,000 held-out test images.

model.evaluate(test_db)
79/79 [==============================] - 1s 7ms/step - loss: 0.0391 - accuracy: 0.9884

evaluate returns the loss followed by the metric values, here [loss, accuracy]:

[0.03909292883723031, 0.9884]
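
Beyond the aggregate accuracy, individual predictions can be inspected. A minimal sketch, assuming the test_db pipeline defined above:

# Minimal prediction sketch: compare predicted vs. true digits for one test batch.
xb, yb = next(iter(test_db))
probs = model.predict(xb)            # (batch, 10) softmax outputs
pred = tf.argmax(probs, axis=1)
true = tf.argmax(yb, axis=1)         # labels were one-hot encoded in preprocess
print(pred[:10].numpy(), true[:10].numpy())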

7. Complete code

import tensorflow as tf
from tensorflow import keras

batch = 128  # batch size for training and evaluation

def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.   # scale pixels to [0, 1]
    x = tf.reshape(x, [-1, 28, 28, 1])        # add channel dimension
    y = tf.one_hot(y, depth=10)               # one-hot encode labels
    return x, y

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

train_db = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_db = train_db.shuffle(10000)
train_db = train_db.batch(batch)
train_db = train_db.map(preprocess)

test_db = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_db = test_db.shuffle(10000)
test_db = test_db.batch(batch)
test_db = test_db.map(preprocess)

model = tf.keras.Sequential([
    keras.layers.Conv2D(6, 3),
    keras.layers.MaxPool2D(pool_size=2, strides=2),
    keras.layers.ReLU(),
    keras.layers.Conv2D(16, 3),
    keras.layers.MaxPool2D(pool_size=2, strides=2),
    keras.layers.ReLU(),
    keras.layers.Flatten(),
    keras.layers.Dense(120, activation='relu'),
    keras.layers.Dense(84, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])
model.build(input_shape=(batch, 28, 28, 1))
model.summary()

model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])

model.fit(train_db, epochs=5)
model.evaluate(test_db)

 
