LeNet-5: Gradient-Based Learning Applied to Document Recognition
LeNet-5 (1998) is a classic CNN, but the model did not take off at the time, mainly because both compute power and data were insufficient. Moreover, SVMs of that era could match or even exceed neural models at an affordable computational cost. Although CNNs received little attention then, that did not diminish their underlying power.
The figure below shows the LeNet architecture:
LeNet consists of two convolutional layers, two pooling layers, and three fully connected layers.
Below is a LeNet-5 model written with TensorFlow and Keras:
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense

def inference(inputs,
              num_classes=10,
              is_training=True,
              dropout_keep_prob=0.5):
    '''
    inputs: a tensor of images
    num_classes: the number of categories
    is_training: set True when used for training
    dropout_keep_prob: the keep probability for dropout during training
    '''
    x = inputs
    # conv1
    x = Conv2D(6, [5, 5], 1, activation='relu', name='conv1')(x)
    # pool1
    x = MaxPool2D([2, 2], 2, name='pool1')(x)
    # conv2
    x = Conv2D(16, [5, 5], 1, activation='relu', name='conv2')(x)
    # pool2
    x = MaxPool2D([2, 2], 2, name='pool2')(x)
    # flatten before the fully connected layers
    x = Flatten(name='flatten')(x)
    # fc1
    x = Dense(120, activation='relu', name='fc1')(x)
    # fc2
    x = Dense(84, activation='relu', name='fc2')(x)
    # Keras Dropout takes the drop rate, i.e. 1 - keep probability
    x = Dropout(1 - dropout_keep_prob, name='dropout')(x, training=is_training)
    # fc3: final logits, one per class
    x = Dense(num_classes, name='fc3')(x)
    return x
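For comparison, the same two-conv, two-pool, three-fc architecture can also be expressed compactly with the Keras Sequential API. This is a minimal self-contained sketch, assuming 32x32 single-channel inputs (the input size used in the original LeNet-5 paper); the layer names and a fixed dropout rate of 0.5 are illustrative choices, not part of the original code.

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense, Dropout

# LeNet-5 as a Sequential model: two conv, two pool, three fully
# connected layers, assuming 32x32 grayscale images (one channel).
lenet = Sequential([
    tf.keras.Input(shape=(32, 32, 1)),
    Conv2D(6, 5, activation='relu', name='conv1'),    # -> 28x28x6
    MaxPool2D(2, name='pool1'),                       # -> 14x14x6
    Conv2D(16, 5, activation='relu', name='conv2'),   # -> 10x10x16
    MaxPool2D(2, name='pool2'),                       # -> 5x5x16
    Flatten(name='flatten'),                          # -> 400
    Dense(120, activation='relu', name='fc1'),
    Dense(84, activation='relu', name='fc2'),
    Dropout(0.5, name='dropout'),                     # assumed rate
    Dense(10, name='fc3'),                            # logits, 10 classes
])

# A forward pass on a dummy batch yields one logit vector per image.
out = lenet(tf.zeros([4, 32, 32, 1]))
print(out.shape)
```

Running the forward pass on a batch of four dummy images produces an output of shape (4, 10), one row of class logits per image.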