Understanding CNN LeNet-5 Step by Step


Preface

LeNet-5 comes from the paper Gradient-Based Learning Applied to Document Recognition; it is a highly efficient convolutional neural network for handwritten character recognition.

(Figure: the process of LeNet-5 recognizing the digit 3.)

1. INPUT Layer

First comes the data INPUT layer: the input images are uniformly normalized to 32*32.
Note: this layer is not counted as part of LeNet-5's network structure; traditionally, the input layer is not regarded as one of the network's layers.
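In the original LeNet-5, the 28x28 MNIST digits are zero-padded to 32x32 before entering the network. A minimal sketch of that preprocessing step (my own illustration; the TensorFlow code below skips it and feeds 28x28 images directly):

import numpy as np

img = np.zeros((28, 28), dtype=np.float32)               # stand-in for one MNIST image
padded = np.pad(img, ((2, 2), (2, 2)), mode="constant")  # add 2 pixels of zeros on each side
print(padded.shape)                                      # (32, 32)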

2. C1 Layer - Convolutional Layer

Output feature map size: (32-5+1) x (32-5+1) = 28 x 28

# First convolutional layer
# This opens the variable scope C1-conv; it holds two variables, conv1_weights and conv1_biases
    with tf.variable_scope("C1-conv", reuse=reuse):
        # tf.get_variable enables variable sharing
        # In [5, 5, 1, 32], the first two numbers are the kernel size, the third is the number of
        # input channels, and the last is the number of kernels, i.e. how many kinds of features this layer extracts
        conv1_weights = tf.get_variable("weight", [5, 5, 1, 32],
                                        initializer=tf.truncated_normal_initializer(stddev=0.1))  # stddev of the truncated normal distribution
        # tf.constant_initializer initializes to a constant
        conv1_biases = tf.get_variable("bias", [32], initializer=tf.constant_initializer(0.0))
        # strides: all 1s means the kernel slides over every pixel without skipping any
        # padding="SAME" keeps the output the same spatial size as the input; the activation is relu()
        # padding controls how borders are handled
        conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding="SAME")
        relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases))
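As a side note, the paper's C1 applies a valid 5x5 convolution to a 32x32 input, which is where (32-5+1) = 28 comes from, while the snippet above uses padding="SAME" on 28x28 MNIST images, so its output is also 28x28. A small helper for the size formula (my own sketch, not part of the original code):

def conv_output_size(input_size, kernel_size, padding="VALID", stride=1):
    # VALID: (W - K) // stride + 1; SAME: ceil(W / stride), independent of the kernel size
    if padding == "VALID":
        return (input_size - kernel_size) // stride + 1
    return (input_size + stride - 1) // stride

print(conv_output_size(32, 5))                    # 28, the figure quoted above
print(conv_output_size(28, 5, padding="SAME"))    # 28, the shape produced by the code above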
3. S2 Layer - Pooling Layer (Subsampling)

Output feature map size: (28/2) x (28/2) = 14 x 14

# First pooling layer
    # tf.name_scope mainly makes parameter naming easier to manage; used with tf.Variable() it simplifies naming
    # ksize: the pooling window size, a 4-D vector, usually [1, height, width, 1],
    # because we do not want to pool over the batch or channel dimensions
    # strides: the stride of the window along each dimension, usually [1, stride, stride, 1]
    with tf.name_scope("S2-max_pool"):
        pool1 = tf.nn.max_pool(relu1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
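A quick static-shape check (my own sketch, TensorFlow 1.x) showing that a 2x2 max-pool with stride 2 halves the spatial dimensions while leaving the batch and channel dimensions untouched:

import tensorflow as tf

feature_map = tf.placeholder(tf.float32, [None, 28, 28, 32])
pooled = tf.nn.max_pool(feature_map, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
print(pooled.get_shape().as_list())   # [None, 14, 14, 32]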
4. C3 Layer - Convolutional Layer

Output feature map size: (14-5+1) x (14-5+1) = 10 x 10

# Second convolutional layer
    with tf.variable_scope("C3-conv", reuse=reuse):
        conv2_weights = tf.get_variable("weight", [5, 5, 32, 64],
                                        initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv2_biases = tf.get_variable("bias", [64], initializer=tf.constant_initializer(0.0))
        conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding="SAME")
        relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases))
     
5. S4 Layer - Pooling Layer (Subsampling)

Output feature map size: (10/2) x (10/2) = 5 x 5

# Second pooling layer
    with tf.name_scope("S4-max_pool"):
        pool2 = tf.nn.max_pool(relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
        # get_shape() returns this layer's dimensions; because every layer's input and output is a batch
        # of matrices, the returned shape also includes the number of examples in the batch
        # shape[0] is the batch size, shape[1] and shape[2] are the spatial dimensions, shape[3] is the depth
        # reshape() has the signature reshape(tensor, shape, name)
        shape = pool2.get_shape().as_list()
        nodes = shape[1] * shape[2] * shape[3]    # nodes = 3136
        reshaped = tf.reshape(pool2, [shape[0], nodes])
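The nodes = 3136 value follows from the SAME-padding pipeline this script actually builds: 28x28 input -> 28x28 after C1 -> 14x14 after S2 -> 14x14 after C3 -> 7x7 after S4, with 64 channels. A short arithmetic check (my own sketch):

height = width = 28
for _ in range(2):                     # two stages of (SAME conv, stride 1) + (2x2 max-pool, stride 2)
    height, width = height // 2, width // 2
channels = 64
print(height * width * channels)       # 3136, matching the `nodes` comment above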
       
6. C5 Layer - Convolutional Layer
# First fully connected layer (C5 is a convolutional layer in the paper, but after flattening it is implemented here as a fully connected layer)
    with tf.variable_scope("layer5-full1",reuse=resuse):
        Full_connection1_weights = tf.get_variable("weight", [nodes, 512],
                                      initializer=tf.truncated_normal_initializer(stddev=0.1))
        # if regularizer != None:
        tf.add_to_collection("losses", regularizer(Full_connection1_weights))
        Full_connection1_biases = tf.get_variable("bias", [512],
                                                     initializer=tf.constant_initializer(0.1))
        if avg_class ==None:
            Full_1 = tf.nn.relu(tf.matmul(reshaped, Full_connection1_weights) + \
                                                                   Full_connection1_biases)
        else:
            Full_1 = tf.nn.relu(tf.matmul(reshaped, avg_class.average(Full_connection1_weights))
                                                   + avg_class.average(Full_connection1_biases))
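The avg_class branch reads the weights through their exponential moving averages, which the training script builds with tf.train.ExponentialMovingAverage. A minimal, self-contained sketch (my own toy example, not part of the original script) of what avg_class.average() returns:

import tensorflow as tf

w = tf.Variable(10.0, name="w")
step = tf.Variable(0, trainable=False)
ema = tf.train.ExponentialMovingAverage(0.99, step)
maintain_avg_op = ema.apply([w])   # creates a shadow variable and an op that updates it
shadow_w = ema.average(w)          # the moving-average value read at inference time

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.assign(w, 5.0))
    sess.run(maintain_avg_op)
    print(sess.run([w, shadow_w]))  # [5.0, 5.5]: the shadow value lags behind the raw weight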
  
7. F6 Layer - Fully Connected Layer
# Second fully connected layer
    with tf.variable_scope("layer6-full2",reuse=resuse):
        Full_connection2_weights = tf.get_variable("weight", [512, 10],
                                      initializer=tf.truncated_normal_initializer(stddev=0.1))
        # if regularizer != None:
        tf.add_to_collection("losses", regularizer(Full_connection2_weights))
        Full_connection2_biases = tf.get_variable("bias", [10],
                                                   initializer=tf.constant_initializer(0.1))
        if avg_class == None:
            result = tf.matmul(Full_1, Full_connection2_weights) + Full_connection2_biases
        else:
            result = tf.matmul(Full_1, avg_class.average(Full_connection2_weights)) + \
                                                  avg_class.average(Full_connection2_biases)
    return 
8. Output Layer - Fully Connected Layer

The Output layer is also a fully connected layer with 10 nodes, representing the digits 0 through 9; if the value of node i is 0 (closest to 0), the network's recognition result is the digit i. It uses Radial Basis Function (RBF) connections. Let x be the input from the previous layer and y the RBF output; the i-th RBF output is computed as y_i = sum_j (x_j - w_ij)^2.
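A small numpy sketch of that RBF output (my own illustration; note that the TensorFlow code below instead ends with a plain fully connected layer trained with softmax cross-entropy):

import numpy as np

x = np.random.rand(84)             # F6 in the original LeNet-5 has 84 units
W = np.random.rand(10, 84)         # one weight vector (a digit template) per output class
y = np.sum((x - W) ** 2, axis=1)   # y[i] is the squared distance between x and class i's template
print(np.argmin(y))                # predicted digit: the class whose output is closest to 0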

Complete Code
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

# Download the dataset to the current directory and create the folder MNIST_data
# The object returned by input_data.read_data_sets automatically splits MNIST into train, validation and test sets

mnist = input_data.read_data_sets("./MNIST_data", one_hot=True)

batch_size = 100 
learning_rate = 0.01
learning_rate_decay = 0.99
max_steps = 30000

# Forward pass: builds the LeNet-style network from input_tensor to the 10-way output
def hidden_layer(input_tensor, regularizer, avg_class, reuse):
    # First convolutional layer
    # This opens the variable scope C1-conv; it holds two variables, conv1_weights and conv1_biases
    with tf.variable_scope("C1-conv", reuse=reuse):
        # tf.get_variable enables variable sharing
        # In [5, 5, 1, 32], the first two numbers are the kernel size, the third is the number of
        # input channels, and the last is the number of kernels, i.e. how many kinds of features this layer extracts
        conv1_weights = tf.get_variable("weight", [5, 5, 1, 32],
                                        initializer=tf.truncated_normal_initializer(stddev=0.1))  # stddev of the truncated normal distribution
        # tf.constant_initializer initializes to a constant
        conv1_biases = tf.get_variable("bias", [32], initializer=tf.constant_initializer(0.0))
        # strides: all 1s means the kernel slides over every pixel without skipping any
        # padding="SAME" keeps the output the same spatial size as the input; the activation is relu()
        # padding controls how borders are handled
        conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding="SAME")
        relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases))
        
    # First pooling layer
    # tf.name_scope mainly makes parameter naming easier to manage; used with tf.Variable() it simplifies naming
    # ksize: the pooling window size, a 4-D vector, usually [1, height, width, 1],
    # because we do not want to pool over the batch or channel dimensions
    # strides: the stride of the window along each dimension, usually [1, stride, stride, 1]
    with tf.name_scope("S2-max_pool"):
        pool1 = tf.nn.max_pool(relu1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
       
    # Second convolutional layer
    with tf.variable_scope("C3-conv", reuse=reuse):
        conv2_weights = tf.get_variable("weight", [5, 5, 32, 64],
                                        initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv2_biases = tf.get_variable("bias", [64], initializer=tf.constant_initializer(0.0))
        conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding="SAME")
        relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases))
        
    # Second pooling layer
    with tf.name_scope("S4-max_pool"):
        pool2 = tf.nn.max_pool(relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
        # get_shape() returns this layer's dimensions; because every layer's input and output is a batch
        # of matrices, the returned shape also includes the number of examples in the batch
        # shape[0] is the batch size, shape[1] and shape[2] are the spatial dimensions, shape[3] is the depth
        # reshape() has the signature reshape(tensor, shape, name)
        shape = pool2.get_shape().as_list()
        nodes = shape[1] * shape[2] * shape[3]    # nodes = 3136
        reshaped = tf.reshape(pool2, [shape[0], nodes])
        
    # First fully connected layer
    with tf.variable_scope("layer5-full1", reuse=reuse):
        Full_connection1_weights = tf.get_variable("weight", [nodes, 512],
                                                   initializer=tf.truncated_normal_initializer(stddev=0.1))
        # add the L2 regularization loss of the weights to the "losses" collection
        tf.add_to_collection("losses", regularizer(Full_connection1_weights))
        Full_connection1_biases = tf.get_variable("bias", [512],
                                                  initializer=tf.constant_initializer(0.1))
        if avg_class is None:
            Full_1 = tf.nn.relu(tf.matmul(reshaped, Full_connection1_weights) +
                                Full_connection1_biases)
        else:
            Full_1 = tf.nn.relu(tf.matmul(reshaped, avg_class.average(Full_connection1_weights))
                                + avg_class.average(Full_connection1_biases))
    # Second fully connected layer
    with tf.variable_scope("layer6-full2", reuse=reuse):
        Full_connection2_weights = tf.get_variable("weight", [512, 10],
                                                   initializer=tf.truncated_normal_initializer(stddev=0.1))
        # add the L2 regularization loss of the weights to the "losses" collection
        tf.add_to_collection("losses", regularizer(Full_connection2_weights))
        Full_connection2_biases = tf.get_variable("bias", [10],
                                                  initializer=tf.constant_initializer(0.1))
        if avg_class is None:
            result = tf.matmul(Full_1, Full_connection2_weights) + Full_connection2_biases
        else:
            result = tf.matmul(Full_1, avg_class.average(Full_connection2_weights)) + \
                     avg_class.average(Full_connection2_biases)
    return result
    
x = tf.placeholder(tf.float32, [batch_size, 28, 28, 1], name="x-input")
y_ = tf.placeholder(tf.float32, [None, 10], name="y-input")
# L2 regularization is a way to reduce overfitting
regularizer = tf.contrib.layers.l2_regularizer(0.0001)
# Call the CNN function defined above
y = hidden_layer(x, regularizer, avg_class=None, reuse=False)
# Variable that stores the number of training steps
training_step = tf.Variable(0, trainable=False)
# Speeds up the update of the averages early in training
variable_averages = tf.train.ExponentialMovingAverage(0.99, training_step)
variable_averages_op = variable_averages.apply(tf.trainable_variables())
average_y = hidden_layer(x, regularizer, variable_averages, reuse=True)
# Use cross-entropy as the loss function
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
# Average the cross-entropy over all examples in the current batch
cross_entropy_mean = tf.reduce_mean(cross_entropy)
# The total loss is the cross-entropy loss plus the regularization loss
loss = cross_entropy_mean + tf.add_n(tf.get_collection('losses'))
# Exponentially decaying learning rate
learning_rate = tf.train.exponential_decay(learning_rate, training_step,
                                           mnist.train.num_examples / batch_size,
                                           learning_rate_decay, staircase=True)
# Optimize the loss with tf.train.GradientDescentOptimizer
train_step = tf.train.GradientDescentOptimizer(learning_rate). \
    minimize(loss, global_step=training_step)
with tf.control_dependencies([train_step, variable_averages_op]):
    train_op = tf.no_op(name='train')
correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Initialize the session and start the training process
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(max_steps):
        if i % 1000 == 0:
            x_val, y_val = mnist.validation.next_batch(batch_size)
            reshaped_x2 = np.reshape(x_val, (batch_size, 28, 28, 1))
            validate_feed = {x: reshaped_x2, y_: y_val}
            validate_accuracy = sess.run(accuracy, feed_dict=validate_feed)
            print("After %d training step(s), validation accuracy "
                  "using the average model is %g%%" % (i, validate_accuracy * 100))
        x_train, y_train = mnist.train.next_batch(batch_size)
        reshaped_xs = np.reshape(x_train, (batch_size, 28, 28, 1))
        sess.run(train_op, feed_dict={x: reshaped_xs, y_: y_train})
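For reference, the staircase schedule produced by tf.train.exponential_decay above can be reproduced with plain Python (a sketch; it assumes the usual 55000-example training split of this MNIST reader):

def staircase_decay(base_lr, global_step, decay_steps, decay_rate):
    # the learning rate is multiplied by decay_rate once per full pass over the training data
    return base_lr * decay_rate ** (global_step // decay_steps)

decay_steps = 55000 // 100   # mnist.train.num_examples / batch_size
for step in (0, 550, 1100, 5500):
    print(step, staircase_decay(0.01, step, decay_steps, 0.99))
# 0.01 -> 0.0099 -> 0.009801 -> ... the rate drops by 1% after each epoch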