
LeNet-5: Analysis and Understanding

Overview

LeNet-5 is a classic convolutional neural network (CNN) designed by Yann LeCun in 1998 for handwritten digit recognition, and it is one of the most representative early CNN systems.

LeNet-5 has 7 layers in total: two convolutional layers, two subsampling (pooling) layers, and three fully connected layers.

Layer-by-Layer Overview

(Note: formula for the output size after a convolution)
$\text{output} = \frac{N + 2 \times \text{pad} - F}{\text{stride}} + 1$
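
As a quick sanity check, here is a small helper (hypothetical, not part of the original post's code) that applies this formula to the feature-map sizes used in the layers below:

    def conv_output_size(n, f, pad=0, stride=1):
        # output = (N + 2*pad - F) / stride + 1
        return (n + 2 * pad - f) // stride + 1

    print(conv_output_size(32, 5))             # C1: 32 -> 28
    print(conv_output_size(28, 2, stride=2))   # S2: 28 -> 14
    print(conv_output_size(14, 5))             # C3: 14 -> 10
    print(conv_output_size(10, 2, stride=2))   # S4: 10 -> 5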

LeNet-5 Layer 1: Convolutional Layer C1

This layer convolves the input with six 5×5 kernels, producing six feature maps. For a 32×32 input image, each feature map is [(32−5)/1+1]×[(32−5)/1+1] = 28×28. Each 5×5 kernel has one bias term, giving 5×5+1 = 26 parameters per kernel, so C1 has 26×6 = 156 trainable parameters. (The MNIST images used in the code are 28×28; with padding="SAME" and stride 1, the C1 output also stays 28×28.)

    with tf.variable_scope("C1-conv",reuse=resuse):
        # 这行代码指定了第一个卷积层作用域为C1-conv,在这个作用域下有两个变量conv1_weights和conv1_biases
        conv1_weights = tf.get_variable("weight", [5, 5, 1, 6],
                           initializer=tf.truncated_normal_initializer(stddev=0.1))
        # tf.get_variable共享变量 stddev正太分布的标准差 
        conv1_biases = tf.get_variable("bias", [32], initializer=tf.constant_initializer(0.0))
        # tf.constant_initializer初始化为常数
        conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding="SAME")
        relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases))

LeNet-5 Layer 2: Pooling Layer S2

Each of the six 28×28 feature maps from C1 is downsampled with a 2×2 window, giving six feature maps of size [(28−2)/2+1]×[(28−2)/2+1] = 14×14.

    with tf.name_scope("S2-max_pool",):
    	# tf.name_scope的主要目的是为了更加方便地管理参数命名。
    	# 与 tf.Variable() 结合使用。简化了命名
    	pool1 = tf.nn.max_pool(relu1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
    	# ksize:池化窗口的大小,取一个四维向量,一般是[1, height, width, 1]
    	# 因为我们不想在batch和channels上做池化,所以这两个维度设为了1
    	# strides:窗口在每一个维度上滑动的步长,一般也是[1, stride,stride, 1]

LeNet-5 Layer 3: Convolutional Layer C3

S2 is convolved with sixteen 5×5 kernels, producing sixteen feature maps of size (14−5+1)×(14−5+1) = 10×10. With full connectivity to the six S2 maps this layer would have (5×5×6+1)×16 = 2416 trainable parameters; the original LeNet-5 instead connects each C3 map to only a subset of the S2 maps, which reduces the count to 1516.

    with tf.variable_scope("C3-conv",reuse=resuse):
    	conv2_weights = tf.get_variable("weight", [5, 5, 32, 64],
                                     initializer=tf.truncated_normal_initializer(stddev=0.1))
    	conv2_biases = tf.get_variable("bias", [64], initializer=tf.constant_initializer(0.0))
    	conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding="VALID")
    	relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases))

LeNet-5 Layer 4: Pooling Layer S4

Each of the sixteen 10×10 feature maps from C3 is downsampled with a 2×2 window, giving sixteen feature maps of size [(10−2)/2+1]×[(10−2)/2+1] = 5×5.

    with tf.name_scope("S4-max_pool",):
    	pool2 = tf.nn.max_pool(relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
    	shape = pool2.get_shape().as_list()
    	# get_shape()函数可以得到这一层维度信息,由于每一层网络的输入输出都是一个batch的矩阵
    	nodes = shape[1] * shape[2] * shape[3]
    	# shape[1]是长度方向,shape[2]是宽度方向,shape[3]是深度方向
    	reshaped = tf.reshape(pool2, [shape[0], nodes])
    	# shape[0]是一个batch中数据的个数,reshape()函数原型reshape(tensor,shape,name)

LeNet-5 Layer 5: Convolutional Layer C5

S4 is convolved with 120 kernels of size 5×5. Because the S4 feature maps have the same size as the kernels, each of the 120 results is a single value connected to all 16 feature maps of the previous layer, so the layer is equivalent to a fully connected layer with 120 units; the code below implements it that way on the flattened S4 output.

    	with tf.variable_scope("layer5-full1",reuse=resuse):
    		Full_connection1_weights = tf.get_variable("weight", [nodes, 120],
                                      initializer=tf.truncated_normal_initializer(stddev=0.1))
    		# if regularizer != None:
    		tf.add_to_collection("losses", regularizer(Full_connection1_weights))
    		Full_connection1_biases = tf.get_variable("bias", [120],
                                                     initializer=tf.constant_initializer(0.1))
    		if avg_class ==None:
            	Full_1 = tf.nn.relu(tf.matmul(reshaped, Full_connection1_weights) + \
                                                                   Full_connection1_biases)
        	else:
            	Full_1 = tf.nn.relu(tf.matmul(reshaped, avg_class.average(Full_connection1_weights))
                                                   	+ avg_class.average(Full_connection1_biases))
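
The snippet above implements C5 as a fully connected layer on the flattened S4 output. For reference, here is a minimal sketch of C5 written as an actual 5×5 convolution, which is equivalent when the S4 feature maps are exactly 5×5 (as with the 32×32 input of the original LeNet-5); the scope and variable names here are illustrative and not part of the original code:

    with tf.variable_scope("C5-conv"):
        conv3_weights = tf.get_variable("weight", [5, 5, 16, 120],
                            initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv3_biases = tf.get_variable("bias", [120], initializer=tf.constant_initializer(0.0))
        # On a 5x5x16 input, VALID padding yields a 1x1x120 output per example,
        # which is the same computation as a fully connected layer with 120 units.
        conv3 = tf.nn.conv2d(pool2, conv3_weights, strides=[1, 1, 1, 1], padding="VALID")
        relu3 = tf.nn.relu(tf.nn.bias_add(conv3, conv3_biases))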

LeNet-5 Layer 6: Fully Connected Layer F6

This is a fully connected layer with 84 units. It uses a scaled hyperbolic tangent activation, computed as:
$x_{i} = f(a_{i}) = A \tanh(S a_{i})$

    with tf.variable_scope("layer6-full2",reuse=resuse):
        Full_connection2_weights = tf.get_variable("weight", [120, 10],
                                      initializer=tf.truncated_normal_initializer(stddev=0.1))
        # if regularizer != None:
        tf.add_to_collection("losses", regularizer(Full_connection2_weights))
        Full_connection2_biases = tf.get_variable("bias", [10],
                                                   initializer=tf.constant_initializer(0.1))
        if avg_class == None:
            result = tf.matmul(Full_1, Full_connection2_weights) + Full_connection2_biases
        else:
            result = tf.matmul(Full_1, avg_class.average(Full_connection2_weights)) + \
                                                  avg_class.average(Full_connection2_biases)
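
Note that this code maps the 120 units of the previous layer straight to the 10 output logits, so it effectively merges F6 and the output layer and does not use the tanh activation described above. For reference, here is a minimal sketch of an 84-unit F6 with the scaled tanh (the original paper uses A = 1.7159 and S = 2/3; the helper and variable names here are illustrative, not from the original code):

    def scaled_tanh(a, A=1.7159, S=2.0 / 3.0):
        # x = A * tanh(S * a), the activation used by F6 in the original paper
        return A * tf.tanh(S * a)

    with tf.variable_scope("F6"):
        F6_weights = tf.get_variable("weight", [120, 84],
                         initializer=tf.truncated_normal_initializer(stddev=0.1))
        F6_biases = tf.get_variable("bias", [84], initializer=tf.constant_initializer(0.0))
        F6_output = scaled_tanh(tf.matmul(Full_1, F6_weights) + F6_biases)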

LeNet-5 Layer 7: Fully Connected Output Layer

This is a fully connected layer with 10 units, corresponding to the digits 0-9. The original LeNet-5 connects it with Euclidean radial basis function (RBF) units, computed as:
$y_{i} = \sum_{j}\left(x_{j} - w_{ij}\right)^{2}$
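
The complete code below replaces this RBF output with a plain linear layer trained with softmax cross-entropy. For illustration only, here is a minimal sketch of the RBF computation itself, assuming the 84-dimensional F6_output from the sketch above and a [10, 84] matrix of class prototypes (names are illustrative):

    with tf.variable_scope("RBF-output"):
        rbf_centers = tf.get_variable("centers", [10, 84],
                          initializer=tf.truncated_normal_initializer(stddev=0.1))
        # y_i = sum_j (x_j - w_ij)^2: squared distance to each class prototype.
        # F6_output has shape [batch, 84]; expanding to [batch, 1, 84] broadcasts
        # against [10, 84] to give [batch, 10, 84], then we sum over the last axis.
        diff = tf.expand_dims(F6_output, 1) - rbf_centers
        rbf_output = tf.reduce_sum(tf.square(diff), axis=2)  # shape [batch, 10]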

Complete Code

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
# Download the dataset into a new MNIST_data folder in the current directory.
# input_data.read_data_sets automatically splits MNIST into train, validation and test sets
mnist = input_data.read_data_sets("./MNIST_data", one_hot=True)

batch_size = 100 
learning_rate = 0.01
learning_rate_decay = 0.99
max_steps = 30000

def hidden_layer(input_tensor,regularizer,avg_class,resuse):
    with tf.variable_scope("C1-conv",reuse=resuse):
    	# 这行代码指定了第一个卷积层作用域为C1-conv,在这个作用域下有两个变量conv1_weights和conv1_biases
        conv1_weights = tf.get_variable("weight", [5, 5, 1, 6],
                           initializer=tf.truncated_normal_initializer(stddev=0.1))
        # tf.get_variable共享变量 stddev正太分布的标准差 
        conv1_biases = tf.get_variable("bias", [32], initializer=tf.constant_initializer(0.0))
        # tf.constant_initializer初始化为常数
        conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding="VALID")
        relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases))
    with tf.name_scope("S2-max_pool",):
    	# tf.name_scope的主要目的是为了更加方便地管理参数命名。
    	# 与 tf.Variable() 结合使用。简化了命名
    	pool1 = tf.nn.max_pool(relu1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
    	# ksize:池化窗口的大小,取一个四维向量,一般是[1, height, width, 1]
    	# 因为我们不想在batch和channels上做池化,所以这两个维度设为了1
    	# strides:窗口在每一个维度上滑动的步长,一般也是[1, stride,stride, 1]

    with tf.variable_scope("C3-conv",reuse=resuse):
    	conv2_weights = tf.get_variable("weight", [5, 5, 32, 64],
                                     initializer=tf.truncated_normal_initializer(stddev=0.1))
    	conv2_biases = tf.get_variable("bias", [64], initializer=tf.constant_initializer(0.0))
    	conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding="VALID")
    	relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases))

    with tf.name_scope("S4-max_pool",):
    	pool2 = tf.nn.max_pool(relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
    	shape = pool2.get_shape().as_list()
    	# get_shape()函数可以得到这一层维度信息,由于每一层网络的输入输出都是一个batch的矩阵
    	nodes = shape[1] * shape[2] * shape[3]
    	# shape[1]是长度方向,shape[2]是宽度方向,shape[3]是深度方向
    	reshaped = tf.reshape(pool2, [shape[0], nodes])
    	# shape[0]是一个batch中数据的个数,reshape()函数原型reshape(tensor,shape,name)

    with tf.variable_scope("layer5-full1",reuse=resuse):
    	Full_connection1_weights = tf.get_variable("weight", [nodes, 120],
                                   initializer=tf.truncated_normal_initializer(stddev=0.1))
    	# if regularizer != None:
    	tf.add_to_collection("losses", regularizer(Full_connection1_weights))
    	Full_connection1_biases = tf.get_variable("bias", [120],
                                                 initializer=tf.constant_initializer(0.1))
    	if avg_class ==None:
            Full_1 = tf.nn.relu(tf.matmul(reshaped, Full_connection1_weights) + \
                                                                   Full_connection1_biases)
        else:
            Full_1 = tf.nn.relu(tf.matmul(reshaped, avg_class.average(Full_connection1_weights))
                                                   	+ avg_class.average(Full_connection1_biases))

    with tf.variable_scope("layer6-full2",reuse=resuse):
        Full_connection2_weights = tf.get_variable("weight", [120, 10],
                                      initializer=tf.truncated_normal_initializer(stddev=0.1))
        # if regularizer != None:
        tf.add_to_collection("losses", regularizer(Full_connection2_weights))
        Full_connection2_biases = tf.get_variable("bias", [10],
                                                   initializer=tf.constant_initializer(0.1))
        if avg_class == None:
            result = tf.matmul(Full_1, Full_connection2_weights) + Full_connection2_biases
        else:
            result = tf.matmul(Full_1, avg_class.average(Full_connection2_weights)) + \
                                                  avg_class.average(Full_connection2_biases)
    return result

# tf.placeholder(dtype, shape=None, name=None)
x = tf.placeholder(tf.float32, [batch_size ,28,28,1],name="x-input")
y_ = tf.placeholder(tf.float32, [None, 10], name="y-input")


# L2 regularization is a technique for reducing overfitting
regularizer = tf.contrib.layers.l2_regularizer(0.0001)
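# l2_regularizer(0.0001) returns a function that maps a weight tensor w to
# 0.0001 * sum(w**2) / 2; hidden_layer adds these penalties to the "losses"
# collection, and they are summed into the total loss below.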

# Call the CNN-building function defined above
y = hidden_layer(x,regularizer,avg_class=None,resuse=False)
# Define a variable that stores the number of training steps
training_step = tf.Variable(0, trainable=False)
# tf.train.ExponentialMovingAverage maintains exponential moving averages of the variables,
# which speeds up the updates of the averaged variables early in training.
variable_averages = tf.train.ExponentialMovingAverage(0.99, training_step)
variables_averages_op = variable_averages.apply(tf.trainable_variables())
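# Each shadow variable is updated as shadow = decay * shadow + (1 - decay) * variable.
# Because num_updates (training_step) is passed in, the decay actually used is
# min(0.99, (1 + step) / (10 + step)), which is smaller early in training.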

average_y = hidden_layer(x,regularizer,variable_averages,resuse=True)


# Use cross-entropy as the loss function. Here
# sparse_softmax_cross_entropy_with_logits is used to compute the cross-entropy.
# The labels are one-hot vectors of length 10, while this function expects the
# index of the correct class, so tf.argmax is used to obtain that index
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
# Compute the average cross-entropy over all examples in the current batch
cross_entropy_mean = tf.reduce_mean(cross_entropy)


# The total loss is the sum of the cross-entropy loss and the regularization losses
loss = cross_entropy_mean + tf.add_n(tf.get_collection('losses'))
# Set up an exponentially decaying learning rate
learning_rate = tf.train.exponential_decay(learning_rate,  # base learning rate, decayed as training progresses
                                 training_step, mnist.train.num_examples /batch_size ,
                                 learning_rate_decay, staircase=True)
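# With staircase=True the decayed rate is
# learning_rate * learning_rate_decay ** floor(global_step / decay_steps),
# i.e. it drops once per "epoch" of num_examples / batch_size steps.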

# Use tf.train.GradientDescentOptimizer to minimize the loss
train_step = tf.train.GradientDescentOptimizer(learning_rate). \
    minimize(loss, global_step=training_step)

with tf.control_dependencies([train_step, variables_averages_op]):
    train_op = tf.no_op(name='train')
correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Initialize the session and start the training process
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(max_steps):
        if i %1000==0:
            x_val, y_val = mnist.validation.next_batch(batch_size)
            reshaped_x2 = np.reshape(x_val, (batch_size,28,28, 1))
            validate_feed = {x: reshaped_x2, y_: y_val}

            validate_accuracy = sess.run(accuracy, feed_dict=validate_feed)
            print("After %d trainging step(s) ,validation accuracy"
                  "using average model is %g%%" % (i, validate_accuracy * 100))

        x_train, y_train = mnist.train.next_batch(batch_size)

        reshaped_xs = np.reshape(x_train, (batch_size ,28,28,1))
        sess.run(train_op, feed_dict={x: reshaped_xs, y_: y_train})
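
The script only reports validation accuracy during training. A rough sketch of a final test-set evaluation, assuming it is appended inside the same `with tf.Session() as sess:` block after the training loop (reusing the placeholders and the `accuracy` op defined above):

    test_batches = mnist.test.num_examples // batch_size
    test_accuracy = 0.0
    for _ in range(test_batches):
        x_test, y_test = mnist.test.next_batch(batch_size)
        reshaped_x_test = np.reshape(x_test, (batch_size, 28, 28, 1))
        test_accuracy += sess.run(accuracy, feed_dict={x: reshaped_x_test, y_: y_test})
    print("Test accuracy using average model is %g%%" % (test_accuracy / test_batches * 100))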