[Deep Learning Notes 2.1] LeNet-5

Overview

What does the "-5" in LeNet-5 stand for? It is generally read as marking the fifth version in LeCun's LeNet series.

The architecture comes from the paper "Gradient-Based Learning Applied to Document Recognition" [3].

Figure 1: the LeNet-5 architecture, from [3] (image not recovered)

![lenet5_2.png](https://lh3.googleusercontent.com/-KPfsR5nep9A/W2rbZF4xk-I/AAAAAAAAAFc/PtinL8z9rCA0Pzf_GiovJ7kS8zRkm7nrACLcBGAs/s0/lenet5_2.png "lenet5_2.png")
Figure 2 [2]

Input:shape=[-1, 28, 28, 1]
  | 
  | 
  |  filter.shape = [5, 5, 1, 6]
  |  C1 = tf.nn.conv2d(Input, filter, strides=[1,1,1,1], padding='SAME')
  | 
  |  After conv2d, the C1 feature maps have shape [-1, 28, 28, 6]
  | 
  |  Number of parameters: 6x(5x5+1)=156
  |      Each kernel is 5x5, so it has 5x5 weights, and after the kernel has convolved over the whole input a bias is added, giving 5x5+1 parameters per kernel; there are 6 kernels from Input to C1, so 6x(5x5+1) parameters are trained. (Question: could the bias instead be added right after each single convolution step?)
  |  Number of connections: 6x(5x5+1)x28x28=122304
  |      Each convolution step of a kernel produces one pixel in C1, and that pixel corresponds to (5x5+1) connections; each C1 channel has 28x28 pixels, so each channel has (5x5+1)x28x28 connections, and with 6 channels the Input-to-C1 total is 6x(5x5+1)x28x28.
  |      Question: Figure 2 (as well as reference [1] and many example implementations online) shows that a kernel convolves over the whole input first and the bias is added afterwards, so shouldn't the connection count be 6x(5x5x28x28+1)?
  |      Answer: convolving one kernel over the whole input yields a 28x28 feature map, and adding a scalar bias to that map is equivalent to adding the bias to every one of its pixels (broadcasting), so each of the 28x28 pixels is connected to the bias (see the numpy sketch after the pipeline).
  | 
C1 Layer:shape=[-1, 28, 28, 6]
  | 
  | 
  |  ksize=[1,2,2,1]
  |  bias1 = tf.Variable( tf.truncated_normal( [6] ) )
  |  S2 = tf.nn.max_pool(tf.nn.sigmoid(C1 + bias1), ksize, strides=[1, 2, 2, 1], padding='SAME')
  | 
  |  Number of parameters: 6x(1+1)=12; one trainable coefficient w and one trainable bias b per map (in the original paper's subsampling; the tf.nn.max_pool above has no trainable parameters)
  |      In the original LeNet-5, the four inputs in each 2x2 receptive field of C1 are summed, multiplied by a trainable coefficient, and a trainable bias is added; the result goes through the sigmoid. The coefficient and bias control how nonlinear the sigmoid is: with a small coefficient the operation is roughly linear and subsampling merely blurs the image; with a large coefficient, depending on the bias, subsampling acts like a noisy "OR" or a noisy "AND".
  |  Number of connections: 6x(4+1)x14x14=5880
  |      The mapping from one plane to the next can itself be viewed as a convolution; the S-layers act as blurring filters performing a second stage of feature extraction. From layer to layer the spatial resolution decreases while the number of planes increases, which lets the network detect more kinds of features [2].
  |      Question: following the descriptions in many articles, the program should look like this:
  |        c1 = conv2d( input, filter, … ) + bias;
  |        s2 = sigmoid( pooling( c1, pool_filter, … ) + bias );
  |      but many actual implementations instead do:
  |        c1 = conv2d( input, filter, … );
  |        s2 = pooling( sigmoid( c1 + bias ) );
  |      Why?
  |      Answer: tf.nn.max_pool has no trainable coefficient, and because sigmoid is monotonically increasing, max pooling commutes with it: max(sigmoid(x)) = sigmoid(max(x)). With the coefficient dropped, the two orderings therefore compute exactly the same values (see the numpy sketch after the pipeline); with the paper's average-style subsampling the order would matter.
  | 
S2 Layer:shape=[-1, 14, 14, 6]
  | 
  | 
  |  filter.shape = [5, 5, 6, 16]
  |  C3 = tf.nn.conv2d(S2, filter, strides=[1, 1, 1, 1], padding='VALID')
  | 
  |  Number of parameters: 6x(3x5x5+1)+6x(4x5x5+1)+3x(4x5x5+1)+1x(6x5x5+1)=1516
  |      (This count uses the sparse connection table of the original paper, in which each C3 map sees only a subset of the S2 maps; tf.nn.conv2d connects all 6 input channels to every output map, which would give 16x(6x5x5+1)=2416 parameters.)
  |  Number of connections: since each C3 feature map is 10x10, there are 1516x10x10=151600 connections
  | 
C3 Layer:shape=[-1, 10, 10, 16]
  | 
  | 
  |  ksize=[1,2,2,1]
  |  bias2 = tf.Variable(tf.truncated_normal([16]))
  |  S4 = tf.nn.max_pool(tf.nn.sigmoid(C3 + bias2), ksize, strides=[1, 2, 2, 1], padding='SAME')
  | 
S4 Layer:shape=[-1, 5, 5, 16]
  | 
  | 
  |  filter.shape=[5, 5, 16, 120]
  |  C5 = tf.nn.conv2d(S4, filter, strides=[1, 1, 1, 1], padding='SAME')
  |      (With padding='SAME', C5 keeps the 5x5 spatial size; in the original paper C5 is a valid 5x5 convolution on a 5x5 input, so each C5 map is 1x1 and C5 is effectively a 120-dimensional vector.)
  | 
C5 Layer:shape=[-1, 5, 5, 120]
  | 
  | 
  |  C5_flat = tf.reshape( C5, [-1, 5 * 5 * 120] )
  |  W_fc1 = tf.Variable( tf.truncated_normal( [5 * 5 * 120, 84]) )
  |  b_fc1 = tf.Variable( tf.truncated_normal( [84] ) )
  |  h_fc1 = tf.nn.sigmoid( tf.matmul( C5_flat, W_fc1 ) + b_fc1)
  |  Number of parameters: 84x120+84=10164 (as in the original paper, where C5 is a 120-dimensional vector; in the TF rendition above, which flattens C5 to 5x5x120=3000 values, F6 would have 3000x84+84 parameters)
  | 
F6 Layer: fully connected layer
  | 
  | 
  |  W_fc2 = tf.Variable( tf.truncated_normal( [84, 10] ) )
  |  b_fc2 = tf.Variable( tf.truncated_normal( [10] ) )
  |  y_conv = tf.nn.softmax( tf.matmul( h_fc1, W_fc2 ) + b_fc2 )
  | 
Output Layer
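Two questions raised in the pipeline above can be settled numerically. The sketch below (plain numpy, my own illustration rather than code from [1]) checks, first, that adding a scalar bias to a whole feature map is the same as adding it to every pixel (broadcasting), and second, that because sigmoid is monotonically increasing, max pooling commutes with it, so pooling(sigmoid(c1 + bias)) and sigmoid(pooling(c1) + bias) produce identical values:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

fmap = np.random.randn(28, 28)   # one feature map produced by a single kernel
bias = 0.5

# 1) Scalar bias vs. per-pixel bias: broadcasting makes them identical.
assert np.allclose(fmap + bias, fmap + bias * np.ones((28, 28)))

# 2) Max pooling commutes with any monotonically increasing activation.
def max_pool_2x2(a):
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

lhs = max_pool_2x2(sigmoid(fmap + bias))   # pooling(sigmoid(c1 + bias))
rhs = sigmoid(max_pool_2x2(fmap) + bias)   # sigmoid(pooling(c1) + bias)
assert np.allclose(lhs, rhs)
print('both equivalences hold')

Note that the second equivalence is specific to max pooling; with the paper's average-style subsampling (sum, trainable coefficient, bias, then sigmoid) the two orderings are not interchangeable.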

Code Example

The code follows reference [1], Listing 13.10.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import time

datapath = '/home/xiajun/res/MNIST_data'
mnist_data_set = input_data.read_data_sets(datapath, validation_size=0, one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
x_image = tf.reshape(x, [-1, 28, 28, 1])

# First convolutional layer: initialize the kernels and biases; 5x5 kernels, 1 input channel, 6 kernels in total
filter1 = tf.Variable(tf.truncated_normal([5, 5, 1, 6]))
bias1 = tf.Variable(tf.truncated_normal([6]))
conv1 = tf.nn.conv2d(x_image, filter1, strides=[1, 1, 1, 1], padding='SAME')
# conv1.shape = [-1, 28, 28, 6] at this point
h_conv1 = tf.nn.sigmoid(conv1 + bias1)
# h_conv1 = tf.nn.relu(conv1 + bias1)

maxPool2 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# maxPool2.shape = [-1, 14, 14, 6] at this point

filter2 = tf.Variable(tf.truncated_normal([5, 5, 6, 16]))
bias2 = tf.Variable(tf.truncated_normal([16]))
conv2 = tf.nn.conv2d(maxPool2, filter2, strides=[1, 1, 1, 1], padding='SAME')
# conv2.shape = [-1, 14, 14, 16] at this point
h_conv2 = tf.nn.sigmoid(conv2 + bias2)
# h_conv2 = tf.nn.relu(conv2 + bias2)

maxPool3 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# maxPool3.shape = [-1, 7, 7, 16] at this point

filter3 = tf.Variable(tf.truncated_normal([5, 5, 16, 120]))
bias3 = tf.Variable(tf.truncated_normal([120]))
conv3 = tf.nn.conv2d(maxPool3, filter3, strides=[1, 1, 1, 1], padding='SAME')
# conv3.shape = [-1, 7, 7, 120] at this point
h_conv3 = tf.nn.sigmoid(conv3 + bias3)
# h_conv3 = tf.nn.relu(conv3 + bias3)


# Fully connected layer
# weights
W_fc1 = tf.Variable(tf.truncated_normal([7 * 7 * 120, 80]))
# biases
b_fc1 = tf.Variable(tf.truncated_normal([80]))
# flatten the convolutional output
h_pool2_flat = tf.reshape(h_conv3, [-1, 7 * 7 * 120])
# dense computation followed by the sigmoid activation
h_fc1 = tf.nn.sigmoid(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# h_fc1.shape = [-1, 80] at this point

# Output layer: softmax for multi-class classification
W_fc2 = tf.Variable(tf.truncated_normal([80, 10]))
b_fc2 = tf.Variable(tf.truncated_normal([10]))
y_output = tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2)

# Loss function (hand-written cross-entropy; numerically fragile, since
# log(0) yields NaN once a softmax output saturates -- see the note below)
cross_entropy = -tf.reduce_sum(y_ * tf.log(y_output))
# Use gradient descent to adjust the parameters
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(cross_entropy)

sess = tf.InteractiveSession()

# Accuracy evaluation
correct_prediction = tf.equal(tf.argmax(y_output, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Initialize all variables (tf.initialize_all_variables() is deprecated)
sess.run(tf.global_variables_initializer())



'''
# Debug: print the intermediate shapes on a small batch
# (the original block also rebound x to a tf.Variable, which would have
# shadowed the placeholder being fed, so that line is dropped here)
batch_xs, batch_ys = mnist_data_set.train.next_batch(5)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(conv1.eval(feed_dict={x: batch_xs}).shape)
    print(h_conv1.eval(feed_dict={x: batch_xs}).shape)
    print(maxPool2.eval(feed_dict={x: batch_xs}).shape)
    print(conv2.eval(feed_dict={x: batch_xs}).shape)
    print(h_conv2.eval(feed_dict={x: batch_xs}).shape)
    print(maxPool3.eval(feed_dict={x: batch_xs}).shape)
    print(h_conv3.eval(feed_dict={x: batch_xs}).shape)
    print(h_fc1.eval(feed_dict={x: batch_xs}).shape)
    print('debug')
'''


# Training: each outer step runs one full epoch over the training set
batch_size = 200
start_time = time.time()
for i in range(20000):
    for iteration in range(mnist_data_set.train.num_examples//batch_size):
        # fetch a training batch
        batch_xs, batch_ys = mnist_data_set.train.next_batch(batch_size)
        train_step.run(feed_dict={x: batch_xs, y_: batch_ys})

    batch_xs, batch_ys = mnist_data_set.test.images, mnist_data_set.test.labels
    test_accuracy = accuracy.eval(feed_dict={x: batch_xs, y_: batch_ys})
    print("step %d, test accuracy %g" % (i, test_accuracy))

    end_time = time.time()
    print('time: ', (end_time - start_time))
    start_time = end_time

# Close the session
sess.close()
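Looking back at the layer walkthrough, the parameter and connection counts derived there can be reproduced with a few lines of plain Python (my own sanity check, not code from [1]; the C3 count uses the original paper's sparse connection table):

# Parameter/connection counts from the LeNet-5 walkthrough
c1_params = 6 * (5 * 5 + 1)              # 156
c1_conns  = c1_params * 28 * 28          # 122304
s2_params = 6 * (1 + 1)                  # 12 (one coefficient + one bias per map)
s2_conns  = 6 * (4 + 1) * 14 * 14        # 5880
# C3 sparse table: 6 maps see 3 S2 maps, 6 see 4, 3 see 4, 1 sees all 6
c3_params = 6*(3*5*5+1) + 6*(4*5*5+1) + 3*(4*5*5+1) + 1*(6*5*5+1)  # 1516
c3_conns  = c3_params * 10 * 10          # 151600
f6_params = 84 * 120 + 84                # 10164
print(c1_params, c1_conns, s2_params, s2_conns, c3_params, c3_conns, f6_params)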

Note: after switching the sigmoid activation to relu, the results seem to get worse (on my laptop the accuracy stays below 0.09 for the first 3 steps; my laptop is too slow to tell how it develops with further training, so this is left to try on a faster server).
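A plausible cause (my own guess, not something stated in [1]): the weights come from tf.truncated_normal with the default stddev=1.0, which produces large activations that make ReLU units die or the hand-written loss blow up, since -tf.reduce_sum(y_ * tf.log(y_output)) returns NaN as soon as a softmax output hits 0. A common remedy is a small initialization scale plus TensorFlow's numerically stable built-in cross-entropy; a minimal sketch of the changed lines (the stddev and bias values are hand-picked assumptions, not from [1]):

# Smaller initial weights and a small positive bias to keep ReLUs alive
filter1 = tf.Variable(tf.truncated_normal([5, 5, 1, 6], stddev=0.1))
bias1 = tf.Variable(tf.constant(0.1, shape=[6]))
# ... initialize the remaining filters, weights and biases the same way ...

# Numerically stable loss: pass raw logits instead of softmax outputs
logits = tf.matmul(h_fc1, W_fc2) + b_fc2
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(cross_entropy)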

Loss and accuracy curves under various optimizations

(figures not recovered)

References

[1] 王晓华. TensorFlow深度学习应用实践 (TensorFlow Deep Learning in Practice).
[2] Deep Learning 学习笔记整理系列: LeNet-5 卷积参数个人理解 (notes on the LeNet-5 convolution parameter counts).
[3] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11), 1998.
