TensorFlow-5: Implementing a Simple Convolutional Neural Network (CNN)

1 Principles of Convolutional Neural Networks

For the principles behind CNNs, see http://blog.csdn.net/yunpiao123456/article/details/52437794, which explains them in detail.

2 Network Architecture

Data: MNIST handwritten digits
Network structure: two convolutional layers, each followed by a pooling layer, then one fully connected layer and a final softmax classifier (a shape sketch follows this list)
    Conv layer 1: 5x5 kernels, stride 1, 32 filters, padding=SAME, followed by a ReLU activation
    Conv layer 2: 5x5 kernels, stride 1, 64 filters, padding=SAME, followed by a ReLU activation
    Pooling layers: 2x2 kernels, stride 2, padding=SAME
    Fully connected layer: 1024 units with ReLU activation; dropout is applied to its output to reduce overfitting
Output layer: softmax classifier
Loss function: cross-entropy
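
Because every layer uses padding=SAME, the stride-1 convolutions keep the 28x28 spatial size and each 2x2, stride-2 pooling halves it (rounding up). A minimal sketch of this bookkeeping (the helper same_pool_size is hypothetical and not part of the script below) shows why the flattened vector entering the fully connected layer has 7*7*64 = 3136 dimensions:

import math

def same_pool_size(n, stride=2):
    # With padding='SAME', a stride-s pooling outputs ceil(n / s) values per dimension.
    return math.ceil(n / stride)

size = 28                    # MNIST images are 28x28
size = same_pool_size(size)  # after pooling layer 1 -> 14
size = same_pool_size(size)  # after pooling layer 2 -> 7
flat_dim = size * size * 64  # 64 feature maps from conv layer 2
print(size, flat_dim)        # prints: 7 3136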

3 Code Implementation

__author__ = 'Administrator'
'''
Handwritten digit recognition with a simple CNN
'''
import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as input_data

def weights(shape):
    '''
    Create a weight variable initialized from a truncated normal distribution
    :param shape: shape of w
    :return: w
    '''
    w=tf.Variable(tf.truncated_normal(shape,stddev=0.1))
    return w

def bias(shape):
    '''
    Create a bias variable initialized to a small constant
    :param shape: shape of b
    :return: b
    '''
    b=tf.Variable(tf.constant(0.1,shape=shape))
    return b

def conv2d(x,kernel):
    '''
    2D convolution
    :param x: input feature map
    :param kernel: convolution kernel
    :return: convolved feature map
    '''
    feature=tf.nn.conv2d(x,kernel,[1,1,1,1],padding='SAME')#stride 1 with padding='SAME' keeps the output the same size as the input
    return feature

def max_pool_2X2(x):
    '''
    2x2 max pooling with stride 2
    :param x: input feature map
    :return: pooled feature map
    '''
    return tf.nn.max_pool(x,[1,2,2,1],[1,2,2,1],padding='SAME')

#Load the data
mnist=input_data.read_data_sets('MNIST_data',one_hot=True)
#Define the model
x=tf.placeholder(tf.float32,[None,784])#image data
y_=tf.placeholder(tf.float32,[None,10])#image labels
x_image=tf.reshape(x,[-1,28,28,1])#-1: batch size unknown, 28*28: image size, 1: single channel

#Convolutional layer 1
w_conv1=weights([5,5,1,32])#kernel: 5*5, 1 input channel, 32 feature maps
b_conv1=bias([32])
h_conv1=tf.nn.relu(conv2d(x_image,w_conv1)+b_conv1)#convolution
h_pool1=max_pool_2X2(h_conv1)#pooling

#Convolutional layer 2
w_conv2=weights([5,5,32,64])#kernel: 5*5, 32 feature maps from conv layer 1, 64 output feature maps
b_conv2=bias([64])
h_conv2=tf.nn.relu(conv2d(h_pool1,w_conv2)+b_conv2)#output of conv layer 2
h_pool2=max_pool_2X2(h_conv2)#output of pooling layer 2
h_pool2_flat=tf.reshape(h_pool2,[-1,7*7*64])#after two poolings the feature maps are 7*7

#Fully connected layer
w_fc1=weights([7*7*64,1024])#1024 hidden units in the fully connected layer
b_fc1=bias([1024])
h_fc1=tf.nn.relu(tf.matmul(h_pool2_flat,w_fc1)+b_fc1)
#Apply dropout to reduce overfitting
keep_prob=tf.placeholder(tf.float32)#probability of keeping a unit
h_fc1_dropout=tf.nn.dropout(h_fc1,keep_prob)

#Softmax classifier
w_fc2=weights([1024,10])
b_fc2=bias([10])
y=tf.nn.softmax(tf.matmul(h_fc1_dropout,w_fc2)+b_fc2)

#Define the loss and choose an optimizer
loss=tf.reduce_mean(tf.reduce_sum(-y_*tf.log(y),reduction_indices=[1]))#cross-entropy
optimizer=tf.train.AdamOptimizer(1e-4).minimize(loss)

#Compute accuracy
correct_prediction=tf.equal(tf.argmax(y,1),tf.argmax(y_,1))
accuracy=tf.reduce_mean(tf.cast(correct_prediction,tf.float32))

#Start a session
sess=tf.InteractiveSession()
tf.global_variables_initializer().run()#initialize the variables
for epoch in range(20000):#each "epoch" here is actually one mini-batch training step
    batch_xs,batch_ys=mnist.train.next_batch(50)
    optimizer.run(feed_dict={x:batch_xs,y_:batch_ys,keep_prob:0.75})#train on one batch
    if epoch%100==0:
        train_accuracy=accuracy.eval(feed_dict={x:batch_xs,y_:batch_ys,keep_prob:1})#accuracy on the current training batch
        test_accuracy=accuracy.eval(feed_dict={x:mnist.test.images,y_:mnist.test.labels,keep_prob:1})
        print("epoch: %d, training accuracy: %g, testing accuracy: %g"%(epoch,train_accuracy,test_accuracy))

4 Source Code Download

http://download.csdn.net/download/suan2014/9937242

5 Experimental Results

Accuracy on the current training batch and on the full test set, printed every 100 training steps:

epoch: 0, training accuracy: 0.08, testing accuracy: 0.0646
epoch: 100, training accuracy: 0.82, testing accuracy: 0.8614
epoch: 200, training accuracy: 0.94, testing accuracy: 0.9053
epoch: 300, training accuracy: 0.92, testing accuracy: 0.9277
epoch: 400, training accuracy: 0.98, testing accuracy: 0.9395
epoch: 500, training accuracy: 1, testing accuracy: 0.9472
epoch: 600, training accuracy: 0.96, testing accuracy: 0.9532
epoch: 700, training accuracy: 0.96, testing accuracy: 0.954
epoch: 800, training accuracy: 0.88, testing accuracy: 0.9605
epoch: 900, training accuracy: 0.94, testing accuracy: 0.9615
epoch: 1000, training accuracy: 0.92, testing accuracy: 0.9638
epoch: 1100, training accuracy: 0.98, testing accuracy: 0.9673
epoch: 1200, training accuracy: 0.96, testing accuracy: 0.9666
epoch: 1300, training accuracy: 0.98, testing accuracy: 0.9689
epoch: 1400, training accuracy: 0.98, testing accuracy: 0.9712
epoch: 1500, training accuracy: 0.98, testing accuracy: 0.9727
epoch: 1600, training accuracy: 1, testing accuracy: 0.9735
epoch: 1700, training accuracy: 0.96, testing accuracy: 0.9756
epoch: 1800, training accuracy: 1, testing accuracy: 0.9752
epoch: 1900, training accuracy: 0.98, testing accuracy: 0.9719
epoch: 2000, training accuracy: 0.96, testing accuracy: 0.9757
epoch: 2100, training accuracy: 0.98, testing accuracy: 0.9786
epoch: 2200, training accuracy: 0.98, testing accuracy: 0.9789
epoch: 2300, training accuracy: 0.98, testing accuracy: 0.9789
epoch: 2400, training accuracy: 0.98, testing accuracy: 0.9787
epoch: 2500, training accuracy: 0.98, testing accuracy: 0.9771
epoch: 2600, training accuracy: 0.98, testing accuracy: 0.9799
epoch: 2700, training accuracy: 1, testing accuracy: 0.9784
epoch: 2800, training accuracy: 1, testing accuracy: 0.9814
epoch: 2900, training accuracy: 0.98, testing accuracy: 0.9829
epoch: 3000, training accuracy: 0.92, testing accuracy: 0.9822
epoch: 3100, training accuracy: 0.98, testing accuracy: 0.98
epoch: 3200, training accuracy: 1, testing accuracy: 0.9838
epoch: 3300, training accuracy: 1, testing accuracy: 0.9835
epoch: 3400, training accuracy: 1, testing accuracy: 0.9815
epoch: 3500, training accuracy: 0.98, testing accuracy: 0.9823
epoch: 3600, training accuracy: 1, testing accuracy: 0.9825
epoch: 3700, training accuracy: 1, testing accuracy: 0.9839
epoch: 3800, training accuracy: 1, testing accuracy: 0.9819
epoch: 3900, training accuracy: 0.98, testing accuracy: 0.9833
epoch: 4000, training accuracy: 1, testing accuracy: 0.9843
epoch: 4100, training accuracy: 0.98, testing accuracy: 0.9841
epoch: 4200, training accuracy: 1, testing accuracy: 0.9838
epoch: 4300, training accuracy: 0.98, testing accuracy: 0.9854
epoch: 4400, training accuracy: 1, testing accuracy: 0.9863
epoch: 4500, training accuracy: 0.96, testing accuracy: 0.9843
epoch: 4600, training accuracy: 0.98, testing accuracy: 0.9857
epoch: 4700, training accuracy: 0.98, testing accuracy: 0.9876
epoch: 4800, training accuracy: 1, testing accuracy: 0.9854
epoch: 4900, training accuracy: 0.98, testing accuracy: 0.9863
epoch: 5000, training accuracy: 1, testing accuracy: 0.9853
epoch: 5100, training accuracy: 0.98, testing accuracy: 0.9854
epoch: 5200, training accuracy: 0.98, testing accuracy: 0.9856
epoch: 5300, training accuracy: 0.98, testing accuracy: 0.986
epoch: 5400, training accuracy: 1, testing accuracy: 0.9854
epoch: 5500, training accuracy: 0.98, testing accuracy: 0.984
epoch: 5600, training accuracy: 1, testing accuracy: 0.9873
epoch: 5700, training accuracy: 1, testing accuracy: 0.9873
epoch: 5800, training accuracy: 1, testing accuracy: 0.9854
epoch: 5900, training accuracy: 0.98, testing accuracy: 0.9883
epoch: 6000, training accuracy: 0.98, testing accuracy: 0.9867
epoch: 6100, training accuracy: 1, testing accuracy: 0.9871
epoch: 6200, training accuracy: 0.98, testing accuracy: 0.9884
epoch: 6300, training accuracy: 1, testing accuracy: 0.9888
epoch: 6400, training accuracy: 1, testing accuracy: 0.9898
epoch: 6500, training accuracy: 1, testing accuracy: 0.9853
epoch: 6600, training accuracy: 0.98, testing accuracy: 0.9884
epoch: 6700, training accuracy: 1, testing accuracy: 0.9885
epoch: 6800, training accuracy: 1, testing accuracy: 0.9897
epoch: 6900, training accuracy: 1, testing accuracy: 0.9892
epoch: 7000, training accuracy: 1, testing accuracy: 0.9882
epoch: 7100, training accuracy: 1, testing accuracy: 0.9884
epoch: 7200, training accuracy: 1, testing accuracy: 0.9879
epoch: 7300, training accuracy: 1, testing accuracy: 0.9856
epoch: 7400, training accuracy: 1, testing accuracy: 0.9888
epoch: 7500, training accuracy: 1, testing accuracy: 0.9883
epoch: 7600, training accuracy: 1, testing accuracy: 0.9901
epoch: 7700, training accuracy: 0.96, testing accuracy: 0.9885
epoch: 7800, training accuracy: 1, testing accuracy: 0.9889
epoch: 7900, training accuracy: 1, testing accuracy: 0.9893
epoch: 8000, training accuracy: 1, testing accuracy: 0.989
epoch: 8100, training accuracy: 1, testing accuracy: 0.9895
epoch: 8200, training accuracy: 1, testing accuracy: 0.9903
epoch: 8300, training accuracy: 1, testing accuracy: 0.9891
epoch: 8400, training accuracy: 1, testing accuracy: 0.9897
epoch: 8500, training accuracy: 1, testing accuracy: 0.9887
epoch: 8600, training accuracy: 1, testing accuracy: 0.9894
epoch: 8700, training accuracy: 1, testing accuracy: 0.99
epoch: 8800, training accuracy: 1, testing accuracy: 0.9891
epoch: 8900, training accuracy: 1, testing accuracy: 0.9845
epoch: 9000, training accuracy: 1, testing accuracy: 0.9902
epoch: 9100, training accuracy: 1, testing accuracy: 0.9883
epoch: 9200, training accuracy: 1, testing accuracy: 0.9893
epoch: 9300, training accuracy: 1, testing accuracy: 0.9897
epoch: 9400, training accuracy: 1, testing accuracy: 0.988
epoch: 9500, training accuracy: 0.98, testing accuracy: 0.9896
epoch: 9600, training accuracy: 1, testing accuracy: 0.9877
epoch: 9700, training accuracy: 0.98, testing accuracy: 0.9889
epoch: 9800, training accuracy: 1, testing accuracy: 0.9903
epoch: 9900, training accuracy: 0.98, testing accuracy: 0.991
epoch: 10000, training accuracy: 1, testing accuracy: 0.9914
epoch: 10100, training accuracy: 1, testing accuracy: 0.9905
epoch: 10200, training accuracy: 1, testing accuracy: 0.9904
epoch: 10300, training accuracy: 1, testing accuracy: 0.9901
epoch: 10400, training accuracy: 1, testing accuracy: 0.9902
epoch: 10500, training accuracy: 1, testing accuracy: 0.9905
epoch: 10600, training accuracy: 1, testing accuracy: 0.9902
epoch: 10700, training accuracy: 1, testing accuracy: 0.9896
epoch: 10800, training accuracy: 0.98, testing accuracy: 0.9909
epoch: 10900, training accuracy: 1, testing accuracy: 0.9903
epoch: 11000, training accuracy: 1, testing accuracy: 0.9889
epoch: 11100, training accuracy: 1, testing accuracy: 0.9911
epoch: 11200, training accuracy: 1, testing accuracy: 0.9896
epoch: 11300, training accuracy: 1, testing accuracy: 0.9914
epoch: 11400, training accuracy: 1, testing accuracy: 0.9903
epoch: 11500, training accuracy: 1, testing accuracy: 0.9906
epoch: 11600, training accuracy: 1, testing accuracy: 0.9905
epoch: 11700, training accuracy: 1, testing accuracy: 0.9891
epoch: 11800, training accuracy: 1, testing accuracy: 0.9904
epoch: 11900, training accuracy: 1, testing accuracy: 0.9894
epoch: 12000, training accuracy: 1, testing accuracy: 0.9904
epoch: 12100, training accuracy: 1, testing accuracy: 0.9894
epoch: 12200, training accuracy: 1, testing accuracy: 0.9912
epoch: 12300, training accuracy: 1, testing accuracy: 0.9909
epoch: 12400, training accuracy: 1, testing accuracy: 0.9901
epoch: 12500, training accuracy: 1, testing accuracy: 0.9901
epoch: 12600, training accuracy: 1, testing accuracy: 0.9906
epoch: 12700, training accuracy: 1, testing accuracy: 0.9916
epoch: 12800, training accuracy: 1, testing accuracy: 0.9905
epoch: 12900, training accuracy: 1, testing accuracy: 0.9897
epoch: 13000, training accuracy: 1, testing accuracy: 0.9889
epoch: 13100, training accuracy: 1, testing accuracy: 0.9903
epoch: 13200, training accuracy: 1, testing accuracy: 0.9896
epoch: 13300, training accuracy: 1, testing accuracy: 0.9898
epoch: 13400, training accuracy: 1, testing accuracy: 0.9915
epoch: 13500, training accuracy: 1, testing accuracy: 0.9912
epoch: 13600, training accuracy: 1, testing accuracy: 0.9893
epoch: 13700, training accuracy: 1, testing accuracy: 0.9906
epoch: 13800, training accuracy: 1, testing accuracy: 0.99
epoch: 13900, training accuracy: 1, testing accuracy: 0.9893
epoch: 14000, training accuracy: 1, testing accuracy: 0.9898
epoch: 14100, training accuracy: 1, testing accuracy: 0.9877
epoch: 14200, training accuracy: 1, testing accuracy: 0.9913
epoch: 14300, training accuracy: 1, testing accuracy: 0.9899
epoch: 14400, training accuracy: 1, testing accuracy: 0.9915
epoch: 14500, training accuracy: 1, testing accuracy: 0.9918
epoch: 14600, training accuracy: 1, testing accuracy: 0.9922
epoch: 14700, training accuracy: 1, testing accuracy: 0.9909
epoch: 14800, training accuracy: 1, testing accuracy: 0.9909
epoch: 14900, training accuracy: 1, testing accuracy: 0.9918
epoch: 15000, training accuracy: 1, testing accuracy: 0.9905
epoch: 15100, training accuracy: 1, testing accuracy: 0.991
epoch: 15200, training accuracy: 1, testing accuracy: 0.9925
epoch: 15300, training accuracy: 1, testing accuracy: 0.9908
epoch: 15400, training accuracy: 1, testing accuracy: 0.9906
epoch: 15500, training accuracy: 1, testing accuracy: 0.9908
epoch: 15600, training accuracy: 1, testing accuracy: 0.9911
epoch: 15700, training accuracy: 1, testing accuracy: 0.9921
epoch: 15800, training accuracy: 1, testing accuracy: 0.9916
epoch: 15900, training accuracy: 1, testing accuracy: 0.9912
epoch: 16000, training accuracy: 1, testing accuracy: 0.9921
epoch: 16100, training accuracy: 1, testing accuracy: 0.9914
epoch: 16200, training accuracy: 1, testing accuracy: 0.992
epoch: 16300, training accuracy: 1, testing accuracy: 0.992
epoch: 16400, training accuracy: 1, testing accuracy: 0.9917
epoch: 16500, training accuracy: 1, testing accuracy: 0.9912
epoch: 16600, training accuracy: 1, testing accuracy: 0.9919
epoch: 16700, training accuracy: 1, testing accuracy: 0.9913
epoch: 16800, training accuracy: 1, testing accuracy: 0.9903
epoch: 16900, training accuracy: 1, testing accuracy: 0.9915
epoch: 17000, training accuracy: 1, testing accuracy: 0.9925
epoch: 17100, training accuracy: 1, testing accuracy: 0.9921
epoch: 17200, training accuracy: 1, testing accuracy: 0.9922
epoch: 17300, training accuracy: 1, testing accuracy: 0.9923
epoch: 17400, training accuracy: 1, testing accuracy: 0.9909
epoch: 17500, training accuracy: 0.98, testing accuracy: 0.9921
epoch: 17600, training accuracy: 1, testing accuracy: 0.9919
epoch: 17700, training accuracy: 1, testing accuracy: 0.9924
epoch: 17800, training accuracy: 1, testing accuracy: 0.9913
epoch: 17900, training accuracy: 1, testing accuracy: 0.9923
epoch: 18000, training accuracy: 1, testing accuracy: 0.9921
epoch: 18100, training accuracy: 1, testing accuracy: 0.9902
epoch: 18200, training accuracy: 1, testing accuracy: 0.9914
epoch: 18300, training accuracy: 1, testing accuracy: 0.9914
epoch: 18400, training accuracy: 1, testing accuracy: 0.9907
epoch: 18500, training accuracy: 1, testing accuracy: 0.991
epoch: 18600, training accuracy: 1, testing accuracy: 0.9918
epoch: 18700, training accuracy: 1, testing accuracy: 0.9876
epoch: 18800, training accuracy: 1, testing accuracy: 0.9907
epoch: 18900, training accuracy: 1, testing accuracy: 0.9921
epoch: 19000, training accuracy: 1, testing accuracy: 0.9914
epoch: 19100, training accuracy: 1, testing accuracy: 0.9919
epoch: 19200, training accuracy: 1, testing accuracy: 0.9909
epoch: 19300, training accuracy: 1, testing accuracy: 0.9909
epoch: 19400, training accuracy: 1, testing accuracy: 0.9916
epoch: 19500, training accuracy: 1, testing accuracy: 0.992
epoch: 19600, training accuracy: 1, testing accuracy: 0.9904
epoch: 19700, training accuracy: 1, testing accuracy: 0.9915
epoch: 19800, training accuracy: 1, testing accuracy: 0.9887
epoch: 19900, training accuracy: 1, testing accuracy: 0.9904