Handwritten Digit Recognition with a CNN (Part 2)

This time, we use a CNN to implement handwritten digit recognition. The main layers of a CNN:

Input layer
Convolutional layer
Activation layer
Pooling layer
Fully connected layer

CNN stands for Convolutional Neural Network. Convolution can be understood as the process of superimposing one signal on another to produce a new signal. In a convolutional neural network, it can be viewed as a sliding window of fixed weights being multiplied element-wise with the data under the window and then summed.
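
To make the sliding-window description concrete, here is a minimal NumPy sketch of a 2-D "valid" convolution (strictly speaking a cross-correlation, which is also what tf.nn.conv2d computes); the kernel values are made up purely for illustration:

import numpy as np

def conv2d_valid(image, kernel):
    # Slide the kernel over the image; at each position, multiply
    # element-wise with the data under the window and sum.
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=image.dtype)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=np.float32).reshape(4, 4)
kernel = np.array([[1, 0], [0, -1]], dtype=np.float32)  # illustrative weights
print(conv2d_valid(image, kernel))  # output shape: (3, 3)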

# Note: recording data with summary ops slows execution down considerably, depending on the machine.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

if __name__ == '__main__':
    # Load the data.
    mnist = input_data.read_data_sets("data/", one_hot=True)
    with tf.name_scope("input"):
        # Placeholder for the training images.
        x = tf.placeholder(tf.float32, [None, 784])
        # Placeholder for the corresponding class labels (one-hot).
        y = tf.placeholder(tf.float32, [None, 10])
        # Convolution expects 4-D input, so reshape accordingly.
        # NHWC (default)   NCHW
        # N: number of samples
        # H: image height
        # W: image width
        # C: number of channels
        x_image = tf.reshape(x, [-1, 28, 28, 1])
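        # -1 lets tf.reshape infer the batch dimension, so any batch size works.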

    # Convolution layer 1.
    with tf.name_scope("conv_layer1"):
        # Define the weights (w is the sliding window).
        # 5, 5, 1, 32  =>  window height, window width, input channels, output channels.
        w = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1), name="w")
        # Define the bias.
        b = tf.Variable(tf.constant(0.0, shape=[32]), name="b")
        # Perform the convolution.
        # strides=[1, 1, 1, 1]: step sizes along each of the input's NHWC dimensions.
        # padding: SAME or VALID. SAME: the window may extend past the input as long
        # as it still overlaps it. VALID: the window must lie entirely inside the input.
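        # Output size with stride s: SAME => ceil(in / s); VALID => ceil((in - k + 1) / s).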
        conv = tf.nn.bias_add(tf.nn.conv2d(x_image, w, strides=[1, 1, 1, 1], padding='SAME'), b, name="conv")
        # Apply the activation function.
        activation = tf.nn.relu(conv)
        # Pooling.
        # ksize: the pooling window.
        pool = tf.nn.max_pool(activation, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
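        # 2 x 2 pooling with stride 2 halves each spatial dimension: 28 x 28 -> 14 x 14.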

    # Convolution layer 2.
    with tf.name_scope("conv_layer2"):
        w = tf.Variable(tf.truncated_normal([5, 5, 32, 64], stddev=0.1), name="w")
        b = tf.Variable(tf.constant(0.0, shape=[64]), name="b")
        conv = tf.nn.bias_add(tf.nn.conv2d(pool, w, strides=[1, 1, 1, 1], padding='SAME'), b, name="conv")
        activation = tf.nn.relu(conv)
        pool = tf.nn.max_pool(activation, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
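        # Output shape here: [N, 7, 7, 64].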
        
    # Fully connected layer 1.
    with tf.name_scope("full_layer1"):
        # 7 * 7 * 64
        # The original image is 28 x 28; convolution and activation leave the size
        # unchanged, and 2 x 2 pooling reduces it to 14 x 14.
        # The second convolution and activation again leave the size unchanged,
        # and 2 x 2 pooling reduces 14 x 14 to 7 x 7.
        # After the second convolution block, the shape is NHWC => [N, 7, 7, 64].
        # Flatten the last three dimensions: 4-D becomes 2-D => [N, 7 * 7 * 64].
        w = tf.Variable(tf.truncated_normal([7 * 7 * 64, 1024], stddev=0.1), name="w")
        b = tf.Variable(tf.constant(0.0, shape=[1024]), name="b")
        # Reshape the output of the second convolution block to 2-D.
        pool = tf.reshape(pool, [-1, 7 * 7 * 64])
        activation = tf.nn.relu(tf.matmul(pool, w) + b)
        # Apply dropout (random deactivation of units).
        keep_prob = tf.placeholder(tf.float32)
        # keep_prob specifies the fraction of units to keep.
        drop = tf.nn.dropout(activation, keep_prob)
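        # tf.nn.dropout scales the kept units by 1 / keep_prob, so no rescaling is needed at inference time.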
        
    # Fully connected layer 2.
    with tf.name_scope("full_layer2"):
        w = tf.Variable(tf.truncated_normal([1024, 10], stddev=0.1), name="w")
        b = tf.Variable(tf.constant(0.0, shape=[10]), name="b")
        logits = tf.matmul(drop, w) + b
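        # logits are raw, unnormalized class scores; softmax is applied inside the loss below.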
    
    # Loss and accuracy computation.
    with tf.name_scope("compute"):
        # Compute the loss.
        loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
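        # That is, the batch mean of -sum(y * log(softmax(logits))) per example.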
        # Add the loss to TensorBoard.
#         tf.summary.scalar('loss',loss)
        train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)
        # Compute the accuracy.
        correct = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
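        # Cast the boolean matches to float and average to get the fraction correct.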
#         tf.summary.scalar('accuracy',accuracy)
        
    # Merge all summaries.
#     merged = tf.summary.merge_all()

    # Create the Session.
    with tf.Session() as sess:
        # Initialize all global variables.
        sess.run(tf.global_variables_initializer())
#         train_writer = tf.summary.FileWriter('logs/train',sess.graph)
#         test_writer = tf.summary.FileWriter('logs/test',sess.graph)
        # With more iterations, accuracy can be pushed above 99%.
        for i in range(1, 3001):
            batch = mnist.train.next_batch(64)
            # Every 100 steps, report accuracy on the current batch and on the test set.
            if i % 100 == 0:
                train_accuracy = accuracy.eval(
                    feed_dict={x: batch[0], y: batch[1], keep_prob: 1.0})
                test_accuracy = accuracy.eval(
                    feed_dict={x: mnist.test.images[:5000], y: mnist.test.labels[:5000], keep_prob: 1.0})
                print(f"step {i}, training accuracy {train_accuracy * 100:.2f}%")
                print(f"step {i}, test accuracy {test_accuracy * 100:.2f}%")
            train_step.run(feed_dict={x: batch[0], y: batch[1], keep_prob: 0.5})
            # Compute and write the summaries for the training set.
#             summary = sess.run(merged,feed_dict={x:batch[0], y:batch[1] ,keep_prob:1.0})
#             train_writer.add_summary(summary, i)
            # Compute and write the summaries for the test set.
#             summary = sess.run(merged,feed_dict={x:mnist.test.images, y:mnist.test.labels, keep_prob:1.0})
#             test_writer.add_summary(summary,i)
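
The loop above scores only the first 5,000 test images to keep memory usage modest. As a minimal sketch (the helper name full_test_accuracy is my own, not part of the original code), the whole test set could be evaluated in chunks inside the same Session:

        # Hypothetical helper: average accuracy over the entire test set in chunks.
        def full_test_accuracy(batch_size=1000):
            total, n = 0.0, mnist.test.images.shape[0]
            for start in range(0, n, batch_size):
                end = min(start + batch_size, n)
                acc = accuracy.eval(feed_dict={x: mnist.test.images[start:end],
                                               y: mnist.test.labels[start:end],
                                               keep_prob: 1.0})
                total += acc * (end - start)
            return total / n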

Output:

WARNING:tensorflow:From <ipython-input-1-28a757f6edb6>:7: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From d:\anaconda3\envs\p36\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From d:\anaconda3\envs\p36\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting data/train-images-idx3-ubyte.gz
WARNING:tensorflow:From d:\anaconda3\envs\p36\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting data/train-labels-idx1-ubyte.gz
WARNING:tensorflow:From d:\anaconda3\envs\p36\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From d:\anaconda3\envs\p36\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From <ipython-input-1-28a757f6edb6>:73: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.

See `tf.nn.softmax_cross_entropy_with_logits_v2`.

step 100, training accuracy 81.25%
step 100, test accuracy 85.28%
step 200, training accuracy 90.62%
step 200, test accuracy 91.00%
step 300, training accuracy 100.00%
step 300, test accuracy 92.14%
step 400, training accuracy 96.88%
step 400, test accuracy 93.88%
step 500, training accuracy 93.75%
step 500, test accuracy 94.34%
step 600, training accuracy 98.44%
step 600, test accuracy 94.64%
step 700, training accuracy 96.88%
step 700, test accuracy 95.00%
step 800, training accuracy 98.44%
step 800, test accuracy 95.76%
step 900, training accuracy 100.00%
step 900, test accuracy 95.82%
step 1000, training accuracy 95.31%
step 1000, test accuracy 96.18%
step 1100, training accuracy 100.00%
step 1100, test accuracy 96.38%
step 1200, training accuracy 95.31%
step 1200, test accuracy 96.76%
step 1300, training accuracy 100.00%
step 1300, test accuracy 96.62%
step 1400, training accuracy 100.00%
step 1400, test accuracy 96.78%
step 1500, training accuracy 96.88%
step 1500, test accuracy 96.90%
step 1600, training accuracy 92.19%
step 1600, test accuracy 96.82%
step 1700, training accuracy 100.00%
step 1700, test accuracy 97.32%
step 1800, training accuracy 98.44%
step 1800, test accuracy 97.52%
step 1900, training accuracy 98.44%
step 1900, test accuracy 97.24%
step 2000, training accuracy 100.00%
step 2000, test accuracy 97.48%
step 2100, training accuracy 96.88%
step 2100, test accuracy 97.50%
step 2200, training accuracy 96.88%
step 2200, test accuracy 97.66%
step 2300, training accuracy 98.44%
step 2300, test accuracy 97.78%
step 2400, training accuracy 100.00%
step 2400, test accuracy 97.72%