Notes and Errata for "TensorFlow: 实战Google深度学习框架", Chapter 5: The MNIST Digit Recognition Problem (Part 1)

This post provides errata for the source code in Chapter 5 of "TensorFlow: 实战Google深度学习框架". It also supplies the code for the variant models that the book does not list in full, reports the results of running each one, and finally compares the results of the different models for reference.

  • 1 MNIST data processing
import tensorflow as tf

# 1. Read the dataset; on the first run TensorFlow downloads it automatically to the path below.
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# 2. The dataset is automatically split into three subsets: train, validation and test.
#    The code below prints the size of each.
print("Training data size: ", mnist.train.num_examples)
print("Validating data size: ", mnist.validation.num_examples)
print("Testing data size: ", mnist.test.num_examples)

# 3. Inspect one training example: the 1-D array produced from its pixel matrix, and its digit label.
print("Example training data: ", mnist.train.images[0])
print("Example training data label: ", mnist.train.labels[0])

The output is as follows

Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
Training data size:  55000
Validating data size:  5000
Testing data size:  10000
Example training data:  [ 0.          0.          0.          0.          0.          0.          0.
  ...
  0.          0.          0.          0.          0.38039219  0.37647063
  0.3019608   0.46274513  0.2392157   0.          0.          0.          0.
  ...
  0.          0.          0.          0.          0.          0.        ]
  (784 pixel values in [0, 1], mostly 0; the full dump is truncated here)
Example training data label:  [ 0.  0.  0.  0.  0.  0.  0.  1.  0.  0.]
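
As a quick aside, mnist.train.next_batch is what the training loops below use to draw mini-batches; here is a minimal sketch (reusing the mnist object loaded above, with a hypothetical batch size of 100):

# Draw one mini-batch of 100 examples (as used in Section 2's training loop).
xs, ys = mnist.train.next_batch(100)
print(xs.shape)  # (100, 784): each row is a flattened 28x28 image
print(ys.shape)  # (100, 10): one-hot labels, since one_hot=True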

  • 2 Training the neural network
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

INPUT_NODE = 784 # number of input-layer nodes, i.e. image pixels
OUTPUT_NODE = 10 # number of output-layer nodes; the output is a 10x1 vector, cf. the example training data label above

LAYER1_NODE = 500 # number of hidden-layer nodes

BATCH_SIZE = 100 # examples per training batch; smaller (down to 1) approaches stochastic gradient descent, larger approaches full-batch gradient descent

LEARNING_RATE_BASE = 0.8     # base learning rate
LEARNING_RATE_DECAY = 0.99   # decay rate of the learning rate
REGULARIZATION_RATE = 0.0001 # coefficient of the regularization term (model complexity) in the loss
TRAINING_STEPS = 30000       # number of training steps
MOVING_AVERAGE_DECAY = 0.99  # moving-average decay rate


# A helper function: given the network's input and all parameters, compute the forward pass.
def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    # If no moving-average class is provided, use the current parameter values directly.
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return tf.matmul(layer1, weights2) + biases2
    # If a moving-average class is provided, first obtain the variables' moving averages
    # via avg_class.average, then compute the corresponding forward pass.
    else:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1)) + avg_class.average(biases1))
        return tf.matmul(layer1, avg_class.average(weights2)) + avg_class.average(biases2)


# The training procedure.
def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')

    # Initialize the hidden-layer parameters. truncated_normal is used instead of a plain
    # normal distribution to speed up training.
    # Note: tf.truncated_normal resamples any value that falls more than 2 standard
    # deviations from the mean, until it lies within 2 standard deviations.
    weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, LAYER1_NODE], stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))

    # Initialize the output-layer parameters.
    weights2 = tf.Variable(tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE], stddev=0.1))
    biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

    # Forward pass under the current parameters; the moving-average class is None here,
    # so the function does not use the parameters' moving averages.
    y = inference(x, None, weights1, biases1, weights2, biases2)

    # A variable that stores the number of training steps. It needs no moving average,
    # so it is declared untrainable (trainable=False).
    global_step = tf.Variable(0, trainable=False)

    # Initialize the moving-average class with the moving-average decay rate and the step
    # variable; as Chapter 4 explained, supplying the step variable speeds up variable
    # updates early in training.
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)

    # Apply the moving average to all network parameters (untrainable variables excluded).
    variable_averages_op = variable_averages.apply(tf.trainable_variables())

    # Forward pass using the moving averages.
    average_y = inference(x, variable_averages, weights1, biases1, weights2, biases2)

    # Compute the cross entropy; since one_hot=True, this sparse variant speeds up the computation.
    # [Erratum] Note that the book's code is wrong here. The book has
    # cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(y, tf.argmax(y_, 1)), which may not run.
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))

    # Average the cross entropy over all examples in the current batch.
    cross_entropy_mean = tf.reduce_mean(cross_entropy)

    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    regularization = regularizer(weights1) + regularizer(weights2)
    # total loss = cross-entropy loss + regularization loss
    loss = cross_entropy_mean + regularization

    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,     # base learning rate; the effective rate decays from this value as iterations proceed
        global_step,            # current iteration step
        mnist.train.num_examples / BATCH_SIZE,  # number of steps needed to pass over all training data once
        LEARNING_RATE_DECAY,    # learning-rate decay rate
        staircase=True)

    # Optimize the loss.
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

    # Update the parameters via backpropagation, and update each parameter's moving average.
    with tf.control_dependencies([train_step, variable_averages_op]):
        train_op = tf.no_op(name='train')

    # Compute the accuracy.
    correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Initialize the session and start training.
    with tf.Session() as sess:
        tf.global_variables_initializer().run()  # initialize_all_variables is deprecated
        # Prepare the validation and test data.
        validate_feed = {x: mnist.validation.images,
                         y_: mnist.validation.labels}
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}

        # Train the network iteratively.
        for i in range(TRAINING_STEPS):
            # Every 1000 steps, report the result on the validation set.
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print("After %d training step(s), validation accuracy using average model is %g " % (i, validate_acc))
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})

        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print("After %d training step(s), test accuracy using average model is %g" % (TRAINING_STEPS, test_acc))

# Entry point.
def main(argv=None):
    mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    main()
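
One detail worth spelling out: when tf.train.ExponentialMovingAverage is given a step variable, the decay it actually applies is min(MOVING_AVERAGE_DECAY, (1 + step) / (10 + step)), which is why the shadow values track the raw variables closely early in training. A minimal standalone sketch, separate from the model above:

import tensorflow as tf

# Shadow variable demo: with num_updates = step = 0, the effective decay is
# min(0.99, (1 + 0) / (10 + 0)) = 0.1.
v = tf.Variable(0, dtype=tf.float32)
step = tf.Variable(0, trainable=False)
ema = tf.train.ExponentialMovingAverage(0.99, step)
maintain_op = ema.apply([v])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.assign(v, 5))
    # shadow = 0.1 * 0 + 0.9 * 5 = 4.5
    sess.run(maintain_op)
    print(sess.run([v, ema.average(v)]))  # [5.0, 4.5]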

The output is as follows

After 0 training step(s), validation accuracy using average model is 0.0986 
After 1000 training step(s), validation accuracy using average model is 0.9774 
After 2000 training step(s), validation accuracy using average model is 0.9806 
After 3000 training step(s), validation accuracy using average model is 0.983 
After 4000 training step(s), validation accuracy using average model is 0.9824 
After 5000 training step(s), validation accuracy using average model is 0.9822 
After 6000 training step(s), validation accuracy using average model is 0.983 
After 7000 training step(s), validation accuracy using average model is 0.9834 
After 8000 training step(s), validation accuracy using average model is 0.9826 
After 9000 training step(s), validation accuracy using average model is 0.983 
After 10000 training step(s), validation accuracy using average model is 0.9834 
After 11000 training step(s), validation accuracy using average model is 0.9828 
After 12000 training step(s), validation accuracy using average model is 0.9836 
After 13000 training step(s), validation accuracy using average model is 0.9828 
After 14000 training step(s), validation accuracy using average model is 0.9838 
After 15000 training step(s), validation accuracy using average model is 0.9828 
After 16000 training step(s), validation accuracy using average model is 0.984 
After 17000 training step(s), validation accuracy using average model is 0.9834 
After 18000 training step(s), validation accuracy using average model is 0.9834 
After 19000 training step(s), validation accuracy using average model is 0.9834 
After 20000 training step(s), validation accuracy using average model is 0.9836 
After 21000 training step(s), validation accuracy using average model is 0.9824 
After 22000 training step(s), validation accuracy using average model is 0.9842 
After 23000 training step(s), validation accuracy using average model is 0.9842 
After 24000 training step(s), validation accuracy using average model is 0.984 
After 25000 training step(s), validation accuracy using average model is 0.9844 
After 26000 training step(s), validation accuracy using average model is 0.9846 
After 27000 training step(s), validation accuracy using average model is 0.9842 
After 28000 training step(s), validation accuracy using average model is 0.9844 
After 29000 training step(s), validation accuracy using average model is 0.984 
After 30000 training step(s), test accuracy using average model is 0.9843
  • 3 No regularization

The difference from Section 2: loss = cross_entropy_mean, i.e. the regularization term is dropped from the loss.
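
For reference, a minimal sketch of what the dropped term computed in Section 2; as far as I can tell, tf.contrib.layers.l2_regularizer(scale)(w) equals scale * tf.nn.l2_loss(w), i.e. scale times half the sum of squared weights:

import tensorflow as tf

w = tf.constant([[1.0, -2.0], [3.0, 4.0]])
reg = tf.contrib.layers.l2_regularizer(0.0001)
with tf.Session() as sess:
    # 0.0001 * (1 + 4 + 9 + 16) / 2 = 0.0015
    print(sess.run(reg(w)))
    print(sess.run(0.0001 * tf.nn.l2_loss(w)))  # same value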

The output is as follows

After 0 training step(s), validation accuracy using average model is 0.0902 
After 1000 training step(s), validation accuracy using average model is 0.9766 
After 2000 training step(s), validation accuracy using average model is 0.9798 
After 3000 training step(s), validation accuracy using average model is 0.9826 
After 4000 training step(s), validation accuracy using average model is 0.983 
After 5000 training step(s), validation accuracy using average model is 0.9832 
After 6000 training step(s), validation accuracy using average model is 0.9838 
After 7000 training step(s), validation accuracy using average model is 0.9842 
After 8000 training step(s), validation accuracy using average model is 0.9838 
After 9000 training step(s), validation accuracy using average model is 0.9836 
After 10000 training step(s), validation accuracy using average model is 0.9832 
After 11000 training step(s), validation accuracy using average model is 0.9834 
After 12000 training step(s), validation accuracy using average model is 0.9836 
After 13000 training step(s), validation accuracy using average model is 0.984 
After 14000 training step(s), validation accuracy using average model is 0.9838 
After 15000 training step(s), validation accuracy using average model is 0.9834 
After 16000 training step(s), validation accuracy using average model is 0.9834 
After 17000 training step(s), validation accuracy using average model is 0.9836 
After 18000 training step(s), validation accuracy using average model is 0.9838 
After 19000 training step(s), validation accuracy using average model is 0.9838 
After 20000 training step(s), validation accuracy using average model is 0.984 
After 21000 training step(s), validation accuracy using average model is 0.9836 
After 22000 training step(s), validation accuracy using average model is 0.9836 
After 23000 training step(s), validation accuracy using average model is 0.9838 
After 24000 training step(s), validation accuracy using average model is 0.984 
After 25000 training step(s), validation accuracy using average model is 0.9842 
After 26000 training step(s), validation accuracy using average model is 0.984 
After 27000 training step(s), validation accuracy using average model is 0.9838 
After 28000 training step(s), validation accuracy using average model is 0.9842 
After 29000 training step(s), validation accuracy using average model is 0.9838 
After 30000 training step(s), test accuracy using average model is 0.9831
  • 4 Fixed learning rate (instead of exponential decay)

There are two differences from Section 2: 1) the learning rate LEARNING_RATE is fixed at 0.1, and 2) this fixed LEARNING_RATE is passed directly to tf.train.GradientDescentOptimizer.
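
For comparison, a small sketch of what Section 2's decayed schedule works out to numerically (using the constants from Section 2; with staircase=True the exponent is an integer division, and decay_steps = 55000 / 100 = 550):

LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99
decay_steps = 550  # mnist.train.num_examples / BATCH_SIZE

for step in (0, 1000, 10000, 30000):
    # staircase=True: the decay exponent increases once per pass over the data
    lr = LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** (step // decay_steps)
    print(step, round(lr, 4))
# 0 0.8, 1000 0.792, 10000 0.6676, 30000 0.4649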

The code is as follows

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

INPUT_NODE = 784
OUTPUT_NODE = 10

LAYER1_NODE = 500

BATCH_SIZE = 100

LEARNING_RATE = 0.1  # the learning rate is fixed at the constant 0.1
REGULARIZATION_RATE = 0.0001
TRAINING_STEPS = 30000
MOVING_AVERAGE_DECAY = 0.99


def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return tf.matmul(layer1, weights2) + biases2
    else:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1)) + avg_class.average(biases1))
        return tf.matmul(layer1, avg_class.average(weights2)) + avg_class.average(biases2)


def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')

    weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, LAYER1_NODE], stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))

    weights2 = tf.Variable(tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE], stddev=0.1))
    biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

    y = inference(x, None, weights1, biases1, weights2, biases2)
    global_step = tf.Variable(0, trainable=False)

    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variable_averages_op = variable_averages.apply(tf.trainable_variables())
    average_y = inference(x, variable_averages, weights1, biases1, weights2, biases2)

    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)

    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    regularization = regularizer(weights1) + regularizer(weights2)
    loss = cross_entropy_mean + regularization

    # use only the fixed LEARNING_RATE here
    train_step = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(loss, global_step=global_step)

    with tf.control_dependencies([train_step, variable_averages_op]):
        train_op = tf.no_op(name='train')

    correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    with tf.Session() as sess:
        tf.global_variables_initializer().run()  # initialize_all_variables is deprecated
        validate_feed = {x: mnist.validation.images,
                         y_: mnist.validation.labels}
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}

        for i in range(TRAINING_STEPS):
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print("After %d training step(s), validation accuracy using average model is %g " % (i, validate_acc))

            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})

        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print("After %d training step(s), test accuracy using average model is %g" % (TRAINING_STEPS, test_acc))


def main(argv=None):
    mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    main()

The output is as follows

After 0 training step(s), validation accuracy using average model is 0.0734 
After 1000 training step(s), validation accuracy using average model is 0.9484 
After 2000 training step(s), validation accuracy using average model is 0.9628 
After 3000 training step(s), validation accuracy using average model is 0.9682 
After 4000 training step(s), validation accuracy using average model is 0.9726 
After 5000 training step(s), validation accuracy using average model is 0.975 
After 6000 training step(s), validation accuracy using average model is 0.9752 
After 7000 training step(s), validation accuracy using average model is 0.9768 
After 8000 training step(s), validation accuracy using average model is 0.9768 
After 9000 training step(s), validation accuracy using average model is 0.978 
After 10000 training step(s), validation accuracy using average model is 0.9788 
After 11000 training step(s), validation accuracy using average model is 0.9794 
After 12000 training step(s), validation accuracy using average model is 0.9796 
After 13000 training step(s), validation accuracy using average model is 0.9802 
After 14000 training step(s), validation accuracy using average model is 0.9802 
After 15000 training step(s), validation accuracy using average model is 0.9814 
After 16000 training step(s), validation accuracy using average model is 0.9818 
After 17000 training step(s), validation accuracy using average model is 0.9814 
After 18000 training step(s), validation accuracy using average model is 0.981 
After 19000 training step(s), validation accuracy using average model is 0.9818 
After 20000 training step(s), validation accuracy using average model is 0.9812 
After 21000 training step(s), validation accuracy using average model is 0.982 
After 22000 training step(s), validation accuracy using average model is 0.9814 
After 23000 training step(s), validation accuracy using average model is 0.9818 
After 24000 training step(s), validation accuracy using average model is 0.982 
After 25000 training step(s), validation accuracy using average model is 0.9822 
After 26000 training step(s), validation accuracy using average model is 0.9818 
After 27000 training step(s), validation accuracy using average model is 0.9814 
After 28000 training step(s), validation accuracy using average model is 0.9822 
After 29000 training step(s), validation accuracy using average model is 0.9816 
After 30000 training step(s), test accuracy using average model is 0.9816
  • 5 No activation function

The difference from Section 2: the inference function (which computes the forward pass) omits the ReLU activation used in Section 2. The changed code is:

def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    if avg_class is None:
        layer1 = tf.matmul(input_tensor, weights1) + biases1
        return tf.matmul(layer1, weights2) + biases2
    else:
        layer1 = tf.matmul(input_tensor, avg_class.average(weights1)) + avg_class.average(biases1)
        return tf.matmul(layer1, avg_class.average(weights2)) + avg_class.average(biases2)

The run goes startlingly badly, and the validation accuracy freezes at one value and never changes again, possibly due to exploding or vanishing gradients. Note also that without an activation the two affine layers collapse into a single linear map (see the sketch after the log below).

After 0 training step(s), validation accuracy using average model is 0.0408 
After 1000 training step(s), validation accuracy using average model is 0.0958 
After 2000 training step(s), validation accuracy using average model is 0.0958 
After 3000 training step(s), validation accuracy using average model is 0.0958 
After 4000 training step(s), validation accuracy using average model is 0.0958 
After 5000 training step(s), validation accuracy using average model is 0.0958 
After 6000 training step(s), validation accuracy using average model is 0.0958 
After 7000 training step(s), validation accuracy using average model is 0.0958 
After 8000 training step(s), validation accuracy using average model is 0.0958 
After 9000 training step(s), validation accuracy using average model is 0.0958 
After 10000 training step(s), validation accuracy using average model is 0.0958 
After 11000 training step(s), validation accuracy using average model is 0.0958 
After 12000 training step(s), validation accuracy using average model is 0.0958 
After 13000 training step(s), validation accuracy using average model is 0.0958 
After 14000 training step(s), validation accuracy using average model is 0.0958 
After 15000 training step(s), validation accuracy using average model is 0.0958 
After 16000 training step(s), validation accuracy using average model is 0.0958 
After 17000 training step(s), validation accuracy using average model is 0.0958 
After 18000 training step(s), validation accuracy using average model is 0.0958 
After 19000 training step(s), validation accuracy using average model is 0.0958 
After 20000 training step(s), validation accuracy using average model is 0.0958 
After 21000 training step(s), validation accuracy using average model is 0.0958 
After 22000 training step(s), validation accuracy using average model is 0.0958 
After 23000 training step(s), validation accuracy using average model is 0.0958 
After 24000 training step(s), validation accuracy using average model is 0.0958 
After 25000 training step(s), validation accuracy using average model is 0.0958 
After 26000 training step(s), validation accuracy using average model is 0.0958 
After 27000 training step(s), validation accuracy using average model is 0.0958 
After 28000 training step(s), validation accuracy using average model is 0.0958 
After 29000 training step(s), validation accuracy using average model is 0.0958 
After 30000 training step(s), test accuracy using average model is 0.098
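
Part of the story is that without a nonlinearity, the two affine layers are mathematically equivalent to a single one, so the network can learn at most a linear classifier. A NumPy sketch (with shapes matching the model above) confirms the collapse:

import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(5, 784)
W1, b1 = rng.randn(784, 500), rng.randn(500)
W2, b2 = rng.randn(500, 10), rng.randn(10)

two_layer = (x @ W1 + b1) @ W2 + b2          # layer1 -> output, no activation
one_layer = x @ (W1 @ W2) + (b1 @ W2 + b2)   # a single equivalent affine map
print(np.allclose(two_layer, one_layer))      # True
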
  • 6 No hidden layer

The difference from Section 2 is that there is no hidden layer. The full code is as follows

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

INPUT_NODE = 784
OUTPUT_NODE = 10

BATCH_SIZE = 100

LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99
REGULARIZATION_RATE = 0.0001
TRAINING_STEPS = 30000
MOVING_AVERAGE_DECAY = 0.99


# weights2 and biases2 are removed; the single layer produces the output directly
# (note that ReLU is applied directly to the output logits here)
def inference(input_tensor, avg_class, weights1, biases1):
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return layer1
    else:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1)) + avg_class.average(biases1))
        return layer1


def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')

    # only weights1 and biases1 remain, mapping the input directly to OUTPUT_NODE outputs
    weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, OUTPUT_NODE], stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

    y = inference(x, None, weights1, biases1)
    global_step = tf.Variable(0, trainable=False)

    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variable_averages_op = variable_averages.apply(tf.trainable_variables())
    average_y = inference(x, variable_averages, weights1, biases1)

    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)

    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    regularization = regularizer(weights1)  # the weights2 regularization term is dropped here
    loss = cross_entropy_mean + regularization

    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE, global_step, mnist.train.num_examples / BATCH_SIZE,
        LEARNING_RATE_DECAY, staircase=True)

    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

    with tf.control_dependencies([train_step, variable_averages_op]):
        train_op = tf.no_op(name='train')

    correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    with tf.Session() as sess:
        tf.global_variables_initializer().run()  # initialize_all_variables is deprecated
        validate_feed = {x: mnist.validation.images,
                         y_: mnist.validation.labels}
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}

        for i in range(TRAINING_STEPS):
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print("After %d training step(s), validation accuracy using average model is %g " % (i, validate_acc))

            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})

        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print("After %d training step(s), test accuracy using average model is %g" % (TRAINING_STEPS, test_acc))


def main(argv=None):
    mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    main()

The output is as follows. As you can see, the accuracy stays below 0.9, which is very poor.

After 0 training step(s), validation accuracy using average model is 0.0838 
After 1000 training step(s), validation accuracy using average model is 0.8326 
After 2000 training step(s), validation accuracy using average model is 0.8422 
After 3000 training step(s), validation accuracy using average model is 0.8416 
After 4000 training step(s), validation accuracy using average model is 0.8428 
After 5000 training step(s), validation accuracy using average model is 0.8432 
After 6000 training step(s), validation accuracy using average model is 0.843 
After 7000 training step(s), validation accuracy using average model is 0.843 
After 8000 training step(s), validation accuracy using average model is 0.842 
After 9000 training step(s), validation accuracy using average model is 0.8442 
After 10000 training step(s), validation accuracy using average model is 0.8418 
After 11000 training step(s), validation accuracy using average model is 0.8444 
After 12000 training step(s), validation accuracy using average model is 0.8422 
After 13000 training step(s), validation accuracy using average model is 0.8436 
After 14000 training step(s), validation accuracy using average model is 0.8436 
After 15000 training step(s), validation accuracy using average model is 0.844 
After 16000 training step(s), validation accuracy using average model is 0.8452 
After 17000 training step(s), validation accuracy using average model is 0.8444 
After 18000 training step(s), validation accuracy using average model is 0.844 
After 19000 training step(s), validation accuracy using average model is 0.843 
After 20000 training step(s), validation accuracy using average model is 0.8432 
After 21000 training step(s), validation accuracy using average model is 0.8432 
After 22000 training step(s), validation accuracy using average model is 0.8448 
After 23000 training step(s), validation accuracy using average model is 0.845 
After 24000 training step(s), validation accuracy using average model is 0.8434 
After 25000 training step(s), validation accuracy using average model is 0.844 
After 26000 training step(s), validation accuracy using average model is 0.845 
After 27000 training step(s), validation accuracy using average model is 0.8446 
After 28000 training step(s), validation accuracy using average model is 0.8442 
After 29000 training step(s), validation accuracy using average model is 0.8436 
After 30000 training step(s), test accuracy using average model is 0.8367

  • 7 No moving average

The difference from Section 2: correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)), i.e. accuracy is evaluated on the plain forward pass y rather than on average_y, which uses the moving averages.

The output is as follows

After 1000 training step(s), validation accuracy using average model is 0.965 
After 2000 training step(s), validation accuracy using average model is 0.9772 
After 3000 training step(s), validation accuracy using average model is 0.9792 
After 4000 training step(s), validation accuracy using average model is 0.9784 
After 5000 training step(s), validation accuracy using average model is 0.9826 
After 6000 training step(s), validation accuracy using average model is 0.9836 
After 7000 training step(s), validation accuracy using average model is 0.9836 
After 8000 training step(s), validation accuracy using average model is 0.983 
After 9000 training step(s), validation accuracy using average model is 0.983 
After 10000 training step(s), validation accuracy using average model is 0.984 
After 11000 training step(s), validation accuracy using average model is 0.9842 
After 12000 training step(s), validation accuracy using average model is 0.984 
After 13000 training step(s), validation accuracy using average model is 0.9834 
After 14000 training step(s), validation accuracy using average model is 0.9844 
After 15000 training step(s), validation accuracy using average model is 0.9842 
After 16000 training step(s), validation accuracy using average model is 0.9844 
After 17000 training step(s), validation accuracy using average model is 0.9846 
After 18000 training step(s), validation accuracy using average model is 0.9832 
After 19000 training step(s), validation accuracy using average model is 0.9854 
After 20000 training step(s), validation accuracy using average model is 0.984 
After 21000 training step(s), validation accuracy using average model is 0.9846 
After 22000 training step(s), validation accuracy using average model is 0.984 
After 23000 training step(s), validation accuracy using average model is 0.986 
After 24000 training step(s), validation accuracy using average model is 0.9842 
After 25000 training step(s), validation accuracy using average model is 0.986 
After 26000 training step(s), validation accuracy using average model is 0.9846 
After 27000 training step(s), validation accuracy using average model is 0.9852 
After 28000 training step(s), validation accuracy using average model is 0.9846 
After 29000 training step(s), validation accuracy using average model is 0.985 
After 30000 training step(s), test accuracy using average model is 0.9834

The combined results, summarized from the logs above (validation accuracy at step 29000 and the final test accuracy for each model):

Model                       Validation acc.   Test acc.
2 Full model                0.9840            0.9843
3 No regularization         0.9838            0.9831
4 Fixed learning rate       0.9816            0.9816
5 No activation function    0.0958            0.0980
6 No hidden layer           0.8436            0.8367
7 No moving average         0.9850            0.9834

The full model performs best on the test set, and its test accuracy is higher than its validation accuracy.

For the model without regularization, validation accuracy is higher than test accuracy, indicating a degree of overfitting.

The models without an activation function and without a hidden layer perform disastrously.
