TensorFlow Variable Management and Model Persistence: Study Notes

Author: jliang
https://blog.csdn.net/jliang3

Study notes on the book "TensorFlow实战Google深度学习框架" (TensorFlow: Google Deep Learning Framework in Practice)

Note: all code below was run with TensorFlow 1.4.0 or 1.12.0.

import tensorflow as tf
print(tf.__version__)
1.12.0

5. The MNIST Digit Recognition Problem

5.1 MNIST Data Processing

The MNIST dataset is a subset of the NIST dataset. It contains 60,000 training images and 10,000 test images. Each image represents one of the digits 0-9 and is 28*28 pixels in size.
MNIST download URLs and contents:

  • http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz (training images)
  • http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz (training labels)
  • http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz (test images)
  • http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz (test labels)

TensorFlow provides a class for handling the MNIST data. It automatically downloads the MNIST files and converts them from the raw archives into the format used to train and test the neural network.
Each processed image is a one-dimensional array of length 784, and each element of the array corresponds to one entry of the image's pixel matrix.

from tensorflow.examples.tutorials.mnist import input_data 

# If the data has not already been downloaded into the given directory, it is fetched automatically from the URLs listed above.
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

print('Training data size:{}'.format(mnist.train.num_examples))
print('Validating data size:{}'.format(mnist.validation.num_examples))
print('Testing data size:{}\n\n'.format(mnist.test.num_examples))

print('Example Training data :{}'.format(mnist.train.images[0]))
print('Example Training data label :{}'.format(mnist.train.labels[0]))

Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
Training data size:55000
Validating data size:5000
Testing data size:10000


Example Training data :[0.         0.         0.         0.         0.         
...
 0.         0.         0.         0.        ]
Example Training data label :[0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
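
The one-hot label printed above encodes the digit 7. As a small illustrative sketch (not from the book; it assumes numpy is installed and mnist is the dataset object loaded above), the digit can be recovered with argmax and the 784-element image vector reshaped back into its 28*28 pixel matrix:

import numpy as np

label = mnist.train.labels[0]                  # one-hot vector of length 10
print(np.argmax(label))                        # index of the 1 entry, i.e. the digit 7
image = mnist.train.images[0].reshape(28, 28)  # 784 values back into a 28x28 matrix, pixel values in [0, 1]
print(image.shape)                             # (28, 28)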

5.2 Training the Neural Network and Comparing Different Model Variants

Complete training code for the neural network:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# 1. Set the numbers of input and output nodes and configure the network's hyperparameters.
INPUT_NODE = 784     # number of input nodes
OUTPUT_NODE = 10     # number of output nodes
LAYER1_NODE = 500    # number of nodes in the hidden layer

BATCH_SIZE = 100     # number of examples in one training batch

# Model-related hyperparameters
LEARNING_RATE_BASE = 0.8      # base learning rate
LEARNING_RATE_DECAY = 0.99    # decay rate of the learning rate
REGULARIZATION_RATE = 0.0001  # coefficient of the regularization term (model complexity) in the loss
TRAINING_STEPS = 5000         # number of training steps
MOVING_AVERAGE_DECAY = 0.99   # decay rate for the moving averages

# 2. Helper function that computes the forward-propagation result, using ReLU as the activation function.
def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    # Without the moving-average class: use the current parameter values
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return tf.matmul(layer1, weights2) + biases2

    else:
        # With the moving-average class: use the shadow (averaged) values of the parameters
        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1)) + avg_class.average(biases1))
        return tf.matmul(layer1, avg_class.average(weights2)) + avg_class.average(biases2)
    
# 3. Define the training process.
def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')
    # Parameters of the hidden layer.
    weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, LAYER1_NODE], stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))
    # Parameters of the output layer.
    weights2 = tf.Variable(tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE], stddev=0.1))
    biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

    # Forward pass without the moving averages
    y = inference(x, None, weights1, biases1, weights2, biases2)

    # Variable counting the training steps, and the associated moving-average class
    global_step = tf.Variable(0, trainable=False)
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    average_y = inference(x, variable_averages, weights1, biases1, weights2, biases2)

    # Cross entropy and its mean over the batch
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)

    # Total loss: cross entropy plus L2 regularization
    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    regularization = regularizer(weights1) + regularizer(weights2)
    loss = cross_entropy_mean + regularization
    
    # Exponentially decaying learning rate.
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,
        global_step,
        mnist.train.num_examples / BATCH_SIZE,
        LEARNING_RATE_DECAY,
        staircase=True)

    # Optimize the loss function
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

    # Single training op that runs backpropagation and updates the moving average of every parameter
    with tf.control_dependencies([train_step, variables_averages_op]):
        train_op = tf.no_op(name='train')

    # Accuracy
    correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    
    # Create the session and start training.
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        validate_feed = {x: mnist.validation.images, y_: mnist.validation.labels}
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}

        # Training loop.
        for i in range(TRAINING_STEPS):
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print("After %d training step(s), validation accuracy using average model is %g " % (i, validate_acc))

            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})

        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print("After %d training step(s), test accuracy using average model is %g" % (TRAINING_STEPS, test_acc))

        
# 4. Main entry point; the number of training steps is set to 5,000 above.
def main(argv=None):
    mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
    train(mnist)

if __name__=='__main__':
    main()
    
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
After 0 training step(s), validation accuracy using average model is 0.0852 
After 1000 training step(s), validation accuracy using average model is 0.9758 
After 2000 training step(s), validation accuracy using average model is 0.9806 
After 3000 training step(s), validation accuracy using average model is 0.9818 
After 4000 training step(s), validation accuracy using average model is 0.9832 
After 5000 training step(s), test accuracy using average model is 0.9837
  • The structure of the neural network has a fundamental impact on the final model, for example whether a hidden layer or an activation function is used.
  • Both the moving-average model and the exponentially decaying learning rate limit, to some extent, how fast the network's parameters are updated. On MNIST the model converges very quickly, so these two optimizations have little effect on the final result.
  • When the problem is more complex, the iterations do not approach convergence as quickly, and the moving-average model and the exponentially decaying learning rate then play a much bigger role.
  • Compared with the moving-average model and the exponentially decaying learning rate, using a regularized loss function brings a relatively noticeable improvement in model quality.
Optimizer that minimizes only the cross-entropy loss:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy_mean, global_step=global_step)
Optimizer that minimizes the sum of the cross entropy and the L2 regularization loss:
loss = cross_entropy_mean + regularization
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

5.3 Variable Management

  • When the structure of a neural network grows more complex and has more parameters, a better way to pass around and manage those parameters is needed.
  • TensorFlow provides a mechanism to create or fetch a variable by its name. A variable can then be used in different functions directly through its name, without being passed everywhere as a function argument.
  • Both tf.Variable() and tf.get_variable() can be used to create or fetch variables.
    • The biggest difference between the two is how the variable name is specified: for tf.Variable the name is an optional argument given as name='v', whereas for tf.get_variable the name is a required argument.
    • tf.get_variable creates or fetches a variable according to that name.
    • When tf.get_variable is used to create a variable, it tries to create a variable with the specified name; if a variable with the same name already exists, creation fails and the program raises an error.
v1 = tf.get_variable('v', shape=[1], initializer=tf.constant_initializer(1.0))
print(v1)

# Note: name='v' here is passed to tf.constant, so the variable itself keeps the default name 'Variable' (see the output below).
v2 = tf.Variable(tf.constant(1.0, shape=[1], name='v'))
print(v2)

<tf.Variable 'v:0' shape=(1,) dtype=float32_ref>
<tf.Variable 'Variable:0' shape=(1,) dtype=float32_ref>

Variable initializers in TensorFlow

  • tf.constant_initializer: initializes the variable to a given constant. Main argument: the constant's value.
  • tf.random_normal_initializer: random values drawn from a normal distribution. Main arguments: mean and standard deviation of the distribution.
  • tf.truncated_normal_initializer: random values from a normal distribution, but any value more than two standard deviations from the mean is re-drawn. Main arguments: mean and standard deviation of the distribution.
  • tf.random_uniform_initializer: random values drawn from a uniform distribution. Main arguments: minimum and maximum values.
  • tf.uniform_unit_scaling_initializer: uniformly distributed values scaled so that the order of magnitude of the output is not affected. Main argument: factor (the coefficient the random values are multiplied by).
  • tf.zeros_initializer: initializes the variable to all zeros. Main argument: the variable's shape.
  • tf.ones_initializer: initializes the variable to all ones. Main argument: the variable's shape.
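
A minimal sketch of how these initializers are passed to tf.get_variable (the variable names below are invented purely for illustration):

import tensorflow as tf

# Each initializer is supplied through the initializer argument of tf.get_variable.
w = tf.get_variable('w_demo', shape=[784, 500],
                    initializer=tf.truncated_normal_initializer(stddev=0.1))
b = tf.get_variable('b_demo', shape=[500],
                    initializer=tf.constant_initializer(0.1))
u = tf.get_variable('u_demo', shape=[10],
                    initializer=tf.random_uniform_initializer(minval=-1.0, maxval=1.0))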
  • To fetch an already created variable with tf.get_variable, a context manager has to be generated with tf.variable_scope(), stating explicitly that inside this context tf.get_variable will fetch variables that have already been created.
  • When tf.variable_scope() is given reuse=True, all tf.get_variable() calls inside the resulting context fetch variables that already exist.
    • If such a variable does not exist, tf.get_variable() raises an error.
  • When tf.variable_scope() is created with reuse=None or reuse=False, all tf.get_variable() calls inside the context create new variables.
    • If a variable with the given name already exists, tf.get_variable() raises an error.
# Create a variable named v inside the namespace foo
with tf.variable_scope('foo'):
    v = tf.get_variable('v', [1], initializer=tf.constant_initializer(1.0))
    
# Trying to create it again would raise:
# ValueError: Variable foo/v already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? Originally defined at:
# with tf.variable_scope('foo'):
#     v = tf.get_variable('v', [1])
    
# When the context manager is created with reuse=True, tf.get_variable fetches the already declared variable directly.
with tf.variable_scope('foo', reuse=True):
    v1 = tf.get_variable('v', [1])
    print('v is v1:{}'.format(v == v1))
    
# With reuse=True only variables that already exist can be fetched; otherwise this raises:
# ValueError: Variable bar/v does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?
# with tf.variable_scope('bar', reuse=True):
#     v = tf.get_variable('v', [1])
    
v is v1:True

tf.variable_scope() calls can be nested; an inner scope that does not set reuse explicitly inherits the reuse value of its enclosing scope.

with tf.variable_scope('root'):
    print(tf.get_variable_scope().reuse)
    
    with tf.variable_scope('foo', reuse=True):
        print(tf.get_variable_scope().reuse)
        
        with tf.variable_scope('bar'):
            print(tf.get_variable_scope().reuse)
            
    print(tf.get_variable_scope().reuse)
    
False
True
True
False
  • tf.variable_scope creates a namespace; the names of all variables created inside it carry the namespace name as a prefix.
  • tf.get_variable also provides a way to manage variables across namespaces: a variable can be fetched directly through its full, namespace-qualified name (a short sketch that ties this to function-level parameter management follows the example below).
import tensorflow as tf

# Create a variable directly (outside any namespace)
v1 = tf.get_variable('v', [1])
print('v1 name={}'.format(v1.name))

with tf.variable_scope('foo'):
    # Create a variable inside the namespace
    v2 = tf.get_variable('v', [1])
    print('v2 name={}'.format(v2.name))
    
    # Create a variable inside a nested namespace
    with tf.variable_scope('bar'):
        v3 = tf.get_variable('v', [1])
        print('v3 name={}'.format(v3.name))
    
    v4 = tf.get_variable('v1', [1])
    print('v4 name={}'.format(v4.name))
    
with tf.variable_scope('', reuse=True):
    # Access variables by their full namespaced names
    v5 = tf.get_variable('foo/bar/v', [1])
    print('v5 name={}'.format(v5.name))
    print('v5 is v3?:{}'.format(v5 is v3))

    v6 = tf.get_variable('foo/v1', [1])
    print('v4 is v6?:{}'.format(v4 is v6))

v1 name=v:0
v2 name=foo/v:0
v3 name=foo/bar/v:0
v4 name=foo/v1:0
v5 name=foo/bar/v:0
v5 is v3?:True
v4 is v6?:True
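
As mentioned above, the point of this mechanism is that a function can manage its own parameters through tf.get_variable instead of receiving them as arguments. A minimal sketch of that idea (the names simple_layer, demo_layer and w are invented purely for illustration; the full pattern appears in mnist_inference.py in section 5.5):

import tensorflow as tf

def simple_layer(x, reuse=False):
    # On the first call the variable is created; later calls pass reuse=True and
    # fetch the same variable by name instead of receiving it as a function argument.
    with tf.variable_scope('demo_layer', reuse=reuse):
        w = tf.get_variable('w', [2, 2], initializer=tf.constant_initializer(1.0))
        return tf.matmul(x, w)

x = tf.placeholder(tf.float32, [None, 2])
y1 = simple_layer(x)               # creates demo_layer/w
y2 = simple_layer(x, reuse=True)   # reuses demo_layer/w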

5.4 TensorFlow Model Persistence

So that training results can be reused later, the trained neural network model needs to be persisted.

Saving a model

  • A model is saved to files with the save() method of the tf.train.Saver class.
  • TensorFlow models are usually stored in files with the .ckpt suffix.
  • Although only one file path is given, several files appear in that directory, because TensorFlow stores the structure of the computation graph and the values of the parameters on the graph separately.
    • model.ckpt.meta stores the structure of the TensorFlow computation graph.
    • model.ckpt stores the value of every variable in the program (in TensorFlow 0.12 and later these values are actually split into model.ckpt.index and model.ckpt.data-* files).
    • checkpoint lists all the model files in the directory.
import tensorflow as tf

v1 = tf.Variable(tf.constant(1.0, shape=[1]), name='v1')
v2 = tf.Variable(tf.constant(2.0, shape=[1]), name='v2')
result = v1 + v2

init_op = tf.global_variables_initializer()

saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init_op)
    print('result={}'.format(sess.run(result)))
    
    saver.save(sess, 'model/model.ckpt')

result=[3.]

Loading a model

Method 1: the loading program first defines all the operations on the TensorFlow computation graph and declares a tf.train.Saver, but it does not run the variable-initialization step; instead it loads the variable values directly from the saved model.

import tensorflow as tf

v1 = tf.Variable(tf.constant(1.0, shape=[1]), name='v1')
v2 = tf.Variable(tf.constant(2.0, shape=[1]), name='v2')
result = v1 + v2

saver = tf.train.Saver()

with tf.Session() as sess:
   
    saver.restore(sess, 'model/model.ckpt')
    print('result={}'.format(sess.run(result)))
    
INFO:tensorflow:Restoring parameters from model/model.ckpt
result=[3.]

Method 2: load the persisted graph directly.

Sometimes only part of the variables need to be saved or loaded. For example, with an already trained five-layer model, one may want to freeze the parameters of those five layers, build a sixth layer on top of them, and train only the new sixth layer.

  • Use tf.train.import_meta_graph to load the persisted graph; it is initialized from the saved .ckpt.meta file.
  • Use tf.get_default_graph().get_tensor_by_name() to fetch a tensor's value by giving its name.
import tensorflow as tf

# Load the already persisted graph; the graph's operations do not need to be defined again
saver = tf.train.import_meta_graph('model/model.ckpt.meta')

with tf.Session() as sess:
    saver.restore(sess, 'model/model.ckpt')
    print(tf.get_default_graph().get_tensor_by_name('add:0'))
    
INFO:tensorflow:Restoring parameters from model/model.ckpt
Tensor("add:0", shape=(1,), dtype=float32)
  • To save or load only part of the variables, a list of the variables concerned can be passed when the tf.train.Saver is declared (a short sketch follows the renaming example below).
  • tf.train.Saver also supports renaming variables while saving or loading; one of the main uses of this is to conveniently work with the moving averages of variables.
v1 = tf.Variable(tf.constant(1.0, shape=[1]), name='other-v1')
v2 = tf.Variable(tf.constant(2.0, shape=[1]), name='other-v2')

# The variable saved under the name 'v1' is loaded here into the Python variable v1, whose own name is 'other-v1'; the saved 'v2' is loaded into v2, named 'other-v2'.
saver = tf.train.Saver({'v1': v1, 'v2': v2})
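
For the list form mentioned above, a minimal sketch (variable names invented for illustration): only the variables in the list are handled by this Saver, which is how previously trained layers can be loaded while newly added layers are left to be initialized and trained separately.

import tensorflow as tf

v1 = tf.Variable(tf.constant(1.0, shape=[1]), name='v1')
v2 = tf.Variable(tf.constant(2.0, shape=[1]), name='v2')

# This Saver only handles v1; v2 is neither saved nor restored,
# so after restoring a model, v2 still has to be initialized on its own.
saver = tf.train.Saver([v1])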

Saving the moving-average model

The moving average of each variable is maintained through a shadow variable, so fetching a variable's moving average really means fetching the value of that shadow variable.
If, when loading the model, the shadow variables are mapped directly onto the variables themselves, the trained model can be used without calling any extra function to obtain the moving averages.

import tensorflow as tf

print('global variables:')
v = tf.Variable(0, dtype=tf.float32, name='v')
for variables in tf.global_variables():
    print(variables)
    
print('After define moving average, global variables:')
# After the moving-average class is declared and apply() is called, TensorFlow automatically creates a shadow variable
ema = tf.train.ExponentialMovingAverage(0.99)
maintain_averages_op = ema.apply(tf.global_variables())
for variables in tf.global_variables():
    print(variables)
    
saver = tf.train.Saver()
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    
    sess.run(tf.assign(v, 10))
    sess.run(maintain_averages_op)
    
    saver.save(sess, 'model/model2.ckpt')
    print(sess.run([v, ema.average(v)]))
    
global variables:
<tf.Variable 'v:0' shape=() dtype=float32_ref>
After define moving average, global variables:
<tf.Variable 'v:0' shape=() dtype=float32_ref>
<tf.Variable 'v/ExponentialMovingAverage:0' shape=() dtype=float32_ref>
[10.0, 0.099999905]

Loading the moving-average values

v = tf.Variable(0, dtype=tf.float32, name='v')

saver = tf.train.Saver({'v/ExponentialMovingAverage': v})
with tf.Session() as sess:
    saver.restore(sess, 'model/model2.ckpt')
    print(sess.run(v))
    
INFO:tensorflow:Restoring parameters from model/model2.ckpt
0.099999905
The tf.train.ExponentialMovingAverage class provides variables_to_restore() to generate the renaming dictionary that tf.train.Saver needs; the dictionary automatically covers all the variables defined above.
import tensorflow as tf

v = tf.Variable(0, dtype=tf.float32, name='v')
ema = tf.train.ExponentialMovingAverage(0.99)
print(ema.variables_to_restore())

saver = tf.train.Saver(ema.variables_to_restore())
with tf.Session() as sess:
    saver.restore(sess, 'model/model2.ckpt')
    print(sess.run(v))

{'v/ExponentialMovingAverage': <tf.Variable 'v:0' shape=() dtype=float32_ref>}
INFO:tensorflow:Restoring parameters from model/model2.ckpt
0.099999905

Saving only part of the model's information

  • At test or offline-inference time, it is enough to know how to compute the output layer from the input layer by forward propagation; auxiliary information such as variable-initialization and model-saving nodes is not needed.
  • graph_util.convert_variables_to_constants() saves the variables in the computation graph, together with their values, as constants, so the whole TensorFlow graph can be stored in a single file.
import tensorflow as tf
from tensorflow.python.framework import graph_util

v1 = tf.Variable(tf.constant(1.0, shape=[1]), name='v1')
v2 = tf.Variable(tf.constant(2.0, shape=[1]), name='v2')
result = v1 + v2

init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    # Export the GraphDef part of the current computation graph; this part alone is enough to perform the computation from the input layer to the output layer
    graph_def = tf.get_default_graph().as_graph_def()
    
    # When only some of the computations defined by the program are of interest, nodes unrelated to them need not be exported or saved.
    # The last argument, ['add'], gives the names of the nodes to keep; the add node is the addition of the two variables defined above.
    # Note: operation (node) names are given here, so there is no trailing :0; 'add:0' would denote the first output of the add node.
    output_graph_def = graph_util.convert_variables_to_constants(sess, graph_def, ['add'])
    
    # Write the exported model to a file
    with tf.gfile.GFile('model/combined_model.pb', 'wb') as f:
        f.write(output_graph_def.SerializeToString())
        
INFO:tensorflow:Froze 2 variables.
INFO:tensorflow:Converted 2 variables to const ops.

To obtain the value of a particular node in the graph, the saved GraphDef can be loaded and the defined computation evaluated directly:

import tensorflow as tf
from tensorflow.python.platform import gfile

with tf.Session() as sess:
    model_filename = 'model/combined_model.pb'
    # Read the saved model file and parse it into the corresponding GraphDef protocol buffer
    with gfile.FastGFile(model_filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        
    # Load the graph stored in graph_def into the current graph; return_elements=['add:0'] gives the name of the tensor to return.
    # When saving, the operation name ('add') is given; when loading, the tensor name ('add:0') is given.
    result = tf.import_graph_def(graph_def, return_elements=['add:0'])
    print(sess.run(result))
    
WARNING:tensorflow:From <ipython-input-4-4bbdcdd06e94>:7: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
[array([3.], dtype=float32)]

5.5 TensorFlow Best-Practice Sample Program

mnist_inference.py: defines the forward-propagation process and the parameters of the neural network, so that both training and testing code can call these functions directly without worrying about the concrete network structure.

import tensorflow as tf

# 1. Parameters describing the structure of the neural network.
INPUT_NODE = 784
OUTPUT_NODE = 10
LAYER1_NODE = 500

# 2. Create or fetch the weight variable through tf.get_variable, adding its regularization loss to the 'losses' collection.
def get_weight_variable(shape, regularizer):
    weights = tf.get_variable("weights", shape, initializer=tf.truncated_normal_initializer(stddev=0.1))
    if regularizer is not None: tf.add_to_collection('losses', regularizer(weights))
    return weights

# 3. Define the forward-propagation process of the neural network.
def inference(input_tensor, regularizer):
    with tf.variable_scope('layer1'):

        weights = get_weight_variable([INPUT_NODE, LAYER1_NODE], regularizer)
        biases = tf.get_variable("biases", [LAYER1_NODE], initializer=tf.constant_initializer(0.0))
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights) + biases)


    with tf.variable_scope('layer2'):
        weights = get_weight_variable([LAYER1_NODE, OUTPUT_NODE], regularizer)
        biases = tf.get_variable("biases", [OUTPUT_NODE], initializer=tf.constant_initializer(0.0))
        layer2 = tf.matmul(layer1, weights) + biases

    return layer2

mnist_train.py: defines the training process of the neural network.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# from mnist_inference import INPUT_NODE, OUTPUT_NODE, inference  # uncomment when mnist_train.py is a standalone file; this notebook reuses the definitions above
import os

# 1. Configure the training hyperparameters and the model-saving path.
BATCH_SIZE = 100 
LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99
REGULARIZATION_RATE = 0.0001
TRAINING_STEPS = 30000
MOVING_AVERAGE_DECAY = 0.99 
MODEL_SAVE_PATH = "MNIST_model/"
MODEL_NAME = "mnist_model"

# 2. Define the training process.
def train(mnist):
    # Placeholders for the input images and the ground-truth labels.
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')

    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    y = inference(x, regularizer)
    global_step = tf.Variable(0, trainable=False)
    
    # Define the loss function, learning rate, moving-average op, and training op.
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)
    loss = cross_entropy_mean + tf.add_n(tf.get_collection('losses'))
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,
        global_step,
        mnist.train.num_examples / BATCH_SIZE, LEARNING_RATE_DECAY,
        staircase=True)
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
    with tf.control_dependencies([train_step, variables_averages_op]):
        train_op = tf.no_op(name='train')
        
    # Create the TensorFlow persistence class (tf.train.Saver).
    saver = tf.train.Saver()
    with tf.Session() as sess:
        tf.global_variables_initializer().run()

        for i in range(TRAINING_STEPS):
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            _, loss_value, step = sess.run([train_op, loss, global_step], feed_dict={x: xs, y_: ys})
            if i % 1000 == 0:
                print("After %d training step(s), loss on training batch is %g." % (step, loss_value))
                saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME), global_step=global_step)
                
# 3. Main entry point.
def main(argv=None):
    mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
    train(mnist)

# if __name__ == '__main__':
main()
    
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
After 1 training step(s), loss on training batch is 2.97041.
After 1001 training step(s), loss on training batch is 0.264442.
After 2001 training step(s), loss on training batch is 0.171015.
After 3001 training step(s), loss on training batch is 0.139156.
After 4001 training step(s), loss on training batch is 0.146149.
After 5001 training step(s), loss on training batch is 0.108051.
After 6001 training step(s), loss on training batch is 0.108037.
After 7001 training step(s), loss on training batch is 0.0889147.
After 8001 training step(s), loss on training batch is 0.0814218.
After 9001 training step(s), loss on training batch is 0.0723983.
After 10001 training step(s), loss on training batch is 0.0696412.
After 11001 training step(s), loss on training batch is 0.0651941.
After 12001 training step(s), loss on training batch is 0.0596991.
After 13001 training step(s), loss on training batch is 0.054056.
After 14001 training step(s), loss on training batch is 0.0499064.
After 15001 training step(s), loss on training batch is 0.0549741.
After 16001 training step(s), loss on training batch is 0.0488281.
After 17001 training step(s), loss on training batch is 0.0428561.
After 18001 training step(s), loss on training batch is 0.0507946.
After 19001 training step(s), loss on training batch is 0.0392388.
After 20001 training step(s), loss on training batch is 0.0419657.
After 21001 training step(s), loss on training batch is 0.0396853.
After 22001 training step(s), loss on training batch is 0.0415259.
After 23001 training step(s), loss on training batch is 0.0405711.
After 24001 training step(s), loss on training batch is 0.0422433.
After 25001 training step(s), loss on training batch is 0.0336363.
After 26001 training step(s), loss on training batch is 0.0326248.
After 27001 training step(s), loss on training batch is 0.0380137.
After 28001 training step(s), loss on training batch is 0.0336193.
After 29001 training step(s), loss on training batch is 0.034102.

mnist_eval.py: defines the evaluation process.

The evaluation runs every 10 seconds; each run loads the most recently saved model and computes its accuracy on the MNIST validation set. When solving real problems, the evaluation program is usually not run this frequently.

import time
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# from mnist_inference import INPUT_NODE, OUTPUT_NODE, inference   # uncomment when mnist_eval.py is a standalone file;
# from mnist_train import MOVING_AVERAGE_DECAY, MODEL_SAVE_PATH    # this notebook reuses the definitions from the cells above

# 1. Load the most recently saved model every 10 seconds.
# Interval, in seconds, between evaluations.
EVAL_INTERVAL_SECS = 10

def evaluate(mnist):
    with tf.Graph().as_default() as g:
        x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
        y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')
        validate_feed = {x: mnist.validation.images, y_: mnist.validation.labels}

        y = inference(x, None)
        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

        variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY)
        variables_to_restore = variable_averages.variables_to_restore()
        saver = tf.train.Saver(variables_to_restore)

        while True:
            with tf.Session() as sess:
                ckpt = tf.train.get_checkpoint_state(MODEL_SAVE_PATH)
                if ckpt and ckpt.model_checkpoint_path:
                    saver.restore(sess, ckpt.model_checkpoint_path)
                    global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
                    accuracy_score = sess.run(accuracy, feed_dict=validate_feed)
                    print("After %s training step(s), validation accuracy = %g" % (global_step, accuracy_score))
                else:
                    print('No checkpoint file found')
                    return
            time.sleep(EVAL_INTERVAL_SECS)
            
# Main entry point
def main(argv=None):
    mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
    evaluate(mnist)

# if __name__ == '__main__':
main()
    
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
INFO:tensorflow:Restoring parameters from MNIST_model/mnist_model-29001
After 29001 training step(s), validation accuracy = 0.9866