What is the point of training a neural network? So that one day you can actually put it to use, right? But to use a trained network elsewhere, you first have to save its parameters (the variables). How? TensorFlow already provides a convenient API for saving and restoring trained parameters. If you want the full details, read the (rather dense) official API docs; below is a short summary of my own understanding.
Saving and restoring data is all handled by the following class:
class tf.train.Saver
When we need to save data, the following two lines are enough:
saver = tf.train.Saver()
save_path = saver.save(sess, model_path)
To explain: first create a Saver instance, then call its save method. save takes two arguments: your training session, and the file path to save to, e.g. "/tmp/superNet.ckpt" (the path may include a file name). save returns the path the model was actually saved to. save also accepts other arguments, which I won't cover in detail here.
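One of those extra arguments worth knowing is global_step, which tags each checkpoint with the training step so successive saves don't overwrite each other. A minimal sketch (the step value 1000 is just an illustration):

saver = tf.train.Saver()
# writes e.g. /tmp/superNet.ckpt-1000 and returns that path
save_path = saver.save(sess, '/tmp/superNet.ckpt', global_step=1000)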
And how do we read the data back? See below:
saver = tf.train.Saver()
saver.restore(sess, model_path)
Almost identical to saving! One difference worth noting: restore does not return anything; it simply loads the saved values into the session's variables.
Now the important part! A few small lessons I've learned about using tf.train.Saver():
1. When restore loads the data, it only reads values back; crucially, there must be previously declared variables to receive them. So before restore loads data into the session, you have to declare variables that match the saved data, otherwise the program raises an error (see the sketch after this list).
2. Variables whose values come from restore do not need to be initialized.
3. That's all I can think of for now; I'll add more as they come up.
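To make notes 1 and 2 concrete, here is a minimal sketch. It assumes a checkpoint was saved earlier at '/tmp/superNet.ckpt' by a graph that declared two variables of exactly these shapes, in the same order (so the default variable names match):

import tensorflow as tf

# declare variables matching the saved ones; restore() fills in their values
W = tf.Variable(tf.zeros([784, 30]))
b = tf.Variable(tf.zeros([30]))

saver = tf.train.Saver()
with tf.Session() as sess:
    # note: no initialize_all_variables() needed for restored variables
    saver.restore(sess, '/tmp/superNet.ckpt')
    print sess.run(b)  # values come from the checkpoint, not from tf.zeros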
To get a more concrete feel for saving and restoring, I wrote two small experiments. Here is the first: it trains a network on the MNIST dataset and saves the parameters.
import tensorflow as tf
import sys
# load MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('data', one_hot=True)
# some hyperparameters
activation = tf.nn.relu
batch_size = 100
iteration = 20000
hidden1_units = 30
# note: this is the save path!
model_path = sys.path[0] + '/simple_mnist.ckpt'
X = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
W_fc1 = tf.Variable(tf.truncated_normal([784, hidden1_units], stddev=0.2))
b_fc1 = tf.Variable(tf.zeros([hidden1_units]))
W_fc2 = tf.Variable(tf.truncated_normal([hidden1_units, 10], stddev=0.2))
b_fc2 = tf.Variable(tf.zeros([10]))
def inference(img):
    fc1 = activation(tf.nn.bias_add(tf.matmul(img, W_fc1), b_fc1))
    logits = tf.nn.bias_add(tf.matmul(fc1, W_fc2), b_fc2)
    return logits
def loss(logits, labels):
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)
    loss = tf.reduce_mean(cross_entropy)
    return loss
def evaluation(logits, labels):
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    return accuracy

logits = inference(X)
loss = loss(logits, y_)
train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)
accuracy = evaluation(logits, y_)
# first instantiate a Saver
saver = tf.train.Saver()
init = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init)
    for i in xrange(iteration):
        batch = mnist.train.next_batch(batch_size)
        if i % 1000 == 0 and i:
            train_accuracy = sess.run(accuracy, feed_dict={X: batch[0], y_: batch[1]})
            print "step %d, train accuracy %g" % (i, train_accuracy)
        sess.run(train_op, feed_dict={X: batch[0], y_: batch[1]})
    print '[+] Test accuracy is %f' % sess.run(accuracy, feed_dict={X: mnist.test.images, y_: mnist.test.labels})
    # save the trained variables
    save_path = saver.save(sess, model_path)
    print "[+] Model saved in file: %s" % save_path
Next comes reading the data back and running the test!
import tensorflow as tf
import sys
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('data', one_hot=True)
activation = tf.nn.relu
hidden1_units = 30
model_path = sys.path[0] + '/simple_mnist.ckpt'
X = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
W_fc1 = tf.Variable(tf.truncated_normal([784, hidden1_units], stddev=0.2))
b_fc1 = tf.Variable(tf.zeros([hidden1_units]))
W_fc2 = tf.Variable(tf.truncated_normal([hidden1_units, 10], stddev=0.2))
b_fc2 = tf.Variable(tf.zeros([10]))
def inference(img):
    fc1 = activation(tf.nn.bias_add(tf.matmul(img, W_fc1), b_fc1))
    logits = tf.nn.bias_add(tf.matmul(fc1, W_fc2), b_fc2)
    return logits
def evaluation(logits, labels):
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    return accuracy

logits = inference(X)
accuracy = evaluation(logits, y_)
saver = tf.train.Saver()

with tf.Session() as sess:
    # restore the previously trained data
    saver.restore(sess, model_path)
    print "[+] Model restored from %s" % model_path
    print '[+] Test accuracy is %f' % sess.run(accuracy, feed_dict={X: mnist.test.images, y_: mnist.test.labels})
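Run the training script first, then the restore script from the same directory so that model_path points at the same location. Note that, depending on your TensorFlow version, saver.save may write a single simple_mnist.ckpt file or a .index/.data file pair plus a small checkpoint bookkeeping file; saver.restore takes the same model_path either way.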
Original article: https://www.jianshu.com/p/83fa3aa2d0e9