1. Need to tell the optimizer to increment the global step
self.global_step = tf.Variable(0, dtype=tf.int32, trainable=False,
                               name='global_step')
self.optimizer = tf.train.GradientDescentOptimizer(self.lr).minimize(
    self.loss, global_step=self.global_step)
2. tf.train.Saver only saves variables, not the graph
3. tf.train.Coordinator and tf.train.QueueRunner
● QueueRunner
creates a number of threads that cooperate to enqueue tensors into the
same queue
● Coordinator
helps multiple threads stop together and reports exceptions to a
program that waits for them to stop
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    for i in range(10):  # generate 10 batches
        # data_batch and label_batch are tensors defined earlier by a
        # queue-based input pipeline (e.g. tf.train.batch)
        features, labels = sess.run([data_batch, label_batch])
        print(i)
        print(features)
    coord.request_stop()
    coord.join(threads)