TensorFlow gets slower as it runs: each iteration of the training for-loop takes longer than the last

I'm training a standard, simple multilayer perceptron ANN with three hidden layers in TensorFlow. I added a text progress bar so I could watch the progress of iterating through the epochs. What I'm finding is that the processing time per iteration increases after the first few epochs. Here's an example screenshot showing the increase with each iteration:

[Screenshot: tqdm progress bar, with the reported time per iteration climbing as training progresses]

In this case, the first few iterations took roughly 1.05s/it and by 100% it was taking 4.01s/it.

The relevant code is listed here:

# ------------------------- Build the TensorFlow Graph -------------------------

with tf.Graph().as_default():

    # (a bunch of statements for specifying the graph)

    # --------------------------------- Training ----------------------------------

    sess = tf.InteractiveSession()
    sess.run(tf.initialize_all_variables())

    print "Start Training"
    pbar = tqdm(total=training_epochs)

    for epoch in range(training_epochs):
        avg_cost = 0.0
        batch_iter = 0

        while batch_iter < batch_size:
            train_features = []
            train_labels = []
            batch_segments = random.sample(train_segments, 20)
            for segment in batch_segments:
                train_features.append(segment[0])
                train_labels.append(segment[1])
            sess.run(optimizer, feed_dict={x: train_features, y_: train_labels})
            line_out = "," + str(batch_iter) + "\n"
            train_outfile.write(line_out)
            line_out = ",," + str(sess.run(tf.reduce_mean(weights['h1']), feed_dict={x: train_features, y_: train_labels}))
            line_out += "," + str(sess.run(tf.reduce_mean(weights['h2']), feed_dict={x: train_features, y_: train_labels}))
            line_out += "," + str(sess.run(tf.reduce_mean(weights['h3']), feed_dict={x: train_features, y_: train_labels})) + "\n"
            train_outfile.write(line_out)
            avg_cost += sess.run(cost, feed_dict={x: train_features, y_: train_labels}) / batch_size
            batch_iter += 1

        pbar.update(1)  # Increment the progress bar by one

train_outfile.close()
print "Completed training"

While searching Stack Overflow, I found "Processing time gets longer and longer after each iteration", where someone else also had each iteration taking longer than the last. However, I believe my case may be different, since they were clearly adding ops to the graph with statements like:

    distorted_image = tf.image.random_flip_left_right(image_tensor)

Although I'm new to TensorFlow, I don't believe I'm making the same mistake, because the only things inside my loop are sess.run() calls.
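
For what it's worth, here's a minimal check of that assumption, counting the nodes in the default graph before and after an epoch (graph.get_operations() is a standard tf.Graph method; a growing count would mean something in the loop is adding ops):

    graph = tf.get_default_graph()
    ops_before = len(graph.get_operations())
    # ... run one epoch of the training loop here ...
    ops_after = len(graph.get_operations())
    # A positive number means the loop is adding nodes to the graph.
    print ops_after - ops_before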

Any help is much appreciated.

Solution

The three places where you have:

    sess.run(tf.reduce_mean(weights['h1']), ...)

each append a new tf.reduce_mean() node to the graph on every iteration of the while loop, which adds overhead. Try creating those nodes once, outside the while loop:

    with tf.Graph().as_default():
        ...
        m1 = tf.reduce_mean(weights['h1'])

        while batch_iter < batch_size:
            ...
            line_out = ",," + str(sess.run(m1, feed_dict={x: train_features, y_: train_labels}))
