How to Make a TensorFlow Model Run 36.8% Faster


x and y are the standard inputs and outputs. The network's prediction is produced by a linear MLP on top of a BLSTM, followed by a sigmoid with a cross-entropy loss. In this program, all data is delivered through feed_dict, as shown below:

```python
# coding:utf-8
import time

import tensorflow as tf
from tensorflow.contrib.rnn import LSTMCell

time_length = 128
batch_size = 400
feature_size = 512
hidden_size = 128

# Randomly generate [time_length, batch_size, feature_size] data
# with mean 0 and standard deviation 1
x = tf.random_normal([time_length, batch_size, feature_size],
                     mean=0, stddev=1)
y = tf.reduce_mean(tf.reduce_sum(x, axis=0), axis=1, keep_dims=True)
y = tf.cast(tf.greater(y, 0), tf.int32)

inputs = tf.placeholder(tf.float32,
                        shape=[time_length, batch_size, feature_size])
labels = tf.placeholder(tf.int32, shape=[batch_size, 1])

sequence_length = tf.Variable([time_length] * batch_size, dtype=tf.int32)

cell_fw = LSTMCell(num_units=hidden_size)
cell_bw = LSTMCell(num_units=hidden_size)
outputs, state = tf.nn.bidirectional_dynamic_rnn(
    cell_fw=cell_fw,
    cell_bw=cell_bw,
    inputs=inputs,
    sequence_length=sequence_length,
    dtype=tf.float32,
    time_major=True)
outputs_fw, outputs_bw = outputs
outputs = tf.concat([outputs_fw, outputs_bw], axis=2)
outputs = tf.reduce_mean(outputs, axis=0)
outputs = tf.contrib.layers.fully_connected(
    inputs=outputs,
    num_outputs=1,
    activation_fn=None)

losses_op = tf.nn.sigmoid_cross_entropy_with_logits(
    labels=tf.cast(labels, tf.float32), logits=outputs)
losses_op = tf.reduce_mean(losses_op)

y_pred = tf.cast(tf.greater(outputs, 0), tf.int32)
# Compare predictions with the fed-in labels (not the graph tensor y,
# which would draw a fresh random batch on every evaluation)
accuracy = tf.reduce_mean(tf.cast(tf.equal(y_pred, labels), tf.float32))

train_op = tf.train.AdamOptimizer(0.001).minimize(losses_op, name="train_op")

t1 = time.time()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(50):
        data_x, data_y = sess.run([x, y])
        _, losses, acc = sess.run([train_op, losses_op, accuracy],
                                  feed_dict={inputs: data_x, labels: data_y})
        print('epoch:%d, loss: %f, accuracy: %f' % (i, losses, acc))

print('time:', (time.time() - t1))
```

RandomShuffleQueue dequeues elements in random order
FIFOQueue dequeues elements first-in, first-out
PriorityQueue dequeues elements by priority

enqueue() pushes one element into the queue
dequeue() pops one element from the queue
enqueue_many() pushes multiple elements into the queue
dequeue_many() pops multiple elements from the queue
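The reason queues help is that enqueueing runs on background threads, so data preparation overlaps with training instead of blocking it. As a plain-Python analogy (standard-library `queue`, not TensorFlow; the `produce` function and the squaring "work" are made up for illustration), the prefetch pattern looks like this:

```python
import queue
import threading

def produce(q, n_items):
    # Background thread: prepare data and push it into the queue,
    # analogous to a QueueRunner driving an enqueue op
    for i in range(n_items):
        q.put(i * i)          # stand-in for expensive data preparation
    q.put(None)               # sentinel: no more data

q = queue.Queue(maxsize=4)    # bounded capacity, like tf.FIFOQueue(capacity=4)
t = threading.Thread(target=produce, args=(q, 5))
t.start()

results = []
while True:
    item = q.get()            # like q.dequeue(): blocks until data is ready
    if item is None:
        break
    results.append(item)
t.join()
print(results)                # [0, 1, 4, 9, 16]
```

While the consumer processes one item, the producer is already filling the queue with the next ones; that overlap is where the speedup comes from.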

The Coordinator class lets multiple threads be stopped together. It has three main methods:

tf.train.Coordinator.should_stop checks whether the thread should stop
tf.train.Coordinator.request_stop asks the threads to stop
tf.train.Coordinator.join waits for the threads to finish
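The stop protocol behind these three methods can be sketched with a plain `threading.Event` standing in for the Coordinator (the `worker` function and `stop_event` name are illustrative, not TensorFlow API):

```python
import threading
import time

stop_event = threading.Event()   # plays the role of the Coordinator
counter = {'ticks': 0}

def worker():
    # Loop until a stop is requested (cf. coord.should_stop())
    while not stop_event.is_set():
        counter['ticks'] += 1
        time.sleep(0.01)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

time.sleep(0.1)                  # let the workers run for a while
stop_event.set()                 # cf. coord.request_stop()
for t in threads:                # cf. coord.join(threads)
    t.join()
print('all threads stopped after', counter['ticks'], 'ticks')
```

The key property is that the workers poll the shared flag cooperatively, so one `request_stop` cleanly shuts down every thread.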

```python
import time

import tensorflow as tf
from tensorflow.contrib.rnn import LSTMCell

time_length = 128
batch_size = 400
feature_size = 512
hidden_size = 128
num_threads = 4  # number of enqueue threads

## prepare data
x = tf.random_normal([time_length, batch_size, feature_size], mean=0, stddev=1)

q = tf.FIFOQueue(capacity=4, dtypes=tf.float32)
enqueue_op = q.enqueue(x)
qr = tf.train.QueueRunner(q, [enqueue_op] * num_threads)
inputs = q.dequeue()
inputs.set_shape(x.get_shape())
y = tf.reduce_mean(tf.reduce_sum(inputs, axis=0), axis=1, keep_dims=True)
labels = tf.cast(tf.greater(y, 0), tf.int32)

## build model
sequence_length = tf.Variable([time_length] * batch_size, dtype=tf.int32)
cell_fw = LSTMCell(num_units=hidden_size)
cell_bw = LSTMCell(num_units=hidden_size)
outputs, state = tf.nn.bidirectional_dynamic_rnn(
    cell_fw=cell_fw,
    cell_bw=cell_bw,
    inputs=inputs,
    sequence_length=sequence_length,
    dtype=tf.float32,
    time_major=True)

outputs_fw, outputs_bw = outputs
outputs = tf.concat([outputs_fw, outputs_bw], axis=2)
outputs = tf.reduce_mean(outputs, axis=0)
outputs = tf.contrib.layers.fully_connected(
    inputs=outputs,
    num_outputs=1,
    activation_fn=None)

losses_op = tf.nn.sigmoid_cross_entropy_with_logits(
    labels=tf.cast(labels, tf.float32), logits=outputs)
losses_op = tf.reduce_mean(losses_op)

y_pred = tf.cast(tf.greater(outputs, 0), tf.int32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(y_pred, labels), tf.float32))
train_op = tf.train.AdamOptimizer(0.001).minimize(losses_op, name="train_op")

t1 = time.time()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    # Start the enqueue threads; without this the dequeue would block forever
    enqueue_threads = qr.create_threads(sess, coord=coord, start=True)
    for i in range(50):
        _, losses, acc = sess.run([train_op, losses_op, accuracy])
        print('epoch:%d, loss: %f, accuracy: %f' % (i, losses, acc))

    coord.request_stop()
    coord.join(enqueue_threads)
print("Time taken: %f" % (time.time() - t1))
```
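The headline 36.8% is simply the relative wall-clock reduction between the two runs. With hypothetical timings (the numbers below are illustrative stand-ins, not measurements from the source), the figure is computed as:

```python
# Hypothetical wall-clock times for the two variants (illustrative only)
t_feed_dict = 120.0   # seconds, feed_dict version
t_queue = 75.8        # seconds, queue version

# Relative reduction in wall-clock time
speedup = (t_feed_dict - t_queue) / t_feed_dict
print('speedup: %.1f%%' % (speedup * 100))   # speedup: 36.8%
```

Your own numbers will vary with hardware, queue capacity, and thread count, so it is worth timing both versions on your machine.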

