# TensorFlow Tricks and Details

## Different Ops in the Forward and Backward Passes

How can I define only the gradient for a TensorFlow subgraph?

Suppose you want a group of ops that behaves as f(x) in the forward pass but as g(x) in the backward pass. You can implement it as:

```python
t = g(x)
y = t + tf.stop_gradient(f(x) - t)
```

Numerically y equals f(x), but `tf.stop_gradient` blocks gradients through `f(x) - t`, so backpropagation sees only g(x).

A concrete instance is straight-through rounding with the Keras backend:

```python
from keras import backend as K

def round_through(x):
    # Forward:  f(x) = round(x)
    # Backward: g(x) = x (identity)
    rounded = K.round(x)
    return x + K.stop_gradient(rounded - x)
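To see why the trick works without running TensorFlow, here is a minimal NumPy sketch (an illustration, not the library API) that models `stop_gradient` as "hold this term constant during differentiation":

```python
import numpy as np

def f(x):
    return np.round(x)   # desired forward behavior

def g(x):
    return x             # desired backward behavior (identity)

def forward(x):
    # y = g(x) + stop_gradient(f(x) - g(x)): numerically equals f(x)
    t = g(x)
    return t + (f(x) - t)

def grad(x, eps=1e-6):
    # Autodiff treats the stop_gradient term as a constant, so only
    # g(x) contributes to the derivative; approximate that numerically.
    const = f(x) - g(x)
    return ((g(x + eps) + const) - (g(x - eps) + const)) / (2 * eps)

print(forward(0.7))   # 1.0 -> the forward value is round(0.7)
print(grad(0.7))      # ~1.0 -> the gradient is that of the identity
```

The forward value is f(x) while the gradient is g'(x), exactly the straight-through behavior.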

## Stopping Gradients for Part of a Tensor

The Stack Overflow question "How to stop gradient for some entry of a tensor in tensorflow" gives a good approach: split the tensor with a 0/1 mask, stop the gradient on one half, and add the halves back together:

```python
# mask selects the entries whose gradients should flow;
# mask_h is its complement (entries whose gradients are stopped).
res_matrix = tf.stop_gradient(mask_h * E) + mask * E

def entry_stop_gradients(target, mask):
    mask_h = tf.abs(mask - 1)  # complement of a 0/1 mask
    return tf.stop_gradient(mask_h * target) + mask * target
```
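A small NumPy sketch (again modeling `stop_gradient` as a constant to autodiff) shows that the forward value is unchanged while gradients flow only through the masked entries:

```python
import numpy as np

E = np.array([1.0, 2.0, 3.0])
mask = np.array([1.0, 0.0, 1.0])   # 1 = let gradient flow, 0 = stop it
mask_h = 1.0 - mask                # complement

# Forward: the two masked halves recombine into E itself
res = mask_h * E + mask * E

# Backward, for a loss L = sum(res): the stop_gradient half is a
# constant to autodiff, so dL/dE reduces to the mask itself
grad = mask

print(res)    # [1. 2. 3.] -> forward value unchanged
print(grad)   # [1. 0. 1.] -> gradient only where mask == 1
```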

## Tensor vs. Variable

```python
import tensorflow as tf

a = tf.Variable([1])
with tf.device("/cpu:0"):
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print('a:', a.eval())
        print('type of a:', a)
        a = a + 1            # rebinds `a` to a new Tensor; the Variable is unchanged
        print('a:', a.eval())
        print('type of a:', a)
        b = a + 1
        print('b:', b.eval())
        print('type of b:', b)
```

```
a: [1]
type of a: <tf.Variable 'Variable:0' shape=(1,) dtype=int32_ref>
a: [2]
type of a: Tensor("add:0", shape=(1,), dtype=int32, device=/device:CPU:0)
b: [3]
type of b: Tensor("add_1:0", shape=(1,), dtype=int32, device=/device:CPU:0)
```
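The type change is ordinary Python name rebinding: `a = a + 1` does not mutate the Variable, it builds a new Tensor op in the graph and points the name `a` at it. A pure-Python analogy (an illustration, not TensorFlow code):

```python
a = [1]
original = a

a = a + [1]   # builds a NEW list and rebinds the name `a`,
              # just as `a = a + 1` builds a new Tensor node

print(a is original)   # False: `a` no longer names the original object
print(original)        # [1]: the original list is untouched
print(a)               # [1, 1]
```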
