The difference between static_rnn and dynamic_rnn

While reading through TensorFlow's API recently, I noticed that it offers two RNN interfaces: a static RNN and a dynamic RNN. Looking up the difference, I found this Stack Overflow answer:
https://stackoverflow.com/questions/39734146/whats-the-difference-between-tensorflow-dynamic-rnn-and-rnn

It explains the distinction quite clearly; the original text is:

   
   
  1. tf.nn.rnn creates an unrolled graph for a fixed RNN length. That means, if you call tf.nn.rnn with inputs having 200 time steps you are creating a static graph with 200 RNN steps. First, graph creation is slow. Second, you're unable to pass in longer sequences (> 200) than you've originally specified.
  2. tf.nn.dynamic_rnn solves this. It uses a tf.while_loop to dynamically construct the graph when it is executed. That means graph creation is faster and you can feed batches of variable size.


In plain terms:

  tf.nn.rnn unrolls the graph for a fixed sequence length: if your inputs have 200 time steps, you build a static graph with 200 RNN steps. Building that graph is slow, and you cannot feed sequences longer than the length you originally specified (> 200). tf.nn.dynamic_rnn solves this: the graph is built dynamically with a loop at execution time, so graph creation is faster and batches of variable size can be fed in.

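To make the "batches of variable size" point concrete, here is a minimal sketch (assuming the TensorFlow 1.x API, with made-up sizes n_inputs and n_hidden): a single dynamic_rnn graph whose time dimension is left open can be run on batches with different numbers of time steps, and the sequence_length argument tells the cell where each sequence actually ends.

  import tensorflow as tf
  import numpy as np

  n_inputs, n_hidden = 8, 16                                   # illustrative sizes only
  inputs = tf.placeholder(tf.float32, [None, None, n_inputs])  # [batch, time, features], time left open
  seq_len = tf.placeholder(tf.int32, [None])                   # actual length of each sequence

  cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)
  outputs, state = tf.nn.dynamic_rnn(cell, inputs, sequence_length=seq_len, dtype=tf.float32)

  with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      # the same graph handles a 20-step batch and a 200-step batch
      for T in (20, 200):
          batch = np.zeros([4, T, n_inputs], dtype=np.float32)
          lens = np.full([4], T, dtype=np.int32)
          sess.run(outputs, feed_dict={inputs: batch, seq_len: lens})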

That makes the difference fairly clear; now let's see how the two look in code.


Static RNN:

   
   
  def RNN(_X, weights, biases):
      # Option 1: static_rnn
      _X = tf.transpose(_X, [1, 0, 2])                  # swap n_steps and batch_size -> [n_steps, batch_size, n_inputs]
      _X = tf.reshape(_X, [-1, n_inputs])               # (n_steps * batch_size, n_inputs)
      _X = tf.matmul(_X, weights['in']) + biases['in']  # project inputs to the hidden size
      lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_unis, forget_bias=1.0)
      _init_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)
      _X = tf.split(_X, n_steps, 0)                     # Python list: n_steps * (batch_size, n_hidden)
      outputs, states = tf.nn.static_rnn(lstm_cell, _X, initial_state=_init_state)

      # Option 2: dynamic_rnn would look like this instead
      # lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_unis)
      # outputs, states = tf.nn.dynamic_rnn(lstm_cell, _X, dtype=tf.float32)
      # outputs = tf.transpose(outputs, [1, 0, 2])

      # outputs is a list of n_steps tensors; use the last time step for the prediction
      return tf.matmul(outputs[-1], weights['out']) + biases['out']
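As a side note, the transpose/reshape/split sequence above is only needed because static_rnn expects a Python list of n_steps tensors, each of shape [batch_size, features]. A minimal sketch of the same input preparation using tf.unstack (same TF 1.x API assumed, skipping the input projection) would be:

  # _X: [batch_size, n_steps, n_inputs]
  # unstacking along the time axis gives the list of n_steps tensors,
  # each [batch_size, n_inputs], that static_rnn expects
  inputs_list = tf.unstack(_X, num=n_steps, axis=1)
  outputs, states = tf.nn.static_rnn(lstm_cell, inputs_list, dtype=tf.float32)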

Dynamic RNN:
   
   
  def RNN(_X, weights, biases):
      # Option 1: static_rnn would look like this instead
      # _X = tf.transpose(_X, [1, 0, 2])                 # swap n_steps and batch_size
      # _X = tf.reshape(_X, [-1, n_inputs])              # (n_steps * batch_size, n_inputs)
      # _X = tf.matmul(_X, weights['in']) + biases['in']
      # lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_unis, forget_bias=1.0)
      # _init_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)
      # _X = tf.split(_X, n_steps, 0)                    # n_steps * (batch_size, n_hidden)
      # outputs, states = tf.nn.static_rnn(lstm_cell, _X, initial_state=_init_state)

      # Option 2: dynamic_rnn
      # _X stays a single tensor of shape [batch_size, n_steps, n_inputs]
      lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_unis)
      outputs, states = tf.nn.dynamic_rnn(lstm_cell, _X, dtype=tf.float32)
      outputs = tf.transpose(outputs, [1, 0, 2])         # [batch, time, hidden] -> [time, batch, hidden]

      # use the output of the last time step for the prediction
      return tf.matmul(outputs[-1], weights['out']) + biases['out']
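For completeness, here is a rough sketch of how either version of RNN above might be wired up; the hyperparameter values and the weight/bias initializers are hypothetical, not taken from the original code:

  # hypothetical sizes, for illustration only
  n_inputs, n_steps, n_hidden_unis, n_classes, batch_size = 28, 28, 128, 10, 128

  x = tf.placeholder(tf.float32, [batch_size, n_steps, n_inputs])
  weights = {
      'in':  tf.Variable(tf.random_normal([n_inputs, n_hidden_unis])),
      'out': tf.Variable(tf.random_normal([n_hidden_unis, n_classes])),
  }
  biases = {
      'in':  tf.Variable(tf.zeros([n_hidden_unis])),
      'out': tf.Variable(tf.zeros([n_classes])),
  }

  pred = RNN(x, weights, biases)   # works with either the static or the dynamic version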