TensorFlow dynamic_rnn_decoder()

def dynamic_rnn_decoder(cell, decoder_fn, inputs=None, sequence_length=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None, name=None):

Dynamic RNN decoder for a sequence-to-sequence model specified by RNNCell and decoder function.

The dynamic_rnn_decoder is similar to tf.python.ops.rnn.dynamic_rnn in that the decoder makes no assumptions about the sequence length or batch size of the input.

The dynamic_rnn_decoder has two modes: training and inference. The user must supply a decoder_fn appropriate to the mode in use.

Both modes require a cell and a decoder_fn, where cell performs computation at every time step using raw_rnn, and decoder_fn allows modeling of early stopping, output, state, and the next input and context.

During training, the user is expected to provide inputs. At each time step, a slice of the supplied input is fed to decoder_fn, which modifies and returns the input for the next time step. During training, inputs and sequence_length generally appear together: sequence_length is needed at training time, i.e., when inputs is not None, for dynamic unrolling.

During inference, when inputs is None, sequence_length is not needed. At inference time, inputs should be None; the next input is instead obtained through decoder_fn.
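The two-mode contract above can be sketched in plain Python. This is illustrative only: real implementations (e.g. the decoder functions shipped alongside dynamic_rnn_decoder) operate on tensors, and `start_token`, `next_input_fn`, and `max_time` here are hypothetical stand-ins.

```python
# Plain-Python sketch of the decoder_fn contract. Each decoder_fn takes
# (time, cell_state, cell_input, cell_output, context_state) and returns
# (done, next_state, next_input, emit_output, next_context_state).

def decoder_fn_train(time, cell_state, cell_input, cell_output, context_state):
    """Training mode: pass the supplied input slice through unchanged.

    done is None, so dynamic_rnn_decoder falls back to sequence_length
    to decide when to stop.
    """
    return (None, cell_state, cell_input, cell_output, context_state)

def make_decoder_fn_inference(start_token, next_input_fn, max_time):
    """Inference mode: cell_input is None; the next input is derived from
    the previous cell_output (e.g. argmax + embedding lookup in a real
    model -- here via the caller-supplied next_input_fn)."""
    def decoder_fn(time, cell_state, cell_input, cell_output, context_state):
        if cell_output is None:      # first call: start with the start token
            next_input = start_token
        else:
            next_input = next_input_fn(cell_output)
        done = time >= max_time      # early-stopping decision
        return (done, cell_state, next_input, cell_output, context_state)
    return decoder_fn
```

Note how only the inference decoder_fn decides termination itself; the training decoder_fn delegates that to sequence_length by returning done = None.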


Args:
cell: (required for both training and inference) An instance of RNNCell.
decoder_fn: (required for both training and inference) A function that takes time, cell state, cell input,
cell output and context state. It returns an early stopping vector,
cell state, next input, cell output and context state.
Examples of decoder_fn can be found in the decoder_fn.py folder.
inputs: (provided during training; preferably None at inference) The inputs for decoding (embedded format).
If time_major == False (default), this must be a Tensor of shape:
[batch_size, max_time, ...].
If time_major == True, this must be a Tensor of shape:
[max_time, batch_size, ...].
The input to cell at each time step will be a Tensor with dimensions
[batch_size, ...].
sequence_length: (used together with inputs) (optional) An int32/int64 vector sized [batch_size]. If inputs is not None and sequence_length is None, it is inferred from inputs as the maximal possible sequence length.
parallel_iterations: (Default: 32). The number of iterations to run in parallel. Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer.
swap_memory: Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
time_major: The shape format of the inputs and outputs Tensors.
If true, these Tensors must be shaped [max_time, batch_size, depth].
If false, these Tensors must be shaped [batch_size, max_time, depth].
Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
scope: VariableScope for the raw_rnn; defaults to None.
name: NameScope for the decoder; defaults to "dynamic_rnn_decoder".
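The batch-major vs. time-major layouts described for time_major can be illustrated with plain nested lists standing in for tensors; a zip-based transpose swaps the first two axes, which is what array_ops.transpose(inputs, perm=[1, 0, 2]) does in the source listing.

```python
# Batch-major: outer axis is the batch, inner axis is time.
batch_major = [[["a0"], ["a1"], ["a2"]],   # batch item a: 3 time steps
               [["b0"], ["b1"], ["b2"]]]   # batch item b: 3 time steps

# Time-major: outer axis is time, so element 0 holds every batch item's
# features at time step 0.
time_major = [list(step) for step in zip(*batch_major)]
```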


Returns:
A tuple (outputs, final_state, final_context_state) where:
outputs: the RNN output Tensor.
If time_major == False (default), this will be a Tensor shaped:
[batch_size, max_time, cell.output_size].
If time_major == True, this will be a Tensor shaped:
[max_time, batch_size, cell.output_size].
final_state: The final state, shaped
[batch_size, cell.state_size].
final_context_state: The context state returned by the final call
to decoder_fn. This is useful if the context state maintains internal data which is required after the graph is run.
For example, one way to diversify the inference output is to use
a stochastic decoder_fn, in which case one would want to store the
decoded outputs, not just the RNN outputs. This can be done by
maintaining a TensorArray in context_state and storing the decoded
output of each iteration therein.
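The context_state idea from the paragraph above can be sketched in plain Python, with a tuple standing in for the TensorArray (illustrative only; a real TF decoder_fn would call TensorArray.write(time, decoded) instead, and `decoding_step` here is a hypothetical name).

```python
# A decoder_fn-style step that stores its own decoded output at every
# iteration in context_state, so the decoded sequence -- not just the raw
# RNN outputs -- survives in final_context_state after the run.

def decoding_step(time, cell_state, cell_input, cell_output, context_state):
    if context_state is None:        # first call: create the store
        context_state = ()
    if cell_output is not None:
        decoded = max(cell_output)   # stand-in for a (stochastic) decoding
        context_state = context_state + (decoded,)
    return (None, cell_state, cell_input, cell_output, context_state)

# Simulate three decoder steps; ctx accumulates the decoded values.
ctx = None
for t, out in enumerate([None, [0.1, 0.9], [0.7, 0.3]]):
    _, _, _, _, ctx = decoding_step(t, None, None, out, ctx)
```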

Raises:
ValueError: if inputs is not None and has less than three dimensions.


from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import rnn
from tensorflow.python.ops import tensor_array_ops


def dynamic_rnn_decoder(cell, decoder_fn, inputs=None, sequence_length=None,
                        parallel_iterations=None, swap_memory=False,
                        time_major=False, scope=None, name=None):
  with ops.name_scope(name, "dynamic_rnn_decoder",
                      [cell, decoder_fn, inputs, sequence_length,
                       parallel_iterations, swap_memory, time_major, scope]):
    if inputs is not None:
      # Convert to tensor
      inputs = ops.convert_to_tensor(inputs)
       # Test input dimensions
      if inputs.get_shape().ndims is not None and (
          inputs.get_shape().ndims < 2):
        raise ValueError("Inputs must have at least two dimensions")
      # Setup of RNN (dimensions, sizes, length, initial state, dtype)
      if not time_major:
        # [batch, seq, features] -> [seq, batch, features]
        inputs = array_ops.transpose(inputs, perm=[1, 0, 2])
      # Get the dtype of inputs
      dtype = inputs.dtype
      # Get data input information
      input_depth = int(inputs.get_shape()[2])
      batch_depth = inputs.get_shape()[1].value
      max_time = inputs.get_shape()[0].value
      if max_time is None:
        max_time = array_ops.shape(inputs)[0]
      # Setup decoder inputs as TensorArray
      inputs_ta = tensor_array_ops.TensorArray(dtype, size=max_time)
      inputs_ta = inputs_ta.unstack(inputs)

    def loop_fn(time, cell_output, cell_state, loop_state):
      if cell_state is None:  # first call, before while loop (in raw_rnn)
        if cell_output is not None:
          raise ValueError("Expected cell_output to be None when cell_state "
                           "is None, but saw: %s" % cell_output)
        if loop_state is not None:
          raise ValueError("Expected loop_state to be None when cell_state "
                           "is None, but saw: %s" % loop_state)
        context_state = None
      else:  # subsequent calls, inside while loop, after cell execution
        if isinstance(loop_state, tuple):
          (done, context_state) = loop_state
        else:
          done = loop_state
          context_state = None

      # call decoder function
      if inputs is not None:  # training
        # get next_cell_input
        if cell_state is None:
          next_cell_input = inputs_ta.read(0)
        else:
          if batch_depth is not None:
            batch_size = batch_depth
          else:
            batch_size = array_ops.shape(done)[0]
          next_cell_input = control_flow_ops.cond(
              math_ops.equal(time, max_time),
              lambda: array_ops.zeros([batch_size, input_depth], dtype=dtype),
              lambda: inputs_ta.read(time))
        (next_done, next_cell_state, next_cell_input, emit_output,
         next_context_state) = decoder_fn(time, cell_state, next_cell_input,
                                          cell_output, context_state)
      else:  # inference
        # next_cell_input is obtained through decoder_fn
        (next_done, next_cell_state, next_cell_input, emit_output,
         next_context_state) = decoder_fn(time, cell_state, None, cell_output,
                                          context_state)

      # check if we are done
      if next_done is None:  # training
        next_done = time >= sequence_length

      # build next_loop_state
      if next_context_state is None:
        next_loop_state = next_done
      else:
        next_loop_state = (next_done, next_context_state)

      return (next_done, next_cell_input, next_cell_state,
              emit_output, next_loop_state)

    # Run raw_rnn function
    outputs_ta, final_state, final_loop_state = rnn.raw_rnn(
        cell, loop_fn, parallel_iterations=parallel_iterations,
        swap_memory=swap_memory, scope=scope)
    outputs = outputs_ta.stack()

    # Get final context_state, if generated by user
    if isinstance(final_loop_state, tuple):
      final_context_state = final_loop_state[1]
    else:
      final_context_state = None

    if not time_major:
      # [seq, batch, features] -> [batch, seq, features]
      outputs = array_ops.transpose(outputs, perm=[1, 0, 2])
    return outputs, final_state, final_context_state
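To make the control flow of loop_fn concrete, here is a miniature plain-Python re-enactment of how rnn.raw_rnn drives it: one call before the while loop (cell_state is None), then alternating cell computation and loop_fn calls until done. The "cell" is just addition and tensors are ints; `toy_cell` and `run_raw_rnn` are hypothetical names for this sketch, not TF APIs.

```python
def toy_cell(inp, state):
    # The "RNN cell": the new state is the old state plus the input.
    new_state = state + inp
    return new_state, new_state  # (output, next_state)

def run_raw_rnn(loop_fn):
    # Mimics rnn.raw_rnn: call loop_fn once with cell_state=None to get the
    # initial input and state, then loop: run the cell, then call loop_fn
    # with the cell's output, until loop_fn reports done.
    time = 0
    done, next_input, state, _, loop_state = loop_fn(time, None, None, None)
    emitted = []
    while not done:
        output, state = toy_cell(next_input, state)
        time += 1
        done, next_input, state, emit, loop_state = loop_fn(
            time, output, state, loop_state)
        emitted.append(emit)
    return emitted, state

def loop_fn(time, cell_output, cell_state, loop_state):
    if cell_state is None:  # first call, before the while loop
        # (done, next_input, initial_state, emit_output, loop_state)
        return (False, 1, 0, None, None)
    done = time >= 3        # plays the role of `time >= sequence_length`
    return (done, 1, cell_state, cell_output, loop_state)

emitted, final_state = run_raw_rnn(loop_fn)
```

With a constant input of 1 and three steps, the cell state grows 1, 2, 3, mirroring how raw_rnn's while loop feeds each loop_fn result back into the next cell call.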

To understand the dynamic_rnn_decoder() function, you need to understand three pieces:
dynamic_rnn_decoder()
decoder_fn()
rnn.raw_rnn()
