Always waiting, always will be waiting: RNN Series, Part 1

tf.nn.rnn_cell.BasicRNNCell

        The BasicRNNCell class inherits from LayerRNNCell and is the most basic RNN cell.

Constructor arguments:

__init__(
    num_units,
    activation=None,
    reuse=None,
    name=None,
    dtype=None,
    **kwargs
)

num_units: the number of units in the RNN cell, i.e. the size of the hidden state
activation: the nonlinearity to use; defaults to tanh
reuse: boolean, whether variables in an existing scope may be reused
name: string, the name of the layer; layers with the same name share weights
dtype: the layer's dtype; if None, it matches the dtype of the first input
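As a concrete illustration, the single step that a BasicRNNCell performs can be sketched in plain NumPy: output = new_state = tanh([x, h] · W + b). The weight layout and names below are assumptions made for illustration only; TensorFlow manages its own variables internally.

```python
import numpy as np

# Assumed shapes: x is [batch_size, input_size], h is [batch_size, num_units].
batch_size, input_size, num_units = 4, 3, 5
rng = np.random.default_rng(0)

x = rng.standard_normal((batch_size, input_size))       # inputs
h = np.zeros((batch_size, num_units))                   # previous hidden state
W = rng.standard_normal((input_size + num_units, num_units))
b = np.zeros(num_units)

# BasicRNNCell step: output and new state are the same tanh activation.
new_h = np.tanh(np.concatenate([x, h], axis=1) @ W + b)
print(new_h.shape)  # (4, 5), i.e. [batch_size, num_units]
```

Note that for BasicRNNCell the output and the new state are the same tensor, which is why state_size and output_size both equal num_units.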

Properties:

activity_regularizer: optional regularizer function for the output of this layer
input: retrieves the input tensor(s) of the layer
losses: losses associated with this layer
output: retrieves the output tensor(s) of the layer
state_size: the size of the hidden state
output_size: the size of the output
weights: returns the list of all layer variables/weights

Methods:

__call__(
    inputs,
    state,
    scope=None,
    *args,
    **kwargs
)

         Runs one step of the RNN cell on the given inputs, starting from the given state.
         inputs: a 2-D tensor of shape [batch_size, input_size].
         state: if self.state_size is an integer, a 2-D tensor of shape [batch_size, self.state_size]; otherwise, if self.state_size is a tuple of integers, a tuple of tensors with shapes [batch_size, s] for s in self.state_size.
         Returns: a pair (output, new_state), where output is a 2-D tensor of shape [batch_size, self.output_size].
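Since __call__ runs a single step, processing a sequence means calling the cell once per time step and threading the state through. A hedged NumPy sketch of that unrolling (the step function below stands in for cell(inputs, state) and is an assumed simplification):

```python
import numpy as np

def rnn_step(x, h, W, b):
    """One BasicRNNCell-style step: output and new state are the same."""
    return np.tanh(np.concatenate([x, h], axis=1) @ W + b)

batch_size, input_size, num_units, seq_len = 2, 3, 4, 5
rng = np.random.default_rng(1)
W = rng.standard_normal((input_size + num_units, num_units)) * 0.1
b = np.zeros(num_units)

h = np.zeros((batch_size, num_units))   # the zero_state equivalent
outputs = []
for t in range(seq_len):
    x_t = rng.standard_normal((batch_size, input_size))
    h = rnn_step(x_t, h, W, b)          # thread the state forward
    outputs.append(h)

print(len(outputs), outputs[0].shape)   # 5 (2, 4)
```

In TensorFlow this loop is what tf.nn.dynamic_rnn performs for you over a batched sequence tensor.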

get_initial_state(
    inputs=None,
    batch_size=None,
    dtype=None
)

        Returns an initial state for this cell: zeros of the appropriate shape, inferred from inputs, or from batch_size and dtype when inputs is None.

get_input_at(node_index)

        Given a node index, retrieves the input tensor at that node.

zero_state(
    batch_size,
    dtype
)

        Returns zero-filled state tensor(s). If state_size is an integer or a TensorShape, the result is a tensor of shape [batch_size, state_size]; if state_size is a nested list or tuple, the result is a nested structure of tensors, one of shape [batch_size, s] for each s in state_size.
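The shape logic of zero_state can be sketched as follows (an assumed simplification in NumPy; the real method also handles TensorShape and arbitrarily nested structures):

```python
import numpy as np

def zero_state(batch_size, state_size):
    """Sketch: zeros of [batch_size, state_size], or one such array per element."""
    if isinstance(state_size, int):
        return np.zeros((batch_size, state_size))
    # nested list/tuple: one zero tensor of shape [batch_size, s] per s
    return tuple(np.zeros((batch_size, s)) for s in state_size)

print(zero_state(4, 6).shape)       # (4, 6)
c, h = zero_state(4, (6, 6))        # e.g. an LSTM's (c, h) state pair
print(c.shape, h.shape)             # (4, 6) (4, 6)
```

The tuple case is exactly what an LSTMCell with state_is_tuple=True uses, since its state is a (c, h) pair.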

tf.nn.rnn_cell.BasicLSTMCell

         A basic LSTM recurrent neural network cell. It does not implement advanced LSTM variants such as cell clipping, a projection layer, or peephole connections; it exists only as a basic baseline structure. To use those variants, use the LSTMCell class instead. BasicLSTMCell is therefore deprecated; use tf.nn.rnn_cell.LSTMCell in its place.

tf.nn.rnn_cell.LSTMCell

         Long short-term memory unit (LSTM) recurrent network cell.

Constructor arguments:

__init__(
    num_units,
    use_peepholes=False,
    cell_clip=None,
    initializer=None,
    num_proj=None,
    proj_clip=None,
    num_unit_shards=None,
    num_proj_shards=None,
    forget_bias=1.0,
    state_is_tuple=True,
    activation=None,
    reuse=None,
    name=None,
    dtype=None,
    **kwargs
)

num_units: the number of units in the LSTM cell, i.e. the size of the hidden state
use_peepholes: bool, set True to enable diagonal/peephole connections.
cell_clip: (optional) A float value, if provided the cell state is clipped by this value prior to the cell output activation.
initializer: (optional) The initializer to use for the weight and projection matrices.
num_proj: (optional) int, The output dimensionality for the projection matrices. If None, no projection is performed.
proj_clip: (optional) A float value. If num_proj > 0 and proj_clip is provided, then the projected values are clipped elementwise to within [-proj_clip, proj_clip].
num_unit_shards: Deprecated, will be removed by Jan. 2017. Use a variable_scope partitioner instead.
num_proj_shards: Deprecated, will be removed by Jan. 2017. Use a variable_scope partitioner instead.
forget_bias: Biases of the forget gate are initialized by default to 1 in order to reduce the scale of forgetting at the beginning of the training. Must set it manually to 0.0 when restoring from CudnnLSTM trained checkpoints.
state_is_tuple: If True, accepted and returned states are 2-tuples of the c_state and m_state. If False, they are concatenated along the column axis. This latter behavior will soon be deprecated.
activation: Activation function of the inner states. Default: tanh. It could also be string that is within Keras activation function names.
reuse: (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised.
name: String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases.
dtype: Default dtype of the layer (default of None means use the type of the first input). Required when build is called before call.
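Putting the arguments above in context, one LSTM step (without peepholes or projection) can be sketched in NumPy. This is a hedged illustration, not TensorFlow's implementation; note how forget_bias is added to the forget gate before the sigmoid, and how state_is_tuple=True corresponds to returning the (c, h) pair:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, c, h, W, b, forget_bias=1.0):
    """One LSTM step. Gates: i (input), j (new input), f (forget), o (output)."""
    z = np.concatenate([x, h], axis=1) @ W + b
    i, j, f, o = np.split(z, 4, axis=1)
    new_c = c * sigmoid(f + forget_bias) + sigmoid(i) * np.tanh(j)  # cell state
    new_h = np.tanh(new_c) * sigmoid(o)                             # hidden state
    return new_c, new_h    # state_is_tuple=True: the state is the (c, h) pair

batch_size, input_size, num_units = 2, 3, 4
rng = np.random.default_rng(0)
W = rng.standard_normal((input_size + num_units, 4 * num_units)) * 0.1
b = np.zeros(4 * num_units)
c = h = np.zeros((batch_size, num_units))    # zero initial state
x = rng.standard_normal((batch_size, input_size))
c, h = lstm_step(x, c, h, W, b)
print(c.shape, h.shape)  # (2, 4) (2, 4)
```

With cell_clip set, new_c would additionally be clipped to [-cell_clip, cell_clip]; with num_proj set, new_h would be multiplied by a projection matrix down to num_proj dimensions.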

The properties and methods are nearly identical to those of tf.nn.rnn_cell.BasicRNNCell.
