PyTorch Study Notes: nn.RNN()

In PyTorch, the nn.RNN class is used to build sequence-based recurrent neural networks. Its constructor is:
nn.RNN(input_size, hidden_size, num_layers=1, nonlinearity='tanh', bias=True, batch_first=False, dropout=0, bidirectional=False)

  1. The structure of an RNN:
    [figure: an RNN cell with a recurrent loop]
    An RNN can be seen as many copies of the same neural network, each module passing a message on to its successor. Unrolling the loop gives the structure below; a minimal sketch of this recurrence follows the parameter list in section 2.
    [figure: the same RNN unrolled over time steps]
  2. The parameters are as follows:
  • input_size: The number of expected features in the input x, i.e. the dimensionality of the input features. In an RNN the input is usually a word vector, so input_size equals the dimensionality of a word vector.
  • hidden_size: The number of features in the hidden state h, i.e. the number of hidden-layer neurons. This is also the output dimensionality, since the RNN outputs the hidden state at each time step.
  • num_layers: Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1. In short, the number of layers in the network.
  • nonlinearity: The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'. This is the activation function.
  • bias: If False, then the layer does not use the bias weights b_ih and b_hh. Default: True, i.e. biases are used.
  • batch_first: If True, the input and output tensors are provided as (batch, seq, feature). Default: False, in which case the format is (seq, batch, feature), i.e. the sequence length comes first and the batch second.
  • dropout: If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout. Default: 0, i.e. no dropout; to enable it, set this to a number between 0 and 1.
  • bidirectional: If True, becomes a bidirectional RNN. Default: False.
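
As promised in section 1, here is a minimal sketch of what the unrolled loop computes: a single-layer RNN reapplies the same weights at every time step, each step passing its hidden state to the next. The update rule h_t = tanh(W_ih·x_t + b_ih + W_hh·h_{t-1} + b_hh) matches the formula in the PyTorch documentation for nonlinearity='tanh'; the variable names below are chosen for illustration only.

```python
import torch

input_size, hidden_size, seq_len = 4, 3, 5

# The "same network" reused at every step: one set of weights.
W_ih = torch.randn(hidden_size, input_size)   # input-to-hidden weights
W_hh = torch.randn(hidden_size, hidden_size)  # hidden-to-hidden weights
b_ih = torch.zeros(hidden_size)
b_hh = torch.zeros(hidden_size)

x = torch.randn(seq_len, input_size)  # one sequence, no batch dimension
h = torch.zeros(hidden_size)          # initial hidden state h_0

for t in range(seq_len):
    # h_t = tanh(W_ih @ x_t + b_ih + W_hh @ h_{t-1} + b_hh)
    h = torch.tanh(W_ih @ x[t] + b_ih + W_hh @ h + b_hh)

print(h.shape)  # torch.Size([3]) -- the final hidden state
```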

The two most important parameters of nn.RNN() are input_size and hidden_size; make sure you understand these two. The remaining parameters usually need no tuning, and the defaults work fine.
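
For example, a minimal usage sketch (the sizes here are arbitrary choices for illustration):

```python
import torch
from torch import nn

rnn = nn.RNN(input_size=100, hidden_size=20, num_layers=2)

# Default batch_first=False: input is (seq_len, batch, input_size)
x = torch.randn(10, 32, 100)  # 10 time steps, batch of 32, 100-dim word vectors
output, h_n = rnn(x)          # h_0 defaults to zeros when omitted

print(output.shape)  # torch.Size([10, 32, 20]) -- last layer's hidden state at every step
print(h_n.shape)     # torch.Size([2, 32, 20])  -- final hidden state of each layer
```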

  3. Input and output shapes of the RNN
  • Inputs: input, h_0
    - input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable-length sequence. See torch.nn.utils.rnn.pack_padded_sequence or torch.nn.utils.rnn.pack_sequence for details.
    - h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.

  • Outputs: output, h_n
    - output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the RNN, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated using output.view(seq_len, batch, num_directions, hidden_size), with forward and backward being direction 0 and 1 respectively. Similarly, the directions can be separated in the packed case.
    - h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len. Like output, the layers can be separated using h_n.view(num_layers, num_directions, batch, hidden_size).

  • Shape:
    - Input1: (L, N, H_in) tensor containing input features, where H_in = input_size, L = seq_len, and N = batch.
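
The following sketch (again with arbitrary sizes) checks these shapes, including the direction separation described above for a bidirectional RNN:

```python
import torch
from torch import nn

seq_len, batch, input_size, hidden_size, num_layers = 5, 3, 10, 20, 2
num_directions = 2  # bidirectional=True

rnn = nn.RNN(input_size, hidden_size, num_layers, bidirectional=True)
x = torch.randn(seq_len, batch, input_size)
h_0 = torch.zeros(num_layers * num_directions, batch, hidden_size)

output, h_n = rnn(x, h_0)
print(output.shape)  # torch.Size([5, 3, 40]) = (seq_len, batch, num_directions * hidden_size)
print(h_n.shape)     # torch.Size([4, 3, 20]) = (num_layers * num_directions, batch, hidden_size)

# Separate directions: forward is index 0, backward is index 1
directions = output.view(seq_len, batch, num_directions, hidden_size)
layers = h_n.view(num_layers, num_directions, batch, hidden_size)
print(directions.shape)  # torch.Size([5, 3, 2, 20])
print(layers.shape)      # torch.Size([2, 2, 3, 20])
```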
