Recurrent Neural Networks in PyTorch


RNN

Forward pass:

  • $h_t = g(U h_{t-1} + W x_t + b_h)$
  • $y_t = g(W_y h_t + b_y)$

PyTorch implementation

import torch
import torch.nn as nn


class RNNCell(nn.Module):

    def __init__(self, input_size, hidden_dim):
        super(RNNCell, self).__init__()
        self.input_size = input_size
        self.hidden_dim = hidden_dim
        # linear1 plays the role of U (hidden-to-hidden), linear2 the role of W
        # (input-to-hidden); their bias terms together act as b_h.
        self.linear1 = nn.Linear(hidden_dim, hidden_dim)
        self.linear2 = nn.Linear(input_size, hidden_dim)

    def forward(self, x, h_pre):
        """
        :param x:       (batch, input_size)
        :param h_pre:   (batch, hidden_dim)
        :return: h_next (batch, hidden_dim)
        """
        # h_t = g(U h_{t-1} + W x_t + b_h), with g = tanh
        h_next = torch.tanh(self.linear1(h_pre) + self.linear2(x))
        return h_next


class RNN(nn.Module):

    def __init__(self, input_size, hidden_dim):
        super(RNN, self).__init__()
        self.input_size = input_size
        self.hidden_dim = hidden_dim
        self.rnn_cell = RNNCell(input_size, hidden_dim)

    def forward(self, x):
        """
        :param x: (seq_len, batch,input_size)
        :return:
           output (seq_len, batch, hidden_dim)
           h_n    (1, batch, hidden_dim)
        """
        seq_len, batch, _ = x.shape
        h = torch.zeros(batch, self.hidden_dim, device=x.device, dtype=x.dtype)
        output = torch.zeros(seq_len, batch, self.hidden_dim, device=x.device, dtype=x.dtype)
        # Unroll over time: feed each timestep together with the previous hidden state.
        for i in range(seq_len):
            inp = x[i, :, :]
            h = self.rnn_cell(inp, h)
            output[i, :, :] = h

        h_n = output[-1:, :, :]
        return output, h_n
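
A quick shape check for the classes above; the sizes below are arbitrary and chosen only for illustration. The layout matches the default (seq_len, batch, input_size) convention of torch.nn.RNN.

x = torch.randn(5, 3, 10)                # seq_len=5, batch=3, input_size=10
rnn = RNN(input_size=10, hidden_dim=20)
output, h_n = rnn(x)
print(output.shape)  # torch.Size([5, 3, 20])
print(h_n.shape)     # torch.Size([1, 3, 20])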

LSTM

Forward pass:

  • Input gate: $i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$
  • Forget gate: $f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$
  • Output gate: $o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$
  • Candidate cell state: $\hat{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)$
  • Cell state: $c_t = f_t \odot c_{t-1} + i_t \odot \hat{c}_t$
  • Hidden state: $h_t = o_t \odot \tanh(c_t)$

PyTorch implementation

import torch
import torch.nn as nn
import copy


class Gate(nn.Module):
    def __init__(self, input_size, hidden_dim):
        super(Gate, self).__init__()
        self.linear1 = nn.Linear(hidden_dim, hidden_dim)
        self.linear2 = nn.Linear(input_size, hidden_dim)

    def forward(self, x, h_pre, active_func):
        # active_func(W x_t + U h_{t-1} + b): linear1 applies U, linear2 applies W.
        h_next = active_func(self.linear1(h_pre) + self.linear2(x))
        return h_next


def clones(module, N):
    "Produce N identical layers."
    return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])


class LSTMCell(nn.Module):

    def __init__(self, input_size, hidden_dim):
        super(LSTMCell, self).__init__()
        self.input_size = input_size
        self.hidden_dim = hidden_dim
        # Four independently parameterized gates: forget, input, candidate, output.
        self.gate = clones(Gate(input_size, hidden_dim), 4)

    def forward(self, x, h_pre, c_pre):
        """
        :param x: (batch, input_size)
        :param h_pre: (batch, hidden_dim)
        :param c_pre: (batch, hidden_dim)
        :return: h_next(batch, hidden_dim), c_next(batch, hidden_dim)
        """
        f_t = self.gate[0](x, h_pre, torch.sigmoid)  # forget gate
        i_t = self.gate[1](x, h_pre, torch.sigmoid)  # input gate
        g_t = self.gate[2](x, h_pre, torch.tanh)     # candidate cell state \hat{c}_t
        o_t = self.gate[3](x, h_pre, torch.sigmoid)  # output gate
        c_next = f_t * c_pre + i_t * g_t             # c_t = f_t ⊙ c_{t-1} + i_t ⊙ \hat{c}_t
        h_next = o_t * torch.tanh(c_next)            # h_t = o_t ⊙ tanh(c_t)

        return h_next, c_next


class LSTM(nn.Module):

    def __init__(self, input_size, hidden_dim):
        super(LSTM, self).__init__()
        self.input_size = input_size
        self.hidden_dim = hidden_dim
        self.lstm_cell = LSTMCell(input_size, hidden_dim)

    def forward(self, x):
        """
        :param x: (seq_len, batch,input_size)
        :return:
           output (seq_len, batch, hidden_dim)
           h_n    (1, batch, hidden_dim)
           c_n    (1, batch, hidden_dim)
        """
        seq_len, batch, _ = x.shape
        h = torch.zeros(batch, self.hidden_dim, device=x.device, dtype=x.dtype)
        c = torch.zeros(batch, self.hidden_dim, device=x.device, dtype=x.dtype)
        output = torch.zeros(seq_len, batch, self.hidden_dim, device=x.device, dtype=x.dtype)
        for i in range(seq_len):
            inp = x[i, :, :]
            h, c = self.lstm_cell(inp, h, c)
            output[i, :, :] = h

        h_n = output[-1:, :, :]
        return output, (h_n, c.unsqueeze(0))
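
The same kind of shape check for the LSTM class (again with arbitrary, illustrative sizes):

x = torch.randn(5, 3, 10)
lstm = LSTM(input_size=10, hidden_dim=20)
output, (h_n, c_n) = lstm(x)
print(output.shape)  # torch.Size([5, 3, 20])
print(h_n.shape)     # torch.Size([1, 3, 20])
print(c_n.shape)     # torch.Size([1, 3, 20])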

GRU

Forward pass:

Reset and update gates:

  • $r_t = \sigma(W_{xr} x_t + W_{hr} h_{t-1} + b_r)$
  • $z_t = \sigma(W_{xz} x_t + W_{hz} h_{t-1} + b_z)$

Candidate hidden state:

  • $\hat{h}_t = \tanh(W_{xh} x_t + r_t \odot W_{hh} h_{t-1} + b_h)$

Hidden state:

  • $h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \hat{h}_t$

Output:

  • $y_t = \mathrm{softmax}(W_{hy} h_t + b_y)$
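
PyTorch implementation

A minimal sketch in the same spirit as the RNNCell and LSTMCell classes above, assuming the same (seq_len, batch, input_size) layout. The GRUCell/GRU class names and the per-gate Linear layers are illustrative choices that follow the equations directly; the softmax output y_t is left to a downstream head, so this is not a drop-in replacement for torch.nn.GRU.

import torch
import torch.nn as nn


class GRUCell(nn.Module):

    def __init__(self, input_size, hidden_dim):
        super(GRUCell, self).__init__()
        self.input_size = input_size
        self.hidden_dim = hidden_dim
        # One Linear pair per term: *_x applies W_x* to the input, *_h applies W_h* to h_{t-1}.
        self.reset_x = nn.Linear(input_size, hidden_dim)
        self.reset_h = nn.Linear(hidden_dim, hidden_dim)
        self.update_x = nn.Linear(input_size, hidden_dim)
        self.update_h = nn.Linear(hidden_dim, hidden_dim)
        self.cand_x = nn.Linear(input_size, hidden_dim)
        self.cand_h = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x, h_pre):
        """
        :param x:       (batch, input_size)
        :param h_pre:   (batch, hidden_dim)
        :return: h_next (batch, hidden_dim)
        """
        r_t = torch.sigmoid(self.reset_x(x) + self.reset_h(h_pre))     # reset gate
        z_t = torch.sigmoid(self.update_x(x) + self.update_h(h_pre))   # update gate
        h_hat = torch.tanh(self.cand_x(x) + r_t * self.cand_h(h_pre))  # candidate hidden state
        h_next = z_t * h_pre + (1 - z_t) * h_hat  # h_t = z_t ⊙ h_{t-1} + (1 - z_t) ⊙ \hat{h}_t
        return h_next


class GRU(nn.Module):

    def __init__(self, input_size, hidden_dim):
        super(GRU, self).__init__()
        self.input_size = input_size
        self.hidden_dim = hidden_dim
        self.gru_cell = GRUCell(input_size, hidden_dim)

    def forward(self, x):
        """
        :param x: (seq_len, batch, input_size)
        :return:
           output (seq_len, batch, hidden_dim)
           h_n    (1, batch, hidden_dim)
        """
        seq_len, batch, _ = x.shape
        h = torch.zeros(batch, self.hidden_dim, device=x.device, dtype=x.dtype)
        output = torch.zeros(seq_len, batch, self.hidden_dim, device=x.device, dtype=x.dtype)
        for i in range(seq_len):
            h = self.gru_cell(x[i, :, :], h)
            output[i, :, :] = h

        h_n = output[-1:, :, :]
        return output, h_n

As with the classes above, output, h_n = GRU(10, 20)(torch.randn(5, 3, 10)) yields tensors of shape (5, 3, 20) and (1, 3, 20).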