Long Short-Term Memory (LSTM) Networks - Reading Notes

1. Long Short-Term Memory (LSTM)

1.1 Overview of the Principle

  • The LSTM's gates (a one-step numeric sketch of these equations follows below):
    (1) Forget gate $F_t$: pushes values toward 0
    $$F_t=\sigma(X_tW_{xf}+H_{t-1}W_{hf}+b_f)\tag{1}$$
    (2) Input gate $I_t$: decides whether to ignore the input data
    $$I_t=\sigma(X_tW_{xi}+H_{t-1}W_{hi}+b_i)\tag{2}$$
    (3) Output gate $O_t$: decides whether to use the hidden state
    $$O_t=\sigma(X_tW_{xo}+H_{t-1}W_{ho}+b_o)\tag{3}$$
    (4) Candidate memory cell $\widetilde{C}_t$: holds the candidate memory
    $$\widetilde{C}_t=\tanh(X_tW_{xc}+H_{t-1}W_{hc}+b_c)\tag{4}$$
    (5) Memory cell $C_t$:
    $$C_t=F_t \odot C_{t-1}+I_t \odot \widetilde{C}_t\tag{5}$$
    (6) Hidden state $H_t$:
    $$H_t=O_t\odot \tanh(C_t)\tag{6}$$
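
To make equations (1)-(6) concrete, here is a minimal one-step sketch in PyTorch; it mirrors the from-scratch implementation in section 1.3.1 below. The sizes (batch_size=2, num_inputs=4, num_hiddens=3) are illustrative assumptions, not from the text.

import torch

# One LSTM time step following equations (1)-(6); sizes are assumed.
batch_size, num_inputs, num_hiddens = 2, 4, 3
X = torch.randn(batch_size, num_inputs)   # X_t
H = torch.zeros(batch_size, num_hiddens)  # H_{t-1}
C = torch.zeros(batch_size, num_hiddens)  # C_{t-1}

def three():
	# One (W_x, W_h, b) triple per gate / candidate cell
	return (torch.randn(num_inputs, num_hiddens) * 0.01,
			torch.randn(num_hiddens, num_hiddens) * 0.01,
			torch.zeros(num_hiddens))

W_xf, W_hf, b_f = three()
W_xi, W_hi, b_i = three()
W_xo, W_ho, b_o = three()
W_xc, W_hc, b_c = three()

F = torch.sigmoid(X @ W_xf + H @ W_hf + b_f)     # (1) forget gate
I = torch.sigmoid(X @ W_xi + H @ W_hi + b_i)     # (2) input gate
O = torch.sigmoid(X @ W_xo + H @ W_ho + b_o)     # (3) output gate
C_tilda = torch.tanh(X @ W_xc + H @ W_hc + b_c)  # (4) candidate memory cell
C = F * C + I * C_tilda                          # (5) memory cell
H = O * torch.tanh(C)                            # (6) hidden state
print(H.shape, C.shape)  # torch.Size([2, 3]) torch.Size([2, 3])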

1.2 Structure Diagram

[Figure: LSTM cell structure diagram]

1.3 Code

1.3.1 LSTM from-scratch implementation

  • Code
# -*- coding: utf-8 -*-
# @Project: zc
# @Author: zc
# @File name: LSTM_test
# @Create time: 2022/1/30 22:48

import torch
from torch import nn
from d2l import torch as d2l
import matplotlib.pyplot as plt

batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)


def get_lstm_params(vocab_size, num_hiddens, device):
	num_inputs = num_outputs = vocab_size

	def normal(shape):
		return torch.randn(size=shape, device=device) * 0.01

	def three():
		return (normal((num_inputs, num_hiddens)),
				normal((num_hiddens, num_hiddens)),
				torch.zeros(num_hiddens, device=device))

	W_xi, W_hi, b_i = three()  # Input gate parameters
	W_xf, W_hf, b_f = three()  # Forget gate parameters
	W_xo, W_ho, b_o = three()  # Output gate parameters
	W_xc, W_hc, b_c = three()  # Candidate memory cell parameters

	# Output layer parameters
	W_hq = normal((num_hiddens, num_outputs))
	b_q = torch.zeros(num_outputs, device=device)

	params = [W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc,
			  b_c, W_hq, b_q]
	for param in params:
		param.requires_grad_(True)
	return params


def init_lstm_state(batch_size, num_hiddens, device):
	return (torch.zeros((batch_size, num_hiddens), device=device),
			torch.zeros((batch_size, num_hiddens), device=device))


def lstm(inputs, state, params):
	[W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c,
	 W_hq, b_q] = params
	(H, C) = state
	outputs = []
	for X in inputs:
		I = torch.sigmoid((X @ W_xi) + (H @ W_hi) + b_i)
		F = torch.sigmoid((X @ W_xf) + (H @ W_hf) + b_f)
		O = torch.sigmoid((X @ W_xo) + (H @ W_ho) + b_o)
		C_tilda = torch.tanh((X @ W_xc) + (H @ W_hc) + b_c)
		C = F * C + I * C_tilda
		H = O * torch.tanh(C)
		Y = (H @ W_hq) + b_q
		outputs.append(Y)
	return torch.cat(outputs, dim=0), (H, C)


vocab_size, num_hiddens, device = len(vocab), 256, d2l.try_gpu()
num_epochs, lr = 500, 1
model = d2l.RNNModelScratch(len(vocab), num_hiddens, device, get_lstm_params,
							init_lstm_state, lstm)
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
plt.show()
  • Result
perplexity 1.2, 29691.7 tokens/sec on cuda:0
time traveller after the thing tine wionerswer for sowey bucklex
traveller afcenthe epenthe of ho gersfor ton this it ar wis

[Figure: training perplexity curve]

1.3.2 Concise LSTM implementation

  • Code
# -*- coding: utf-8 -*-
# @Project: zc
# @Author: zc
# @File name: lstm-concise
# @Create time: 2022/2/6 22:06
import torch
from torch import nn
from d2l import torch as d2l
import matplotlib.pyplot as plt

batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)

vocab_size, num_hiddens, device = len(vocab), 256, d2l.try_gpu()
num_epochs, lr = 500, 1

num_inputs = vocab_size
lstm_layer = nn.LSTM(num_inputs, num_hiddens)
model = d2l.RNNModel(lstm_layer, len(vocab))
model = model.to(device)
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
plt.show()
  • Result
perplexity 1.1, 359347.9 tokens/sec on cuda:0
time travelleryou can show black is white by argument said filby
travelleryou can show black is white by argument said filby

[Figure: training perplexity curve]

1.4 Summary

(1) An LSTM has three types of gates: the input gate, the forget gate, and the output gate.
(2) The hidden-layer output of an LSTM consists of the "hidden state" and the "memory cell". Only the hidden state is passed to the output layer; the memory cell is purely internal (see the sketch below).
(3) LSTMs can alleviate vanishing and exploding gradients.
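
A quick way to see point (2) is the return values of PyTorch's nn.LSTM: the per-step hidden states come back as output, while the memory cell appears only in the final state tuple. A minimal sketch, with all sizes assumed for illustration:

import torch
from torch import nn

# num_steps=5, batch_size=2, num_inputs=4, num_hiddens=3 are assumed.
lstm = nn.LSTM(input_size=4, hidden_size=3)
X = torch.randn(5, 2, 4)  # (num_steps, batch_size, num_inputs)
output, (h_n, c_n) = lstm(X)
print(output.shape)  # torch.Size([5, 2, 3]) -- hidden states H_t for every step
print(h_n.shape)     # torch.Size([1, 2, 3]) -- final hidden state
print(c_n.shape)     # torch.Size([1, 2, 3]) -- final memory cell, internal only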

2. Deep Recurrent Neural Networks

2.1 RNN Model Structure

[Figure: RNN model structure]

  • Network for $h_t$:
    $$h_t=\phi(W_{hh}h_{t-1}+W_{hx}x_t+b_h)\tag{1}$$
    Note 1: $h_t$ is determined by $h_{t-1}$ and the input $x_t$; $O_t$ is determined by $h_t$; $W_{hh}$ stores all of the temporal information.
    $$O_t=\phi(W_{ho}h_t+b_o)\tag{2}$$
    Note 2: the loss is computed between $O_t$ and $X_t$: $O_t$ plays the role of the prediction $\hat{Y}$ and $X_t$ that of the label $Y$, which yields the loss value. (A one-step sketch of these equations follows below.)
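
As a sanity check on equations (1) and (2), a single vanilla-RNN step can be written out directly. All sizes below are illustrative assumptions; the weights multiply on the right because the batch is stored as row vectors.

import torch

# One RNN time step following equations (1)-(2); sizes are assumed.
batch_size, num_inputs, num_hiddens, num_outputs = 2, 4, 3, 4
x_t = torch.randn(batch_size, num_inputs)
h_prev = torch.zeros(batch_size, num_hiddens)       # h_{t-1}
W_hx = torch.randn(num_inputs, num_hiddens) * 0.01
W_hh = torch.randn(num_hiddens, num_hiddens) * 0.01
b_h = torch.zeros(num_hiddens)
W_ho = torch.randn(num_hiddens, num_outputs) * 0.01
b_o = torch.zeros(num_outputs)

h_t = torch.tanh(x_t @ W_hx + h_prev @ W_hh + b_h)  # (1) hidden state update
O_t = torch.tanh(h_t @ W_ho + b_o)                  # (2) output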

2.2 Deep RNN Structure Diagram

[Figure: deep RNN structure]
$$H_t^{(l)}=\phi(H_t^{(l-1)}W_{xh}^{(l)}+H_{t-1}^{(l)}W_{hh}^{(l)}+b_h^{(l)})\tag{3}$$
$$O_t=H_t^{(L)}W_{hq}+b_q\tag{4}$$

2.3 Difference

From the above, the difference is that a deep RNN replaces the single hidden layer $H_t$ with a stack of hidden layers $[H_t^{(1)}, H_t^{(2)}, \ldots, H_t^{(L)}]$, as the sketch below illustrates.
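
This stacking is visible in PyTorch's state shapes: with num_layers=2 the final state holds one $H$ (and one $C$) per layer, while output contains only the top layer's hidden states. A minimal sketch, with all sizes assumed for illustration:

import torch
from torch import nn

# num_steps=5, batch_size=2, num_inputs=4, num_hiddens=3, num_layers=2 assumed.
lstm = nn.LSTM(input_size=4, hidden_size=3, num_layers=2)
X = torch.randn(5, 2, 4)  # (num_steps, batch_size, num_inputs)
output, (h_n, c_n) = lstm(X)
print(output.shape)  # torch.Size([5, 2, 3]) -- top-layer H_t^(L) at every step
print(h_n.shape)     # torch.Size([2, 2, 3]) -- one final H per layer
print(c_n.shape)     # torch.Size([2, 2, 3]) -- one final C per layer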

2.4 Code

  • Code
import torch
from torch import nn
from d2l import torch as d2l
import matplotlib.pyplot as plt


batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)
vocab_size, num_hiddens, num_layers = len(vocab), 256, 2
num_inputs = vocab_size
device = d2l.try_gpu()
lstm_layer = nn.LSTM(num_inputs, num_hiddens, num_layers)
model = d2l.RNNModel(lstm_layer, len(vocab))
model = model.to(device)
num_epochs, lr = 500, 2
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
plt.show()
  • Result
perplexity 1.0, 195541.7 tokens/sec on cuda:0
time travelleryou can show black is white by argument said filby
travelleryou can show black is white by argument said filby

3. Bidirectional Recurrent Neural Networks

3.1 Structure Diagram

  • Note: a bidirectional RNN simply adds a backward hidden layer alongside the original forward hidden layer; the propagation path is as follows:
    [Figure: bidirectional RNN structure]

3.2 Formulas

  • Hidden layer, forward direction:
    $$\overrightarrow{H}_t=\phi(X_tW_{xh}^{(f)}+\overrightarrow{H}_{t-1}W_{hh}^{(f)}+b_h^{(f)})\tag{1}$$
  • Hidden layer, backward direction:
    $$\overleftarrow{H}_t=\phi(X_tW_{xh}^{(b)}+\overleftarrow{H}_{t+1}W_{hh}^{(b)}+b_h^{(b)})\tag{2}$$
  • Output layer (the two directions are concatenated, as the sketch below shows):
    $$H_t=[\overrightarrow{H}_t,\overleftarrow{H}_t]\tag{3}$$
    $$O_t=H_tW_{hq}+b_q\tag{4}$$
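
Equation (3)'s concatenation is also visible in PyTorch: with bidirectional=True, the output feature dimension doubles to 2 * num_hiddens. A minimal sketch, with all sizes assumed for illustration:

import torch
from torch import nn

# num_steps=5, batch_size=2, num_inputs=4, num_hiddens=3 assumed.
birnn = nn.LSTM(input_size=4, hidden_size=3, bidirectional=True)
X = torch.randn(5, 2, 4)  # (num_steps, batch_size, num_inputs)
output, (h_n, c_n) = birnn(X)
print(output.shape)  # torch.Size([5, 2, 6]) -- [H_forward, H_backward] concatenated
print(h_n.shape)     # torch.Size([2, 2, 3]) -- one final state per direction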

3.3 Code

  • Code
import torch
from torch import nn
from d2l import torch as d2l
# Load the data
batch_size, num_steps, device = 32, 35, d2l.try_gpu()
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)
# Define the bidirectional LSTM model by setting bidirectional=True
vocab_size, num_hiddens, num_layers = len(vocab), 256, 2
num_inputs = vocab_size
lstm_layer = nn.LSTM(num_inputs, num_hiddens, num_layers, bidirectional=True)
model = d2l.RNNModel(lstm_layer, len(vocab))
model = model.to(device)
# Train the model
num_epochs, lr = 500, 1
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
  • Result
perplexity 1.1, 86529.1 tokens/sec on cuda:0
time travellerererererererererererererererererererererererererer
travellerererererererererererererererererererererererererer

Note: the repetitive output above is expected. A bidirectional RNN conditions on future tokens during training, which are unavailable when predicting the next token, so despite the low training perplexity it is a poor fit for language modeling.

[Figure: training perplexity curve]
