1. Content Source
2. Implementation from Scratch
2.1 Parameter Initialization
As shown in the figure below, compared with a plain RNN (see RNN Pytorch底层+简洁实现【初学者】_云龙弓手的博客-CSDN博客), the GRU controls how the hidden state is updated through a reset gate (R) and an update gate (Z), learning which information to discard and which to keep.
(Figure: GRU reset and update gates; source: 门控循环单元(GRU)_哔哩哔哩_bilibili)
The computations of R and Z are similar to that of H in an RNN: each takes the previous hidden state H_{t−1} and the current input X_t as inputs (on the time axis, both are available before H_t) and applies the same MLP-style affine-plus-activation formula, just with its own set of parameters.
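Concretely, using the same parameter names as the code below (σ is the sigmoid function):

Z_t = σ(X_t W_xz + H_{t−1} W_hz + b_z)
R_t = σ(X_t W_xr + H_{t−1} W_hr + b_r)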
Parameter initialization therefore works exactly as for the RNN, just with two extra groups of parameters, one per gate.
Code implementation:
def get_params(vocab_size, num_hiddens, device):
    num_inputs = num_outputs = vocab_size

    def normal(shape):
        # Initialize weights from N(0, 0.01^2)
        return torch.randn(size=shape, device=device) * 0.01

    def normal_wwb():
        # One (W_x, W_h, b) parameter group for an MLP-style computation
        return (normal((num_inputs, num_hiddens)),
                normal((num_hiddens, num_hiddens)),
                torch.zeros(num_hiddens, device=device))

    W_xz, W_hz, b_z = normal_wwb()  # update gate parameters
    W_xr, W_hr, b_r = normal_wwb()  # reset gate parameters
    W_xh, W_hh, b_h = normal_wwb()  # candidate hidden state parameters
    W_hq = normal((num_hiddens, num_outputs))       # output layer weights
    b_q = torch.zeros(num_outputs, device=device)   # output layer bias
    params = [W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q]
    for param in params:
        param.requires_grad_(True)  # track gradients for training
    return params
Here, normal_wwb() wraps the initialization of the two weight matrices W and one bias b needed by each MLP-style computation.
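As a quick sanity check, the returned list can be inspected directly (a minimal sketch; the concrete values below are illustrative, not from the original post):

import torch

# 11 tensors: one (W_x, W_h, b) group each for Z, R, and the candidate
# state, plus the output layer's W_hq and b_q
params = get_params(vocab_size=28, num_hiddens=256, device=torch.device('cpu'))
print(len(params))      # 11
print(params[0].shape)  # torch.Size([28, 256]) -- W_xz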
2.2 Forward Computation
With the reset gate added, the model first computes a candidate hidden state before updating the hidden state:
(Figure: computing the candidate hidden state; source: 门控循环单元(GRU)_哔哩哔哩_bilibili)
The candidate hidden state is computed as:
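H̃_t = tanh(X_t W_xh + (R_t ⊙ H_{t−1}) W_hh + b_h)

Here ⊙ denotes elementwise multiplication: wherever the reset gate R_t is close to 0, the previous hidden state is suppressed and the candidate state is driven mostly by the current input.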
Through the update gate, the candidate hidden state is then used to update the hidden state:
(Figure: updating the hidden state via the update gate; source: 门控循环单元(GRU)_哔哩哔哩_bilibili)
The update formula is:
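H_t = Z_t ⊙ H_{t−1} + (1 − Z_t) ⊙ H̃_t

Since the entries of Z_t lie in (0, 1), this is a convex combination: entries near 1 keep the old state (effectively skipping the update), while entries near 0 overwrite it with the candidate state.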
Code implementation:
def gru(inputs, state, params):
    W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q = params
    H, = state
    outputs = []
    # inputs has shape (num_steps, batch_size, vocab_size)
    for X in inputs:
        Z = torch.sigmoid((X @ W_xz) + (H @ W_hz) + b_z)  # update gate
        R = torch.sigmoid((X @ W_xr) + (H @ W_hr) + b_r)  # reset gate
        H_tilda = torch.tanh((X @ W_xh) + ((R * H) @ W_hh) + b_h)  # candidate state
        H = Z * H + (1 - Z) * H_tilda  # blend old state and candidate
        Y = H @ W_hq + b_q             # output layer
        outputs.append(Y)
    return torch.cat(outputs, dim=0), (H, )
Hidden state initialization:
def init_gru_state(batch_size, num_hiddens, device):
    # Start from an all-zero hidden state
    return (torch.zeros((batch_size, num_hiddens), device=device), )
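A minimal sketch of a single forward pass through these functions (the shapes here are illustrative assumptions; d2l.RNNModelScratch one-hot encodes inputs in the same way):

import torch
import torch.nn.functional as F

vocab_size, num_hiddens, batch_size, num_steps = 28, 256, 2, 5
device = torch.device('cpu')
params = get_params(vocab_size, num_hiddens, device)
state = init_gru_state(batch_size, num_hiddens, device)
# Random token indices, one-hot encoded to (num_steps, batch_size, vocab_size)
tokens = torch.randint(0, vocab_size, (num_steps, batch_size))
X = F.one_hot(tokens, vocab_size).float()
Y, new_state = gru(X, state, params)
print(Y.shape)  # torch.Size([10, 28]) = (num_steps * batch_size, vocab_size)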
2.3 Model Training
Since the GRU differs from the RNN only in parameter initialization and the forward computation, passing the three functions get_params, init_gru_state, and gru to d2l.RNNModelScratch yields a GRU model (see section 2.3 of RNN Pytorch底层+简洁实现【初学者】_云龙弓手的博客-CSDN博客):
batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)
vocab_size, num_hiddens, device = len(vocab), 256, d2l.try_gpu()
num_epochs, lr = 500, 1
model = d2l.RNNModelScratch(len(vocab), num_hiddens, device,
                            get_params, init_gru_state, gru)
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
Result:
perplexity 1.1, 11694.5 tokens/sec on cuda:0
time traveller but now you begin to yee or holeastounthowo has g
traveller with a slight accession ofcheerfulness really thi
Compared with the RNN, training is somewhat slower because of the additional gate computations. The quality difference is hard to see here because the training set is so small, but the GRU's gating allows it to capture longer-range dependencies than a plain RNN.
3. Concise Implementation
For the concise PyTorch version, simply swap nn.RNN for nn.GRU; for d2l.RNNModel, see section 3.1 of RNN Pytorch底层+简洁实现【初学者】_云龙弓手的博客-CSDN博客:
num_inputs = vocab_size
gru_layer = nn.GRU(num_inputs, num_hiddens)  # the only change: nn.GRU instead of nn.RNN
model = d2l.RNNModel(gru_layer, len(vocab))
model = model.to(device)
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
Result:
perplexity 1.0, 194738.2 tokens/sec on cuda:0
time travelleryou can show black is white by argument said filby
travelleryou can show black is white by argument said filby
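For reference, a minimal sketch of nn.GRU's tensor conventions (the concrete values mirror the hyperparameters used above and are illustrative only):

import torch
from torch import nn

gru_layer = nn.GRU(input_size=28, hidden_size=256)
X = torch.randn(35, 32, 28)    # (num_steps, batch_size, input_size)
H0 = torch.zeros(1, 32, 256)   # (num_layers, batch_size, hidden_size)
Y, Hn = gru_layer(X, H0)
print(Y.shape, Hn.shape)       # torch.Size([35, 32, 256]) torch.Size([1, 32, 256])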