The complete code is at the end.
The fully connected layer holds the most weights;
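A quick illustration of that claim, with hypothetical layer sizes (not taken from any model in this note): counting parameters of a small convolution versus a fully connected layer.
import torch

def count_params(m):
    return sum(p.numel() for p in m.parameters())

conv = torch.nn.Conv2d(3, 16, kernel_size=3)   # 16*3*3*3 + 16 = 448 parameters
fc = torch.nn.Linear(16 * 32 * 32, 128)        # 16384*128 + 128 = 2,097,280 parameters
print(count_params(conv), count_params(fc))    # the FC layer dominates by far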
RNN model diagram:
Dimension changes inside RNNCell (as shown in the figure below):
x_t shape: input_size * 1
W_ih shape: hidden_size * input_size
so W_ih · x_t has shape hidden_size * 1
h_{t-1} shape: hidden_size * 1
W_hh shape: hidden_size * hidden_size
so W_hh · h_{t-1} has shape hidden_size * 1
Add the two results and apply tanh: h_t = tanh(W_ih x_t + W_hh h_{t-1}).
Essentially this is just a linear layer (see the shape check below).
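A minimal shape check of this update with plain tensors (the weight names W_ih / W_hh are illustrative, not the exact attributes stored by torch.nn.RNNCell, and biases are omitted):
import torch

input_size, hidden_size = 4, 8
x_t = torch.randn(input_size, 1)          # input at time t: (input_size, 1)
h_prev = torch.randn(hidden_size, 1)      # previous hidden state: (hidden_size, 1)
W_ih = torch.randn(hidden_size, input_size)
W_hh = torch.randn(hidden_size, hidden_size)

# h_t = tanh(W_ih @ x_t + W_hh @ h_prev): both terms are (hidden_size, 1)
h_t = torch.tanh(W_ih @ x_t + W_hh @ h_prev)
print(h_t.shape)  # torch.Size([8, 1])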
code:
cell = torch.nn.RNNCell(input_size=input_size, hidden_size=hidden_size)
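Calling the cell for a single time step; note that RNNCell works with batch-first row vectors, i.e. input of shape (batch, input_size) and hidden state of shape (batch, hidden_size) (sizes here are illustrative):
import torch

batch_size, input_size, hidden_size = 1, 4, 8
cell = torch.nn.RNNCell(input_size=input_size, hidden_size=hidden_size)

x_t = torch.randn(batch_size, input_size)      # one time step
h_prev = torch.zeros(batch_size, hidden_size)  # initial hidden state
h_t = cell(x_t, h_prev)                        # ht = cell(xt, h_{t-1})
print(h_t.shape)                               # torch.Size([1, 8])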
Parameter and dimension settings of torch.nn.RNN (a shape sketch follows this list):
including input, output, dataset, etc.
h_0, h_n:
batch_first: optional; to use it, set batch_first=True, and the inputs must have their dimension order transposed so that the batch dimension comes first.
num_layers:
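A minimal sketch checking these shapes for torch.nn.RNN (all sizes are illustrative):
import torch

seq_len, batch_size = 5, 3
input_size, hidden_size, num_layers = 4, 8, 2

rnn = torch.nn.RNN(input_size=input_size, hidden_size=hidden_size,
                   num_layers=num_layers)

inputs = torch.randn(seq_len, batch_size, input_size)   # (seq, batch, input_size)
h_0 = torch.zeros(num_layers, batch_size, hidden_size)  # (num_layers, batch, hidden_size)
out, h_n = rnn(inputs, h_0)
print(out.shape)   # (seq_len, batch_size, hidden_size)
print(h_n.shape)   # (num_layers, batch_size, hidden_size)

# With batch_first=True the inputs/outputs put batch first instead:
rnn_bf = torch.nn.RNN(input_size, hidden_size, num_layers, batch_first=True)
out_bf, _ = rnn_bf(inputs.transpose(0, 1))              # (batch, seq, input_size)
print(out_bf.shape)  # (batch_size, seq_len, hidden_size)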
How the code corresponds to each part of the diagram:
sequence2sequence:
hello -> ohlol
First build a dictionary from the characters; to feed them into training, convert each character into a one-hot vector.
RNNCell model diagram:
The arrows indicate addition: summing the loss at every step is what builds the computation graph.
Dimension changes inside RNN:
inputs: seqLen * batch_size * input_size
outputs: seqLen * batch_size * hidden_size
labels: seqLen * batch_size * 1, i.e. the class index of each sample in the sequence; for the loss, labels are flattened to (seqLen * batch_size) rows
hidden: num_layers * batch_size * hidden_size
The final output out must be reshaped to 2-D, i.e. (seqLen * batch_size, hidden_size); the benefit is that the cross-entropy loss can then treat it as a plain matrix of per-step class scores.
Cross entropy accepts outputs and labels at exactly these dimensions (a small shape sketch follows).
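A small sketch of the reshaping before CrossEntropyLoss, using the shapes from the note above (values are illustrative):
import torch

seq_len, batch_size, hidden_size = 5, 1, 4

out = torch.randn(seq_len, batch_size, hidden_size)       # raw RNN outputs
labels = torch.randint(0, hidden_size, (seq_len, batch_size))

criterion = torch.nn.CrossEntropyLoss()
# flatten to (seq_len * batch_size, hidden_size) and (seq_len * batch_size,)
loss = criterion(out.view(-1, hidden_size), labels.view(-1))
print(loss.item())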
Full code:
RNNCell version:
import torch

input_size = 4
hidden_size = 4
batch_size = 1  # one sample

idx2char = ['e', 'h', 'l', 'o']
x_data = [1, 0, 2, 2, 3]  # shape: seq_len * input_size
y_data = [3, 1, 2, 3, 2]
one_hot_lookup = [[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]]
x_one_hot = [one_hot_lookup[x] for x in x_data]
# print(x_one_hot)

# -1 means this dimension is inferred automatically; here it is the seq dimension
inputs = torch.Tensor(x_one_hot).view(-1, batch_size, input_size)
labels = torch.LongTensor(y_data).view(-1, 1)


class Model(torch.nn.Module):
    def __init__(self, input_size, hidden_size, batch_size):
        super(Model, self).__init__()
        self.batch_size = batch_size
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.rnncell = torch.nn.RNNCell(input_size=self.input_size,
                                        hidden_size=self.hidden_size)

    def forward(self, input, hidden):
        # map the current input and hidden state to the next hidden state, i.e. ht = cell(xt, h_{t-1})
        hidden = self.rnncell(input, hidden)
        return hidden

    # create h0, the initial hidden state, all zeros
    def init_hidden(self):
        return torch.zeros(self.batch_size, self.hidden_size)


net = Model(input_size, hidden_size, batch_size)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.01)

for epoch in range(100):
    loss = 0
    optimizer.zero_grad()
    hidden = net.init_hidden()
    print('Predicted string:', end='')
    # inputs has shape seq * batch * input_size; iterate over the sequence, i.e. x1 -> x5
    for input, label in zip(inputs, labels):
        hidden = net(input, hidden)
        loss += criterion(hidden, label)
        # hidden has shape (1, 4), one score per character class; .max(dim=1) gives the index of the largest
        _, idx = hidden.max(dim=1)
        # print the character corresponding to the most likely index
        print(idx2char[idx.item()], end='')
    loss.backward()
    optimizer.step()
    print(', Epoch [%d/100] loss=%.4f' % (epoch + 1, loss.item()))
RNN:
import torch

input_size = 4
hidden_size = 4
batch_size = 1  # one sample
num_layers = 1
seq_len = 5

idx2char = ['e', 'h', 'l', 'o']
x_data = [1, 0, 2, 2, 3]  # shape: seq_len * input_size
y_data = [3, 1, 2, 3, 2]
one_hot_lookup = [[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]]
x_one_hot = [one_hot_lookup[x] for x in x_data]
# print(x_one_hot)

# here the seq dimension is given explicitly; -1 could also be used to infer it automatically
inputs = torch.Tensor(x_one_hot).view(seq_len, batch_size, input_size)
labels = torch.LongTensor(y_data)


class Model(torch.nn.Module):
    def __init__(self, input_size, hidden_size, batch_size, num_layers=1):
        super(Model, self).__init__()
        self.num_layers = num_layers
        self.batch_size = batch_size
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.rnn = torch.nn.RNN(input_size=self.input_size,
                                hidden_size=self.hidden_size,
                                num_layers=num_layers)

    def forward(self, input):
        # batch_size is kept in the model only to construct h0 here;
        # h0 could also be created outside, in which case the model would not need batch_size
        hidden = torch.zeros(self.num_layers,
                             self.batch_size,
                             self.hidden_size)
        out, _ = self.rnn(input, hidden)
        return out.view(-1, self.hidden_size)


net = Model(input_size, hidden_size, batch_size, num_layers)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.05)

for epoch in range(15):
    optimizer.zero_grad()
    outputs = net(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()

    _, idx = outputs.max(dim=1)
    idx = idx.data.numpy()
    print('Predicted string:', ''.join([idx2char[x] for x in idx]), end='')
    print(', Epoch [%d/15] loss=%.4f' % (epoch + 1, loss.item()))
Result:
Drawbacks of one-hot vectors:
Embedding: reduces the dimensionality of the data.
Embedding dimensions: determined mainly by num_embeddings and embedding_dim, which give the height and width of the embedding matrix (see the sketch after this list).
Example:
Fully connected layer dimensions:
Loss dimensions:
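A minimal sketch of the embedding lookup and the shapes involved (sizes are illustrative, matching the num_embeddings / embedding_dim description above):
import torch

num_embeddings = 4   # vocabulary size (height of the embedding matrix)
embedding_dim = 10   # embedding vector size (width of the embedding matrix)

emb = torch.nn.Embedding(num_embeddings, embedding_dim)

# indices, not one-hot vectors, go in: shape (batch, seq_len)
x = torch.LongTensor([[1, 0, 2, 2, 3]])
print(emb(x).shape)  # torch.Size([1, 5, 10]) -> (batch, seq_len, embedding_dim)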
Full code: embedding + linear (RNN):
import torch

num_class = 4        # 4 classes
input_size = 4       # i.e. the input is 4-dimensional
hidden_size = 8      # i.e. the hidden output is 8-dimensional
embedding_size = 10
num_layers = 2
batch_size = 1
seq_len = 5

idx2char = ['e', 'h', 'l', 'o']
x_data = [[1, 0, 2, 2, 3]]  # shape: batch (first, because batch_first=True) * seq_len
y_data = [3, 1, 2, 3, 2]    # batch * seq_len
inputs = torch.LongTensor(x_data)
labels = torch.LongTensor(y_data)


class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.emb = torch.nn.Embedding(input_size, embedding_size)
        self.rnn = torch.nn.RNN(input_size=embedding_size,
                                hidden_size=hidden_size,
                                num_layers=num_layers,
                                batch_first=True)
        self.fc = torch.nn.Linear(hidden_size, num_class)

    def forward(self, x):
        hidden = torch.zeros(num_layers, x.size(0), hidden_size)
        x = self.emb(x)
        x, _ = self.rnn(x, hidden)
        x = self.fc(x)
        # reshape into a matrix, then hand it to the loss
        return x.view(-1, num_class)


net = Model()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.05)

for epoch in range(15):
    optimizer.zero_grad()
    outputs = net(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()

    _, idx = outputs.max(dim=1)
    idx = idx.data.numpy()
    print('Predicted string:', ''.join([idx2char[x] for x in idx]), end='')
    print(', Epoch [%d/15] loss=%.4f' % (epoch + 1, loss.item()))
Result:
LSTM and GRU:
LSTM generally performs better than a plain RNN, but takes longer to train;
the compromise between the two: GRU.
LSTM model diagram:
LSTM equations:
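For reference, the standard LSTM cell update (the same formulation documented for torch.nn.LSTM):
i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi})
f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf})
g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg})
o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho})
c_t = f_t \odot c_{t-1} + i_t \odot g_t
h_t = o_t \odot \tanh(c_t)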
GRU equations:
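Likewise, the standard GRU update (as documented for torch.nn.GRU):
r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{t-1} + b_{hr})
z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{t-1} + b_{hz})
n_t = \tanh(W_{in} x_t + b_{in} + r_t \odot (W_{hn} h_{t-1} + b_{hn}))
h_t = (1 - z_t) \odot n_t + z_t \odot h_{t-1}

Since nn.LSTM and nn.GRU use the same calling convention as nn.RNN, swapping them in is mostly a one-line change (a minimal sketch with illustrative sizes; note that LSTM additionally returns and accepts a cell state):
import torch

seq_len, batch_size, input_size, hidden_size = 5, 1, 4, 8
x = torch.randn(seq_len, batch_size, input_size)

gru = torch.nn.GRU(input_size, hidden_size)
out_g, h_n = gru(x)               # same interface as nn.RNN

lstm = torch.nn.LSTM(input_size, hidden_size)
out_l, (h_n, c_n) = lstm(x)       # LSTM also carries a cell state c_n
print(out_g.shape, out_l.shape)   # both (seq_len, batch_size, hidden_size)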