Neural Network Language Model (NNLM)
Paper: http://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf
Principle
This is a classic paper on training language models: Bengio et al. introduced a neural network into language-model training and obtained word vectors as a by-product. Word embeddings later contributed greatly to deep learning for natural language processing and remain an effective way to capture the semantic features of words.
The task is: given the previous $n-1$ words $w_{t-n+1}, \dots, w_{t-1}$, predict the next word $w_t$.
Notation:
- $C_i$: the word vector of word $w$, where $i$ is the index of $w$ in the vocabulary
- $C$: the word-vector matrix, of size $|V| \times m$
- $|V|$: the vocabulary size, i.e., the number of distinct words in the corpus
- $m$: the dimension of the word vectors, typically 50 to 200
- $H$: the hidden-layer weights
- $d$: the hidden-layer bias
- $U$: the output-layer weights
- $b$: the output-layer bias
- $W$: the weights of the direct connection from the input layer to the output layer
- $h$: the number of hidden-layer units
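Putting these symbols together, the complete model (written in the row-vector convention used below, with $X$ the concatenation of the $n-1$ input word vectors) is

$$y = b + XW + \tanh(d + XH)\,U, \qquad P(w_t = i \mid w_{t-n+1}, \dots, w_{t-1}) = \frac{e^{y_i}}{\sum_{j=1}^{|V|} e^{y_j}}$$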
Computation steps:
- First map the $n-1$ input word indices to their word vectors, then concatenate these $n-1$ vectors into a single vector of size $(n-1) \cdot m$, denoted $X$
- Feed $X$ into the hidden layer: $\text{hidden}_{out} = \tanh(d + XH)$
- The output layer has $|V|$ nodes: $y = b + XW + \text{hidden}_{out}\,U$; applying softmax to $y$ gives, for each node $y_i$, the probability that word $i$ is the next word (a shape sketch follows)
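To make the dimensions concrete, here is a minimal stand-alone sketch of one forward pass with toy sizes ($|V| = 7$, $m = 2$, $n-1 = 2$, $h = 2$, the same as in the demo below); the weights and the context-word indices are random placeholders, only the shapes matter:

import torch

V, m, n_1, h = 7, 2, 2, 2                       # |V|, m, n-1, h
C = torch.randn(V, m)                           # word-vector matrix C: |V| x m
H, d = torch.randn(n_1 * m, h), torch.randn(h)  # hidden-layer weight and bias
U, b = torch.randn(h, V), torch.randn(V)        # output-layer weight and bias
W = torch.randn(n_1 * m, V)                     # direct input-to-output weight

idx = torch.tensor([0, 1])                      # indices of the n-1 context words (arbitrary)
X = C[idx].reshape(1, -1)                       # concatenate -> [1, (n-1)*m]
hidden_out = torch.tanh(d + X @ H)              # [1, h]
y = b + X @ W + hidden_out @ U                  # [1, |V|]
p = torch.softmax(y, dim=-1)                    # probability of each word being the next word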
Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as Data
"""定义类型"""
dtype = torch.FloatTensor
"""定义词汇表"""
sentences = [ "i like dog", "i love coffee", "i hate milk"]
word_list = " ".join(sentences).split() # ['i', 'like', 'dog', 'dog', 'i', 'love', 'coffee', 'i', 'hate', 'milk']
word_list = list(set(word_list)) # ['i', 'like', 'dog', 'love', 'coffee', 'hate', 'milk']
word_dict = {w: i for i, w in enumerate(word_list)} # {'i':0, 'like':1, 'dog':2, 'love':3, 'coffee':4, 'hate':5, 'milk':6}
number_dict = {i: w for i, w in enumerate(word_list)} # {0:'i', 1:'like', 2:'dog', 3:'love', 4:'coffee', 5:'hate', 6:'milk'}
n_class = len(word_dict) # number of Vocabulary, just like |V|, in this task n_class=7
# NNLM(Neural Network Language Model) Parameter
n_step = len(sentences[0].split())-1 # n-1 in paper, look back n_step words and predict next word. In this task n_step=2
n_hidden = 2 # h in paper
m = 2 # m in paper, word embedding dim
def make_batch(sentences):
    input_batch = []
    target_batch = []
    for sen in sentences:
        word = sen.split()
        input = [word_dict[n] for n in word[:-1]] # [0, 1], [0, 3], [0, 5]
        target = word_dict[word[-1]] # 2, 4, 6
        input_batch.append(input) # [[0, 1], [0, 3], [0, 5]]
        target_batch.append(target) # [2, 4, 6]
    return input_batch, target_batch
"""预处理数据"""
input_data, target_data = make_batch(sentences)
"""数据转为longtensor"""
input_data, target_data = torch.LongTensor(input_data), torch.LongTensor(target_data)
"""数据批处理"""
dataset = Data.TensorDataset(input_data, target_data)
loader = Data.DataLoader(dataset, batch_size=16, shuffle=True) # only 3 samples here, so each epoch is a single batch
class NNLM(nn.Module):
    def __init__(self):
        super(NNLM, self).__init__()
        self.C = nn.Embedding(n_class, m)
        self.H = nn.Parameter(torch.randn(n_step * m, n_hidden).type(dtype))
        self.W = nn.Parameter(torch.randn(n_step * m, n_class).type(dtype))
        self.d = nn.Parameter(torch.randn(n_hidden).type(dtype))
        self.U = nn.Parameter(torch.randn(n_hidden, n_class).type(dtype))
        self.b = nn.Parameter(torch.randn(n_class).type(dtype))

    def forward(self, X):
        '''
        X: [batch_size, n_step]
        '''
        X = self.C(X) # [batch_size, n_step] => [batch_size, n_step, m]
        X = X.view(-1, n_step * m) # [batch_size, n_step * m]
        hidden_out = torch.tanh(self.d + torch.mm(X, self.H)) # [batch_size, n_hidden]
        output = self.b + torch.mm(X, self.W) + torch.mm(hidden_out, self.U) # [batch_size, n_class]
        return output
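# Note: forward() returns raw scores (logits) rather than probabilities;
# nn.CrossEntropyLoss below applies log-softmax internally, so the model
# does not need an explicit softmax layer.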
model = NNLM()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)
# Training
for epoch in range(5000):
    for batch_x, batch_y in loader:
        optimizer.zero_grad()
        output = model(batch_x)
        # output : [batch_size, n_class], batch_y : [batch_size] (LongTensor, not one-hot)
        loss = criterion(output, batch_y)
        if (epoch + 1) % 1000 == 0:
            print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss.item()))
        loss.backward()
        optimizer.step()
# Predict
predict = model(input_data).data.max(1, keepdim = True)[1]
# Test
print([sen.split()[:n_step] for sen in sentences], '->', [number_dict[n.item()] for n in predict.squeeze()])
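On this tiny three-sentence corpus, 5000 epochs are more than enough to fit the training targets, so the script typically prints [['i', 'like'], ['i', 'love'], ['i', 'hate']] -> ['dog', 'coffee', 'milk'].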