[NLP] LSTM Tang Poetry Generator, PyTorch Version


This post builds on the earlier article "LSTM唐诗生成器 Keras版" (the Keras version of this generator).

The relevant Keras model code from that article is rewritten as an equivalent PyTorch model; only the parts that differ are collected here.

Training the Model

Building the Network

# Port the Keras model to PyTorch
# Build the LSTM model
import torch
import torch.nn as nn
import torch.nn.functional as F

# Set the device: use CUDA if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# model = Sequential()
# model.add(Embedding(10000, 128, input_length=20))
# model.add(LSTM(128, return_sequences=True))
# model.add(Dropout(0.2))
# model.add(LSTM(128))
# model.add(Dropout(0.2))
# model.add(Dense(10000, activation='softmax'))

# Build the PyTorch equivalent of the Keras model above.

# In Keras the second LSTM uses return_sequences=False, i.e. only the output
# at the last time step is kept; in PyTorch this is done by taking x[:, -1, :]
# in forward().

class LSTMNet(nn.Module):
    def __init__(self):
        super(LSTMNet, self).__init__()
        self.embedding = nn.Embedding(10000, 128)
        self.lstm1 = nn.LSTM(input_size=128, hidden_size=128, num_layers=1, batch_first=True)
        self.dropout1 = nn.Dropout(0.2)
        self.lstm2 = nn.LSTM(input_size=128, hidden_size=128, num_layers=1, batch_first=True)
        self.dropout2 = nn.Dropout(0.2)
        self.fc = nn.Linear(128, 10000)
        
    def forward(self, x):
        x = self.embedding(x) # [batch_size, seq_len, embedding_size]
        x, _ = self.lstm1(x)  # [batch_size, seq_len, hidden_size]
        x = self.dropout1(x)  # [batch_size, seq_len, hidden_size]
        x, _ = self.lstm2(x)  # [batch_size, seq_len, hidden_size]
        x = self.dropout2(x)  # [batch_size, seq_len, hidden_size]
        x = x[:, -1, :] #       take the output at the last time step -> [batch_size, hidden_size]
        x = self.fc(x)  #       [batch_size, 10000]
        return x
# Instantiate the model and move it to the device
model = LSTMNet().to(device)
model

LSTMNet(
  (embedding): Embedding(10000, 128)
  (lstm1): LSTM(128, 128, batch_first=True)
  (dropout1): Dropout(p=0.2, inplace=False)
  (lstm2): LSTM(128, 128, batch_first=True)
  (dropout2): Dropout(p=0.2, inplace=False)
  (fc): Linear(in_features=128, out_features=10000, bias=True)
)
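A note on the Keras-to-PyTorch mapping: nn.LSTM always returns the full output sequence, so taking x[:, -1, :] in forward() plays the role of Keras's return_sequences=False. Equivalently, the final hidden state h_n of the second LSTM could be used. A small sanity-check sketch (dummy input, dropout layers skipped; not part of the original code):

# Check that slicing the last time step equals the LSTM's final hidden state
dummy = torch.randint(0, 10000, (3, 20)).to(device)   # [batch=3, seq_len=20]
emb = model.embedding(dummy)
out1, _ = model.lstm1(emb)
out2, (h_n, c_n) = model.lstm2(out1)
print(torch.allclose(out2[:, -1, :], h_n[-1]))         # True: two ways to get the same tensor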

PyTorch Data Conversion

Note: y_train and y_test have shape [batch, 1], where the last dimension is redundant;
it must be squeezed away to give shape [batch] before the targets can be passed to the cross-entropy loss.

# First convert x_train, x_test, y_train, y_test to tensors and move them to the device
x_train = torch.tensor(x_train).to(device)
x_test = torch.tensor(x_test).to(device)
y_train = torch.tensor(y_train).to(device)
y_test = torch.tensor(y_test).to(device)
# Sanity check: make sure a few samples pass through the network
pred = model(x_train[0:3].to(device))  # x_train is already on the device, so .to(device) is a no-op here
print(x_train[0:3].shape) # [3, 20]    3 samples, 20 tokens each
print(pred.shape)         # [3, 10000] 3 samples, logits over 10000 classes each

torch.Size([3, 20])
torch.Size([3, 10000])

# y_train and y_test have shape [batch, 1]; the trailing dimension is useless,
# so squeeze it away to get shape [batch] for the cross-entropy loss
y_train = y_train.squeeze()
y_test = y_test.squeeze()

# Convert to long (class indices must be int64)
y_train = y_train.long()
y_test = y_test.long()

# Check the shapes
y_train.shape, y_test.shape

(torch.Size([39405]), torch.Size([16889]))
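To make the requirement concrete, here is a minimal, self-contained sketch (dummy tensors, not the real data) of what nn.CrossEntropyLoss expects: raw logits of shape [batch, num_classes] and integer class indices of shape [batch]:

import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
logits = torch.randn(4, 10000)             # [batch, num_classes], like the model output
targets = torch.randint(0, 10000, (4, 1))  # [batch, 1], like y_train before squeezing

# loss_fn(logits, targets) would raise a shape error: targets must be [batch] of dtype long
loss = loss_fn(logits, targets.squeeze().long())
print(loss)                                # a scalar tensor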

Training the Model

# Train the model
import torch.optim as optim
from tqdm import tqdm
optimizer = optim.Adam(model.parameters(), lr=0.001)

batch_size = 256
epochs = 20

# Note: y_train and y_test hold class indices of shape [batch] (not one-hot encoded),
# so the loss function is CrossEntropyLoss (it applies log-softmax internally)

loss_func = nn.CrossEntropyLoss()
for epoch in range(epochs):
    print('Epoch: ', epoch)
    model.train()
    for i in tqdm(range(0, len(x_train), batch_size)):
        x_batch = x_train[i:i+batch_size]
        y_batch = y_train[i:i+batch_size]
        pred = model(x_batch)
        loss = loss_func(pred, y_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # After each epoch, compute the accuracy (dropout off, no gradients)
    model.eval()
    with torch.no_grad():
        # Training-set accuracy
        pred = model(x_train)
        pred = torch.argmax(pred, dim=1)
        acc = (pred == y_train).sum().item() / len(y_train)
        print('Train acc: ', acc)
        # Test-set accuracy
        pred = model(x_test)
        pred = torch.argmax(pred, dim=1)
        acc = (pred == y_test).sum().item() / len(y_test)
        print('Test acc: ', acc)

Epoch: 0
100%|██████████| 154/154 [00:38<00:00, 4.01it/s]
Train acc: 0.10216977540921203
Test acc: 0.10320326839955
Epoch: 1
...
Epoch: 19
100%|██████████| 154/154 [00:37<00:00, 4.09it/s]
Train acc: 0.20576069026773253
Test acc: 0.17970276511338742
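As an aside, the manual index slicing in the loop above could also be written with a TensorDataset and DataLoader, which adds shuffling for free; this is only an optional sketch, not part of the original code:

from torch.utils.data import TensorDataset, DataLoader

train_loader = DataLoader(TensorDataset(x_train, y_train),
                          batch_size=256, shuffle=True)

for x_batch, y_batch in train_loader:
    # same batch shapes as the manual slices: x_batch [256, 20], y_batch [256]
    pass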

Generating Poetry

test_string = '白日依山盡,黃河入海流,欲窮千里目,更上一'

import numpy as np  # the tokenizer comes from the preprocessing in the Keras-version article

model.eval()
with torch.no_grad():
    for i in range(300):
        # Loop for 300 steps; each step predicts one more character
        test_string_token = tokenizer.texts_to_sequences([test_string[-20:]])  # use the last 20 characters
        test_string_mat = np.array(test_string_token)

        pred = model(torch.tensor(test_string_mat).to(device))  # pred has shape [1, 10000]
        pred_argmax = torch.argmax(pred, dim=1).item()          # pred_argmax is a plain Python int
        # Convert the predicted index back to a character and append it
        test_string = test_string + tokenizer.index_word[pred_argmax]
print(test_string)
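Greedy argmax decoding tends to repeat the same few characters; a common alternative (a sketch only, not used in the original article) is to sample from the softmax distribution with a temperature:

import torch.nn.functional as F

def sample_next(logits, temperature=0.8):
    # logits: [1, vocab_size]; lower temperature -> closer to greedy argmax
    probs = F.softmax(logits / temperature, dim=1)
    return torch.multinomial(probs, num_samples=1).item()

# Drop-in replacement for the argmax line in the generation loop:
# pred_argmax = sample_next(pred, temperature=0.8)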