[PyTorch Tutorial Translation] NLP: Language Translation with TorchText

Translated from the official tutorial: LANGUAGE TRANSLATION WITH TORCHTEXT

This tutorial shows how to use several convenience classes of torchtext to preprocess a well-known dataset containing English and German sentence pairs, and to use it to train a sequence-to-sequence model with attention that can translate German sentences into English.

It is based on this tutorial by PyTorch community member Ben Trevett and was created by Seth Weidman with Ben Trevett's permission.

By completing this tutorial, you will learn how to use:

Field and TranslationDataset

torchtext has utilities for creating datasets that can be easily iterated through for the purpose of building a language translation model. One key class is Field, which specifies how each sentence should be processed; another is TranslationDataset. torchtext has several such datasets; in this tutorial we use the Multi30k dataset, which contains about 30,000 English and German sentence pairs (with an average sentence length of about 13 words).

Note: the tokenization in this tutorial requires Spacy. We use Spacy because it provides strong support for tokenization in languages other than English. torchtext provides a basic_english tokenizer and supports other tokenizers for English (e.g. Moses), but for language translation, where multiple languages are required, Spacy is your best bet.

To run this tutorial, first install spacy using pip or conda, then download the Spacy tokenizer models for English and German:

python -m spacy download en_core_web_sm
python -m spacy download de_core_news_sm

Once Spacy is installed, the following code will tokenize each of the sentences in the TranslationDataset based on the tokenizer defined in the Field.


Note: because of network issues, the automatic download of the Multi30k dataset may fail; you can download the dataset here and place it under the .data/ directory.

from torchtext.datasets import Multi30k
from torchtext.data import Field, BucketIterator

SRC = Field(tokenize = "spacy",
            tokenizer_language="de_core_news_sm",
            init_token = '<sos>',
            eos_token = '<eos>',
            lower = True)

TRG = Field(tokenize = "spacy",
            tokenizer_language="en_core_web_sm",
            init_token = '<sos>',
            eos_token = '<eos>',
            lower = True)

train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'),
                                                    fields = (SRC, TRG))
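
To get a feel for what the fields and the dataset now hold, a quick inspection might look like this (a minimal sketch: the German sentence is just an illustrative example, and the legacy Field.preprocess applies the tokenizer and lower-casing but does not yet add the <sos>/<eos> tokens):

print(SRC.preprocess("Zwei Männer stehen am Herd."))
# e.g. ['zwei', 'männer', 'stehen', 'am', 'herd', '.']

print(vars(train_data.examples[0]))
# a dict with a tokenized 'src' (German) and 'trg' (English) field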

Once train_data is defined, we can see an extremely useful feature of torchtext's Field: the build_vocab method, which allows us to create the vocabulary associated with each language.

SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)

Once these lines of code have been run, SRC.vocab.stoi will be a dictionary whose keys are the tokens in the vocabulary and whose values are the corresponding indices; SRC.vocab.itos is the same mapping with keys and values swapped. We won't make heavy use of this fact in this tutorial, but it can be very useful in other NLP tasks.
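
For example, a token can be mapped to its index and back (a minimal sketch; 'zwei' is assumed to occur at least twice in the training data and therefore to be in the vocabulary):

idx = SRC.vocab.stoi['zwei']     # token -> index
tok = SRC.vocab.itos[idx]        # index -> token, gives 'zwei' back
print(idx, tok, len(SRC.vocab))  # out-of-vocabulary tokens map to the index of '<unk>'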

BucketIterator

The last torchtext-specific feature we will use is the BucketIterator, which is easy to use since it takes a TranslationDataset as its first argument. Specifically, as the docs put it, it defines an iterator that batches examples of similar lengths together, minimizing the amount of padding needed while producing freshly shuffled batches for each new epoch. See pool for the bucketing procedure used.

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

BATCH_SIZE = 128

train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size = BATCH_SIZE,
    device = device)

These iterators can be called just like DataLoaders; below, in the train and evaluate functions, they are called simply with:

for i, batch in enumerate(iterator):

Each batch then has src and trg attributes:

src = batch.src
trg = batch.trg
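
Since Field defaults to batch_first=False, both tensors have shape [sequence length, batch size], with every sentence in a batch padded to the length of the batch's longest sentence. A quick check (a minimal sketch; the sequence lengths shown are only illustrative and vary from batch to batch):

batch = next(iter(train_iterator))
print(batch.src.shape)  # e.g. torch.Size([27, 128]) -> [src sequence length, batch size]
print(batch.trg.shape)  # e.g. torch.Size([25, 128]) -> [trg sequence length, batch size]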

Defining our nn.Module and Optimizer

That is mostly it as far as torchtext is concerned: with the dataset built and the iterators defined, the rest of this tutorial simply defines our model as an nn.Module, along with an Optimizer, and then trains it.

Our model specifically follows the architecture described here (you can find a much more heavily commented version here).

Note: this model is just an example model that can be used for language translation; we chose it because it is a standard model for the task, not because it is the recommended model for translation. As you are likely aware, state-of-the-art models are currently based on Transformers; you can see PyTorch's capabilities for implementing Transformer layers here. Also note that the attention used in the model below is different from the multi-headed self-attention used in Transformer models.

import random
from typing import Tuple

import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch import Tensor


class Encoder(nn.Module):
    def __init__(self,
                 input_dim: int,
                 emb_dim: int,
                 enc_hid_dim: int,
                 dec_hid_dim: int,
                 dropout: float):
        super().__init__()

        self.input_dim = input_dim
        self.emb_dim = emb_dim
        self.enc_hid_dim = enc_hid_dim
        self.dec_hid_dim = dec_hid_dim
        self.dropout = dropout

        self.embedding = nn.Embedding(input_dim, emb_dim)

        self.rnn = nn.GRU(emb_dim, enc_hid_dim, bidirectional = True)

        self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim)

        self.dropout = nn.Dropout(dropout)

    def forward(self,
                src: Tensor) -> Tuple[Tensor]:

        # src = [src_len, batch_size]
        embedded = self.dropout(self.embedding(src))

        # embedded = [src_len, batch_size, emb_dim]
        outputs, hidden = self.rnn(embedded)

        # outputs = [src_len, batch_size, enc_hid_dim * 2]
        # hidden = [2, batch_size, enc_hid_dim], one state per direction;
        # concatenate the final forward and backward states and project them
        # to the decoder's hidden dimension
        hidden = torch.tanh(self.fc(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1)))

        # hidden = [batch_size, dec_hid_dim]
        return outputs, hidden


class Attention(nn.Module):
    def __init__(self,
                 enc_hid_dim: int,
                 dec_hid_dim: int,
                 attn_dim: int):
        super().__init__()

        self.enc_hid_dim = enc_hid_dim
        self.dec_hid_dim = dec_hid_dim

        self.attn_in = (enc_hid_dim * 2) + dec_hid_dim

        self.attn = nn.Linear(self.attn_in, attn_dim)

    def forward(self,
                decoder_hidden: Tensor,
                encoder_outputs: Tensor) -> Tensor:

        # decoder_hidden = [batch_size, dec_hid_dim]
        # encoder_outputs = [src_len, batch_size, enc_hid_dim * 2]
        src_len = encoder_outputs.shape[0]

        # repeat the decoder hidden state once for every source position
        repeated_decoder_hidden = decoder_hidden.unsqueeze(1).repeat(1, src_len, 1)

        encoder_outputs = encoder_outputs.permute(1, 0, 2)

        # repeated_decoder_hidden = [batch_size, src_len, dec_hid_dim]
        # encoder_outputs = [batch_size, src_len, enc_hid_dim * 2]
        energy = torch.tanh(self.attn(torch.cat((
            repeated_decoder_hidden,
            encoder_outputs),
            dim = 2)))

        # energy = [batch_size, src_len, attn_dim]
        attention = torch.sum(energy, dim=2)

        # attention = [batch_size, src_len]; normalize over the source positions
        return F.softmax(attention, dim=1)


class Decoder(nn.Module):
    def __init__(self,
                 output_dim: int,
                 emb_dim: int,
                 enc_hid_dim: int,
                 dec_hid_dim: int,
                 dropout: float,
                 attention: nn.Module):
        super().__init__()

        self.emb_dim = emb_dim
        self.enc_hid_dim = enc_hid_dim
        self.dec_hid_dim = dec_hid_dim
        self.output_dim = output_dim
        self.dropout = dropout
        self.attention = attention

        self.embedding = nn.Embedding(output_dim, emb_dim)

        self.rnn = nn.GRU((enc_hid_dim * 2) + emb_dim, dec_hid_dim)

        self.out = nn.Linear(self.attention.attn_in + emb_dim, output_dim)

        self.dropout = nn.Dropout(dropout)


    def _weighted_encoder_rep(self,
                              decoder_hidden: Tensor,
                              encoder_outputs: Tensor) -> Tensor:

        # attention weights over the source positions: a = [batch_size, src_len]
        a = self.attention(decoder_hidden, encoder_outputs)

        a = a.unsqueeze(1)

        encoder_outputs = encoder_outputs.permute(1, 0, 2)

        # batched weighted sum of the encoder outputs:
        # [batch_size, 1, src_len] x [batch_size, src_len, enc_hid_dim * 2]
        weighted_encoder_rep = torch.bmm(a, encoder_outputs)

        # back to [1, batch_size, enc_hid_dim * 2] for use as RNN input
        weighted_encoder_rep = weighted_encoder_rep.permute(1, 0, 2)

        return weighted_encoder_rep


    def forward(self,
                input: Tensor,
                decoder_hidden: Tensor,
                encoder_outputs: Tensor) -> Tuple[Tensor]:

        # input = [batch_size] (the previous target token for each example)
        input = input.unsqueeze(0)

        # embedded = [1, batch_size, emb_dim]
        embedded = self.dropout(self.embedding(input))

        weighted_encoder_rep = self._weighted_encoder_rep(decoder_hidden,
                                                          encoder_outputs)

        # rnn_input = [1, batch_size, (enc_hid_dim * 2) + emb_dim]
        rnn_input = torch.cat((embedded, weighted_encoder_rep), dim = 2)

        output, decoder_hidden = self.rnn(rnn_input, decoder_hidden.unsqueeze(0))

        embedded = embedded.squeeze(0)
        output = output.squeeze(0)
        weighted_encoder_rep = weighted_encoder_rep.squeeze(0)

        # predict the next token from the RNN output, the attention-weighted
        # encoder representation and the embedded input token
        output = self.out(torch.cat((output,
                                     weighted_encoder_rep,
                                     embedded), dim = 1))

        # output = [batch_size, output_dim], decoder_hidden = [batch_size, dec_hid_dim]
        return output, decoder_hidden.squeeze(0)


class Seq2Seq(nn.Module):
    def __init__(self,
                 encoder: nn.Module,
                 decoder: nn.Module,
                 device: torch.device):
        super().__init__()

        self.encoder = encoder
        self.decoder = decoder
        self.device = device

    def forward(self,
                src: Tensor,
                trg: Tensor,
                teacher_forcing_ratio: float = 0.5) -> Tensor:

        batch_size = src.shape[1]
        max_len = trg.shape[0]
        trg_vocab_size = self.decoder.output_dim

        # tensor to store the decoder's output logits for every target position
        outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device)

        encoder_outputs, hidden = self.encoder(src)

        # the first input to the decoder is the <sos> token
        output = trg[0,:]

        for t in range(1, max_len):
            output, hidden = self.decoder(output, hidden, encoder_outputs)
            outputs[t] = output
            # with probability teacher_forcing_ratio, feed the ground-truth token
            # as the next input; otherwise feed the model's own best guess
            teacher_force = random.random() < teacher_forcing_ratio
            top1 = output.max(1)[1]
            output = (trg[t] if teacher_force else top1)

        return outputs


INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
# A larger configuration, kept commented out for reference:
# ENC_EMB_DIM = 256
# DEC_EMB_DIM = 256
# ENC_HID_DIM = 512
# DEC_HID_DIM = 512
# ATTN_DIM = 64
# ENC_DROPOUT = 0.5
# DEC_DROPOUT = 0.5

ENC_EMB_DIM = 32
DEC_EMB_DIM = 32
ENC_HID_DIM = 64
DEC_HID_DIM = 64
ATTN_DIM = 8
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5

enc = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, ENC_DROPOUT)

attn = Attention(ENC_HID_DIM, DEC_HID_DIM, ATTN_DIM)

dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, DEC_DROPOUT, attn)

model = Seq2Seq(enc, dec, device).to(device)


def init_weights(m: nn.Module):
    for name, param in m.named_parameters():
        if 'weight' in name:
            nn.init.normal_(param.data, mean=0, std=0.01)
        else:
            nn.init.constant_(param.data, 0)


model.apply(init_weights)

optimizer = optim.Adam(model.parameters())


def count_parameters(model: nn.Module):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


print(f'The model has {count_parameters(model):,} trainable parameters')

Output:

The model has 1,856,685 trainable parameters
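
Before training, one quick sanity check that the shapes line up is to run a single forward pass on one batch (a minimal sketch; the model is still untrained, so the logits themselves are meaningless):

with torch.no_grad():
    batch = next(iter(train_iterator))
    out = model(batch.src, batch.trg)
print(out.shape)  # [trg sequence length, batch size, len(TRG.vocab)]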

Note: when scoring the performance of a language translation model, we have to tell the nn.CrossEntropyLoss function to ignore the indices where the target is simply padding.

PAD_IDX = TRG.vocab.stoi['<pad>']

criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)
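
To see what ignore_index does, here is a tiny, self-contained toy example (hypothetical 3-class logits, unrelated to the model above): positions whose target equals the ignored index contribute nothing to the loss, and the mean is taken only over the remaining positions.

demo_criterion = nn.CrossEntropyLoss(ignore_index=2)
demo_logits = torch.tensor([[2.0, 0.5, 0.1],
                            [0.3, 1.5, 0.2]])
demo_targets = torch.tensor([0, 2])   # the 2 here plays the role of the padding index
print(demo_criterion(demo_logits, demo_targets))  # equals the loss of the first position alone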

Finally, we can train and evaluate this model:

import math
import time


def train(model: nn.Module,
          iterator: BucketIterator,
          optimizer: optim.Optimizer,
          criterion: nn.Module,
          clip: float):

    model.train()

    epoch_loss = 0

    for _, batch in enumerate(iterator):

        src = batch.src
        trg = batch.trg

        optimizer.zero_grad()

        output = model(src, trg)

        # skip the first position (the <sos> target / all-zero output row) and flatten for the loss
        output = output[1:].view(-1, output.shape[-1])
        trg = trg[1:].view(-1)

        loss = criterion(output, trg)

        loss.backward()

        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)

        optimizer.step()

        epoch_loss += loss.item()

    return epoch_loss / len(iterator)


def evaluate(model: nn.Module,
             iterator: BucketIterator,
             criterion: nn.Module):

    model.eval()

    epoch_loss = 0

    with torch.no_grad():

        for _, batch in enumerate(iterator):

            src = batch.src
            trg = batch.trg

            output = model(src, trg, 0)  # turn off teacher forcing

            output = output[1:].view(-1, output.shape[-1])
            trg = trg[1:].view(-1)

            loss = criterion(output, trg)

            epoch_loss += loss.item()

    return epoch_loss / len(iterator)


def epoch_time(start_time: int,
               end_time: int):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs


N_EPOCHS = 10
CLIP = 1

best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):

    start_time = time.time()

    train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
    valid_loss = evaluate(model, valid_iterator, criterion)

    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)

    print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
    print(f'\t Val. Loss: {valid_loss:.3f} |  Val. PPL: {math.exp(valid_loss):7.3f}')

test_loss = evaluate(model, test_iterator, criterion)

print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')

Output:

Epoch: 01 | Time: 0m 24s
	Train Loss: 5.678 | Train PPL: 292.465
	 Val. Loss: 5.252 |  Val. PPL: 190.984
Epoch: 02 | Time: 0m 25s
	Train Loss: 5.019 | Train PPL: 151.184
	 Val. Loss: 5.118 |  Val. PPL: 166.988
Epoch: 03 | Time: 0m 25s
	Train Loss: 4.741 | Train PPL: 114.590
	 Val. Loss: 4.970 |  Val. PPL: 144.084
Epoch: 04 | Time: 0m 25s
	Train Loss: 4.615 | Train PPL: 100.974
	 Val. Loss: 4.999 |  Val. PPL: 148.223
Epoch: 05 | Time: 0m 25s
	Train Loss: 4.492 | Train PPL:  89.300
	 Val. Loss: 5.147 |  Val. PPL: 171.969
Epoch: 06 | Time: 0m 25s
	Train Loss: 4.416 | Train PPL:  82.793
	 Val. Loss: 5.004 |  Val. PPL: 149.005
Epoch: 07 | Time: 0m 25s
	Train Loss: 4.339 | Train PPL:  76.631
	 Val. Loss: 5.027 |  Val. PPL: 152.493
Epoch: 08 | Time: 0m 25s
	Train Loss: 4.254 | Train PPL:  70.378
	 Val. Loss: 4.892 |  Val. PPL: 133.206
Epoch: 09 | Time: 0m 25s
	Train Loss: 4.210 | Train PPL:  67.342
	 Val. Loss: 4.801 |  Val. PPL: 121.662
Epoch: 10 | Time: 0m 25s
	Train Loss: 4.140 | Train PPL:  62.808
	 Val. Loss: 4.709 |  Val. PPL: 110.916
| Test Loss: 4.735 | Test PPL: 113.916 |

Next steps

  • Check out the rest of Ben Trevett's tutorials using torchtext here.
  • Stay tuned for a tutorial using other torchtext features, along with nn.Transformer, for language modeling via next word prediction.