- 🍨 This post is a learning log from the 🔗365天深度学习训练营 (365-Day Deep Learning Training Camp)
- 🍖 Original author: K同学啊 | tutoring and custom projects available
- 🚀 Source: K同学啊's study circle
一、Background and Development Environment
📌Week N9: Transformer in Practice – Word Prediction📌
- Python 3.8.12
- numpy==1.21.5 -> 1.24.3
- pytorch==1.8.1+cu111
📌This week's task:📌
- Feed the model a custom English sentence and predict the next word (extension task, open-ended)
二、Code Implementation
This is a tutorial-style example of using a Transformer model to predict the next word in a text sequence.
1. About the dataset
This post uses the WikiText-2 dataset. The WikiText corpus (The WikiText Long Term Dependency Language Modeling Dataset) is an English language-modeling dataset of over 100 million tokens extracted from Wikipedia's Good and Featured articles. It comes in two versions, WikiText-2 and WikiText-103; compared with the well-known Penn Treebank (PTB) corpus, the former is about 2 times larger and the latter about 110 times larger. Each token is kept in the context of the original article it came from, which makes the corpus especially suitable for language modeling that needs long-term dependencies.
Some details about WikiText-2:
- Source: WikiText-2 is extracted from Wikipedia and contains Wikipedia article text.
- Content: the articles cover a wide range of topics and domains, and have been preprocessed and cleaned to provide clean, training-ready text.
- Size: WikiText-2 is relatively small. It contains over 2,088,628 tokens in total, of which 1,915,997 tokens are used for training, 172,430 for validation, and 186,716 for testing.
- Format: the dataset is stored as plain text, with the article text split into paragraphs and sentences.
- Usage: WikiText-2 is typically used for language modeling, where the model predicts the next word given the preceding context. It can also be used for other text-generation tasks such as machine translation and summarization.
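Before building anything, it can help to peek at a few raw lines of the corpus. A minimal sketch, assuming torchtext's built-in WikiText2 loader is available (newer torchtext versions also need torchdata and portalocker, installed below):

from torchtext.datasets import WikiText2

# Print the first few non-empty lines of the raw training split
train_iter = WikiText2(split='train')
for i, line in enumerate(train_iter):
    if line.strip():
        print(repr(line[:80]))
    if i >= 10:
        break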
2. Define the model
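The code below is shown without its imports; for it to run on its own, roughly the following are needed (the Vocab import path is for the older torchtext that ships alongside pytorch 1.8.x; newer versions move it to torchtext.legacy or replace it entirely):

import math
import os
import time
from collections import Counter
from typing import Tuple

import torch
from torch import nn, Tensor
from torch.nn import TransformerEncoder, TransformerEncoderLayer
from torch.utils.data import dataset
from torchtext.datasets import WikiText2
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import Vocab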
class PositionalEncoding(nn.Module):
    def __init__(self,
                 d_model: int,
                 dropout: float = 0.1,
                 max_len: int = 5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        # Position indices, shape [max_len, 1]
        position = torch.arange(max_len).unsqueeze(1)
        # Divisor term for the positional encodings
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        # Positional encoding tensor, shape [max_len, 1, d_model]
        pe = torch.zeros(max_len, 1, d_model)
        # Sine on the even embedding dimensions
        pe[:, 0, 0::2] = torch.sin(position * div_term)
        # Cosine on the odd embedding dimensions
        pe[:, 0, 1::2] = torch.cos(position * div_term)
        self.register_buffer('pe', pe)

    def forward(self, x: Tensor) -> Tensor:
        """
        Arguments:
            x: Tensor, shape [seq_len, batch_size, embedding_dim]
        """
        # Add the positional encoding to the input
        x = x + self.pe[:x.size(0)]
        # Apply dropout
        return self.dropout(x)
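A quick shape check of the positional encoding (a minimal sketch; the sizes are illustrative only):

pos_encoder = PositionalEncoding(d_model=200, dropout=0.1)
x = torch.zeros(35, 20, 200)      # [seq_len, batch_size, embedding_dim]
print(pos_encoder(x).shape)       # torch.Size([35, 20, 200])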
class Transformer(nn.Module):
    def __init__(self,
                 ntoken: int,
                 d_model: int,
                 nhead: int,
                 d_hid: int,
                 nlayers: int,
                 dropout: float = 0.5):
        super(Transformer, self).__init__()
        self.model_type = 'Transformer'
        self.pos_encoder = PositionalEncoding(d_model, dropout)
        # A single encoder layer
        encoder_layers = TransformerEncoderLayer(d_model, nhead, d_hid, dropout)
        # The encoder stack; PyTorch packages the Transformer encoder for us
        self.encoder = TransformerEncoder(encoder_layers, nlayers)
        self.embedding = nn.Embedding(ntoken, d_model)
        self.d_model = d_model
        self.linear = nn.Linear(d_model, ntoken)
        # Initialize the weights
        self.init_weights()

    def init_weights(self) -> None:
        initrange = 0.1
        self.embedding.weight.data.uniform_(-initrange, initrange)
        self.linear.bias.data.zero_()
        self.linear.weight.data.uniform_(-initrange, initrange)

    def forward(self, src: Tensor, mask: Tensor = None) -> Tensor:
        """
        Arguments:
            src : Tensor, shape [seq_len, batch_size]
            mask: Tensor, shape [seq_len, seq_len]
        Returns:
            Output Tensor of shape [seq_len, batch_size, ntoken]
        """
        src = self.embedding(src) * math.sqrt(self.d_model)
        src = self.pos_encoder(src)
        output = self.encoder(src, mask)
        output = self.linear(output)
        return output
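The mask argument is optional here (and the training loop below does not pass one); for strictly autoregressive training you would normally pass a causal mask so that position i cannot attend to later positions. A minimal sketch of building such a mask with torch.triu and checking the output shape (the model instance and sizes here are illustrative only):

# Additive causal mask: 0 on and below the diagonal, -inf strictly above it
seq_len, bsz, demo_ntokens = 35, 20, 1000
causal_mask = torch.triu(torch.full((seq_len, seq_len), float('-inf')), diagonal=1)

demo_model = Transformer(ntoken=demo_ntokens, d_model=200, nhead=2, d_hid=200, nlayers=2)
src = torch.randint(0, demo_ntokens, (seq_len, bsz))   # [seq_len, batch_size]
print(demo_model(src, causal_mask).shape)              # torch.Size([35, 20, 1000])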
3. Load the dataset
Install portalocker and torchdata:
pip install portalocker
pip install torchdata
Because the torch/torchtext version I am using differs from the one in @K同学啊's example and the build_vocab_from_iterator interface is not compatible between them, the vocabulary-building code below is slightly modified.
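For reference, on newer torchtext versions (roughly 0.10 and later) the vocabulary would instead be built with build_vocab_from_iterator, as in the lines that are commented out in the code below. A minimal sketch of that variant:

from torchtext.data.utils import get_tokenizer
from torchtext.datasets import WikiText2
from torchtext.vocab import build_vocab_from_iterator

tokenizer = get_tokenizer('basic_english')
train_iter = WikiText2(split='train')
# Build the vocabulary from the tokenized training split
vocab = build_vocab_from_iterator(map(tokenizer, train_iter), specials=['<unk>'])
# Map unknown tokens to '<unk>'
vocab.set_default_index(vocab['<unk>'])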
bptt = 35

# Use the GPU if available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Load the WikiText2 training split from torchtext
train_iter = WikiText2(split='train')
# Basic English tokenizer
tokenizer = get_tokenizer('basic_english')

# Newer torchtext API (not used here, see the sketch above):
# vocab = build_vocab_from_iterator(map(tokenizer, train_iter), specials=['<unk>'])
# vocab.set_default_index(vocab['<unk>'])

# Build the vocabulary with the older Counter-based Vocab API
counter = Counter()
for line in train_iter:
    # print('Line :', line, 'END.')
    counter.update(tokenizer(line))
    # print('Token :', tokenizer(line), 'END.')
vocab = Vocab(counter, min_freq=1)
def data_process(raw_text_iter: dataset.IterableDataset) -> Tensor:
    """Convert raw text into a flat tensor of token indices."""
    data = []
    for item in raw_text_iter:
        tokens = tokenizer(item)
        data.append(torch.tensor([vocab[token] for token in tokens], dtype=torch.long))
    return torch.cat(tuple(filter(lambda t: t.numel() > 0, data)))

def batchify(data: Tensor, bsz: int) -> Tensor:
    """Split the data into bsz separate sequences, dropping the trailing elements
    that would not fit cleanly.
    Arguments:
        data: Tensor, shape [N]
        bsz : int, batch size
    Returns:
        Tensor of shape [N // bsz, bsz]
    """
    seq_len = data.size(0) // bsz
    data = data[:seq_len * bsz]
    data = data.view(bsz, seq_len).t().contiguous()
    return data.to(device)
# Fetch one batch of (input, target) pairs
def get_batch(source: Tensor, i: int) -> Tuple[Tensor, Tensor]:
    """
    Arguments:
        source: Tensor, shape [full_seq_len, batch_size]
        i: int, index of the current batch
    Returns:
        tuple (data, target), where
        - data has shape [seq_len, batch_size]
        - target has shape [seq_len * batch_size]
    """
    # Sequence length of this batch: at most bptt, never past the end of source
    seq_len = min(bptt, len(source) - 1 - i)
    # Input: rows i .. i+seq_len
    data = source[i:i+seq_len]
    # Target: rows i+1 .. i+1+seq_len (the input shifted by one), flattened to 1-D
    target = source[i+1:i+1+seq_len].reshape(-1)
    return data, target
# "train_iter" was consumed while building the vocabulary, so recreate the iterators
train_iter, val_iter, test_iter = WikiText2()
# Convert the train/validation/test splits into flat tensors
train_data = data_process(train_iter)
val_data = data_process(val_iter)
test_data = data_process(test_iter)

# Batch sizes for training and evaluation
batch_size = 20
eval_batch_size = 10

# Batchify the train/validation/test data
train_data = batchify(train_data, batch_size)  # shape [seq_len, batch_size]
val_data = batchify(val_data, eval_batch_size)
test_data = batchify(test_data, eval_batch_size)
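To make the slicing concrete, here is a tiny worked example on a toy tensor (names and values are for illustration only; printed tensors may carry a device suffix when running on GPU):

toy = torch.arange(12)            # tokens 0 .. 11
toy_batched = batchify(toy, 2)    # shape [6, 2]: column 0 holds 0..5, column 1 holds 6..11
x, y = get_batch(toy_batched, 0)
print(x)  # rows 0..4 of toy_batched – each column is one input sequence
print(y)  # tensor([ 1,  7,  2,  8,  3,  9,  4, 10,  5, 11]) – the next token for every position of x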
4. Initialize an instance
ntokens = len(vocab)  # size of the vocabulary
emsize = 200   # embedding dimension
d_hid = 200    # dimension of the feedforward network in nn.TransformerEncoder
nlayers = 2    # number of nn.TransformerEncoderLayer layers in the nn.TransformerEncoder
nhead = 2      # number of heads in nn.MultiheadAttention
dropout = 0.2  # dropout probability

# Create the Transformer model and move it to the device
model = Transformer(ntokens, emsize, nhead, d_hid, nlayers, dropout).to(device)
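A quick look at the model size before training (a minimal sketch; the exact count depends on the vocabulary size produced by your torchtext version):

total_params = sum(p.numel() for p in model.parameters())
print(f'total parameters: {total_params:,}')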
5. Train the model
def train(model: nn.Module) -> None:
    model.train()  # turn on training mode
    total_loss = 0.
    log_interval = 200  # print a log line every 200 batches
    start_time = time.time()
    num_batches = len(train_data) // bptt  # total number of batches

    for batch, i in enumerate(range(0, train_data.size(0) - 1, bptt)):
        data, targets = get_batch(train_data, i)  # data and targets for this batch
        output = model(data)                      # forward pass
        output_flat = output.view(-1, ntokens)
        loss = criterion(output_flat, targets)    # compute the loss

        optimizer.zero_grad()   # clear the gradients
        loss.backward()         # backpropagate
        torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)  # clip gradients to prevent explosion
        optimizer.step()        # update the parameters

        total_loss += loss.item()  # accumulate the loss
        if batch % log_interval == 0 and batch > 0:
            lr = scheduler.get_last_lr()[0]  # current learning rate
            # average time per batch
            ms_per_batch = (time.time() - start_time) * 1000 / log_interval
            cur_loss = total_loss / log_interval  # average loss
            ppl = math.exp(cur_loss)              # perplexity
            # print the log line
            print(f'| epoch {epoch:3d} | {batch:5d}/{num_batches:5d} batches | '
                  f'lr {lr:02.2f} | ms/batch {ms_per_batch:5.2f} | '
                  f'loss {cur_loss:5.2f} | ppl {ppl:8.2f}')
            total_loss = 0            # reset the running loss
            start_time = time.time()  # reset the timer
def evaluate(model: nn.Module, eval_data: Tensor) -> float:
    model.eval()  # turn on evaluation mode
    total_loss = 0.
    with torch.no_grad():
        for i in range(0, eval_data.size(0) - 1, bptt):
            data, targets = get_batch(eval_data, i)  # data and targets for this batch
            seq_len = data.size(0)                   # actual sequence length of this batch
            output = model(data)                     # forward pass
            output_flat = output.view(-1, ntokens)
            total_loss += seq_len * criterion(output_flat, targets).item()  # accumulate the loss
    return total_loss / (len(eval_data) - 1)  # average loss per position

criterion = nn.CrossEntropyLoss()  # cross-entropy loss
lr = 5.0  # learning rate
# Stochastic gradient descent (SGD) over the model parameters
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
# Learning-rate scheduler: decay the learning rate by a factor of 0.95 every epoch
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.95)
best_val_loss = float('inf')  # best validation loss so far
epochs = 5  # total number of training epochs
# Path for saving the best model parameters (the "output" directory must exist)
best_model_params_path = os.path.join("output", "best_model_params.pt")

for epoch in range(1, epochs + 1):
    epoch_start_time = time.time()        # start time of this epoch
    train(model)                          # train for one epoch
    val_loss = evaluate(model, val_data)  # evaluate on the validation set
    val_ppl = math.exp(val_loss)          # validation perplexity
    elapsed = time.time() - epoch_start_time  # elapsed time of this epoch
    print('-' * 89)
    # Report elapsed time, validation loss and perplexity for this epoch
    print(f'| end of epoch {epoch:3d} | time: {elapsed:5.2f}s | '
          f'valid loss {val_loss:5.2f} | valid ppl {val_ppl:8.2f}')
    print('-' * 89)

    if val_loss < best_val_loss:  # new best validation loss
        best_val_loss = val_loss
        # Save the current parameters as the best model so far
        torch.save(model.state_dict(), best_model_params_path)

    scheduler.step()  # step the learning-rate scheduler

# Load the best parameters, i.e. the model that performed best on the validation set
model.load_state_dict(torch.load(best_model_params_path))
| epoch 1 | 200/ 2928 batches | lr 5.00 | ms/batch 62.10 | loss 7.60 | ppl 1988.40
| epoch 1 | 400/ 2928 batches | lr 5.00 | ms/batch 17.75 | loss 6.91 | ppl 997.98
| epoch 1 | 600/ 2928 batches | lr 5.00 | ms/batch 17.86 | loss 6.54 | ppl 691.84
| epoch 1 | 800/ 2928 batches | lr 5.00 | ms/batch 18.23 | loss 6.42 | ppl 613.56
| epoch 1 | 1000/ 2928 batches | lr 5.00 | ms/batch 18.59 | loss 6.31 | ppl 550.96
| epoch 1 | 1200/ 2928 batches | lr 5.00 | ms/batch 18.00 | loss 6.29 | ppl 537.86
| epoch 1 | 1400/ 2928 batches | lr 5.00 | ms/batch 18.06 | loss 6.21 | ppl 497.04
| epoch 1 | 1600/ 2928 batches | lr 5.00 | ms/batch 18.06 | loss 6.19 | ppl 489.55
| epoch 1 | 1800/ 2928 batches | lr 5.00 | ms/batch 18.06 | loss 6.10 | ppl 447.05
| epoch 1 | 2000/ 2928 batches | lr 5.00 | ms/batch 18.04 | loss 6.10 | ppl 446.01
| epoch 1 | 2200/ 2928 batches | lr 5.00 | ms/batch 18.21 | loss 5.98 | ppl 395.55
| epoch 1 | 2400/ 2928 batches | lr 5.00 | ms/batch 18.09 | loss 6.04 | ppl 421.60
| epoch 1 | 2600/ 2928 batches | lr 5.00 | ms/batch 18.20 | loss 6.02 | ppl 412.97
| epoch 1 | 2800/ 2928 batches | lr 5.00 | ms/batch 18.37 | loss 5.95 | ppl 383.96
-----------------------------------------------------------------------------------------
| end of epoch 1 | time: 64.10s | valid loss 5.75 | valid ppl 313.27
-----------------------------------------------------------------------------------------
| epoch 2 | 200/ 2928 batches | lr 4.75 | ms/batch 19.12 | loss 5.91 | ppl 367.51
| epoch 2 | 400/ 2928 batches | lr 4.75 | ms/batch 19.36 | loss 5.90 | ppl 366.18
| epoch 2 | 600/ 2928 batches | lr 4.75 | ms/batch 19.06 | loss 5.76 | ppl 316.60
| epoch 2 | 800/ 2928 batches | lr 4.75 | ms/batch 19.39 | loss 5.79 | ppl 327.00
| epoch 2 | 1000/ 2928 batches | lr 4.75 | ms/batch 19.42 | loss 5.74 | ppl 312.08
| epoch 2 | 1200/ 2928 batches | lr 4.75 | ms/batch 19.42 | loss 5.78 | ppl 324.39
| epoch 2 | 1400/ 2928 batches | lr 4.75 | ms/batch 19.47 | loss 5.76 | ppl 318.24
| epoch 2 | 1600/ 2928 batches | lr 4.75 | ms/batch 19.52 | loss 5.79 | ppl 327.94
| epoch 2 | 1800/ 2928 batches | lr 4.75 | ms/batch 19.45 | loss 5.73 | ppl 307.91
| epoch 2 | 2000/ 2928 batches | lr 4.75 | ms/batch 19.48 | loss 5.74 | ppl 310.71
| epoch 2 | 2200/ 2928 batches | lr 4.75 | ms/batch 19.99 | loss 5.63 | ppl 279.15
| epoch 2 | 2400/ 2928 batches | lr 4.75 | ms/batch 19.93 | loss 5.72 | ppl 303.65
| epoch 2 | 2600/ 2928 batches | lr 4.75 | ms/batch 19.72 | loss 5.71 | ppl 302.62
| epoch 2 | 2800/ 2928 batches | lr 4.75 | ms/batch 19.75 | loss 5.65 | ppl 284.62
-----------------------------------------------------------------------------------------
| end of epoch 2 | time: 59.49s | valid loss 5.45 | valid ppl 233.57
-----------------------------------------------------------------------------------------
| epoch 3 | 200/ 2928 batches | lr 4.51 | ms/batch 20.16 | loss 5.64 | ppl 280.39
| epoch 3 | 400/ 2928 batches | lr 4.51 | ms/batch 19.53 | loss 5.66 | ppl 288.43
| epoch 3 | 600/ 2928 batches | lr 4.51 | ms/batch 34.44 | loss 5.51 | ppl 246.89
| epoch 3 | 800/ 2928 batches | lr 4.51 | ms/batch 74.36 | loss 5.56 | ppl 258.98
| epoch 3 | 1000/ 2928 batches | lr 4.51 | ms/batch 74.49 | loss 5.51 | ppl 247.44
| epoch 3 | 1200/ 2928 batches | lr 4.51 | ms/batch 74.49 | loss 5.54 | ppl 255.86
| epoch 3 | 1400/ 2928 batches | lr 4.51 | ms/batch 74.48 | loss 5.54 | ppl 255.24
| epoch 3 | 1600/ 2928 batches | lr 4.51 | ms/batch 74.50 | loss 5.58 | ppl 265.05
| epoch 3 | 1800/ 2928 batches | lr 4.51 | ms/batch 74.43 | loss 5.52 | ppl 250.01
| epoch 3 | 2000/ 2928 batches | lr 4.51 | ms/batch 74.42 | loss 5.53 | ppl 251.67
| epoch 3 | 2200/ 2928 batches | lr 4.51 | ms/batch 74.39 | loss 5.43 | ppl 227.35
| epoch 3 | 2400/ 2928 batches | lr 4.51 | ms/batch 74.41 | loss 5.52 | ppl 249.87
| epoch 3 | 2600/ 2928 batches | lr 4.51 | ms/batch 74.41 | loss 5.54 | ppl 254.04
| epoch 3 | 2800/ 2928 batches | lr 4.51 | ms/batch 74.41 | loss 5.45 | ppl 233.39
-----------------------------------------------------------------------------------------
| end of epoch 3 | time: 196.27s | valid loss 5.36 | valid ppl 212.73
-----------------------------------------------------------------------------------------
| epoch 4 | 200/ 2928 batches | lr 4.29 | ms/batch 74.89 | loss 5.45 | ppl 233.39
| epoch 4 | 400/ 2928 batches | lr 4.29 | ms/batch 74.41 | loss 5.50 | ppl 245.46
| epoch 4 | 600/ 2928 batches | lr 4.29 | ms/batch 74.53 | loss 5.32 | ppl 204.48
| epoch 4 | 800/ 2928 batches | lr 4.29 | ms/batch 74.91 | loss 5.38 | ppl 217.50
| epoch 4 | 1000/ 2928 batches | lr 4.29 | ms/batch 74.98 | loss 5.35 | ppl 211.26
| epoch 4 | 1200/ 2928 batches | lr 4.29 | ms/batch 74.90 | loss 5.40 | ppl 220.44
| epoch 4 | 1400/ 2928 batches | lr 4.29 | ms/batch 75.01 | loss 5.40 | ppl 220.32
| epoch 4 | 1600/ 2928 batches | lr 4.29 | ms/batch 75.13 | loss 5.44 | ppl 230.02
| epoch 4 | 1800/ 2928 batches | lr 4.29 | ms/batch 75.43 | loss 5.36 | ppl 213.09
| epoch 4 | 2000/ 2928 batches | lr 4.29 | ms/batch 75.58 | loss 5.39 | ppl 218.75
| epoch 4 | 2200/ 2928 batches | lr 4.29 | ms/batch 75.51 | loss 5.27 | ppl 194.47
| epoch 4 | 2400/ 2928 batches | lr 4.29 | ms/batch 75.47 | loss 5.37 | ppl 214.22
| epoch 4 | 2600/ 2928 batches | lr 4.29 | ms/batch 74.93 | loss 5.38 | ppl 216.47
| epoch 4 | 2800/ 2928 batches | lr 4.29 | ms/batch 75.28 | loss 5.31 | ppl 201.87
-----------------------------------------------------------------------------------------
| end of epoch 4 | time: 228.38s | valid loss 5.28 | valid ppl 197.28
-----------------------------------------------------------------------------------------
| epoch 5 | 200/ 2928 batches | lr 4.07 | ms/batch 75.21 | loss 5.32 | ppl 204.85
| epoch 5 | 400/ 2928 batches | lr 4.07 | ms/batch 75.07 | loss 5.36 | ppl 213.76
| epoch 5 | 600/ 2928 batches | lr 4.07 | ms/batch 75.41 | loss 5.18 | ppl 177.20
| epoch 5 | 800/ 2928 batches | lr 4.07 | ms/batch 75.38 | loss 5.24 | ppl 189.53
| epoch 5 | 1000/ 2928 batches | lr 4.07 | ms/batch 75.90 | loss 5.21 | ppl 182.96
| epoch 5 | 1200/ 2928 batches | lr 4.07 | ms/batch 75.93 | loss 5.25 | ppl 190.94
| epoch 5 | 1400/ 2928 batches | lr 4.07 | ms/batch 75.30 | loss 5.26 | ppl 191.79
| epoch 5 | 1600/ 2928 batches | lr 4.07 | ms/batch 75.06 | loss 5.31 | ppl 202.05
| epoch 5 | 1800/ 2928 batches | lr 4.07 | ms/batch 75.20 | loss 5.24 | ppl 189.07
| epoch 5 | 2000/ 2928 batches | lr 4.07 | ms/batch 75.20 | loss 5.26 | ppl 193.05
| epoch 5 | 2200/ 2928 batches | lr 4.07 | ms/batch 75.21 | loss 5.14 | ppl 170.77
| epoch 5 | 2400/ 2928 batches | lr 4.07 | ms/batch 75.02 | loss 5.23 | ppl 187.72
| epoch 5 | 2600/ 2928 batches | lr 4.07 | ms/batch 75.15 | loss 5.25 | ppl 189.99
| epoch 5 | 2800/ 2928 batches | lr 4.07 | ms/batch 75.12 | loss 5.17 | ppl 176.12
-----------------------------------------------------------------------------------------
| end of epoch 5 | time: 228.79s | valid loss 5.24 | valid ppl 188.86
-----------------------------------------------------------------------------------------
6. Evaluate the model
test_loss = evaluate(model, test_data)
test_ppl = math.exp(test_loss)
print('=' * 89)
print(f'| End of training | test loss {test_loss:5.2f} | '
      f'test ppl {test_ppl:8.2f}')
print('=' * 89)
=========================================================================================
| End of training | test loss 5.15 | test ppl 171.82
=========================================================================================
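Finally, for this week's extension task (feeding the model a custom English sentence and predicting the next word), here is a minimal sketch built on the tokenizer, vocab and model defined above. The helper predict_next_word is a name introduced here for illustration, and the index-to-word lookup uses the legacy vocab.itos list (newer torchtext versions expose vocab.get_itos() instead):

def predict_next_word(model: nn.Module, text: str, topk: int = 5):
    """Tokenize the text, run it through the model, and return the top-k candidate next words."""
    model.eval()
    tokens = tokenizer(text)
    # Map tokens to indices, shaped as [seq_len, batch_size=1]
    indices = torch.tensor([vocab[token] for token in tokens], dtype=torch.long).unsqueeze(1).to(device)
    with torch.no_grad():
        output = model(indices)      # [seq_len, 1, ntokens]
    # Distribution over the token that follows the last input token
    probs = torch.softmax(output[-1, 0], dim=-1)
    top_probs, top_ids = probs.topk(topk)
    itos = vocab.itos                # legacy API; use vocab.get_itos() on newer torchtext
    return [(itos[i], float(p)) for i, p in zip(top_ids.tolist(), top_probs.tolist())]

print(predict_next_word(model, 'the meaning of life is'))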