Text classification with the torchtext library
In this tutorial, we will show how to use the torchtext library to build the dataset for text classification analysis. This includes how to:
- Access the raw data as an iterator
- Build a data processing pipeline to convert the raw text strings into torch.Tensor that can be used to train the model
- Shuffle and iterate the data with torch.utils.data.DataLoader
Prerequisites
Before running this tutorial, a recent 2.x version of the portalocker package needs to be installed. For example, in a Colab environment, this can be done by adding the following line at the top of the script:
!pip install -U 'portalocker>=2.0.0'
Access the raw dataset iterators
The torchtext library provides a few raw dataset iterators, which yield the raw text strings. For example, the AG_NEWS dataset iterator yields the raw data as tuples of label and text.
To access the torchtext datasets, please install torchdata following the instructions at https://github.com/pytorch/data.
import torch
from torchtext.datasets import AG_NEWS
train_iter = iter(AG_NEWS(split="train"))
next(train_iter)
>>> (3, "Fears for T N pension after talks Unions representing workers at Turner
Newall say they are 'disappointed' after talks with stricken parent firm Federal
Mogul.")
next(train_iter)
>>> (4, "The Race is On: Second Private Team Sets Launch Date for Human
Spaceflight (SPACE.com) SPACE.com - TORONTO, Canada -- A second\\team of
rocketeers competing for the #36;10 million Ansari X Prize, a contest
for\\privately funded suborbital space flight, has officially announced
the first\\launch date for its manned rocket.")
next(train_iter)
>>> (4, 'Ky. Company Wins Grant to Study Peptides (AP) AP - A company founded
by a chemistry researcher at the University of Louisville won a grant to develop
a method of producing better peptides, which are short chains of amino acids, the
building blocks of proteins.')
Prepare data processing pipelines
We have revisited the very basic components of the torchtext library, including the vocabulary, word vectors, and tokenizer. Those are the basic data processing building blocks for raw text strings.
Here is an example of typical NLP data processing with the tokenizer and vocabulary. The first step is to build a vocabulary with the raw training dataset. Here we use the built-in factory function build_vocab_from_iterator, which accepts an iterator that yields lists of tokens. Users can also pass any special symbols to be added to the vocabulary.
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
tokenizer = get_tokenizer("basic_english")
train_iter = AG_NEWS(split="train")
def yield_tokens(data_iter):
    for _, text in data_iter:
        yield tokenizer(text)
vocab = build_vocab_from_iterator(yield_tokens(train_iter), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])
The vocabulary block converts a list of tokens into integers.
vocab(['here', 'is', 'an', 'example'])
>>> [475, 21, 30, 5297]
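Because set_default_index(vocab["<unk>"]) was called above, looking up a token that is not in the vocabulary falls back to the <unk> index instead of raising an error. A minimal sketch of this behavior (the exact indices depend on the vocabulary built from your data; with specials=["<unk>"] placed first, the <unk> index is 0):
vocab(["here", "is", "an", "out-of-vocabulary-token"])
>>> [475, 21, 30, 0]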
We prepare the text processing pipeline with the tokenizer and vocabulary. The text and label pipelines will be used to process the raw data strings from the dataset iterators.
text_pipeline = lambda x: vocab(tokenizer(x))
label_pipeline = lambda x: int(x) - 1
The text pipeline converts a text string into a list of integers based on the lookup table defined in the vocabulary. The label pipeline converts the label into an integer. For example,
text_pipeline('here is the an example')
>>> [475, 21, 2, 30, 5297]
label_pipeline('10')
>>> 9
Generate data batches and iterators
torch.utils.data.DataLoader is recommended for PyTorch users (a tutorial covering it is available in the PyTorch documentation). It works with a map-style dataset that implements the __getitem__() and __len__() protocols and represents a map from indices/keys to data samples. It also works with an iterable dataset when the shuffle argument is False.
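As a quick illustration of the map-style protocol mentioned above, here is a hypothetical toy dataset (not part of the tutorial) that implements both methods:
from torch.utils.data import Dataset

class ToyTextDataset(Dataset):
    """A minimal map-style dataset: a map from an integer index to a (label, text) sample."""
    def __init__(self, samples):
        self.samples = samples          # e.g. a list of (label, text) tuples
    def __len__(self):
        return len(self.samples)        # __len__() protocol
    def __getitem__(self, idx):
        return self.samples[idx]        # __getitem__() protocol

toy = ToyTextDataset([(1, "world news"), (2, "sports news")])
print(len(toy), toy[0])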
Before a batch is sent to the model, the collate_fn function works on the batch of samples generated from the DataLoader. The input to collate_fn is a batch of data of the batch size configured in the DataLoader, and collate_fn processes it according to the data processing pipelines declared previously. Pay attention here and make sure that collate_fn is declared as a top-level def, so that the function is available in each worker.
In this example, the text entries in the original data batch input are packed into a list and concatenated as a single tensor for the input of nn.EmbeddingBag. The offsets tensor holds the delimiters that mark the beginning index of each individual sequence in the text tensor. Label is a tensor saving the labels of the individual text entries. (One batch from the resulting DataLoader is inspected right after the code below.)
from torch.utils.data import DataLoader
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def collate_batch(batch):
    label_list, text_list, offsets = [], [], [0]
    for _label, _text in batch:
        label_list.append(label_pipeline(_label))
        processed_text = torch.tensor(text_pipeline(_text), dtype=torch.int64)
        text_list.append(processed_text)
        offsets.append(processed_text.size(0))
    label_list = torch.tensor(label_list, dtype=torch.int64)
    offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
    text_list = torch.cat(text_list)
    return label_list.to(device), text_list.to(device), offsets.to(device)
train_iter = AG_NEWS(split="train")
dataloader = DataLoader(
    train_iter, batch_size=8, shuffle=False, collate_fn=collate_batch
)
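To make the offsets concrete, one batch can be pulled out and inspected directly (a minimal sketch; the exact sizes depend on the eight articles in the batch):
labels, texts, offsets = next(iter(dataloader))
print(labels.shape)    # torch.Size([8]): one label per text entry
print(texts.shape)     # all token ids of the 8 entries concatenated into one 1-D tensor
print(offsets)         # 8 starting indices, the first is always 0, e.g. tensor([ 0, 29, 71, ...])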
Define the model
The model is composed of the nn.EmbeddingBag layer plus a linear layer for the classification purpose. nn.EmbeddingBag with the default mode of "mean" computes the mean value of a "bag" of embeddings. Although the text entries here have different lengths, the nn.EmbeddingBag module requires no padding, because the text lengths are saved in the offsets.
Additionally, since nn.EmbeddingBag accumulates the average across the embeddings on the fly, it can enhance the performance and memory efficiency of processing a sequence of tensors.
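A minimal sketch of this behavior (toy numbers chosen only for illustration): two "bags" of token ids are packed into one flat tensor, offsets marks where the second bag starts, and the result for each bag equals the mean of its token embeddings.
import torch
from torch import nn

bag = nn.EmbeddingBag(num_embeddings=10, embedding_dim=4, mode="mean")
tokens = torch.tensor([1, 2, 3, 4, 5])   # bag 0 = [1, 2, 3], bag 1 = [4, 5]
offsets = torch.tensor([0, 3])           # starting index of each bag
out = bag(tokens, offsets)               # shape (2, 4): one mean embedding per bag

# Equivalent to averaging the rows of the underlying embedding table by hand:
manual = torch.stack([
    bag.weight[[1, 2, 3]].mean(dim=0),
    bag.weight[[4, 5]].mean(dim=0),
])
print(torch.allclose(out, manual))       # True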
from torch import nn
class TextClassificationModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class):
        super(TextClassificationModel, self).__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=False)
        self.fc = nn.Linear(embed_dim, num_class)
        self.init_weights()

    def init_weights(self):
        initrange = 0.5
        self.embedding.weight.data.uniform_(-initrange, initrange)
        self.fc.weight.data.uniform_(-initrange, initrange)
        self.fc.bias.data.zero_()

    def forward(self, text, offsets):
        embedded = self.embedding(text, offsets)
        return self.fc(embedded)
Initiate an instance
The AG_NEWS dataset has four labels, so the number of classes is four:
1 : World
2 : Sports
3 : Business
4 : Sci/Tec
We build a model with an embedding dimension of 64. The vocab size is equal to the length of the vocabulary instance. The number of classes is equal to the number of labels.
train_iter = AG_NEWS(split="train")
num_class = len(set([label for (label, text) in train_iter]))
vocab_size = len(vocab)
emsize = 64
model = TextClassificationModel(vocab_size, emsize, num_class).to(device)
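As a quick sanity check (a hypothetical toy input, not part of the tutorial), the untrained model maps a packed tensor of token ids plus offsets to one score per class:
with torch.no_grad():
    sample_text = torch.tensor(text_pipeline("here is an example"), dtype=torch.int64).to(device)
    sample_offsets = torch.tensor([0]).to(device)   # a single bag starting at index 0
    logits = model(sample_text, sample_offsets)
print(logits.shape)   # torch.Size([1, 4]): one row of 4 class scores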
Define functions to train the model and evaluate results
import time
def train(dataloader):
    model.train()
    total_acc, total_count = 0, 0
    log_interval = 500
    start_time = time.time()

    for idx, (label, text, offsets) in enumerate(dataloader):
        optimizer.zero_grad()
        predicted_label = model(text, offsets)
        loss = criterion(predicted_label, label)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
        optimizer.step()
        total_acc += (predicted_label.argmax(1) == label).sum().item()
        total_count += label.size(0)
        if idx % log_interval == 0 and idx > 0:
            elapsed = time.time() - start_time
            print(
                "| epoch {:3d} | {:5d}/{:5d} batches "
                "| accuracy {:8.3f}".format(
                    epoch, idx, len(dataloader), total_acc / total_count
                )
            )
            total_acc, total_count = 0, 0
            start_time = time.time()


def evaluate(dataloader):
    model.eval()
    total_acc, total_count = 0, 0

    with torch.no_grad():
        for idx, (label, text, offsets) in enumerate(dataloader):
            predicted_label = model(text, offsets)
            loss = criterion(predicted_label, label)
            total_acc += (predicted_label.argmax(1) == label).sum().item()
            total_count += label.size(0)
    return total_acc / total_count
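The training loop above clips gradients with torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1) before each optimizer step: if the total gradient norm exceeds 0.1, all gradients are rescaled in place so that the norm becomes (approximately) 0.1. A minimal sketch with a single toy parameter, chosen only for illustration:
p = torch.nn.Parameter(torch.ones(3))
p.grad = torch.tensor([3.0, 4.0, 0.0])               # gradient norm is 5.0
total_norm = torch.nn.utils.clip_grad_norm_([p], max_norm=0.1)
print(total_norm)                                     # tensor(5.): the norm before clipping
print(p.grad)                                         # rescaled by 0.1 / 5.0 -> approx. [0.06, 0.08, 0.0]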
Split the dataset and run the model
Since the original AG_NEWS has no validation dataset, we split the training dataset into train/valid sets with a split ratio of 0.95 (train) and 0.05 (valid), using the torch.utils.data.dataset.random_split function from the PyTorch core library.
The CrossEntropyLoss criterion combines nn.LogSoftmax() and nn.NLLLoss() in a single class. It is useful when training a classification problem with C classes. SGD implements stochastic gradient descent as the optimizer, with the initial learning rate set to 5.0. StepLR is used here to adjust the learning rate across epochs.
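The statement that CrossEntropyLoss fuses LogSoftmax and NLLLoss can be checked directly (a minimal sketch with made-up logits):
logits = torch.randn(2, 4)              # 2 samples, 4 classes
targets = torch.tensor([1, 3])

ce = torch.nn.CrossEntropyLoss()(logits, targets)
nll = torch.nn.NLLLoss()(torch.nn.LogSoftmax(dim=1)(logits), targets)
print(torch.allclose(ce, nll))          # True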
from torch.utils.data.dataset import random_split
from torchtext.data.functional import to_map_style_dataset
# Hyperparameters
EPOCHS = 10 # epoch
LR = 5 # learning rate
BATCH_SIZE = 64 # batch size for training
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.1)
total_accu = None
train_iter, test_iter = AG_NEWS()
train_dataset = to_map_style_dataset(train_iter)
test_dataset = to_map_style_dataset(test_iter)
num_train = int(len(train_dataset) * 0.95)
split_train_, split_valid_ = random_split(
    train_dataset, [num_train, len(train_dataset) - num_train]
)

train_dataloader = DataLoader(
    split_train_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)
valid_dataloader = DataLoader(
    split_valid_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)
test_dataloader = DataLoader(
    test_dataset, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)
for epoch in range(1, EPOCHS + 1):
    epoch_start_time = time.time()
    train(train_dataloader)
    accu_val = evaluate(valid_dataloader)
    if total_accu is not None and total_accu > accu_val:
        scheduler.step()
    else:
        total_accu = accu_val
    print("-" * 59)
    print(
        "| end of epoch {:3d} | time: {:5.2f}s | "
        "valid accuracy {:8.3f} ".format(
            epoch, time.time() - epoch_start_time, accu_val
        )
    )
    print("-" * 59)
Output
| epoch 1 | 500/ 1782 batches | accuracy 0.694
| epoch 1 | 1000/ 1782 batches | accuracy 0.856
| epoch 1 | 1500/ 1782 batches | accuracy 0.877
-----------------------------------------------------------
| end of epoch 1 | time: 12.46s | valid accuracy 0.886
-----------------------------------------------------------
| epoch 2 | 500/ 1782 batches | accuracy 0.898
| epoch 2 | 1000/ 1782 batches | accuracy 0.899
| epoch 2 | 1500/ 1782 batches | accuracy 0.906
-----------------------------------------------------------
| end of epoch 2 | time: 11.99s | valid accuracy 0.895
-----------------------------------------------------------
| epoch 3 | 500/ 1782 batches | accuracy 0.916
| epoch 3 | 1000/ 1782 batches | accuracy 0.913
| epoch 3 | 1500/ 1782 batches | accuracy 0.915
-----------------------------------------------------------
| end of epoch 3 | time: 12.07s | valid accuracy 0.894
-----------------------------------------------------------
| epoch 4 | 500/ 1782 batches | accuracy 0.930
| epoch 4 | 1000/ 1782 batches | accuracy 0.932
| epoch 4 | 1500/ 1782 batches | accuracy 0.929
-----------------------------------------------------------
| end of epoch 4 | time: 12.06s | valid accuracy 0.902
-----------------------------------------------------------
| epoch 5 | 500/ 1782 batches | accuracy 0.932
| epoch 5 | 1000/ 1782 batches | accuracy 0.933
| epoch 5 | 1500/ 1782 batches | accuracy 0.931
-----------------------------------------------------------
| end of epoch 5 | time: 12.04s | valid accuracy 0.902
-----------------------------------------------------------
| epoch 6 | 500/ 1782 batches | accuracy 0.933
| epoch 6 | 1000/ 1782 batches | accuracy 0.932
| epoch 6 | 1500/ 1782 batches | accuracy 0.935
-----------------------------------------------------------
| end of epoch 6 | time: 12.02s | valid accuracy 0.903
-----------------------------------------------------------
| epoch 7 | 500/ 1782 batches | accuracy 0.934
| epoch 7 | 1000/ 1782 batches | accuracy 0.933
| epoch 7 | 1500/ 1782 batches | accuracy 0.935
-----------------------------------------------------------
| end of epoch 7 | time: 12.09s | valid accuracy 0.903
-----------------------------------------------------------
| epoch 8 | 500/ 1782 batches | accuracy 0.935
| epoch 8 | 1000/ 1782 batches | accuracy 0.933
| epoch 8 | 1500/ 1782 batches | accuracy 0.935
-----------------------------------------------------------
| end of epoch 8 | time: 12.10s | valid accuracy 0.904
-----------------------------------------------------------
| epoch 9 | 500/ 1782 batches | accuracy 0.934
| epoch 9 | 1000/ 1782 batches | accuracy 0.934
| epoch 9 | 1500/ 1782 batches | accuracy 0.934
-----------------------------------------------------------
| end of epoch 9 | time: 12.05s | valid accuracy 0.904
-----------------------------------------------------------
| epoch 10 | 500/ 1782 batches | accuracy 0.934
| epoch 10 | 1000/ 1782 batches | accuracy 0.936
| epoch 10 | 1500/ 1782 batches | accuracy 0.933
-----------------------------------------------------------
| end of epoch 10 | time: 12.00s | valid accuracy 0.905
-----------------------------------------------------------
Evaluate the model with the test dataset
Checking the results of the test dataset…
print("Checking the results of test dataset.")
accu_test = evaluate(test_dataloader)
print("test accuracy {:8.3f}".format(accu_test))
Output
Checking the results of test dataset.
test accuracy 0.907
Test on a random news item
Use the best model so far and test it on a golf news item.
ag_news_label = {1: "World", 2: "Sports", 3: "Business", 4: "Sci/Tec"}
def predict(text, text_pipeline):
    with torch.no_grad():
        text = torch.tensor(text_pipeline(text))
        output = model(text, torch.tensor([0]))
        return output.argmax(1).item() + 1
ex_text_str = "MEMPHIS, Tenn. – Four days ago, Jon Rahm was \
enduring the season’s worst weather conditions on Sunday at The \
Open on his way to a closing 75 at Royal Portrush, which \
considering the wind and the rain was a respectable showing. \
Thursday’s first round at the WGC-FedEx St. Jude Invitational \
was another story. With temperatures in the mid-80s and hardly any \
wind, the Spaniard was 13 strokes better in a flawless round. \
Thanks to his best putting performance on the PGA Tour, Rahm \
finished with an 8-under 62 for a three-stroke lead, which \
was even more impressive considering he’d never played the \
front nine at TPC Southwind."
model = model.to("cpu")
print("This is a %s news" % ag_news_label[predict(ex_text_str, text_pipeline)])
Output
This is a Sports news