[Translated PyTorch Tutorial] NLP: Text Classification with TorchText

This tutorial shows how to use the text classification datasets in torchtext, including:

  • AG_NEWS,
  • SogouNews,
  • DBpedia,
  • YelpReviewPolarity,
  • YelpReviewFull,
  • YahooAnswers,
  • AmazonReviewPolarity,
  • AmazonReviewFull

This example shows how to train a supervised learning algorithm with one of these TextClassification datasets.

Load data with ngrams

A bag of ngrams features is used to capture some partial information about the local word order. In practice, bi-grams or tri-grams are applied as word groups, which give more benefit than a single word alone. For example:

"load data with ngrams"
Bi-grams results: "load data", "data with", "with ngrams"
Tri-grams results: "load data with", "data with ngrams"
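
As a quick illustration (our own example, not part of the original tutorial), torchtext's ngrams_iterator produces exactly these word groups from a token list:

from torchtext.data.utils import ngrams_iterator

tokens = "load data with ngrams".split()
# ngrams=2 yields the single words followed by the bi-grams
print(list(ngrams_iterator(tokens, 2)))
# ['load', 'data', 'with', 'ngrams', 'load data', 'data with', 'with ngrams']
# ngrams=3 additionally yields 'load data with' and 'data with ngrams'
print(list(ngrams_iterator(tokens, 3)))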

The TextClassification datasets support the ngrams method. By setting ngrams to 2, the example text in a dataset will be processed into a list of single words plus bi-grams strings.

%matplotlib inline
import torch
import torchtext
from torchtext.datasets import text_classification
NGRAMS = 2
import os
if not os.path.isdir('./.data'):
	os.mkdir('./.data')
# Code from the original tutorial that downloads the data automatically. The download often fails in mainland China due to network issues, so the data is loaded locally below.
# train_dataset, test_dataset = text_classification.DATASETS['AG_NEWS'](
#     root='./.data', ngrams=NGRAMS, vocab=None)
BATCH_SIZE = 16
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

Download the data manually and load it

Since the automatic download may fail due to network issues (a Baidu Netdisk mirror is available, extraction code: 2vj9), download the archive ag_news_csv.tar.gz manually and place it in the './.data/' folder.
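
Before running the loading code below, it can help to check that the archive really sits where the loader expects it. This is a small sanity check of our own, not part of the tutorial; the path must match the dataset_tar default used later.

import os
# Sanity check (our addition): the archive must be at the path expected by _setup_datasets below
assert os.path.isfile('./.data/ag_news_csv.tar.gz'), \
    "ag_news_csv.tar.gz not found under ./.data/ -- download it first"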

# Import the required libraries and functions
import logging
from torchtext.utils import extract_archive, unicode_csv_reader
from torchtext.vocab import build_vocab_from_iterator
from torchtext.datasets.text_classification import *
from torchtext.datasets.text_classification import _csv_iterator,_create_data_from_iterator

# Define the dataset-creation function. The original version lives in torchtext.datasets.text_classification; the parameters needed for this tutorial are set as defaults here.
def _setup_datasets(dataset_tar='./.data/ag_news_csv.tar.gz',dataset_name="AG_NEWS", root='./.data', ngrams=NGRAMS, vocab=None, include_unk=False):
    # The download call from the original function is commented out
    #     dataset_tar = download_from_url(URLS[dataset_name], root=root)
    extracted_files = extract_archive(dataset_tar)  # extract the data files

    for fname in extracted_files:
        if fname.endswith('train.csv'):
            train_csv_path = fname
        if fname.endswith('test.csv'):
            test_csv_path = fname

    if vocab is None:
        logging.info('Building Vocab based on {}'.format(train_csv_path))
        vocab = build_vocab_from_iterator(_csv_iterator(train_csv_path, ngrams))  # build the vocabulary
    else:
        if not isinstance(vocab, Vocab):
            raise TypeError("Passed vocabulary is not of type Vocab")
    logging.info('Vocab has {} entries'.format(len(vocab)))
    logging.info('Creating training data')
    train_data, train_labels = _create_data_from_iterator(   # create the training data
        vocab, _csv_iterator(train_csv_path, ngrams, yield_cls=True), include_unk) 
    logging.info('Creating testing data')
    test_data, test_labels = _create_data_from_iterator(   # create the test data
        vocab, _csv_iterator(test_csv_path, ngrams, yield_cls=True), include_unk)
    if len(train_labels ^ test_labels) > 0:
        raise ValueError("Training and test labels don't match")
    return (TextClassificationDataset(vocab, train_data, train_labels),  # return dataset instances
            TextClassificationDataset(vocab, test_data, test_labels))
train_dataset, test_dataset = _setup_datasets()

Output:

120000lines [00:07, 16793.47lines/s]
120000lines [00:13, 9134.56lines/s]
7600lines [00:00, 9351.33lines/s]
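
To confirm that each sample is a label plus a tensor of ids covering both single words and bi-grams, a quick inspection (illustrative only, not part of the tutorial) looks like this:

# Each sample is a (label, token-id tensor) pair
label, token_ids = train_dataset[0]
vocab = train_dataset.get_vocab()
print(label, token_ids[:10])
# map the first few ids back to their unigram / bi-gram strings
print([vocab.itos[int(idx)] for idx in token_ids[:10]])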

Define the model

The model is composed of an EmbeddingBag layer and a linear layer, as shown in the figure below. nn.EmbeddingBag computes the mean value of a "bag" of embeddings. Although the text entries here have different lengths, nn.EmbeddingBag requires no padding, because the text lengths are saved in the offsets. In addition, since nn.EmbeddingBag accumulates the average across the embeddings on the fly, it can enhance the performance and memory efficiency when processing a sequence of tensors.
(Figure: text ids and offsets are fed into an nn.EmbeddingBag layer, and its averaged output goes through a linear layer to produce the class scores.)
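
To make the role of the offsets concrete, here is a minimal sketch (our own example, not from the tutorial) of nn.EmbeddingBag averaging two variable-length entries packed into one flat tensor:

import torch
import torch.nn as nn

bag = nn.EmbeddingBag(num_embeddings=10, embedding_dim=4)  # mode='mean' by default
text = torch.tensor([1, 2, 3, 4, 5])   # two entries of lengths 3 and 2, packed together
offsets = torch.tensor([0, 3])         # starting index of each entry in `text`
print(bag(text, offsets).shape)        # torch.Size([2, 4]): one averaged vector per entry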

import torch.nn as nn
import torch.nn.functional as F
class TextSentiment(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
        self.fc = nn.Linear(embed_dim, num_class)
        self.init_weights()

    def init_weights(self):
        initrange = 0.5
        self.embedding.weight.data.uniform_(-initrange, initrange)
        self.fc.weight.data.uniform_(-initrange, initrange)
        self.fc.bias.data.zero_()

    def forward(self, text, offsets):
        embedded = self.embedding(text, offsets)
        return self.fc(embedded)

Initiate an instance

The AG_NEWS dataset has four labels, so the number of classes is four:

1 : World
2 : Sports
3 : Business
4 : Sci/Tec

The vocab size is equal to the length of the vocabulary (including single words and ngrams). The number of classes is equal to the number of labels, which is four for AG_NEWS.

VOCAB_SIZE = len(train_dataset.get_vocab())
EMBED_DIM = 32
NUM_CLASS = len(train_dataset.get_labels())
model = TextSentiment(VOCAB_SIZE, EMBED_DIM, NUM_CLASS).to(device)

Functions used to generate a data batch

Since the text entries have different lengths, a custom function generate_batch() is used to generate data batches and offsets. The function is passed to collate_fn in torch.utils.data.DataLoader. The input to collate_fn is a list of tensors with the size of batch_size, and collate_fn packs them into a mini-batch. Pay attention here and make sure collate_fn is declared as a top-level def; this ensures the function can be called anywhere it is needed.

The text entries in the original data batch input are packed into a list and concatenated as a single tensor, which serves as the input of nn.EmbeddingBag. The offsets is a tensor of delimiters representing the beginning index of each individual sequence in the text tensor. Label is a tensor saving the labels of the individual text entries.

def generate_batch(batch):
    label = torch.tensor([entry[0] for entry in batch])
    text = [entry[1] for entry in batch]
    offsets = [0] + [len(entry) for entry in text]
    # torch.Tensor.cumsum returns the cumulative sum of elements along the dimension dim
    # torch.Tensor([1.0, 2.0, 3.0]).cumsum(dim=0)

    offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
    text = torch.cat(text)
    return text, offsets, label
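
As a quick check of what generate_batch produces, consider a toy batch of two entries (illustrative values only):

# Each entry mimics a dataset sample: (label, tensor of token ids)
toy_batch = [(0, torch.tensor([1, 2, 3])), (2, torch.tensor([4, 5]))]
text, offsets, label = generate_batch(toy_batch)
print(text)     # tensor([1, 2, 3, 4, 5])  -- all ids concatenated
print(offsets)  # tensor([0, 3])           -- where each entry starts
print(label)    # tensor([0, 2])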

Define functions to train the model and evaluate results

torch.utils.data.DataLoader is recommended for PyTorch users; it makes data loading in parallel easy (see the related tutorial). We use DataLoader here to load the AG_NEWS dataset and send it to the model for training/validation.

from torch.utils.data import DataLoader

def train_func(sub_train_):

    # Train the model
    train_loss = 0
    train_acc = 0
    data = DataLoader(sub_train_, batch_size=BATCH_SIZE, shuffle=True,
                      collate_fn=generate_batch)
    for i, (text, offsets, cls) in enumerate(data):
        optimizer.zero_grad()
        text, offsets, cls = text.to(device), offsets.to(device), cls.to(device)
        output = model(text, offsets)
        loss = criterion(output, cls)
        train_loss += loss.item()
        loss.backward()
        optimizer.step()
        train_acc += (output.argmax(1) == cls).sum().item()

    # Adjust the learning rate
    scheduler.step()

    return train_loss / len(sub_train_), train_acc / len(sub_train_)

def test(data_):
    loss = 0
    acc = 0
    data = DataLoader(data_, batch_size=BATCH_SIZE, collate_fn=generate_batch)
    for text, offsets, cls in data:
        text, offsets, cls = text.to(device), offsets.to(device), cls.to(device)
        with torch.no_grad():
            output = model(text, offsets)
            batch_loss = criterion(output, cls)  # keep the running total in `loss`
            loss += batch_loss.item()
            acc += (output.argmax(1) == cls).sum().item()

    return loss / len(data_), acc / len(data_)

Split the dataset and train the model

Since the original AG_NEWS data has no validation set, we split the training dataset into train/valid sets with a split ratio of 0.95 (train) and 0.05 (valid), using the torch.utils.data.dataset.random_split function from the PyTorch core library.

The CrossEntropyLoss criterion combines nn.LogSoftmax() and nn.NLLLoss() in a single class. It is useful when training a classification problem with C classes. SGD implements stochastic gradient descent as the optimizer; the initial learning rate is set to 4.0. StepLR is used here to adjust the learning rate through epochs.
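
The combination mentioned above can be verified numerically with a small, self-contained check (not part of the tutorial):

import torch
import torch.nn as nn

logits = torch.randn(3, 4)              # 3 samples, 4 classes
target = torch.tensor([0, 2, 1])
ce = nn.CrossEntropyLoss()(logits, target)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), target)
print(torch.allclose(ce, nll))          # True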

import time
from torch.utils.data.dataset import random_split
N_EPOCHS = 5
min_valid_loss = float('inf')

criterion = torch.nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=4.0)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.9)

train_len = int(len(train_dataset) * 0.95)
sub_train_, sub_valid_ = \
    random_split(train_dataset, [train_len, len(train_dataset) - train_len])

for epoch in range(N_EPOCHS):

    start_time = time.time()
    train_loss, train_acc = train_func(sub_train_)
    valid_loss, valid_acc = test(sub_valid_)

    secs = int(time.time() - start_time)
    mins = secs / 60
    secs = secs % 60

    print('Epoch: %d' %(epoch + 1), " | time in %d minutes, %d seconds" %(mins, secs))
    print(f'\tLoss: {train_loss:.4f}(train)\t|\tAcc: {train_acc * 100:.1f}%(train)')
    print(f'\tLoss: {valid_loss:.4f}(valid)\t|\tAcc: {valid_acc * 100:.1f}%(valid)')

Output:

Epoch: 1  | time in 0 minutes, 22 seconds
	Loss: 0.0261(train)	|	Acc: 84.7%(train)
	Loss: 0.0001(valid)	|	Acc: 90.5%(valid)
Epoch: 2  | time in 0 minutes, 17 seconds
	Loss: 0.0119(train)	|	Acc: 93.6%(train)
	Loss: 0.0001(valid)	|	Acc: 90.9%(valid)
Epoch: 3  | time in 0 minutes, 9 seconds
	Loss: 0.0069(train)	|	Acc: 96.5%(train)
	Loss: 0.0000(valid)	|	Acc: 89.9%(valid)
Epoch: 4  | time in 0 minutes, 22 seconds
	Loss: 0.0039(train)	|	Acc: 98.1%(train)
	Loss: 0.0000(valid)	|	Acc: 91.4%(valid)
Epoch: 5  | time in 0 minutes, 22 seconds
	Loss: 0.0022(train)	|	Acc: 99.0%(train)
	Loss: 0.0000(valid)	|	Acc: 91.3%(valid)
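
Note that min_valid_loss is defined above but never used in the loop. A common pattern, added here as our own suggestion rather than part of the original tutorial, is to checkpoint the model whenever the validation loss improves, for example by placing the following inside the epoch loop right after valid_loss is computed:

    # Hypothetical checkpointing step (the save path is an assumption)
    if valid_loss < min_valid_loss:
        min_valid_loss = valid_loss
        torch.save(model.state_dict(), './.data/best_model.pt')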

Evaluate the model with the test dataset

print('Checking the results of test dataset...')
test_loss, test_acc = test(test_dataset)
print(f'\tLoss: {test_loss:.4f}(test)\t|\tAcc: {test_acc * 100:.1f}%(test)')

Output:

Checking the results of test dataset...
	Loss: 0.0003(test)	|	Acc: 88.3%(test)

Test on a random news item

Use the best model so far and test it on a golf news item. The label information is available here.

import re
from torchtext.data.utils import ngrams_iterator
from torchtext.data.utils import get_tokenizer

ag_news_label = {1 : "World",
                 2 : "Sports",
                 3 : "Business",
                 4 : "Sci/Tec"}

def predict(text, model, vocab, ngrams):
    tokenizer = get_tokenizer("basic_english")
    with torch.no_grad():
        text = torch.tensor([vocab[token]
                            for token in ngrams_iterator(tokenizer(text), ngrams)])
        output = model(text, torch.tensor([0]))
        return output.argmax(1).item() + 1

ex_text_str = "MEMPHIS, Tenn. – Four days ago, Jon Rahm was \
    enduring the season’s worst weather conditions on Sunday at The \
    Open on his way to a closing 75 at Royal Portrush, which \
    considering the wind and the rain was a respectable showing. \
    Thursday’s first round at the WGC-FedEx St. Jude Invitational \
    was another story. With temperatures in the mid-80s and hardly any \
    wind, the Spaniard was 13 strokes better in a flawless round. \
    Thanks to his best putting performance on the PGA Tour, Rahm \
    finished with an 8-under 62 for a three-stroke lead, which \
    was even more impressive considering he’d never played the \
    front nine at TPC Southwind."

vocab = train_dataset.get_vocab()
model = model.to("cpu")

print("This is a %s news" %ag_news_label[predict(ex_text_str, model, vocab, 2)])

Output:

This is a Sports news


The sample code for this tutorial can be found in this notebook.
