N1: Getting Started with Text Classification in PyTorch

1. Preparation

This is a simple, hands-on text-classification example implemented in PyTorch. In this example, we use the AG News dataset for text classification.

1.1 Loading the Data

import torch
import torch.nn as nn
import warnings
warnings.filterwarnings('ignore')

# Use the GPU when one is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

from torchtext.datasets import AG_NEWS
train_iter = AG_NEWS(root='.data', split='train')

torchtext.datasets.AG_NEWS is the TorchText dataset class for loading the AG News dataset. AG News is a common dataset for text-classification tasks and contains news articles from four categories: World, Sci/Tech, Sports, and Business.

Official API reference: https://pytorch.org/text/stable/datasets.html#ag-news
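
As a quick sanity check, each element yielded by train_iter is a (label, text) pair. A minimal sketch (the exact text depends on the downloaded data, and on some torchtext versions the iterator is a one-shot stream that should be re-created afterwards):

label, text = next(iter(train_iter))
print(label)       # class label in 1..4
print(text[:80])   # first characters of the raw news text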

1.2 Building the Vocabulary

from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

tokenizer = get_tokenizer('basic_english')  # lower-cases and splits on whitespace/punctuation

def yield_tokens(data_iter):
    # Yield the token list of every text in the dataset
    for _, text in data_iter:
        yield tokenizer(text)

vocab = build_vocab_from_iterator(yield_tokens(train_iter), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])  # map out-of-vocabulary tokens to <unk>
vocab(['here', 'is', 'an', 'example'])

[475, 21, 30, 5297]

text_pipeline = lambda x: vocab(tokenizer(x))  # raw text -> list of token ids
label_pipeline = lambda x: int(x) - 1          # labels 1..4 -> 0..3
text_pipeline('here is the an example')

[475, 21, 2, 30, 5297]

1.3 Generating Data Batches and the Iterator

from torch.utils.data import DataLoader

def collate_batch(batch):
    label_list, text_list, offsets = [], [], [0]
    for (_label, text) in batch:
        label_list.append(label_pipeline(_label))                              # convert the label to an integer in 0..3
        processed_text = torch.tensor(text_pipeline(text), dtype=torch.int64)  # convert the text to a sequence of token ids
        text_list.append(processed_text)
        offsets.append(processed_text.size(0))                                 # record the length of each text
    label_list = torch.tensor(label_list, dtype=torch.int64)
    text_list = torch.cat(text_list)                                           # concatenate all texts into one flat 1-D tensor
    offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)                         # starting position of each text in the flat tensor
    return label_list.to(device), text_list.to(device), offsets.to(device)

dataloader = DataLoader(train_iter, batch_size=8, shuffle=True, collate_fn=collate_batch)
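
To see what collate_batch actually produces, you can pull a single batch from the dataloader. A minimal sketch (the printed sizes depend on the texts that were sampled):

labels, texts, offsets = next(iter(dataloader))
print(labels.shape)   # torch.Size([8]) -- one label per sample
print(texts.shape)    # one flat 1-D tensor holding all 8 texts back to back
print(offsets)        # 8 starting positions; the first is always 0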

2. The Model

Here we define the TextClassificationModel: the text is first embedded, and the token embeddings of each sentence are then aggregated by taking their mean.

class TextClassificationModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class):
        super(TextClassificationModel, self).__init__()
        # EmbeddingBag averages the embeddings of each sequence ('mean' is its default mode)
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=False)
        self.fc = nn.Linear(embed_dim, num_class)
        self.init_weights()

    def init_weights(self):
        initrange = 0.5
        self.embedding.weight.data.uniform_(-initrange, initrange)
        self.fc.weight.data.uniform_(-initrange, initrange)
        self.fc.bias.data.zero_()

    def forward(self, text, offsets):
        embedded = self.embedding(text, offsets)
        return self.fc(embedded)

self.embedding.weight.data.uniform_(-initrange, initrange) initializes the weights of the word-embedding layer (embedding layer) with random values drawn from a uniform distribution. In detail:

self.embedding: the word-embedding layer of the network. An embedding layer maps discrete word representations (usually integer indices) to fixed-size continuous vectors; these vectors capture semantic relationships between words and serve as input to the rest of the network.
self.embedding.weight: the weight matrix of the embedding layer, with shape (vocab_size, embedding_dim), where vocab_size is the vocabulary size and embedding_dim is the dimension of the embedding vectors.
self.embedding.weight.data: the data part of the weight matrix, i.e. the underlying tensor that we can manipulate directly.
.uniform_(-initrange, initrange): an in-place operation that fills the weight matrix with values drawn uniformly from [-initrange, initrange], where initrange is a positive number.

Initializing the embedding weights this way gives the model some randomness at the start of training, which helps avoid problems such as vanishing or exploding gradients. During training, these weights are continually updated by the optimizer so that they capture better word representations. The short sketch below makes this concrete.
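
A minimal, self-contained sketch (toy sizes, independent of the model above) showing how nn.EmbeddingBag combines the flat token tensor with the offsets to produce one mean-pooled vector per sequence, and how the same in-place uniform_ initialization is applied:

import torch
import torch.nn as nn

bag = nn.EmbeddingBag(num_embeddings=10, embedding_dim=4)  # default mode is 'mean'
bag.weight.data.uniform_(-0.5, 0.5)                        # same uniform init as in init_weights()

tokens = torch.tensor([1, 2, 3, 4, 5])   # two sequences concatenated: [1, 2, 3] and [4, 5]
offsets = torch.tensor([0, 3])           # starting index of each sequence inside `tokens`
out = bag(tokens, offsets)
print(out.shape)                         # torch.Size([2, 4]) -- one averaged embedding per sequence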

3. Training

3.1 Defining the Parameters

num_class = len(set([label for (label, text) in train_iter]))
vocab_size = len(vocab)
embed_dim = 32
model = TextClassificationModel(vocab_size, embed_dim, num_class).to(device)

3.2 Training and Evaluation Functions

import tqdm

def train(dataloader):
    model.train()
    total_acc = 0
    total_count = 0
    train_loss = 0
    par = tqdm.tqdm(dataloader)
    i = 0
    for (label, text, offsets) in par:
        i += 1
        predict_label = model(text, offsets)
        optimizer.zero_grad()
        loss = criterion(predict_label, label)
        loss.backward()
        optimizer.step()
        total_acc += (predict_label.argmax(1) == label).sum().item()
        total_count += label.size(0)
        train_loss += loss.item()
        par.set_description('loss: %.3f | acc: %.3f' % (train_loss / i, total_acc / total_count))

def evaluate(dataloader):
    model.eval()
    total_acc = 0
    total_count = 0
    val_loss = 0
    par = tqdm.tqdm(dataloader)
    i = 0
    with torch.no_grad():
        for (label, text, offsets) in par:
            i += 1
            predict_label = model(text, offsets)
            loss = criterion(predict_label, label)
            total_acc += (predict_label.argmax(1) == label).sum().item()
            total_count += label.size(0)
            val_loss += loss.item()
            par.set_description('loss: %.3f | acc: %.3f' % (val_loss / i, total_acc / total_count))
    # Note: the returned loss divides the summed per-batch losses by the number of samples,
    # which is why the epoch-level val_loss printed below is much smaller than the running loss.
    return total_acc / total_count, val_loss / total_count

3.3 Splitting the Data

from torch.utils.data.dataset import random_split
from torchtext.data.functional import to_map_style_dataset

Epochs = 10
learning_rate = 5
batch_size = 16

criterion = torch.nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.1)
total_accu = None

test_iter = AG_NEWS(root='.data', split='test')
train_dataset = to_map_style_dataset(train_iter)
test_dataset = to_map_style_dataset(test_iter)

num_train = int(len(train_dataset) * 0.95)
split_train, split_valid = random_split(train_dataset, [num_train, len(train_dataset) - num_train])

train_dataloader = DataLoader(split_train, batch_size=batch_size, shuffle=True, collate_fn=collate_batch)
valid_dataloader = DataLoader(split_valid, batch_size=batch_size, shuffle=True, collate_fn=collate_batch)
test_dataloader = DataLoader(test_dataset, batch_size=batch_size, shuffle=True, collate_fn=collate_batch)
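
With the standard AG News splits (120,000 training articles and 7,600 test articles), the 95/5 split above should give roughly the following sizes; a quick sanity check, assuming the full dataset downloaded correctly:

print(len(split_train), len(split_valid), len(test_dataset))
# expected: 114000 6000 7600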

3.4 Training

for epoch in range(1, Epochs + 1):
    train(train_dataloader)
    val_acc, val_loss = evaluate(valid_dataloader)
    if total_accu is not None and total_accu > val_acc:
        # Validation accuracy dropped: decay the learning rate
        scheduler.step()
    else:
        total_accu = val_acc
        torch.save(model.state_dict(), 'model.pt')  # keep the best checkpoint so far
    print('epoch: %d | val_acc: %.3f | val_loss: %.3f' % (epoch, val_acc, val_loss))

loss: 0.199 | acc: 0.932: 100%|██████████| 7125/7125 [00:21<00:00, 328.41it/s]
loss: 0.233 | acc: 0.921: 100%|██████████| 375/375 [00:00<00:00, 458.94it/s]
epoch: 1 | val_acc: 0.921 | val_loss: 0.015
loss: 0.183 | acc: 0.937: 100%|██████████| 7125/7125 [00:21<00:00, 326.07it/s]
loss: 0.235 | acc: 0.919: 100%|██████████| 375/375 [00:00<00:00, 445.06it/s]
epoch: 2 | val_acc: 0.919 | val_loss: 0.015
loss: 0.125 | acc: 0.959: 100%|██████████| 7125/7125 [00:22<00:00, 320.01it/s]
loss: 0.208 | acc: 0.934: 100%|██████████| 375/375 [00:00<00:00, 439.28it/s]
epoch: 3 | val_acc: 0.934 | val_loss: 0.013
loss: 0.117 | acc: 0.961: 100%|██████████| 7125/7125 [00:22<00:00, 320.72it/s]
loss: 0.209 | acc: 0.934: 100%|██████████| 375/375 [00:00<00:00, 379.11it/s]
epoch: 4 | val_acc: 0.934 | val_loss: 0.013
loss: 0.112 | acc: 0.963: 100%|██████████| 7125/7125 [00:26<00:00, 268.08it/s]
loss: 0.215 | acc: 0.932: 100%|██████████| 375/375 [00:01<00:00, 342.71it/s]
epoch: 5 | val_acc: 0.932 | val_loss: 0.013
loss: 0.105 | acc: 0.966: 100%|██████████| 7125/7125 [00:23<00:00, 306.65it/s]
loss: 0.213 | acc: 0.934: 100%|██████████| 375/375 [00:00<00:00, 399.39it/s]
epoch: 6 | val_acc: 0.934 | val_loss: 0.013
loss: 0.104 | acc: 0.967: 100%|██████████| 7125/7125 [00:23<00:00, 308.61it/s]
loss: 0.213 | acc: 0.934: 100%|██████████| 375/375 [00:00<00:00, 422.62it/s]
epoch: 7 | val_acc: 0.934 | val_loss: 0.013
loss: 0.104 | acc: 0.967: 100%|██████████| 7125/7125 [00:22<00:00, 310.63it/s]
loss: 0.213 | acc: 0.934: 100%|██████████| 375/375 [00:00<00:00, 414.77it/s]
epoch: 8 | val_acc: 0.934 | val_loss: 0.013
loss: 0.104 | acc: 0.967: 100%|██████████| 7125/7125 [00:23<00:00, 309.20it/s]
loss: 0.213 | acc: 0.934: 100%|██████████| 375/375 [00:00<00:00, 431.55it/s]
epoch: 9 | val_acc: 0.934 | val_loss: 0.013
loss: 0.104 | acc: 0.967: 100%|██████████| 7125/7125 [00:24<00:00, 295.64it/s]
loss: 0.213 | acc: 0.935: 100%|██████████| 375/375 [00:00<00:00, 407.65it/s]
epoch: 10 | val_acc: 0.935 | val_loss: 0.013

4. Evaluating the Model

test_acc, test_loss = evaluate(test_dataloader)
print('test accuracy {:8.3f}'.format(test_acc))

loss: 0.109 | acc: 0.965: 100%|██████████| 7500/7500 [00:15<00:00, 492.69it/s]
test accuracy 0.965
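
Finally, the trained model can be applied to a raw piece of text. A minimal sketch, assuming the standard AG News label order (1 = World, 2 = Sports, 3 = Business, 4 = Sci/Tech) and an illustrative headline:

ag_news_label = {1: 'World', 2: 'Sports', 3: 'Business', 4: 'Sci/Tech'}

def predict(text):
    with torch.no_grad():
        token_ids = torch.tensor(text_pipeline(text), dtype=torch.int64).to(device)
        offsets = torch.tensor([0]).to(device)   # a single sequence starting at position 0
        output = model(token_ids, offsets)
        return output.argmax(1).item() + 1       # undo the "label - 1" shift done by label_pipeline

print(ag_news_label[predict("Wall St. closes higher as tech stocks rally")])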

5. Summary

This was a simple first step into NLP, laying the groundwork for later study of multimodal models.
