4. GitHub pytorch sentiment analysis (multi-class version)


https://github.com/bentrevett/pytorch-sentiment-analysis/blob/master/5%20-%20Multi-class%20Sentiment%20Analysis.ipynb

Multi-class Sentiment Analysis

The previous notebooks covered binary sentiment analysis with two classes: negative and positive.
When there are only two classes, the output can be a single scalar bounded between 0 and 1, indicating which label the example belongs to.

When there are more than two classes, the output must be a C-dimensional vector, where C is the number of classes.
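
As a minimal sketch of this output-shape difference (not part of the original notebook), assuming a batch of 64 examples and 6 classes:

import torch

batch_size, n_classes = 64, 6

binary_logits = torch.randn(batch_size)              # one scalar per example
binary_probs = torch.sigmoid(binary_logits)          # each value in (0, 1)

multi_logits = torch.randn(batch_size, n_classes)    # C scores per example
multi_probs = torch.softmax(multi_logits, dim = 1)   # each row sums to 1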

In this notebook we work through a 6-class classification problem. Strictly speaking, the dataset is not a sentiment analysis dataset: each example is a question, and the task is to classify each question into one of 6 categories. Nevertheless, it demonstrates how to handle multi-class classification.

Compared with the binary problem, the first difference is that we no longer set dtype in the LABEL field. For a multi-class problem, PyTorch expects the labels to be numericalized LongTensors.

The second difference is that we use the TREC dataset instead of IMDb. Its fine_grained argument controls whether the 50-class or the 6-class label set is used; here we pass False to get 6 classes.

1. Load the data

import torch
from torchtext import data
from torchtext import datasets
import random

SEED = 1234

torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True

TEXT = data.Field(tokenize = 'spacy')
LABEL = data.LabelField()

train_data, test_data = datasets.TREC.splits(TEXT, LABEL, fine_grained=False)

train_data, valid_data = train_data.split(random_state = random.seed(SEED))

Inspect a data sample:

vars(train_data[-1])

{'text': ['What', 'is', 'a', 'Cartesian', 'Diver', '?'], 'label': 'DESC'}
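
A quick sanity check on the split sizes (the exact counts depend on the random split above):

print(f'Number of training examples: {len(train_data)}')
print(f'Number of validation examples: {len(valid_data)}')
print(f'Number of testing examples: {len(test_data)}')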

2. Build the vocabulary and load pre-trained word vectors

Because the dataset is small, with only about 3,800 training examples, the vocabulary is also small, at around 7,500 unique tokens. This means we do not strictly need to set max_size as before; we still pass 25,000, but the actual vocabulary comes out smaller than that. If the real vocabulary were larger, max_size would cap it by discarding the lowest-frequency tokens.

MAX_VOCAB_SIZE = 25_000

TEXT.build_vocab(train_data, 
                 max_size = MAX_VOCAB_SIZE, 
                 vectors = "glove.6B.100d", 
                 unk_init = torch.Tensor.normal_)

LABEL.build_vocab(train_data)
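
To confirm the point about vocabulary size, we can print the sizes of the built vocabularies (the TEXT count should come out well below MAX_VOCAB_SIZE):

print(f'Unique tokens in TEXT vocabulary: {len(TEXT.vocab)}')
print(f'Unique tokens in LABEL vocabulary: {len(LABEL.vocab)}')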

3. Inspect the labels

The 6 labels are:
1. HUM for questions about humans
2. ENTY for questions about entities
3. DESC for questions asking for a description
4. NUM for questions where the answer is numerical
5. LOC for questions where the answer is a location
6. ABBR for questions asking about abbreviations

print(LABEL.vocab.stoi)

defaultdict(<function _default_unk_index at 0x7f0a50190d08>, {'HUM': 0, 'ENTY': 1, 'DESC': 2, 'NUM': 3, 'LOC': 4, 'ABBR': 5})

4. Set up the iterators

BATCH_SIZE = 64

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, valid_data, test_data), 
    batch_size = BATCH_SIZE, 
    device = device)
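
As a quick check (a sketch, not in the original notebook), one batch from the iterator carries text of shape [sent len, batch size] and labels of shape [batch size]:

batch = next(iter(train_iterator))
print(batch.text.shape)   # e.g. torch.Size([sent len, 64])
print(batch.label.shape)  # torch.Size([64])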

5. Build the model

The original notebook uses a CNN (convolutional neural network):

import torch.nn as nn
import torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim, 
                 dropout, pad_idx):
        
        super().__init__()
        
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
        
        self.convs = nn.ModuleList([
                                    nn.Conv2d(in_channels = 1, 
                                              out_channels = n_filters, 
                                              kernel_size = (fs, embedding_dim)) 
                                    for fs in filter_sizes
                                    ])
        
        self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
        
        self.dropout = nn.Dropout(dropout)
        
    def forward(self, text):
        
        #text = [sent len, batch size]
        
        text = text.permute(1, 0)
                
        #text = [batch size, sent len]
        
        embedded = self.embedding(text)
                
        #embedded = [batch size, sent len, emb dim]
        
        embedded = embedded.unsqueeze(1)
        
        #embedded = [batch size, 1, sent len, emb dim]
        
        conved = [F.relu(conv(embedded)).squeeze(3) for conv in self.convs]
            
        #conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
        
        pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
        
        #pooled_n = [batch size, n_filters]
        
        cat = self.dropout(torch.cat(pooled, dim = 1))

        #cat = [batch size, n_filters * len(filter_sizes)]
            
        return self.fc(cat)

6. Instantiate the model

INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
N_FILTERS = 100
FILTER_SIZES = [2,3,4]
OUTPUT_DIM = len(LABEL.vocab)
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)
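
A dummy forward pass (a sketch; the all-ones token indices are just placeholders) confirms the model maps [sent len, batch size] inputs to [batch size, output dim] outputs:

dummy_text = torch.ones(10, 2, dtype = torch.long)  # [sent len = 10, batch size = 2]
with torch.no_grad():
    print(model(dummy_text).shape)                  # torch.Size([2, 6])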

7. Count how many trainable parameters the model has

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')

The model has 3,063,522 trainable parameters

8. Load the pre-trained embeddings into the model

pretrained_embeddings = TEXT.vocab.vectors

model.embedding.weight.data.copy_(pretrained_embeddings)

9. Zero-initialize the unknown and padding token embeddings

UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]

model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

10. Set up the optimizer and loss function

Compared with the previous notebooks, the loss function (the criterion) is different. Previously we used BCEWithLogitsLoss; here we use CrossEntropyLoss, which applies a softmax to the model outputs and then computes the cross entropy.

As a rule of thumb:
CrossEntropyLoss is used for multi-class problems.
BCEWithLogitsLoss is used for binary classification (labels 0/1) and also for multi-label classification (one-vs-rest).

import torch.optim as optim

optimizer = optim.Adam(model.parameters())

criterion = nn.CrossEntropyLoss()

model = model.to(device)
criterion = criterion.to(device)
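
As a minimal illustration (the tensors here are made up), CrossEntropyLoss takes raw, unnormalized logits of shape [batch size, n classes] and integer class labels of shape [batch size]:

example_logits = torch.randn(3, 6).to(device)        # [batch size = 3, n classes = 6]
example_labels = torch.tensor([0, 3, 5]).to(device)   # [batch size = 3], class indices
print(criterion(example_logits, example_labels))      # softmax + negative log-likelihood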

11. Build the accuracy function

When computing accuracy for the binary problem, we treated an output above 0.5 as positive and an output below 0.5 as negative.
Here we have 6 classes, so the output is a 6-dimensional vector in which each element is the belief that the example belongs to that class.

For example, with 'HUM' = 0, 'ENTY' = 1, 'DESC' = 2, 'NUM' = 3, 'LOC' = 4 and 'ABBR' = 5,
a model output of [5.1, 0.3, 0.1, 2.1, 0.2, 0.6] means the model strongly believes the example belongs to class 0, a question about humans, and slightly believes it belongs to class 3, a numerical question.

We use argmax to find the index of the largest element in the prediction, take that index as the predicted class, and compare it with the true label to compute the accuracy.

def categorical_accuracy(preds, y):
    """
    Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
    """
    max_preds = preds.argmax(dim = 1, keepdim = True) # get the index of the max probability
    correct = max_preds.squeeze(1).eq(y)
    return correct.sum() / torch.FloatTensor([y.shape[0]]).to(device) # .to(device) keeps the divisor on the same device (GPU/CPU) as the labels, otherwise this errors later
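
A toy check of this function (the numbers are invented): with three examples of which two are predicted correctly, it should return approximately 0.667:

toy_preds = torch.tensor([[2.0, 0.1, 0.1, 0.1, 0.1, 0.1],   # argmax = 0, correct
                          [0.1, 3.0, 0.1, 0.1, 0.1, 0.1],   # argmax = 1, correct
                          [0.1, 0.1, 0.1, 0.1, 4.0, 0.1]])  # argmax = 4, wrong
toy_labels = torch.tensor([0, 1, 2])
print(categorical_accuracy(toy_preds.to(device), toy_labels.to(device)))  # tensor([0.6667])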

12. Build the training function

The training loop is similar to the previous notebooks, except that we no longer squeeze the model predictions: CrossEntropyLoss expects its input to have shape [batch size, n classes]
and the labels to have shape [batch size].

def train(model, iterator, optimizer, criterion):
    
    epoch_loss = 0
    epoch_acc = 0
    
    model.train()
    
    for batch in iterator:     
        optimizer.zero_grad()      
        predictions = model(batch.text)      
        loss = criterion(predictions, batch.label) 
        acc = categorical_accuracy(predictions, batch.label)  
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
        epoch_acc += acc.item()
        
    return epoch_loss / len(iterator), epoch_acc / len(iterator)

13. Build the evaluation function

def evaluate(model, iterator, criterion):
    
    epoch_loss = 0
    epoch_acc = 0
    
    model.eval()
    
    with torch.no_grad():
    
        for batch in iterator:

            predictions = model(batch.text)
            
            loss = criterion(predictions, batch.label)
            
            acc = categorical_accuracy(predictions, batch.label)

            epoch_loss += loss.item()
            epoch_acc += acc.item()
        
    return epoch_loss / len(iterator), epoch_acc / len(iterator)

14. Build a timing helper

import time

def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs

15. Train the model

N_EPOCHS = 5

best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):

    start_time = time.time()
    
    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
    
    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut5-model.pt')
    
    print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} |  Val. Acc: {valid_acc*100:.2f}%')


16. Evaluate the model on the test set

model.load_state_dict(torch.load('tut5-model.pt'))

test_loss, test_acc = evaluate(model, test_iterator, criterion)

print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')

Test Loss: 0.409 | Test Acc: 86.46%

17. Make predictions on user input

import spacy
nlp = spacy.load('en')  # on newer spaCy versions the 'en' shortcut is gone; use spacy.load('en_core_web_sm')

def predict_class(model, sentence, min_len = 4):
    model.eval()
    tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
    if len(tokenized) < min_len:
        tokenized += ['<pad>'] * (min_len - len(tokenized))
    indexed = [TEXT.vocab.stoi[t] for t in tokenized]
    tensor = torch.LongTensor(indexed).to(device)
    tensor = tensor.unsqueeze(1)
    preds = model(tensor)
    max_preds = preds.argmax(dim = 1) # unlike the binary case, take the argmax to pick the class with the highest score
    return max_preds.item()

pred_class = predict_class(model, "Who is Keyser Söze?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')
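
The same function can be tried on a question whose answer is numerical; it should map to the NUM class (index 3 in the label vocabulary):

pred_class = predict_class(model, "How many minutes are in six hundred and eighteen hours?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')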