PyTorch IMDB Movie Review Sentiment Analysis

Original post: http://chenhao.space/post/a5b86241.html

PyTorch Sentiment Analysis

Step 1: Load the IMDB movie review dataset (it ships with only a train split and a test split)

import torch
from torchtext import data

SEED = 1234

torch.manual_seed(SEED)       # set the random seed for the CPU
torch.cuda.manual_seed(SEED)  # set the random seed for the GPU
# make cuDNN deterministic so that results are reproducible across runs
torch.backends.cudnn.deterministic = True

# First, create two Field objects; they describe how the text data should be preprocessed.
# spaCy: an English tokenizer, similar to NLTK. If no tokenize argument is passed, the default simply splits the string on whitespace.
# torchtext.data.Field: defines how a field (here, the text field and the label field) is processed.
TEXT = data.Field(tokenize='spacy')
# LabelField is a subclass of Field designed specifically for handling labels.
LABEL = data.LabelField(dtype=torch.float)

# load the IMDB movie review dataset
from torchtext import datasets
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
downloading aclImdb_v1.tar.gz
aclImdb_v1.tar.gz: 100%|██████████| 84.1M/84.1M [00:07<00:00, 11.1MB/s]
# inspect one training example
print(vars(train_data.examples[0]))
{'text': ['This', 'is', 'one', 'of', 'the', 'finest', 'films', 'to', 'come', 'out', 'of', 'Hong', 'Kong', "'s", "'", 'New', 'Wave', "'", 'that', 'began', 'with', 'Tsui', 'Hark', "'s", '"', 'ZU', ':', 'Warriors', 'of', 'Magic', 'Mountain', '"', '.', 'Tsui', 'set', 'a', 'tone', 'for', 'the', 'New', 'Wave', "'s", 'approach', 'to', 'the', 'martial', 'arts', 'film', 'that', 'pretty', 'much', 'all', 'the', 'directors', 'of', 'the', 'New', 'Wave', '(', 'Jackie', 'Chan', ',', 'Sammo', 'Hung', ',', 'Wong', 'Jing', ',', 'Ching', 'Siu', 'Tung', ',', 'etc', '.', ')', 'accepted', 'from', 'then', 'on', 'as', 'a', 'given', ';', 'namely', ',', 'the', 'approach', 'to', 'such', 'films', 'thenceforth', 'would', 'need', 'more', 'than', 'a', 'touch', 'of', 'irony', ',', 'if', 'not', 'outright', 'comedy', '.', '"', 'Burning', 'Paradise', '"', 'put', 'a', 'stop', 'to', 'all', 'that', ',', 'and', 'with', 'a', 'vengeance.<br', '/><br', '/>It', "'s", 'not', 'that', 'there', 'is', "n't", 'humor', 'here', ';', 'but', 'it', 'is', 'a', 'purely', 'human', 'humor', ',', 'as', 'with', 'the', 'aged', 'Buddhist', 'priest', 'at', 'the', 'beginning', 'who', 'somehow', 'manages', 'a', 'quick', 'feel', 'of', 'the', 'nubile', 'young', 'prostitute', 'while', 'hiding', 'in', 'a', 'bundle', 'of', 'straw', '.', 'But', 'this', 'is', 'just', 'as', 'humans', 'are', ',', 'not', 'even', 'Buddhist', 'priests', 'can', 'be', 'saints', 'all', 'the', 'time.<br', '/><br', '/>When', 'irony', 'is', 'at', 'last', 'introduced', 'into', 'the', 'film', ',', 'it', 'is', 'the', 'nastiest', 'possible', ',', 'emanating', 'from', 'the', "'", 'abbot', "'", 'of', 'Red', 'Lotus', 'Temple', ',', 'who', 'is', 'a', 'study', 'in', 'pure', 'nihilism', 'such', 'as', 'has', 'never', 'been', 'recorded', 'on', 'film', 'before', '.', 'He', 'is', 'the', 'very', 'incarnation', 'of', 'Milton', "'s", 'Satan', 'from', '"', 'Paradise', 'Lost', '"', ':', '"', 'Better', 'to', 'rule', 'in', 'Hell', 'than', 'serve', 'in', 'heaven', '!', '"', 'And', 'if', 'he', 'ca', "n't", 'get', 'to', 'Satan', "'s", 'hell', 'soon', 'enough', ',', 'he', "'ll", 'turn', 'the', 'world', 'around', 'him', 'into', 'a', 'living', 'hell', 'he', 'can', 'rule.<br', '/><br', '/>That', "'s", 'the', 'motif', 'underscoring', 'the', 'brutal', 'violence', 'of', 'much', 'of', 'the', 'imagery', 'here', ':', 'It', "'s", 'not', 'that', 'the', 'Abbot', 'just', 'wants', 'to', 'kill', 'people', ';', 'he', 'wants', 'them', 'to', 'despair', ',', 'to', 'feel', 'utterly', 'hopeless', ',', 'to', 'accept', 'his', 'nihilism', 'as', 'all', '-', 'encompassing', 'reality', '.', 'Thus', 'there', "'s", 'a', 'definite', 'sense', 'pervading', 'the', 'Red', 'Temple', 'scenes', 'that', 'there', 'just', 'might', 'not', 'be', 'any', 'other', 'reality', 'outside', 'of', 'the', 'Temple', 'itself', '-', 'it', 'has', 'become', 'all', 'there', 'is', 'to', 'the', 'universe', ',', 'and', 'the', 'Abbot', ',', 'claiming', 'mastery', 'of', 'infinite', 'power', ',', 'is', 'in', 'charge.<br', '/><br', '/>Of', 'course', ',', 'fortunately', ',', 'the', 'film', 'does', "n't", 'end', 'there', '.', 'Though', 'there', 'are', 'losses', ',', 'the', 'human', 'will', 'to', 'be', 'just', 'ordinarily', 'human', 'at', 'last', 'prevails', '.', '(', 'If', 'you', 'want', 'to', 'know', 'how', ',', 'see', 'the', 'film', '!', ')', 'Yet', 'there', 'is', 'no', 'doubt', 'that', ',', 'in', 'viewing', 'this', 'film', ',', 'we', 'visit', 'hell', '.', 'Hopefully', ',', 'we', 'do', 'not', 'witness', 'our', 'own', 'afterlives', ';', 'but', 'we', 'certainly', 'feel', 
'chastened', 'by', 'the', 'experience', '-', 'and', 'somehow', 'better', 'for', 'it', 'over', 'all', '.'], 'label': 'pos'}
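The 'text' field above is already tokenized. To see what the spaCy tokenizer configured in TEXT does on its own, you can run it on a raw string; this is a minimal check, assuming the legacy torchtext Field API used in this notebook, where Field.preprocess applies the tokenizer:

# a quick look at how the Field tokenizes a raw string (legacy torchtext API)
print(TEXT.preprocess("This film isn't great, but it isn't terrible either."))
# expected: a list of tokens, e.g. ['This', 'film', 'is', "n't", 'great', ',', ...]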

 


Step 2: Split the training set into a training set and a validation set

  1. We currently only have a train/test split, so we need to create a new validation set. We can do this with .split().
  2. By default the data is split 70/30. Passing split_ratio changes the proportions: split_ratio=0.8 means 80% of the data becomes the training set and 20% the validation set (see the sketch after this list).
  3. We also pass random_state so that we get the same split every time.
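The cell below keeps the default split_ratio of 0.7. For an 80/20 split, the call would look like this (a sketch only, not run here):

# hypothetical 80/20 split; the notebook itself keeps the default 0.7
# train_data, valid_data = train_data.split(split_ratio=0.8,
#                                            random_state=random.seed(SEED))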
import random

# the default split_ratio is 0.7
train_data, valid_data = train_data.split(random_state=random.seed(SEED))
print(f'Number of training examples: {len(train_data)}')
print(f'Number of validation examples: {len(valid_data)}')
print(f'Number of testing examples: {len(test_data)}')
Number of training examples: 17500
Number of validation examples: 7500
Number of testing examples: 25000

 


Step 3: Build the vocabulary from the training set, mapping each word to an integer.

  • We build the vocabulary from the 25,000 most frequent words; the max_size argument controls this.
  • All other words are represented by <unk>.
# From the pretrained vectors, pull out the vectors for the words in the current corpus vocabulary to build this corpus's Vocab.
# The pretrained vectors come from a GloVe model, 100 dimensions per word, trained on a very large corpus.
# Our movie-review corpus is smaller, so the word vectors will be updated during training; the GloVe vectors are a good initialization.
TEXT.build_vocab(train_data, max_size=25000, vectors="glove.6B.100d", unk_init=torch.Tensor.normal_)
LABEL.build_vocab(train_data)
.vector_cache/glove.6B.zip: 862MB [00:44, 19.5MB/s]                           
100%|█████████▉| 399664/400000 [00:19<00:00, 20291.20it/s]
print(f'Unique tokens in TEXT vocabulary: {len(TEXT.vocab)}')
print(f'Unique tokens in LABEL vocabulary: {len(LABEL.vocab)}')
Unique tokens in TEXT vocabulary: 25002
Unique tokens in LABEL vocabulary: 2
print(LABEL.vocab.itos)
['neg', 'pos']
print(LABEL.vocab.stoi)
defaultdict(<function _default_unk_index at 0x7fb5205c2a60>, {'neg': 0, 'pos': 1})
print(TEXT.vocab.stoi)
defaultdict(<function _default_unk_index at 0x7fb5205c2a60>, {'<unk>': 0, '<pad>': 1, 'the': 2, ',': 3, '.': 4, 'and': 5, 'a': 6, 'of': 7, 'to': 8, 'is': 9, 'in': 10, 'I': 11, 'it': 12, 'that': 13, '"': 14, "'s": 15, 'this': 16, '-': 17, '/><br': 18, 'was': 19, 'as': 20, 'with': 21, 'movie': 22, 'for': 23, 'film': 24, 'The': 25, 'but': 26, '(': 27, ')': 28, "n't": 29, 'on': 30, 'you': 31, 'are': 32, 'not': 33, 'have': 34, 'his': 35, 'be': 36, 'he': 37, 'one': 38, 'at': 39, 'by': 40, 'all': 41, '!': 42, 'an': 43, 'who': 44, 'they': 45, 'from': 46, 'like': 47, 'so': 48, 'her': 49, "'": 50, 'about': 51, 'or': 52, 'has': 53, 'It': 54, 'out': 55, 'just': 56, 'do': 57, '?': 58, 'some': 59, 'good': 60, 'more': 61, 'very': 62, 'would': 63, 'up': 64, 'what': 65, 'This': 66, 'there': 67, 'time': 68, 'can': 69, 'when': 70, 'which': 71, 'had': 72, 'she': 73, 'if': 74, 'only': 75, 'story': 76, 'really': 77, 'were': 78, 'their': 79, 'see': 80, 'no': 81, 'even': 82, 'my': 83, 'me': 84, 'did': 85, 'does': 86, '...': 87, 'than': 88, ':': 89, 'much': 90, 'could': 91, 'been': 92, 'get': 93, 'into': 94, 'we': 95, 'well': 96, 'bad': 97, 'people': 98, 'will': 99, 'because': 100, ......,'chieftain': 24995, 'child.<br': 24996, 'childbirth': 24997, 'chilly': 24998, 'chime': 24999, 'chinese': 25000, 'chokes': 25001})
# the more frequent a word is in the corpus, the smaller its index; the first two entries default to <unk> and <pad>
print(TEXT.vocab.itos)

['<unk>', '<pad>', 'the', ',', '.', 'and', 'a', 'of', 'to', 'is', 'in', 'I', 'it', 'that', '"', "'s", 'this', '-', '/><br', 'was', 'as', 'with', 'movie', 'for', 'film', 'The', 'but', '(', ')', "n't", 'on', 'you', 'are', 'not', 'have', 'his', 'be', 'he', 'one', 'at', 'by', 'all', '!', 'an', 'who', 'they', 'from', 'like', 'so', 'her', "'", 'about', 'or', 'has', 'It', 'out', 'just', 'do', '?', 'some', 'good', 'more', 'very', 'would', 'up', 'what', 'This', 'there', 'time', 'can', 'when', 'which', 'had', 'she', 'if', 'only', 'story',....... ]

The extra 2 in 25002 comes from the special tokens <unk> and <pad>.

print(TEXT.vocab.freqs.most_common(20))
[('the', 200806), (',', 190507), ('.', 163859), ('and', 108678), ('a', 108379), ('of', 99904), ('to', 92850), ('is', 75910), ('in', 60829), ('I', 54227), ('it', 53199), ('that', 48835), ('"', 43297), ("'s", 42758), ('this', 41960), ('-', 36735), ('/><br', 35706), ('was', 34813), ('as', 29872), ('with', 29443)]
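With the vocabulary built, numericalization is just a dictionary lookup in both directions. A small round-trip check using the stoi/itos attributes shown above (the exact indices depend on the random split):

# token -> index -> token round trip
tokens = ['This', 'film', 'is', 'great']
ids = [TEXT.vocab.stoi[t] for t in tokens]
print(ids)                                # e.g. [66, 24, 9, ...], values depend on the split
print([TEXT.vocab.itos[i] for i in ids])  # ['This', 'film', 'is', 'great']
print(TEXT.vocab.stoi['qwertyuiop'])      # out-of-vocabulary words map to <unk>, i.e. index 0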

 


Step 4: Create the iterators; each iteration returns one batch of examples.

  • We use a BucketIterator. A BucketIterator puts sentences of similar length into the same batch, so that each batch contains as little padding as possible.
  • Strictly speaking, every model in this notebook has one flaw: the <pad> tokens are also fed to the model as input during training. A better approach would be to mask out, inside the model, the outputs produced by <pad>. Here we keep things simple and use <pad> as input as well; since there is not much padding, the models still perform reasonably well.
  • If we have a GPU, we can also ask every iteration to return tensors that already live on the GPU.
BATCH_SIZE = 64

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# this simply splits the examples into batches, with one extra step: sentences of similar length are grouped into the same batch, and shorter ones are padded
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size = BATCH_SIZE,
    device = device
)
next(iter(train_iterator)).label
next(iter(train_iterator)).text
tensor([[  66,  603, 2228,  ..., 1863,    0,   66],
        [  22,  533,    3,  ...,    2,    9,    9],
        [  19,  119,  106,  ..., 1449,   33,    6],
        ...,
        [   1,    1,    1,  ...,    1,    1,    1],
        [   1,    1,    1,  ...,    1,    1,    1],
        [   1,    1,    1,  ...,    1,    1,    1]], device='cuda:0')
# run this again and you will see that the sentence length (number of rows) changes from batch to batch
next(iter(train_iterator))
next(iter(train_iterator)).text
tensor([[2458,   11,    0,  ...,   11,  171, 9535],
        [   6,   34, 4148,  ...,   34,   31, 1697],
        [ 403,   92, 1139,  ...,  124,  213,  133],
        ...,
        [   1,    1,    1,  ...,    1,    1,    1],
        [   1,    1,    1,  ...,    1,    1,    1],
        [   1,    1,    1,  ...,    1,    1,    1]], device='cuda:0')
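The tensors above have shape [sent len, batch size]; the rows filled with 1s at the bottom are <pad> indices (index 1 is <pad>). A quick shape check, assuming the iterators defined above:

# inspect one batch: the sentence length varies from batch to batch, the batch size stays 64
batch = next(iter(train_iterator))
print(batch.text.shape)   # torch.Size([sent_len, 64]), sent_len differs per batch
print(batch.label.shape)  # torch.Size([64])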

 


Step 5: Build the Word Averaging model

  • We start with a simple Word Averaging model. Each word is projected to a word embedding vector by the Embedding layer, and the average of all the word vectors in a sentence is taken as the vector representation of that sentence. This sentence vector is then passed to a Linear layer for classification.
  • We use avg_pool2d to do the average pooling. The goal is to average the sentence-length dimension down to 1 while keeping the embedding dimension.
  • The kernel size of avg_pool2d is (embedded.shape[1], 1), so the sentence-length dimension gets squashed (see the sketch after this list).
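On a [batch_size, sent_len, emb_dim] tensor, that avg_pool2d call is numerically the same as taking the mean over the sentence-length dimension. A minimal, standalone sketch of the equivalence, using a random tensor instead of real data:

# avg_pool2d with kernel (sent_len, 1) == mean over the sent_len dimension
import torch
import torch.nn.functional as F

x = torch.randn(64, 50, 100)                          # [batch_size, sent_len, emb_dim]
pooled = F.avg_pool2d(x, (x.shape[1], 1)).squeeze(1)  # kernel covers the whole sent_len dimension
mean = x.mean(dim=1)                                  # plain mean over sent_len
print(pooled.shape)                                   # torch.Size([64, 100])
print(torch.allclose(pooled, mean, atol=1e-6))        # True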
import torch.nn as nn
import torch.nn.functional as F

class WordAVGModel(nn.Module):
  def __init__(self, vocab_size, embedding_dim, output_dim, pad_idx):
    # initialize the parameters
    super().__init__()
    
    # the embedding layer turns each word index into a word vector
    # vocab_size = size of the vocabulary, embedding_dim = dimension of each word vector
    # padding_idx: entries at this index produce a zero vector, i.e. <pad> tokens are embedded as zeros
    self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)
    
    # output_dim is the output dimension; a single number is enough, so output_dim = 1
    self.fc = nn.Linear(embedding_dim, output_dim)
    
  def forward(self, text):  # text has shape [sent_len, batch_size]
    embedded = self.embedding(text)
    # text is one batch of data (passed in during training below)
    # embedded = [sent_len, batch_size, emb_dim]
    # sent_len: number of words in one review
    # batch_size: number of reviews in one batch
    # emb_dim: dimension of one word vector
    # the embedding lookup is equivalent to multiplying one-hot vectors by the embedding matrix:
    # e.g. with [sent_len, batch_size, emb_dim] = (1000, 64, 100),
    # (one-hot text: 1000, 64, 25002) x (embedding matrix: 25002, 100) = (1000, 64, 100)
    
    # permute to [batch_size, sent_len, emb_dim]
    embedded = embedded.permute(1, 0, 2)
    
    # average-pool the sentence-length dimension down to 1, then drop it -> [batch_size, embedding_dim]
    # embedded is the input, (embedded.shape[1], 1) is the kernel size
    # squeeze(1) removes the dimension at index 1
    pooled = F.avg_pool2d(embedded, (embedded.shape[1], 1)).squeeze(1)
    
    # (batch_size, embedding_dim) x (embedding_dim, output_dim) = (batch_size, output_dim)
    return self.fc(pooled)
INPUT_DIM = len(TEXT.vocab)  # 25002
EMBEDDING_DIM = 100
OUTPUT_DIM = 1

# PAD_IDX = 1, the index of <pad>
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = WordAVGModel(INPUT_DIM, EMBEDDING_DIM, OUTPUT_DIM, PAD_IDX)
print(PAD_IDX)
1
# count the number of trainable parameters
def count_parameters(model):
  # numel() returns the number of elements in a tensor
  return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')
The model has 2,500,301 trainable parameters
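The 2,500,301 figure checks out by hand: the embedding matrix holds 25002 x 100 weights, and the linear layer adds 100 weights plus 1 bias:

# parameter count, broken down by module
emb_params = 25002 * 100       # embedding: vocab_size * embedding_dim = 2,500,200
fc_params = 100 * 1 + 1        # linear layer: weights + bias = 101
print(emb_params + fc_params)  # 2500301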

 


Step 6: Initialize the embedding weights

# use the vectors loaded above via vectors="glove.6B.100d" as the initial embedding weights
# that is 25002 * 100 parameters: 25002 words, each with a 100-dimensional word vector
pretrained_embeddings = TEXT.vocab.vectors
model.embedding.weight.data.copy_(pretrained_embeddings)
tensor([[-0.1117, -0.4966,  0.1631,  ...,  1.2647, -0.2753, -0.1325],
        [-0.8555, -0.7208,  1.3755,  ...,  0.0825, -1.1314,  0.3997],
        [-0.0382, -0.2449,  0.7281,  ..., -0.1459,  0.8278,  0.2706],
        ...,
        [ 0.2455, -0.0385, -0.4767,  ..., -0.2939, -0.0752,  0.0441],
        [ 0.4327,  0.3958,  0.5878,  ..., -1.1461,  0.2348, -0.2359],
        [-0.3970,  0.4024,  1.0612,  ..., -0.0136, -0.3363,  0.6442]],
       device='cuda:0')
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]  # UNK_IDX = 0

# of the 25002 words in the vocabulary, the first two, <unk> and <pad>, also need to be initialized; set them to zero vectors
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data
tensor([[ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
        [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
        [-0.0382, -0.2449,  0.7281,  ..., -0.1459,  0.8278,  0.2706],
        ...,
        [ 0.2455, -0.0385, -0.4767,  ..., -0.2939, -0.0752,  0.0441],
        [ 0.4327,  0.3958,  0.5878,  ..., -1.1461,  0.2348, -0.2359],
        [-0.3970,  0.4024,  1.0612,  ..., -0.0136, -0.3363,  0.6442]])

 


Step 7: Train the model

import torch.optim as optim

# define the optimizer
optimizer = optim.Adam(model.parameters())

# define the loss function; BCEWithLogitsLoss combines a sigmoid with binary cross-entropy, suitable for binary classification
criterion = nn.BCEWithLogitsLoss()

# move the model and the loss to the GPU
model = model.to(device)
criterion = criterion.to(device)
# compute the prediction accuracy

def binary_accuracy(preds, y):
  """
  Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
  """
  
  # torch.round rounds to the nearest integer, so rounded_preds is either 0 or 1
  # neg is 0, pos is 1
  rounded_preds = torch.round(torch.sigmoid(preds))
  
  # convert into float for division
  """
  a = torch.tensor([1, 1])
  b = torch.tensor([1, 1])
  print(a == b)
  output: tensor([1, 1], dtype=torch.uint8)
  
  a = torch.tensor([1, 0])
  b = torch.tensor([1, 1])
  print(a == b)
  output: tensor([1, 0], dtype=torch.uint8)
  """
  correct = (rounded_preds == y).float()
  acc = correct.sum() / len(correct)
  
  return acc
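A tiny sanity check of binary_accuracy on hand-made logits (positive logits round to 1 after the sigmoid, negative ones to 0):

# 3 of the 4 predictions match the labels, so the accuracy is 0.75
preds = torch.tensor([2.0, -1.5, 0.3, -0.2])   # raw logits
labels = torch.tensor([1.0, 0.0, 0.0, 0.0])
print(binary_accuracy(preds, labels))          # tensor(0.7500)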
  
def train(model, iterator, optimizer, criterion):
  
  epoch_loss = 0
  epoch_acc = 0
  total_len = 0
  
  # model.train() puts the model into training mode
  # model.train(): enables BatchNormalization and Dropout
  # model.eval():  disables BatchNormalization and Dropout
  model.train()
  
  # iterator is train_iterator
  for batch in iterator:
    # zero the gradients so they do not accumulate across batches
    optimizer.zero_grad()
    
    # batch.text is the text argument of the forward function above
    # squeeze out the extra dimension, otherwise the shape will not match batch.label
    predictions = model(batch.text).squeeze(1)
    
    loss = criterion(predictions, batch.label)
    acc = binary_accuracy(predictions, batch.label)
    
    loss.backward()  # backpropagation
    optimizer.step() # gradient step
    
    # loss.item() has already been averaged over len(batch.label),
    # so multiply it back to get this batch's total loss; summing over batches gives the loss over all examples
    epoch_loss += loss.item() * len(batch.label)
    
    # (acc.item(): the accuracy of one batch) * batch size = number of correct predictions
    # accumulate the correct predictions over all batches of train_iterator
    epoch_acc += acc.item() * len(batch.label)
    
    # total number of examples in train_iterator, which should be 17500
    total_len += len(batch.label)
  
  # epoch_loss / total_len: the average loss over train_iterator
  # epoch_acc / total_len: the average accuracy over train_iterator
  return epoch_loss / total_len, epoch_acc / total_len
# no optimizer is needed here, since we only evaluate
def evaluate(model, iterator, criterion):
  
  epoch_loss = 0
  epoch_acc = 0
  total_len = 0
  
  # switch to evaluation mode, which disables dropout and similar layers
  model.eval() 
  
  with torch.no_grad():
    # iterator is valid_iterator
    for batch in iterator:
      
      # no backpropagation or gradient step here
      
      predictions = model(batch.text).squeeze(1)
      loss = criterion(predictions, batch.label)
      acc = binary_accuracy(predictions, batch.label)

      epoch_loss += loss.item() * len(batch.label)
      epoch_acc += acc.item() * len(batch.label)
      total_len += len(batch.label)
  
  
  # switch back to training mode
  model.train()
  
  return epoch_loss / total_len, epoch_acc / total_len
import time 

# measure how long each epoch takes
def epoch_time(start_time, end_time):  
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs
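For example, an epoch that takes 125 seconds is reported as 2m 5s:

# epoch_time just converts a time span in seconds into (minutes, seconds)
print(epoch_time(0, 125))   # (2, 5)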

 


Step 8: Run the training loop and inspect the results

N_EPOCHS = 10

best_valid_loss = float('inf')  # initialize the best validation loss to infinity

for epoch in range(N_EPOCHS):
  start_time = time.time()
  
  train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
  valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
  
  end_time = time.time()
  
  epoch_mins, epoch_secs = epoch_time(start_time, end_time)
  
  # whenever the validation loss improves, save the model (its parameters)
  if valid_loss < best_valid_loss:
    best_valid_loss = valid_loss
    torch.save(model.state_dict(), 'wordavg-model.pt')
    
  print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
  print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
  print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
Epoch: 01 | Epoch Time: 0m 6s
	Train Loss: 0.577 | Train Acc: 71.55%
	 Val. Loss: 0.461 | Val. Acc: 80.96%
Epoch: 02 | Epoch Time: 0m 6s
	Train Loss: 0.467 | Train Acc: 83.26%
	 Val. Loss: 0.375 | Val. Acc: 85.43%
Epoch: 03 | Epoch Time: 0m 6s
	Train Loss: 0.402 | Train Acc: 87.45%
	 Val. Loss: 0.350 | Val. Acc: 87.33%
Epoch: 04 | Epoch Time: 0m 6s
	Train Loss: 0.359 | Train Acc: 89.32%
	 Val. Loss: 0.356 | Val. Acc: 88.07%
Epoch: 05 | Epoch Time: 0m 6s
	Train Loss: 0.327 | Train Acc: 90.38%
	 Val. Loss: 0.361 | Val. Acc: 88.72%
Epoch: 06 | Epoch Time: 0m 6s
	Train Loss: 0.298 | Train Acc: 91.18%
	 Val. Loss: 0.373 | Val. Acc: 89.03%
Epoch: 07 | Epoch Time: 0m 6s
	Train Loss: 0.274 | Train Acc: 92.34%
	 Val. Loss: 0.382 | Val. Acc: 89.37%
Epoch: 08 | Epoch Time: 0m 6s
	Train Loss: 0.253 | Train Acc: 92.87%
	 Val. Loss: 0.395 | Val. Acc: 89.49%
Epoch: 09 | Epoch Time: 0m 6s
	Train Loss: 0.235 | Train Acc: 93.58%
	 Val. Loss: 0.410 | Val. Acc: 89.61%
Epoch: 10 | Epoch Time: 0m 6s
	Train Loss: 0.220 | Train Acc: 94.21%
	 Val. Loss: 0.421 | Val. Acc: 89.75%

 


Step 9: Make predictions

# use the saved model parameters for prediction
model.load_state_dict(torch.load("wordavg-model.pt"))
<All keys matched successfully>
# spaCy is a tokenization tool, similar to NLTK
import spacy  
nlp = spacy.load('en')

def predict_sentiment(sentence):
  # tokenize the sentence
  tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
  # map the tokens to their vocabulary indices
  indexed = [TEXT.vocab.stoi[t] for t in tokenized]
  
  tensor = torch.LongTensor(indexed).to(device)  # seq_len
  tensor = tensor.unsqueeze(1)   # seq_len * batch_size (1)
  
  # tensor now has the same form as the text batches used during training
  prediction = torch.sigmoid(model(tensor))
  
  return prediction.item()
predict_sentiment("I love this film bad")
9.618100193620194e-06
predict_sentiment("This film is great")
1.0
test_loss, test_acc = evaluate(model, test_iterator, criterion)

print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
Test Loss: 0.391 | Test Acc: 86.04%

 


RNN model (BiLSTM)

  • Next we swap the model for a recurrent neural network (RNN). RNNs are often used to encode a sequence:
    h_t = RNN(x_t, h_{t-1})
  • We use the last hidden state h_T to represent the whole sentence.
  • We then pass h_T through a linear transformation f and use the result to predict the sentiment of the sentence.
class RNN(nn.Module):
  def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim,
               n_layers, bidirectional, dropout, pad_idx):
    
    super().__init__()
    self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)
    
    # embedding_dim: dimension of each word vector
    # hidden_dim: dimension of the hidden state
    # num_layers: depth of the network (number of stacked layers)
    # bidirectional: whether the RNN is bidirectional
    # dropout randomly drops units of the network with a given probability during training
    # a rate of 0.5 is a common choice; it is the value at which dropout generates the largest number of distinct sub-networks
    self.rnn = nn.LSTM(embedding_dim, hidden_dim, num_layers=n_layers,
                       bidirectional=bidirectional, dropout=dropout)
    
    self.fc = nn.Linear(hidden_dim*2, output_dim)  # *2 because of the BiLSTM
    self.dropout = nn.Dropout(dropout)
    
    
  def forward(self, text):
    embedded = self.dropout(self.embedding(text)) # [sent len, batch size, emb dim]
    
    # output = [sent len, batch size, hid dim * num directions]
    # hidden = [num layers * num directions, batch size, hid dim]
    # cell = [num layers * num directions, batch size, hid dim]
    output, (hidden, cell) = self.rnn(embedded)
    
    # concat the final forward (hidden[-2,:,:]) and backward (hidden[-1,:,:]) hidden layers
    # and apply dropout
    # hidden = [batch size, hid dim * num directions], concatenated along the feature dimension
    # the last two slices of hidden are the final forward and backward states of the top BiLSTM layer
    hidden = self.dropout(torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1))
    
    return self.fc(hidden)
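The hidden[-2, :, :] / hidden[-1, :, :] indexing relies on how nn.LSTM lays out its hidden state: the shape is [num layers * num directions, batch size, hid dim], ordered layer by layer with the forward direction before the backward one, so the last two slices are the top layer's final forward and backward states. A standalone shape check with random input and the same hyperparameters as below:

# verify the hidden-state layout of a 2-layer bidirectional LSTM
rnn = nn.LSTM(input_size=100, hidden_size=256, num_layers=2, bidirectional=True)
x = torch.randn(35, 64, 100)                   # [sent len, batch size, emb dim]
out, (h, c) = rnn(x)
print(out.shape)                               # torch.Size([35, 64, 512]): hid dim * 2 directions
print(h.shape)                                 # torch.Size([4, 64, 256]):  2 layers * 2 directions
print(torch.cat((h[-2], h[-1]), dim=1).shape)  # torch.Size([64, 512]): what self.fc receives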
INPUT_DIM = len(TEXT.vocab)  # 25002
EMBEDDING_DIM = 100
HIDDEN_DIM = 256
OUTPUT_DIM = 1
N_LAYERS = 2
BIDIRECTIONAL = True
DROPOUT = 0.5

# PAD_IDX = 1, the index of <pad>
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = RNN(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM,
            N_LAYERS, BIDIRECTIONAL, DROPOUT, PAD_IDX)
print(f'The model has {count_parameters(model):,} trainable parameters')
The model has 4,810,857 trainable parameters
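Most of those 4.8M parameters sit in the embedding and the LSTM. A quick per-module breakdown, assuming the model instantiated above:

# count trainable parameters per top-level module
for name, module in model.named_children():
    n = sum(p.numel() for p in module.parameters() if p.requires_grad)
    print(f'{name}: {n:,}')
# embedding: 2,500,200; rnn: 2,310,144; fc: 513; dropout: 0 -> these sum to 4,810,857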

 


Initialize the embedding weights

# pretrained_embeddings = TEXT.vocab.vectors
model.embedding.weight.data.copy_(pretrained_embeddings)

UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]  # UNK_IDX = 0

# of the 25002 words in the vocabulary, the first two, <unk> and <pad>, also need to be initialized; set them to zero vectors
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

print(model.embedding.weight.data)
tensor([[ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
        [ 0.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
        [-0.0382, -0.2449,  0.7281,  ..., -0.1459,  0.8278,  0.2706],
        ...,
        [ 0.2455, -0.0385, -0.4767,  ..., -0.2939, -0.0752,  0.0441],
        [ 0.4327,  0.3958,  0.5878,  ..., -1.1461,  0.2348, -0.2359],
        [-0.3970,  0.4024,  1.0612,  ..., -0.0136, -0.3363,  0.6442]],
       device='cuda:0')

 


Train the RNN model

# import torch.optim as optim

# define the optimizer
optimizer = optim.Adam(model.parameters())

# the loss function is the same BCEWithLogitsLoss defined earlier
# criterion = nn.BCEWithLogitsLoss()

# move the model to the GPU
model = model.to(device)
# criterion = criterion.to(device)
N_EPOCHS = 10
best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):
    start_time = time.time()
    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
    
    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'lstm-model.pt')
    
    print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} |  Val. Acc: {valid_acc*100:.2f}%')
Epoch: 01 | Epoch Time: 0m 6s
	Train Loss: 0.594 | Train Acc: 70.69%
	 Val. Loss: 0.493 |  Val. Acc: 78.31%
Epoch: 02 | Epoch Time: 0m 6s
	Train Loss: 0.502 | Train Acc: 82.04%
	 Val. Loss: 0.373 |  Val. Acc: 84.53%
Epoch: 03 | Epoch Time: 0m 6s
	Train Loss: 0.440 | Train Acc: 85.97%
	 Val. Loss: 0.363 |  Val. Acc: 86.17%
Epoch: 04 | Epoch Time: 0m 6s
	Train Loss: 0.391 | Train Acc: 88.27%
	 Val. Loss: 0.342 |  Val. Acc: 87.73%
Epoch: 05 | Epoch Time: 0m 6s
	Train Loss: 0.357 | Train Acc: 89.50%
	 Val. Loss: 0.350 |  Val. Acc: 88.20%
Epoch: 06 | Epoch Time: 0m 6s
	Train Loss: 0.325 | Train Acc: 90.49%
	 Val. Loss: 0.360 |  Val. Acc: 88.60%
Epoch: 07 | Epoch Time: 0m 6s
	Train Loss: 0.299 | Train Acc: 91.28%
	 Val. Loss: 0.376 |  Val. Acc: 88.88%
Epoch: 08 | Epoch Time: 0m 6s
	Train Loss: 0.276 | Train Acc: 92.13%
	 Val. Loss: 0.382 |  Val. Acc: 89.25%
Epoch: 09 | Epoch Time: 0m 6s
	Train Loss: 0.256 | Train Acc: 92.91%
	 Val. Loss: 0.396 |  Val. Acc: 89.45%
Epoch: 10 | Epoch Time: 0m 6s
	Train Loss: 0.239 | Train Acc: 93.41%
	 Val. Loss: 0.410 |  Val. Acc: 89.60%

 


Test-set results

model.load_state_dict(torch.load('lstm-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)

print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
Test Loss: 0.381 | Test Acc: 86.31%

 


CNN model

class CNN(nn.Module):
  def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes,
               output_dim, dropout, pad_idx):
    super().__init__()
    
    self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)
    self.convs = nn.ModuleList([
        nn.Conv2d(in_channels = 1, out_channels = n_filters,
                  kernel_size = (fs, embedding_dim))
        for fs in filter_sizes
    ])
    
    self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
    self.dropout = nn.Dropout(dropout)
    
    
  def forward(self, text):
    text = text.permute(1, 0)        # [batch size, sent len]
    embedded = self.embedding(text)  # [batch size, sent len, emb dim]
    embedded = embedded.unsqueeze(1) # [batch size, 1, sent len, emb dim]
    conved = [F.relu(conv(embedded)).squeeze(3) for conv in self.convs]
    
    # conv_n = [batch size, n_filters, sent len - filter_sizes[n]]
    
    pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
    
    # pooled_n = [batch size, n_filters]
    
    cat = self.dropout(torch.cat(pooled, dim=1))
    
    # cat = [batch size, n_filters * len(filter_sizes)]
    
    return self.fc(cat)
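To see where the fc input size len(filter_sizes) * n_filters comes from, here is a shape trace through one convolution branch with a dummy batch of random indices, using the same hyperparameters as below:

# shape trace through one conv branch of the CNN (dummy data)
dummy = torch.randint(0, 25002, (50, 64))              # [sent len, batch size]
emb = nn.Embedding(25002, 100)(dummy.permute(1, 0))    # [64, 50, 100]
emb = emb.unsqueeze(1)                                 # [64, 1, 50, 100], one input channel
conv = nn.Conv2d(1, 100, kernel_size=(3, 100))(emb)    # [64, 100, 48, 1], since 50 - 3 + 1 = 48
conv = F.relu(conv).squeeze(3)                         # [64, 100, 48]
pooled = F.max_pool1d(conv, conv.shape[2]).squeeze(2)  # [64, 100], one value per filter
print(pooled.shape)                                    # torch.Size([64, 100])
# concatenating the three branches (filter sizes 3, 4, 5) gives [64, 300] -> input to self.fc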
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
N_FILTERS = 100
FILTER_SIZES = [3,4,5]
OUTPUT_DIM = 1
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)
model.embedding.weight.data.copy_(pretrained_embeddings)

UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
model = model.to(device)
optimizer = optim.Adam(model.parameters())
criterion = nn.BCEWithLogitsLoss()
criterion = criterion.to(device)

N_EPOCHS = 10

best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):

    start_time = time.time()
    
    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
    
    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'CNN-model.pt')
    
    print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} |  Val. Acc: {valid_acc*100:.2f}%')
Epoch: 01 | Epoch Time: 0m 30s
	Train Loss: 0.653 | Train Acc: 61.07%
	 Val. Loss: 0.504 |  Val. Acc: 78.20%
Epoch: 02 | Epoch Time: 0m 30s
	Train Loss: 0.427 | Train Acc: 80.60%
	 Val. Loss: 0.352 |  Val. Acc: 84.85%
Epoch: 03 | Epoch Time: 0m 30s
	Train Loss: 0.306 | Train Acc: 87.18%
	 Val. Loss: 0.315 |  Val. Acc: 86.56%
Epoch: 04 | Epoch Time: 0m 30s
	Train Loss: 0.221 | Train Acc: 91.37%
	 Val. Loss: 0.303 |  Val. Acc: 87.43%
Epoch: 05 | Epoch Time: 0m 31s
	Train Loss: 0.161 | Train Acc: 93.86%
	 Val. Loss: 0.319 |  Val. Acc: 87.47%
Epoch: 06 | Epoch Time: 0m 30s
	Train Loss: 0.114 | Train Acc: 95.86%
	 Val. Loss: 0.347 |  Val. Acc: 87.21%
Epoch: 07 | Epoch Time: 0m 30s
	Train Loss: 0.078 | Train Acc: 97.43%
	 Val. Loss: 0.355 |  Val. Acc: 87.41%
Epoch: 08 | Epoch Time: 0m 30s
	Train Loss: 0.055 | Train Acc: 98.30%
	 Val. Loss: 0.386 |  Val. Acc: 87.33%
Epoch: 09 | Epoch Time: 0m 30s
	Train Loss: 0.041 | Train Acc: 98.85%
	 Val. Loss: 0.412 |  Val. Acc: 87.48%
Epoch: 10 | Epoch Time: 0m 30s
	Train Loss: 0.031 | Train Acc: 99.08%
	 Val. Loss: 0.440 |  Val. Acc: 87.23%
model.load_state_dict(torch.load('CNN-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
Test Loss: 0.334 | Test Acc: 85.49%

References

https://github.com/bentrevett/pytorch-sentiment-analysis
