4. TensorFlow Text Classification

Movie Review Text Classification

These study notes are based on the TensorFlow text classification tutorial.
We use the review text to classify movie reviews as positive or negative. This is a binary classification problem.

The dataset contains 50,000 movie review texts: 25,000 reviews are split off for training and the other 25,000 are used for testing.

1. Import Libraries

import tensorflow as tf
from tensorflow import keras
import numpy as np

2. Download the IMDB Dataset

imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

num_words=10000

 The num_words=10000 argument limits the data to the 10,000 most frequently occurring words; any word outside this top-10,000 vocabulary is replaced by the index 2.
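To see the effect, here is a minimal sketch (assuming train_data has been loaded as above) that checks the largest index and counts the out-of-vocabulary tokens in the first review:

max_index = max(max(sequence) for sequence in train_data)
print(max_index)                # 9999: every remaining index stays below num_words=10000
print(train_data[0].count(2))   # number of words in the first review outside the top 10,000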

2.1 Understanding the Data

The dataset comes preprocessed: each sample is an array of integers representing the words of a movie review. Each label is an integer value of 0 or 1, where 0 means a negative review and 1 means a positive review.

Each review has been converted into integers, where each integer stands for a word in a dictionary. Because reviews differ in length, the number of words per review differs as well.

print(train_data[0])
[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]
len(train_data[0]), len(train_data[1])  # lengths of the first and second reviews
(218, 189)

3. Convert the Integers Back to Words

Create a helper function that queries a dictionary object containing the integer-to-string mapping:

# A dictionary mapping words to integer indices
word_index = imdb.get_word_index()

# Reserve the first indices
# The +3 offset shifts every index so that 0-3 are left free for special tokens
word_index = {k:(v+3) for k,v in word_index.items()}

# The following assignments make indices 0-3 decode to meaningful tokens instead of '?'
word_index["<PAD>"] = 0    # padding
word_index["<START>"] = 1  # start of sequence
word_index["<UNK>"] = 2    # unknown: words outside the top 10,000 most common words
word_index["<UNUSED>"] = 3

# Reverse the dictionary: 'tsukino': 52009 becomes 52009: 'tsukino'
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

def decode_review(text):
    # get(i, '?') returns the word for indices present in the dictionary and '?' for those that are not
    return ' '.join([reverse_word_index.get(i, '?') for i in text])

join()

 Joins the elements of a sequence with the given separator string and returns a new string. For example:
 s1 = "_"
 s2 = ""
 seq = ("r", "u", "n", "o", "o", "b")  # a sequence of strings
 s1.join(seq)  # 'r_u_n_o_o_b'
 s2.join(seq)  # 'runoob'
# Show the text of the first review
decode_review(train_data[0])
"<START> this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert <UNK> is an amazing actor and now the same being director <UNK> father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for <UNK> and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also <UNK> to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the <UNK> list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all"

4. Prepare the Data

The reviews, which are arrays of integers, must be converted to tensors before being fed into the neural network. This conversion can be done in two ways:

  • Convert the arrays into vectors of 0s and 1s indicating which words occur, similar to one-hot encoding. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. This vector could then be fed to a dense layer, capable of handling floating-point vector data, as the first layer of the network. However, this approach is memory intensive, requiring a matrix of size num_words * num_reviews. (A minimal sketch of this approach follows this list.)

  • Alternatively, we can pad the arrays so that they all have the same length, then create an integer tensor of size max_length * num_reviews. We can use an embedding layer capable of handling this shape as the first layer of the network.
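For reference, here is a minimal sketch of the first (multi-hot) approach; vectorize_sequences is a hypothetical helper and is not used in the rest of these notes:

import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    # Create an all-zeros matrix of shape (num_reviews, num_words)
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.0   # set the positions of the words that occur to 1
    return results

# x_train_multi_hot = vectorize_sequences(train_data)   # shape (25000, 10000), memory heavy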

We take the second approach. Since the inputs must all have the same length, we use the pad_sequences function to standardize the lengths:

train_data = keras.preprocessing.sequence.pad_sequences(train_data,
                                                        value=word_index["<PAD>"],
                                                        padding='post',
                                                        maxlen=256)

test_data = keras.preprocessing.sequence.pad_sequences(test_data,
                                                       value=word_index["<PAD>"],
                                                       padding='post',
                                                       maxlen=256)
  • keras.preprocessing.sequence.pad_sequences parameters:
    sequences: a two-level nested list of integers or floats
    maxlen: None or an integer, the maximum sequence length; longer sequences are truncated and shorter ones are padded with 0
    dtype='int32': the data type of the returned numpy array
    padding='pre': 'pre' or 'post', whether to pad at the beginning or at the end of each sequence
    truncating='pre': 'pre' or 'post', whether to truncate from the beginning or from the end of each sequence
    value=0: the value used for padding in place of the default 0
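A toy example (a small sketch using the same function) shows how padding='post' and maxlen interact:

example = [[1, 2, 3], [4, 5, 6, 7, 8, 9]]
padded = keras.preprocessing.sequence.pad_sequences(example,
                                                    value=0,
                                                    padding='post',
                                                    maxlen=5)
print(padded)
# [[1 2 3 0 0]    <- shorter sequence padded with 0 at the end
#  [5 6 7 8 9]]   <- longer sequence truncated from the front (truncating='pre' is the default)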
# Sample lengths: after padding, all samples have the same length
len(train_data[0]), len(train_data[1])
(256, 256)

5. Build the Model

The neural network is built by stacking layers, which requires two main architectural decisions:

  • How many layers does the model have?
  • How many hidden units does each layer have?

In this example, the input data is an array of word indices and the label to predict is 0 or 1. Let's build a model for this problem:

# The input shape is the vocabulary size used for the movie reviews (10,000 words)
vocab_size = 10000

model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))

model.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 embedding (Embedding)       (None, None, 16)          160000    
                                                                 
 global_average_pooling1d (G  (None, 16)               0         
 lobalAveragePooling1D)                                          
                                                                 
 dense (Dense)               (None, 16)                272       
                                                                 
 dense_1 (Dense)             (None, 1)                 17        
                                                                 
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
_________________________________________________________________

The layers are stacked sequentially to build the classifier:

Embedding

This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The embedding adds a dimension to the output array, so the resulting shape is `(batch, sequence, embedding)`.
More broadly, a core task in deep learning is mapping high-dimensional raw data (images, sentences) onto a low-dimensional manifold where it becomes separable; that mapping is called an embedding. Word embedding, for example, maps the words of a sentence to representation vectors.
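As a quick sketch (not part of the original tutorial), passing a small batch of the padded reviews through an embedding layer by itself shows the added dimension:

# Each integer index is looked up as a 16-dimensional vector
embedding_layer = keras.layers.Embedding(vocab_size, 16)
embedded = embedding_layer(train_data[:2])   # input shape: (2, 256)
print(embedded.shape)                        # (2, 256, 16) -> (batch, sequence, embedding)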

GlobalAveragePooling1D

This layer returns a fixed-length output vector for each sample by averaging over the sequence dimension. This lets the model handle variable-length input in the simplest way possible.
The fixed-length output vector is then passed through a fully connected (`Dense`) layer with 16 hidden units.
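Ignoring masking, this pooling is simply a mean over the sequence axis; a small sketch using the tf and keras imports from step 1:

x = tf.random.uniform((2, 256, 16))               # (batch, sequence, embedding)
pooled = keras.layers.GlobalAveragePooling1D()(x)
print(pooled.shape)                               # (2, 16)
# Equivalent (without masking) to tf.reduce_mean(x, axis=1)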

Sigmoid activation function

 The output is a floating-point value between 0 and 1, representing a probability or confidence level.
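For reference, a tiny sketch of the sigmoid function itself (standard definition, not from the original notes):

import numpy as np
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes any real number into (0, 1)

print(sigmoid(0.0), sigmoid(4.0), sigmoid(-4.0))   # 0.5, ~0.982, ~0.018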

6. Loss Function and Optimizer

We use the binary_crossentropy loss function, which is well suited to binary classification and probability outputs (it measures the "distance" between probability distributions):
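A worked example (a sketch using the standard per-sample formula) of why this loss rewards confident correct predictions and penalizes confident wrong ones:

import numpy as np
# Per-sample binary cross-entropy: -(y*log(p) + (1-y)*log(1-p))
def bce(y_true, y_pred):
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(bce(1, 0.9))   # ~0.105: confident and correct -> small loss
print(bce(1, 0.1))   # ~2.303: confident and wrong  -> large loss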

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

7. Create a Validation Set and Train

During training, we want to check the model's accuracy on data it has not seen before. We create a validation set by setting aside 10,000 samples from the original training data. (Why not use the test set now? Our goal is to develop and tune the model using only the training data, and then use the test data just once to evaluate accuracy.)

# Use samples 0-10000 as the validation set and samples 10000-25000 for training
x_val = train_data[:10000]
partial_x_train = train_data[10000:]

y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]

Train the model for 40 epochs with a mini-batch size of 512 samples. This means 40 iterations over all samples in the x_train and y_train tensors. During training, monitor the loss and accuracy on the 10,000 samples of the validation set:

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)
Epoch 1/40
30/30 [==============================] - 1s 16ms/step - loss: 0.6924 - accuracy: 0.5721 - val_loss: 0.6909 - val_accuracy: 0.6930
Epoch 2/40
30/30 [==============================] - 0s 13ms/step - loss: 0.6879 - accuracy: 0.7002 - val_loss: 0.6842 - val_accuracy: 0.7289
Epoch 3/40
30/30 [==============================] - 0s 12ms/step - loss: 0.6764 - accuracy: 0.7529 - val_loss: 0.6688 - val_accuracy: 0.7445
Epoch 4/40
30/30 [==============================] - 0s 12ms/step - loss: 0.6548 - accuracy: 0.7577 - val_loss: 0.6437 - val_accuracy: 0.7657
Epoch 5/40
30/30 [==============================] - 0s 12ms/step - loss: 0.6216 - accuracy: 0.7857 - val_loss: 0.6081 - val_accuracy: 0.7843
Epoch 6/40
30/30 [==============================] - 0s 12ms/step - loss: 0.5785 - accuracy: 0.8058 - val_loss: 0.5661 - val_accuracy: 0.7998
Epoch 7/40
30/30 [==============================] - 0s 12ms/step - loss: 0.5303 - accuracy: 0.8266 - val_loss: 0.5218 - val_accuracy: 0.8064
Epoch 8/40
30/30 [==============================] - 0s 12ms/step - loss: 0.4815 - accuracy: 0.8436 - val_loss: 0.4790 - val_accuracy: 0.8284
Epoch 9/40
30/30 [==============================] - 0s 12ms/step - loss: 0.4361 - accuracy: 0.8586 - val_loss: 0.4436 - val_accuracy: 0.8408
Epoch 10/40
30/30 [==============================] - 0s 12ms/step - loss: 0.3972 - accuracy: 0.8700 - val_loss: 0.4101 - val_accuracy: 0.8507
Epoch 11/40
30/30 [==============================] - 0s 12ms/step - loss: 0.3635 - accuracy: 0.8795 - val_loss: 0.3847 - val_accuracy: 0.8583
Epoch 12/40
30/30 [==============================] - 0s 13ms/step - loss: 0.3359 - accuracy: 0.8885 - val_loss: 0.3656 - val_accuracy: 0.8612
Epoch 13/40
30/30 [==============================] - 0s 12ms/step - loss: 0.3133 - accuracy: 0.8945 - val_loss: 0.3492 - val_accuracy: 0.8670
Epoch 14/40
30/30 [==============================] - 0s 12ms/step - loss: 0.2930 - accuracy: 0.8998 - val_loss: 0.3360 - val_accuracy: 0.8720
Epoch 15/40
30/30 [==============================] - 0s 13ms/step - loss: 0.2760 - accuracy: 0.9059 - val_loss: 0.3260 - val_accuracy: 0.8742
Epoch 16/40
30/30 [==============================] - 0s 13ms/step - loss: 0.2606 - accuracy: 0.9093 - val_loss: 0.3170 - val_accuracy: 0.8780
Epoch 17/40
30/30 [==============================] - 0s 12ms/step - loss: 0.2475 - accuracy: 0.9135 - val_loss: 0.3104 - val_accuracy: 0.8783
Epoch 18/40
30/30 [==============================] - 0s 13ms/step - loss: 0.2352 - accuracy: 0.9191 - val_loss: 0.3046 - val_accuracy: 0.8804
Epoch 19/40
30/30 [==============================] - 0s 12ms/step - loss: 0.2234 - accuracy: 0.9227 - val_loss: 0.2996 - val_accuracy: 0.8815
Epoch 20/40
30/30 [==============================] - 0s 12ms/step - loss: 0.2137 - accuracy: 0.9268 - val_loss: 0.2958 - val_accuracy: 0.8815
Epoch 21/40
30/30 [==============================] - 0s 13ms/step - loss: 0.2044 - accuracy: 0.9286 - val_loss: 0.2925 - val_accuracy: 0.8830
Epoch 22/40
30/30 [==============================] - 0s 12ms/step - loss: 0.1949 - accuracy: 0.9337 - val_loss: 0.2904 - val_accuracy: 0.8846
Epoch 23/40
30/30 [==============================] - 0s 13ms/step - loss: 0.1864 - accuracy: 0.9373 - val_loss: 0.2885 - val_accuracy: 0.8843
Epoch 24/40
30/30 [==============================] - 0s 13ms/step - loss: 0.1789 - accuracy: 0.9410 - val_loss: 0.2869 - val_accuracy: 0.8841
Epoch 25/40
30/30 [==============================] - 0s 13ms/step - loss: 0.1712 - accuracy: 0.9457 - val_loss: 0.2857 - val_accuracy: 0.8853
Epoch 26/40
30/30 [==============================] - 0s 13ms/step - loss: 0.1643 - accuracy: 0.9477 - val_loss: 0.2863 - val_accuracy: 0.8846
Epoch 27/40
30/30 [==============================] - 0s 12ms/step - loss: 0.1580 - accuracy: 0.9507 - val_loss: 0.2857 - val_accuracy: 0.8852
Epoch 28/40
30/30 [==============================] - 0s 13ms/step - loss: 0.1513 - accuracy: 0.9523 - val_loss: 0.2868 - val_accuracy: 0.8835
Epoch 29/40
30/30 [==============================] - 0s 13ms/step - loss: 0.1455 - accuracy: 0.9561 - val_loss: 0.2862 - val_accuracy: 0.8867
Epoch 30/40
30/30 [==============================] - 0s 12ms/step - loss: 0.1397 - accuracy: 0.9576 - val_loss: 0.2875 - val_accuracy: 0.8862
Epoch 31/40
30/30 [==============================] - 0s 13ms/step - loss: 0.1342 - accuracy: 0.9605 - val_loss: 0.2894 - val_accuracy: 0.8853
Epoch 32/40
30/30 [==============================] - 0s 13ms/step - loss: 0.1294 - accuracy: 0.9621 - val_loss: 0.2910 - val_accuracy: 0.8846
Epoch 33/40
30/30 [==============================] - 0s 14ms/step - loss: 0.1244 - accuracy: 0.9637 - val_loss: 0.2915 - val_accuracy: 0.8850
Epoch 34/40
30/30 [==============================] - 0s 13ms/step - loss: 0.1195 - accuracy: 0.9669 - val_loss: 0.2937 - val_accuracy: 0.8853
Epoch 35/40
30/30 [==============================] - 0s 14ms/step - loss: 0.1151 - accuracy: 0.9678 - val_loss: 0.2958 - val_accuracy: 0.8852
Epoch 36/40
30/30 [==============================] - 0s 14ms/step - loss: 0.1105 - accuracy: 0.9693 - val_loss: 0.2989 - val_accuracy: 0.8846
Epoch 37/40
30/30 [==============================] - 0s 12ms/step - loss: 0.1064 - accuracy: 0.9701 - val_loss: 0.3003 - val_accuracy: 0.8853
Epoch 38/40
30/30 [==============================] - 0s 13ms/step - loss: 0.1023 - accuracy: 0.9718 - val_loss: 0.3029 - val_accuracy: 0.8849
Epoch 39/40
30/30 [==============================] - 0s 13ms/step - loss: 0.0992 - accuracy: 0.9727 - val_loss: 0.3065 - val_accuracy: 0.8837
Epoch 40/40
30/30 [==============================] - 0s 12ms/step - loss: 0.0948 - accuracy: 0.9745 - val_loss: 0.3095 - val_accuracy: 0.8827

8. Evaluate the Model on the Test Set

results = model.evaluate(test_data,  test_labels, verbose=2)

print(results)
782/782 - 0s - loss: 0.3296 - accuracy: 0.8730 - 320ms/epoch - 409us/step
[0.3295901119709015, 0.8730000257492065]

9. Plot Accuracy and Loss over Time

model.fit() returns a History object that contains a dictionary recording everything that happened during training:

history_dict = history.history
history_dict.keys()
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])

There are four entries, one for each metric monitored during training and validation. We can use them to plot the training and validation loss and accuracy for comparison.

import matplotlib.pyplot as plt

acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']

epochs = range(1, len(acc) + 1)

# 'go' means green dots
plt.plot(epochs, loss, 'go', label='Training loss')
# 'g' means a solid green line
plt.plot(epochs, val_loss, 'g', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.show()

[Figure: training and validation loss over epochs]

plt.clf()   # clear the figure

plt.plot(epochs, acc, 'go', label='Training acc')
plt.plot(epochs, val_acc, 'g', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()

[Figure: training and validation accuracy over epochs]

The training loss decreases and the training accuracy increases with every epoch, but the validation loss and accuracy do not: they appear to peak after about 20 epochs. This is an example of overfitting: the model performs better on the training data than on data it has never seen before. We could therefore stop training at around 20 epochs.
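One way to stop automatically around that point (a sketch assuming the standard keras.callbacks.EarlyStopping API; the patience value is illustrative) is:

# Stop when the validation loss has not improved for 3 consecutive epochs
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss',
                                           patience=3,
                                           restore_best_weights=True)

# history = model.fit(partial_x_train, partial_y_train,
#                     epochs=40, batch_size=512,
#                     validation_data=(x_val, y_val),
#                     callbacks=[early_stop], verbose=1)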
