Classifying Movie Review Text
This guide demonstrates how to classify movie reviews as positive or negative. This is the classic binary classification problem.
We will use the IMDB dataset from the Internet Movie Database, which contains the text of 50,000 movie reviews. We take 25,000 reviews for training and the other 25,000 for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.
This guide uses tf.keras, a high-level API for building and training models in TensorFlow. A more advanced tf.keras text-classification tutorial will be covered in a later post.
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
2.8.0
Download the IMDB dataset
The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed: the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary. Let's download it:
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17465344/17464789 [==============================] - 2s 0us/step
The argument num_words=10000 keeps the 10,000 most frequently occurring words in the training data; the rarer words are discarded to keep the data at a manageable size.
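To make the effect of num_words concrete, here is a rough sketch of how such a frequency-capped word index could be built and applied. The helper names build_index and encode are illustrative, not part of the Keras loader; indices 0-2 are reserved to mirror the IMDB convention shown later in this guide.

```python
from collections import Counter

def build_index(corpus, num_words):
    # Keep only the num_words most frequent tokens; everything else
    # will map to the out-of-vocabulary index 2, as in the IMDB loader.
    counts = Counter(w for doc in corpus for w in doc)
    vocab = [w for w, _ in counts.most_common(num_words)]
    return {w: i + 3 for i, w in enumerate(vocab)}  # 0-2 reserved

def encode(doc, index):
    # Replace each word with its index, or 2 ("<UNK>") if it was too rare.
    return [index.get(w, 2) for w in doc]

index = build_index([["a", "a", "a", "b", "b", "c"]], num_words=2)
print(encode(["b", "c", "a"], index))  # c was dropped, so it maps to 2
```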
Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the review, and each label is an integer value of either 0 or 1, where 0 is a negative review and 1 is a positive review.
print("Training entries :{}, labels:{}".format(len(train_data), len(train_labels)))
Training entries :25000, labels:25000
The review texts have been converted to integers, where each integer represents a specific word in a dictionary. Here is what the first review looks like:
print(len(train_data[0]))
print(train_data[0])
218
[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]
Movie reviews may have different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must all be the same length, we will need to resolve this later.
len(train_data[0]),len(train_data[1])
(218, 189)
Convert the integers back to words
It can be useful to know how to convert the integers back to text. Here we create a helper function that queries a dictionary object containing the integer-to-string mapping:
# A dictionary mapping words to integer indices
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
Now we can use the decode_review function to display the text of the first review:
decode_review(train_data[0])
"<START> this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert <UNK> is an amazing actor and now the same being director <UNK> father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for <UNK> and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also <UNK> to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the <UNK> list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all"
Prepare the data
The reviews (the arrays of integers) must be converted to tensors before being fed into the neural network. This conversion can be done in either of two ways:
· Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to a one-hot encoding. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then make this the first layer of the network: a Dense layer that can handle floating-point vector data. This approach is memory intensive, though, requiring a num_words * num_reviews matrix.
· Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape max_length * num_reviews. We can use an Embedding layer capable of handling this shape as the first layer of the network.
In this guide, we will use the second approach. Since the movie reviews must be the same length, we will use the pad_sequences function to standardize the lengths:
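As an aside, the first approach (multi-hot encoding) is easy to sketch in plain Python; multi_hot here is an illustrative helper name, not a Keras function.

```python
def multi_hot(sequences, dim=10000):
    # One row per review: 1.0 at every word index that occurs, 0.0 elsewhere.
    out = []
    for seq in sequences:
        row = [0.0] * dim
        for idx in seq:
            row[idx] = 1.0
        out.append(row)
    return out

print(multi_hot([[3, 5]], dim=10))  # all zeros except indices 3 and 5
```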
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
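Conceptually, pad_sequences with padding='post' does something like the following for each review. The pad_post helper is an illustrative stand-in; note that by default Keras truncates overlong sequences from the front (truncating='pre'), which the sketch mirrors.

```python
def pad_post(seq, maxlen, value=0):
    # Pad short sequences at the end; keep the last maxlen elements
    # of long ones (mimicking Keras's default truncating='pre').
    if len(seq) >= maxlen:
        return seq[-maxlen:]
    return seq + [value] * (maxlen - len(seq))

print(pad_post([1, 2, 3], 5))          # [1, 2, 3, 0, 0]
print(pad_post(list(range(10)), 5))    # [5, 6, 7, 8, 9]
```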
Now let's look at the length of the examples:
len(train_data[0]), len(train_data[1])
(256, 256)
And inspect the (now padded) first review:
print(train_data[0])
[ 1 14 22 16 43 530 973 1622 1385 65 458 4468 66 3941
4 173 36 256 5 25 100 43 838 112 50 670 2 9
35 480 284 5 150 4 172 112 167 2 336 385 39 4
172 4536 1111 17 546 38 13 447 4 192 50 16 6 147
2025 19 14 22 4 1920 4613 469 4 22 71 87 12 16
43 530 38 76 15 13 1247 4 22 17 515 17 12 16
626 18 2 5 62 386 12 8 316 8 106 5 4 2223
5244 16 480 66 3785 33 4 130 12 16 38 619 5 25
124 51 36 135 48 25 1415 33 6 22 12 215 28 77
52 5 14 407 16 82 2 8 4 107 117 5952 15 256
4 2 7 3766 5 723 36 71 43 530 476 26 400 317
46 7 4 2 1029 13 104 88 4 381 15 297 98 32
2071 56 26 141 6 194 7486 18 4 226 22 21 134 476
26 480 5 144 30 5535 18 51 36 28 224 92 25 104
4 226 65 16 38 1334 88 12 16 283 5 16 4472 113
103 32 15 16 5345 19 178 32 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0]
Build the model
A neural network is created by stacking layers. This requires two main architectural decisions:
· How many layers will the model have?
· How many hidden units will each layer have?
In this example, the input data consists of arrays of word indices, and the labels to predict are either 0 or 1. Let's build a model for this problem:
# Input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, None, 16) 160000
global_average_pooling1d (GlobalAveragePooling1D) (None, 16) 0
dense (Dense) (None, 16) 272
dense_1 (Dense) (None, 1) 17
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
_________________________________________________________________
The layers are stacked sequentially to build the classifier:
1. The first layer is an Embedding layer. It takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. They add a dimension to the output array, giving it the shape (batch, sequence, embedding). (I'm not sure whether this should be thought of as an autoencoder.)
2. Next, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This lets the model handle variable-length input in the simplest possible way.
3. This fixed-length output vector is piped through a fully connected (Dense) layer with 16 hidden units.
4. The last layer is densely connected to a single output node. With the sigmoid activation function, its output is a float between 0 and 1, representing a probability or confidence level.
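The pooling step in layer 2 can be illustrated with NumPy; the shapes here are toy values (batch of 2, sequence length 4, embedding size 3), not the model's actual dimensions.

```python
import numpy as np

# Toy batch: 2 reviews, 4 timesteps, 3-dimensional embeddings.
x = np.arange(24, dtype=np.float32).reshape(2, 4, 3)

# GlobalAveragePooling1D averages over the sequence axis,
# collapsing (batch, sequence, embedding) to (batch, embedding).
pooled = x.mean(axis=1)
print(pooled.shape)  # (2, 3)
```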
Hidden units
The model above has two intermediate, or "hidden", layers between the input and output. The number of outputs (units, nodes, or neurons) is the dimensionality of the layer's representation space. In other words, it is the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional representation space) and/or more layers, it can learn more complex representations. However, this makes the network more computationally expensive and may lead to overfitting: learning patterns that improve performance on the training data but not on the test data.
Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we will use the binary_crossentropy loss function.
This is not the only choice of loss function; we could, for instance, choose mean_squared_error. But in general, binary_crossentropy is better for dealing with probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
Later, when we explore regression problems (such as predicting house prices), we will see how to use another loss function called mean squared error. For now, configure the model with the optimizer and the loss function:
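As a sketch of what binary_crossentropy computes, here is the formula in plain Python. This binary_crossentropy is an illustrative reimplementation for intuition, not the Keras function; the eps clipping guards against log(0).

```python
import math

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Average of -[y*log(p) + (1-y)*log(1-p)] over all examples.
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

print(binary_crossentropy([1.0], [0.5]))  # -log(0.5) ~ 0.693
```

A confident correct prediction gives a loss near 0, while a maximally uncertain prediction of 0.5 gives log 2, so minimizing this loss pushes predicted probabilities toward the true labels.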
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
Create a validation set
When training, we want to check the accuracy of the model on data it hasn't seen before. Create a validation set by setting apart 10,000 examples from the original training data. (Why not use the test set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate its accuracy.)
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
Train the model
Train the model for 40 epochs in mini-batches of 512 samples (the data is split into batches, each containing a subset of the training samples). These are 40 iterations over all samples in the x_train and y_train tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
Epoch 1/40
30/30 [==============================] - 5s 33ms/step - loss: 0.6919 - accuracy: 0.5051 - val_loss: 0.6901 - val_accuracy: 0.5357
Epoch 2/40
30/30 [==============================] - 0s 11ms/step - loss: 0.6863 - accuracy: 0.6171 - val_loss: 0.6822 - val_accuracy: 0.6787
Epoch 3/40
30/30 [==============================] - 0s 8ms/step - loss: 0.6743 - accuracy: 0.7011 - val_loss: 0.6677 - val_accuracy: 0.7257
Epoch 4/40
30/30 [==============================] - 0s 8ms/step - loss: 0.6537 - accuracy: 0.7463 - val_loss: 0.6447 - val_accuracy: 0.7270
Epoch 5/40
30/30 [==============================] - 0s 16ms/step - loss: 0.6231 - accuracy: 0.7708 - val_loss: 0.6127 - val_accuracy: 0.7667
Epoch 6/40
30/30 [==============================] - 0s 9ms/step - loss: 0.5836 - accuracy: 0.8023 - val_loss: 0.5738 - val_accuracy: 0.7957
Epoch 7/40
30/30 [==============================] - 0s 8ms/step - loss: 0.5384 - accuracy: 0.8261 - val_loss: 0.5308 - val_accuracy: 0.8215
Epoch 8/40
30/30 [==============================] - 0s 8ms/step - loss: 0.4913 - accuracy: 0.8498 - val_loss: 0.4894 - val_accuracy: 0.8328
Epoch 9/40
30/30 [==============================] - 0s 8ms/step - loss: 0.4465 - accuracy: 0.8638 - val_loss: 0.4515 - val_accuracy: 0.8450
Epoch 10/40
30/30 [==============================] - 0s 8ms/step - loss: 0.4071 - accuracy: 0.8745 - val_loss: 0.4193 - val_accuracy: 0.8549
Epoch 11/40
30/30 [==============================] - 0s 8ms/step - loss: 0.3721 - accuracy: 0.8836 - val_loss: 0.3937 - val_accuracy: 0.8576
Epoch 12/40
30/30 [==============================] - 0s 8ms/step - loss: 0.3430 - accuracy: 0.8899 - val_loss: 0.3718 - val_accuracy: 0.8642
Epoch 13/40
30/30 [==============================] - 0s 8ms/step - loss: 0.3180 - accuracy: 0.8954 - val_loss: 0.3546 - val_accuracy: 0.8683
Epoch 14/40
30/30 [==============================] - 0s 9ms/step - loss: 0.2971 - accuracy: 0.9011 - val_loss: 0.3404 - val_accuracy: 0.8730
Epoch 15/40
30/30 [==============================] - 0s 8ms/step - loss: 0.2790 - accuracy: 0.9070 - val_loss: 0.3292 - val_accuracy: 0.8750
Epoch 16/40
30/30 [==============================] - 0s 9ms/step - loss: 0.2628 - accuracy: 0.9127 - val_loss: 0.3201 - val_accuracy: 0.8769
Epoch 17/40
30/30 [==============================] - 0s 8ms/step - loss: 0.2485 - accuracy: 0.9169 - val_loss: 0.3125 - val_accuracy: 0.8782
Epoch 18/40
30/30 [==============================] - 0s 8ms/step - loss: 0.2356 - accuracy: 0.9223 - val_loss: 0.3067 - val_accuracy: 0.8809
Epoch 19/40
30/30 [==============================] - 0s 8ms/step - loss: 0.2241 - accuracy: 0.9247 - val_loss: 0.3023 - val_accuracy: 0.8799
Epoch 20/40
30/30 [==============================] - 0s 9ms/step - loss: 0.2129 - accuracy: 0.9283 - val_loss: 0.2973 - val_accuracy: 0.8827
Epoch 21/40
30/30 [==============================] - 0s 8ms/step - loss: 0.2030 - accuracy: 0.9321 - val_loss: 0.2949 - val_accuracy: 0.8823
Epoch 22/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1941 - accuracy: 0.9357 - val_loss: 0.2913 - val_accuracy: 0.8845
Epoch 23/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1855 - accuracy: 0.9408 - val_loss: 0.2891 - val_accuracy: 0.8845
Epoch 24/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1773 - accuracy: 0.9437 - val_loss: 0.2879 - val_accuracy: 0.8842
Epoch 25/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1698 - accuracy: 0.9470 - val_loss: 0.2870 - val_accuracy: 0.8849
Epoch 26/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1629 - accuracy: 0.9490 - val_loss: 0.2873 - val_accuracy: 0.8843
Epoch 27/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1565 - accuracy: 0.9520 - val_loss: 0.2869 - val_accuracy: 0.8849
Epoch 28/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1507 - accuracy: 0.9539 - val_loss: 0.2871 - val_accuracy: 0.8848
Epoch 29/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1439 - accuracy: 0.9567 - val_loss: 0.2872 - val_accuracy: 0.8859
Epoch 30/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1381 - accuracy: 0.9599 - val_loss: 0.2881 - val_accuracy: 0.8871
Epoch 31/40
30/30 [==============================] - 0s 9ms/step - loss: 0.1329 - accuracy: 0.9611 - val_loss: 0.2893 - val_accuracy: 0.8871
Epoch 32/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1276 - accuracy: 0.9638 - val_loss: 0.2908 - val_accuracy: 0.8863
Epoch 33/40
30/30 [==============================] - 0s 9ms/step - loss: 0.1228 - accuracy: 0.9659 - val_loss: 0.2940 - val_accuracy: 0.8848
Epoch 34/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1182 - accuracy: 0.9675 - val_loss: 0.2955 - val_accuracy: 0.8849
Epoch 35/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1140 - accuracy: 0.9685 - val_loss: 0.2961 - val_accuracy: 0.8845
Epoch 36/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1093 - accuracy: 0.9705 - val_loss: 0.2990 - val_accuracy: 0.8846
Epoch 37/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1053 - accuracy: 0.9719 - val_loss: 0.3016 - val_accuracy: 0.8848
Epoch 38/40
30/30 [==============================] - 0s 8ms/step - loss: 0.1013 - accuracy: 0.9732 - val_loss: 0.3038 - val_accuracy: 0.8844
Epoch 39/40
30/30 [==============================] - 0s 8ms/step - loss: 0.0973 - accuracy: 0.9744 - val_loss: 0.3080 - val_accuracy: 0.8824
Epoch 40/40
30/30 [==============================] - 0s 8ms/step - loss: 0.0944 - accuracy: 0.9753 - val_loss: 0.3106 - val_accuracy: 0.8824
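The 30/30 on each progress line is the number of mini-batches per epoch: 15,000 training samples (25,000 minus the 10,000 held out for validation) divided by a batch size of 512, rounded up.

```python
import math

num_train = 25000 - 10000  # samples left after the validation split
batch_size = 512
steps_per_epoch = math.ceil(num_train / batch_size)
print(steps_per_epoch)  # 30, matching the "30/30" in each epoch's progress bar
```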
Evaluate the model
Let's see how the model performs. Two values will be returned: the loss (a number representing our error, lower values are better) and the accuracy.
results = model.evaluate(test_data, test_labels, verbose=2)
print(results)
782/782 - 2s - loss: 0.3300 - accuracy: 0.8712 - 2s/epoch - 2ms/step
[0.33000096678733826, 0.8712000250816345]
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
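The reported accuracy is simply the fraction of reviews whose thresholded sigmoid output matches the true label. A minimal sketch, with accuracy_from_probs as an illustrative helper name:

```python
def accuracy_from_probs(probs, labels, threshold=0.5):
    # Predict positive when the sigmoid output reaches the threshold,
    # then count the fraction of predictions that match the labels.
    preds = [1 if p >= threshold else 0 for p in probs]
    correct = sum(1 for p, y in zip(preds, labels) if p == y)
    return correct / len(labels)

print(accuracy_from_probs([0.9, 0.2, 0.6, 0.4], [1, 0, 0, 0]))  # 0.75
```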
Create a graph of accuracy and loss over time
model.fit() returns a History object that contains a dictionary with everything that happened during training:
history_dict = history.history
history_dict.keys()
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss and accuracy for comparison:
import matplotlib.pyplot as plt
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc)+1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# "b" is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

plt.clf()   # clear figure
# "bo" is for "blue dot"
plt.plot(epochs, acc, 'bo', label='Training accuracy')
# "b" is for "solid blue line"
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

In these plots, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.
Notice that the training loss decreases with each epoch and the training accuracy increases with each epoch. This is expected when using gradient descent optimization: it should minimize the desired quantity on every iteration.
This isn't the case for the validation loss and accuracy: they seem to peak after about twenty epochs. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations specific to the training data that do not generalize to the test data.
For this particular case, we could prevent overfitting by simply stopping the training after twenty or so epochs. How to do this automatically with a callback will be covered in a later post.
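In Keras this is typically done with the EarlyStopping callback, which monitors val_loss and stops once it stops improving. The underlying logic can be sketched in plain Python; stop_epoch and its patience semantics are illustrative, not the exact Keras implementation.

```python
def stop_epoch(val_losses, patience=3):
    # Return the 1-based epoch at which training would stop: when
    # val_loss has not improved for `patience` consecutive epochs.
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses)

# Validation loss bottoms out at epoch 2, then rises for 3 epochs in a row.
print(stop_epoch([0.50, 0.40, 0.41, 0.42, 0.43], patience=3))  # 5
```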
