Movie Review Text Classification

Classifying movie reviews: a binary classification problem

We use the IMDB dataset, which contains 50,000 highly polarized reviews from the Internet Movie Database (IMDB). The dataset is split into 25,000 reviews for training and 25,000 reviews for testing; both the training set and the test set contain 50% positive and 50% negative reviews.

Loading the IMDB Dataset

The IMDB dataset ships with the Keras library. It has already been preprocessed: the reviews (sequences of words) have been converted to sequences of integers, where each integer stands for a specific word in a dictionary.

import tensorflow as tf
from tensorflow import keras

imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

The argument num_words=10000 keeps only the 10,000 most frequently occurring words in the training data; rarer words are discarded. This keeps the resulting vector data at a manageable size.
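As a quick sanity check (a minimal sketch of my own, not part of the original tutorial), no word index in the loaded data should exceed 9,999:

# Verify that the num_words=10000 cap holds: the largest index should be 9999
max_index = max(max(sequence) for sequence in train_data)
print(max_index)  # expected: 9999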

Exploring the Data

The variables train_data and test_data are lists of reviews, and each review is a list of word indices (representing a sequence of words). train_labels and test_labels are lists of 0s and 1s, where 0 stands for negative and 1 stands for positive.

print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
>>Training entries: 25000, labels: 25000
print(train_data[0])
>>[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]
print(train_labels[0])
>>1
len(train_data[0]), len(train_data[1])
>>(218, 189)

Converting the Integers Back to Words

It can be useful to know how to convert the integers back into text. The following code creates a helper function that queries a dictionary object mapping integers to strings.

# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()

# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3

reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
decode_review(train_data[0])
>>"<START> this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert <UNK> is an amazing actor and now the same being director <UNK> father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for <UNK> and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also <UNK> to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the <UNK> list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all"

Preparing the Data

The integer sequences cannot be fed into a neural network directly; the lists must first be converted to tensors. There are two ways to do this.

  • Pad the lists so that they all have the same length, turn them into an integer tensor of shape (samples, word_indices), and then use a first layer capable of handling such integer tensors (the Embedding layer, which the source book covers in more detail later).
  • One-hot encode the lists, turning them into vectors of 0s and 1s. For example, the sequence [3, 5] would become a 10,000-dimensional vector with 1s at indices 3 and 5 and 0s everywhere else. The first layer of the network could then be a Dense layer, which can handle floating-point vector data (a minimal sketch of this approach appears below).
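A minimal sketch of the one-hot (multi-hot) alternative, assuming NumPy is available. The rest of this post uses the padding approach instead, so this function is purely illustrative:

import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    # Create an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.0  # set the positions of the word indices in this review to 1
    return results

# x_train_onehot = vectorize_sequences(train_data)  # shape: (25000, 10000)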
# Convert the reviews to tensors. Since the reviews must all have the same length, use the pad_sequences function to standardize the length
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
                                                        value=word_index["<PAD>"],
                                                        padding='post',
                                                        maxlen=256)

test_data = keras.preprocessing.sequence.pad_sequences(test_data,
                                                       value=word_index["<PAD>"],
                                                       padding='post',
                                                       maxlen=256)
len(train_data[0]), len(train_data[1])
>>(256, 256)
# The first review after padding
print(train_data[0])
>>[   1   14   22   16   43  530  973 1622 1385   65  458 4468   66 3941
    4  173   36  256    5   25  100   43  838  112   50  670    2    9
   35  480  284    5  150    4  172  112  167    2  336  385   39    4
  172 4536 1111   17  546   38   13  447    4  192   50   16    6  147
 2025   19   14   22    4 1920 4613  469    4   22   71   87   12   16
   43  530   38   76   15   13 1247    4   22   17  515   17   12   16
  626   18    2    5   62  386   12    8  316    8  106    5    4 2223
 5244   16  480   66 3785   33    4  130   12   16   38  619    5   25
  124   51   36  135   48   25 1415   33    6   22   12  215   28   77
   52    5   14  407   16   82    2    8    4  107  117 5952   15  256
    4    2    7 3766    5  723   36   71   43  530  476   26  400  317
   46    7    4    2 1029   13  104   88    4  381   15  297   98   32
 2071   56   26  141    6  194 7486   18    4  226   22   21  134  476
   26  480    5  144   30 5535   18   51   36   28  224   92   25  104
    4  226   65   16   38 1334   88   12   16  283    5   16 4472  113
  103   32   15   16 5345   19  178   32    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0]

Building the Network

The input data is vectors and the labels are scalars (1s and 0s): this is about the simplest setup you will encounter. A type of network that performs well on such a problem is a simple stack of fully connected (Dense) layers with relu activations.

# input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000

model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))

model.summary()

# The first layer is an Embedding layer. It looks up the embedding vector for each word index in the integer-encoded vocabulary. These vectors are learned as the model trains, and they add a dimension to the output array.
# The resulting dimensions are: (batch, sequence, embedding)
# Next, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension.
# This allows the model to handle input of variable length in the simplest possible way.
# The fixed-length output vector is piped through a fully connected (Dense) layer with 16 hidden units.
# The last layer is densely connected to a single output node. With the sigmoid activation function, the result is a float between 0 and 1, representing a probability or confidence level.
>>
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, None, 16)          160000    
_________________________________________________________________
global_average_pooling1d (Gl (None, 16)                0         
_________________________________________________________________
dense (Dense)                (None, 16)                272       
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 17        
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
_________________________________________________________________

Configuring the Model with an Optimizer and a Loss Function

Since this is a binary classification problem and the network outputs a probability (its last layer has a single unit with a sigmoid activation), it is best to use the binary_crossentropy loss. It is not the only viable choice: you could also use mean_squared_error, for example. But crossentropy is usually the best choice when the model outputs probabilities. Crossentropy is a quantity from the field of information theory that measures the distance between probability distributions; in this case, between the true distribution and the predictions.
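As a quick illustration (my own sketch, not part of the original tutorial), binary crossentropy for a single prediction y_pred with true label y_true is -(y_true*log(y_pred) + (1-y_true)*log(1-y_pred)): a confident correct prediction yields a small loss, while a confident mistake yields a large one.

import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Clip the prediction to avoid log(0)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(binary_crossentropy(1, 0.9))  # ~0.105: confident and correct, small loss
print(binary_crossentropy(1, 0.1))  # ~2.303: confident but wrong, large loss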

model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='binary_crossentropy',
              metrics=['accuracy'])

Setting Aside a Validation Set

To monitor accuracy on data the model has not seen during training, set aside 10,000 examples from the original training data as a validation set:

x_val = train_data[:10000]
partial_x_train = train_data[10000:]

y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]

Training the Model

# Train the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the partial_x_train and partial_y_train tensors.
# While training, monitor the model's loss and accuracy on the 10,000 samples of the validation set:
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)
>>
Train on 15000 samples, validate on 10000 samples
WARNING:tensorflow:From E:\Anaconda3\Anaconda3_install\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/40
15000/15000 [==============================] - 1s 50us/sample - loss: 0.6912 - acc: 0.6431 - val_loss: 0.6881 - val_acc: 0.7217
Epoch 2/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.6828 - acc: 0.7466 - val_loss: 0.6772 - val_acc: 0.7467
Epoch 3/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.6660 - acc: 0.7658 - val_loss: 0.6565 - val_acc: 0.7680
Epoch 4/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.6377 - acc: 0.7761 - val_loss: 0.6250 - val_acc: 0.7729
Epoch 5/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.5980 - acc: 0.8033 - val_loss: 0.5849 - val_acc: 0.7972
Epoch 6/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.5505 - acc: 0.8249 - val_loss: 0.5409 - val_acc: 0.8117
Epoch 7/40
15000/15000 [==============================] - 1s 44us/sample - loss: 0.5003 - acc: 0.8397 - val_loss: 0.4951 - val_acc: 0.8294
Epoch 8/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.4523 - acc: 0.8573 - val_loss: 0.4549 - val_acc: 0.8410
Epoch 9/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.4096 - acc: 0.8701 - val_loss: 0.4203 - val_acc: 0.8516
Epoch 10/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.3729 - acc: 0.8810 - val_loss: 0.3927 - val_acc: 0.8556
Epoch 11/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.3424 - acc: 0.8876 - val_loss: 0.3703 - val_acc: 0.8642
Epoch 12/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.3169 - acc: 0.8959 - val_loss: 0.3531 - val_acc: 0.8669
Epoch 13/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.2959 - acc: 0.9011 - val_loss: 0.3383 - val_acc: 0.8718
Epoch 14/40
15000/15000 [==============================] - 1s 44us/sample - loss: 0.2771 - acc: 0.9070 - val_loss: 0.3273 - val_acc: 0.8761
Epoch 15/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.2611 - acc: 0.9111 - val_loss: 0.3183 - val_acc: 0.8766
Epoch 16/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.2468 - acc: 0.9168 - val_loss: 0.3109 - val_acc: 0.8766
Epoch 17/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.2334 - acc: 0.9210 - val_loss: 0.3048 - val_acc: 0.8806
Epoch 18/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.2216 - acc: 0.9249 - val_loss: 0.2997 - val_acc: 0.8825
Epoch 19/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.2109 - acc: 0.9263 - val_loss: 0.2953 - val_acc: 0.8831
Epoch 20/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.2013 - acc: 0.9311 - val_loss: 0.2925 - val_acc: 0.8831
Epoch 21/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.1913 - acc: 0.9369 - val_loss: 0.2900 - val_acc: 0.8844
Epoch 22/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1831 - acc: 0.9399 - val_loss: 0.2880 - val_acc: 0.8849
Epoch 23/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.1747 - acc: 0.9437 - val_loss: 0.2876 - val_acc: 0.8847
Epoch 24/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1675 - acc: 0.9466 - val_loss: 0.2865 - val_acc: 0.8848
Epoch 25/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.1601 - acc: 0.9500 - val_loss: 0.2855 - val_acc: 0.8856
Epoch 26/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1535 - acc: 0.9529 - val_loss: 0.2866 - val_acc: 0.8840
Epoch 27/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.1472 - acc: 0.9550 - val_loss: 0.2864 - val_acc: 0.8856
Epoch 28/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1412 - acc: 0.9570 - val_loss: 0.2875 - val_acc: 0.8851
Epoch 29/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1359 - acc: 0.9601 - val_loss: 0.2893 - val_acc: 0.8851
Epoch 30/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1304 - acc: 0.9613 - val_loss: 0.2891 - val_acc: 0.8864
Epoch 31/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1248 - acc: 0.9644 - val_loss: 0.2906 - val_acc: 0.8861
Epoch 32/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1198 - acc: 0.9671 - val_loss: 0.2924 - val_acc: 0.8855
Epoch 33/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1150 - acc: 0.9682 - val_loss: 0.2951 - val_acc: 0.8846
Epoch 34/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1106 - acc: 0.9693 - val_loss: 0.2976 - val_acc: 0.8851
Epoch 35/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1065 - acc: 0.9704 - val_loss: 0.3005 - val_acc: 0.8840
Epoch 36/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.1023 - acc: 0.9723 - val_loss: 0.3024 - val_acc: 0.8833
Epoch 37/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.0980 - acc: 0.9737 - val_loss: 0.3054 - val_acc: 0.8828
Epoch 38/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.0942 - acc: 0.9755 - val_loss: 0.3093 - val_acc: 0.8816
Epoch 39/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.0911 - acc: 0.9767 - val_loss: 0.3135 - val_acc: 0.8816
Epoch 40/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.0871 - acc: 0.9781 - val_loss: 0.3166 - val_acc: 0.8824

Evaluating the Model

results = model.evaluate(test_data, test_labels)
print(results)
>>25000/25000 [==============================] - 0s 16us/sample - loss: 0.3390 - acc: 0.8702
[0.338996932888031, 0.87016]
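The two returned values are the test loss and the test accuracy (about 87%). To obtain a probability for each individual review, model.predict can be used; a minimal sketch of my own (the exact output values will vary from run to run):

# Probabilities for the first three test reviews; > 0.5 means the model leans positive
predictions = model.predict(test_data[:3])
for prob, label in zip(predictions, test_labels[:3]):
    print(float(prob[0]),
          "-> predicted positive" if prob[0] > 0.5 else "-> predicted negative",
          "| true label:", label)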

Plotting Accuracy and Loss Over Time

# model.fit() returns a History object that contains a dictionary with everything that happened during training:
history_dict = history.history
history_dict.keys()
>>dict_keys(['loss', 'acc', 'val_loss', 'val_acc'])
import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.show()

[Figure: Training and validation loss]

plt.clf()   # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()

[Figure: Training and validation accuracy]
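From the training log above, the validation loss reaches its minimum around epoch 25 and then starts to rise while the training loss keeps falling, which is a sign of overfitting. A hypothetical sketch (my own addition, not in the original post) of stopping training automatically with a Keras callback, assuming the model has been re-created and re-compiled as above:

# Stop training when the validation loss has not improved for 3 consecutive epochs
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)
history = model.fit(partial_x_train, partial_y_train,
                    epochs=40, batch_size=512,
                    validation_data=(x_val, y_val),
                    callbacks=[early_stop], verbose=1)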

References

影评文本分类 (Text classification of movie reviews)
Python深度学习 (Deep Learning with Python)

