Classifying movie reviews: a binary classification problem (the IMDB dataset)

The IMDB dataset ships with Keras. The first time you import it, it is downloaded; after that it can be used directly.

The IMDB dataset contains 50,000 highly polarized reviews from the Internet Movie Database, split into 25,000 reviews for training and 25,000 for testing; both the training set and the test set contain 50% positive and 50% negative reviews. The dataset has already been preprocessed: each review (a sequence of words) has been converted into a sequence of integers, where each integer stands for a specific word in a dictionary.

Loading the dataset

from keras.datasets import imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

The argument num_words=10000 means we keep only the 10,000 most frequently occurring words in the training data; rarer words are discarded. This keeps the resulting vector data at a manageable size.
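As a quick sanity check on num_words, you can confirm that no word index in the training data exceeds 9,999 (this assumes train_data has been loaded as above):

max([max(sequence) for sequence in train_data])  # largest word index in the data; should be 9999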

train_data[0]

[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, …

train_labels[0]

Here 0 stands for a negative review and 1 for a positive one.
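If you want to verify that the labels really contain only these two values, a one-line check with NumPy works (assuming the data has been loaded as above):

import numpy as np

np.unique(train_labels)  # array([0, 1]): only two classes, so this is a binary classification problem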

The following code decodes a review back into English words:

# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])

In word_index the keys are words and the values are integers; after reversing the mapping, the integers become the keys. We then join the decoded words with spaces to rebuild the review.
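If you need to decode other reviews as well, the same logic can be wrapped in a small helper; this is just a sketch reusing reverse_word_index from the snippet above (decode_review is a name chosen here, not part of Keras):

def decode_review(sequence):
    # Indices are offset by 3 because 0, 1 and 2 are reserved for
    # "padding", "start of sequence" and "unknown"
    return ' '.join([reverse_word_index.get(i - 3, '?') for i in sequence])

print(decode_review(train_data[0]))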

The decoded review looks like this:

"? this film was just brilliant casting location scenery story direction everyone’s really suited the part they played and you could just imagine being there robert ? is an amazing actor and now the same being director ? father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for ? and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also ? to the two little boy’s that played the ? of norman and paul they were just brilliant children are often left out of the ? list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don’t you think the whole story was so lovely because it was true and was someone’s life after all that was shared with us all"

Below is code that uses a recurrent neural network (RNN) to perform sentiment classification on the IMDB movie review dataset:

```python
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing import sequence

# Hyperparameters
max_features = 20000   # only consider the 20,000 most common words
max_len = 80           # each review is truncated/padded to 80 words
embedding_size = 128   # dimensionality of the word embeddings

# Load the IMDB movie review dataset
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)

# Preprocess the data: pad or truncate every review to the same length
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)

# Build the model
model = Sequential()
model.add(Embedding(max_features, embedding_size, input_length=max_len))
model.add(LSTM(units=128))
model.add(Dense(units=1, activation='sigmoid'))

# Compile the model
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train,
          batch_size=32,
          epochs=5,
          validation_data=(x_test, y_test))
```

In this code we first import the required libraries and classes. We then set a few hyperparameters: the maximum number of words per review, the dimensionality of the word embeddings, and the number of most common words to keep.

Next, imdb.load_data() loads the IMDB movie review dataset from Keras and splits it into a training set and a test set.

The data is then preprocessed with sequence.pad_sequences(), which pads every review with zeros (or truncates it) so that all sequences have length max_len.

After that we build a simple RNN model consisting of an embedding layer, an LSTM layer, and a fully connected output layer. model.compile() configures the model with a loss function, an optimizer, and an evaluation metric.

Finally, model.fit() trains the model and validates it on the test set. In this example the model is trained for 5 epochs with a batch size of 32.
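After training you would typically also report performance on the held-out test set; a minimal sketch, reusing the model, x_test and y_test defined above:

```python
# Evaluate the trained LSTM on the test set
test_loss, test_acc = model.evaluate(x_test, y_test, batch_size=32)
print('Test loss:', test_loss)
print('Test accuracy:', test_acc)
```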
