Classifying Movie Reviews: A Binary Classification Example

Loading the IMDB Dataset

from keras.datasets import imdb

# Keep only the 10,000 most frequent words in the training data
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
print(train_data[0])    # the first review, encoded as a list of word indices
print(train_labels[0])  # 1 = positive, 0 = negative

max(max(sequence) for sequence in train_data)  # no index exceeds 9999, per num_words=10000

Output — the first review as word indices, its label, and the maximum word index:

[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]
1
9999
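The integers can be mapped back to words via the dataset's word index (in Keras, `imdb.get_word_index()`), with an offset of 3 because indices 0–2 are reserved for padding, start-of-sequence, and unknown tokens. A minimal sketch of the decoding step, using a toy word index in place of the real one:

```python
def decode_review(sequence, word_index):
    # Dataset indices are offset by 3 relative to the raw word index,
    # since 0, 1, 2 mean "padding", "start of sequence", "unknown".
    reverse_index = {value: key for key, value in word_index.items()}
    return ' '.join(reverse_index.get(i - 3, '?') for i in sequence)

# Toy word index (hypothetical, for illustration only);
# in practice: word_index = imdb.get_word_index()
toy_index = {'this': 1, 'movie': 2, 'great': 3}
print(decode_review([1, 4, 5, 6], toy_index))  # ? this movie great
```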
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    # Multi-hot encode: one row per review, with a 1 at every word index present
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1
    return results

x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)

print(x_train[0])

y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')

[0. 1. 1. ... 0. 0. 0.]
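On a small toy input, the multi-hot encoding is easy to inspect. The sketch below mirrors `vectorize_sequences` above, with the dimension shrunk to 8 for readability:

```python
import numpy as np

def vectorize_sequences(sequences, dimension=8):  # small dimension for illustration
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        # NumPy fancy indexing sets every listed index at once;
        # duplicate indices (the repeated 3 below) still yield a single 1
        results[i, sequence] = 1.0
    return results

toy = vectorize_sequences([[0, 3, 3, 5], [1, 2]])
print(toy[0])  # [1. 0. 0. 1. 0. 1. 0. 0.]
print(toy[1])  # [0. 1. 1. 0. 0. 0. 0. 0.]
```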

Defining the Model

from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(16,activation='relu',input_shape=(10000,)))
model.add(layers.Dense(16,activation='relu'))
model.add(layers.Dense(1,activation='sigmoid'))

# Loss: binary crossentropy. compile() takes, in order: the optimizer, the loss function, and the metrics
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
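For a sigmoid output p and a label y in {0, 1}, binary crossentropy is -[y·log(p) + (1-y)·log(1-p)]. A quick NumPy check of the per-sample quantity the loss above computes (clipping is an assumption mirroring what Keras does internally to avoid log(0)):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Clip predictions away from exactly 0 or 1 so log() stays finite
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# A confident correct prediction incurs a small loss,
# a confident wrong prediction a large one:
print(binary_crossentropy(1.0, 0.99))  # ~0.01
print(binary_crossentropy(1.0, 0.01))  # ~4.61
```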

# Split the training samples into a training set and a validation set
x_val = x_train[:10000]
partial_x_train = x_train[10000:]

y_val = y_train[:10000]
partial_y_train = y_train[10000:]


history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_val, y_val))

history_dict = history.history
history_dict.keys()

Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 6s 376us/step - loss: 0.5306 - acc: 0.7805 - val_loss: 0.3949 - val_acc: 0.8664
Epoch 2/20
15000/15000 [==============================] - 4s 299us/step - loss: 0.3118 - acc: 0.9048 - val_loss: 0.3421 - val_acc: 0.8625
Epoch 3/20
15000/15000 [==============================] - 4s 297us/step - loss: 0.2289 - acc: 0.9239 - val_loss: 0.2894 - val_acc: 0.8841
Epoch 4/20
15000/15000 [==============================] - 4s 285us/step - loss: 0.1805 - acc: 0.9405 - val_loss: 0.2866 - val_acc: 0.8845
Epoch 5/20
15000/15000 [==============================] - 5s 303us/step - loss: 0.1478 - acc: 0.9530 - val_loss: 0.2906 - val_acc: 0.8841
Epoch 6/20
15000/15000 [==============================] - 5s 325us/step - loss: 0.1245 - acc: 0.9613 - val_loss: 0.3290 - val_acc: 0.8767
Epoch 7/20
15000/15000 [==============================] - 5s 322us/step - loss: 0.1000 - acc: 0.9704 - val_loss: 0.3336 - val_acc: 0.8762
Epoch 8/20
15000/15000 [==============================] - 4s 278us/step - loss: 0.0859 - acc: 0.9743 - val_loss: 0.3378 - val_acc: 0.8798
Epoch 9/20
15000/15000 [==============================] - 4s 285us/step - loss: 0.0680 - acc: 0.9829 - val_loss: 0.3521 - val_acc: 0.8792
Epoch 10/20
15000/15000 [==============================] - 4s 294us/step - loss: 0.0598 - acc: 0.9834 - val_loss: 0.3829 - val_acc: 0.8752
Epoch 11/20
15000/15000 [==============================] - 4s 281us/step - loss: 0.0477 - acc: 0.9887 - val_loss: 0.4076 - val_acc: 0.8729
Epoch 12/20
15000/15000 [==============================] - 4s 293us/step - loss: 0.0397 - acc: 0.9910 - val_loss: 0.4375 - val_acc: 0.8757
Epoch 13/20
15000/15000 [==============================] - 4s 297us/step - loss: 0.0331 - acc: 0.9923 - val_loss: 0.4761 - val_acc: 0.8665
Epoch 14/20
15000/15000 [==============================] - 5s 301us/step - loss: 0.0287 - acc: 0.9941 - val_loss: 0.4909 - val_acc: 0.8710
Epoch 15/20
15000/15000 [==============================] - 4s 288us/step - loss: 0.0232 - acc: 0.9952 - val_loss: 0.5176 - val_acc: 0.8700
Epoch 16/20
15000/15000 [==============================] - 4s 284us/step - loss: 0.0178 - acc: 0.9971 - val_loss: 0.5469 - val_acc: 0.8676
Epoch 17/20
15000/15000 [==============================] - 5s 304us/step - loss: 0.0161 - acc: 0.9970 - val_loss: 0.5769 - val_acc: 0.8700
Epoch 18/20
15000/15000 [==============================] - 5s 304us/step - loss: 0.0107 - acc: 0.9990 - val_loss: 0.6284 - val_acc: 0.8669
Epoch 19/20
15000/15000 [==============================] - 4s 286us/step - loss: 0.0114 - acc: 0.9984 - val_loss: 0.6401 - val_acc: 0.8659
Epoch 20/20
15000/15000 [==============================] - 5s 325us/step - loss: 0.0062 - acc: 0.9998 - val_loss: 0.6928 - val_acc: 0.8575

dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
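The log above shows val_loss bottoming out around epoch 4 and rising afterwards, even as training loss keeps falling — the classic sign of overfitting. A small helper (my own sketch, not a Keras API) picks the best epoch straight from `history.history`:

```python
def best_epoch(history_dict, monitor='val_loss'):
    # Epochs are 1-indexed in the training log, lists are 0-indexed,
    # so add 1 to the position of the smallest monitored value
    values = history_dict[monitor]
    return min(range(len(values)), key=values.__getitem__) + 1

# Validation losses from the first five epochs of the run above
h = {'val_loss': [0.3949, 0.3421, 0.2894, 0.2866, 0.2906]}
print(best_epoch(h))  # 4
```

In practice one would retrain from scratch for that many epochs (or use an early-stopping callback) before evaluating on the test set.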

Plotting Training and Validation Loss

import matplotlib.pyplot as plt

history_dict = history.history
loss_values = history_dict['loss']
val_loss_values= history_dict['val_loss']

epochs = range(1,len(loss_values) + 1)

plt.plot(epochs,loss_values,'bo',label='Training loss')
plt.plot(epochs,val_loss_values,'b',label='Validation loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.show()

[Figure: training and validation loss over epochs]

Plotting Training and Validation Accuracy

plt.clf()
acc = history_dict['acc']
val_acc = history_dict['val_acc']

plt.plot(epochs,acc,'bo',label='Training acc')
plt.plot(epochs,val_acc,'b',label='Validation acc')
plt.title('Training and Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()

[Figure: training and validation accuracy over epochs]

