How to load a locally downloaded IMDB dataset in Keras?


Step 1: Download the dataset locally

Download link: https://pan.baidu.com/s/15belo01keGvFri43K8TxFQ

Extraction code: 9h3u

Storage location: C:/Users/<username>/.keras/datasets

(The username differs from person to person, and the exact location may vary slightly between machines.)
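If you are not sure where this directory is on your machine, here is a short sketch to print it (assuming the default cache location, i.e. no KERAS_HOME environment variable set):

import os
datasets_dir = os.path.join(os.path.expanduser('~'), '.keras', 'datasets')
print(datasets_dir)  # e.g. C:\Users\<username>\.keras\datasets
print(os.path.isdir(datasets_dir))  # True once the directory exists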

Step 2: Load the dataset

import keras
import numpy as np
# load data
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
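By default, load_data looks for imdb.npz in the cache directory above. It also accepts a path argument (resolved relative to ~/.keras/datasets when it is not absolute), so you can point it at the downloaded file explicitly; a minimal sketch:

from keras.datasets import imdb
# path is resolved relative to ~/.keras/datasets when not absolute
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(
    path='imdb.npz', num_words=10000)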

----------

Check that the dataset was loaded correctly (with num_words=10000 only the 10,000 most frequent words are kept, so no word index should exceed 9999):

print(train_labels[0]) #1
print(max([max(sequence) for sequence in train_data])) #9999
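A couple of further sanity checks, if you want them (the standard IMDB split contains 25,000 training and 25,000 test reviews):

print(len(train_data), len(test_data))  # 25000 25000
print(train_data[0][:10])  # word indices at the start of the first review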

----------

Some small problems I ran into, and how to fix them:

If something goes wrong, the traceback will most likely end with something like: raise ValueError("Object arrays cannot be loaded when " ValueError: Object arrays cannot be loaded ……

This means your numpy version is too new; mine was originally 1.16.4, and I switched to 1.16.2.

Changing the version:

In cmd, enter: xxxxxxxxxxxxxxxx numpy==1.16.2

Here xxxxxxxxxx is the command given at https://mirrors.tuna.tsinghua.edu.cn/help/pypi/, which speeds up the download; copy it directly and just replace some-package with numpy==1.16.2.
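For reference, the command documented on that mirror page is of this form (assuming pip is available in cmd):

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple numpy==1.16.2

An alternative to downgrading: the error comes from newer numpy versions defaulting to allow_pickle=False in np.load, so you can also read the .npz file yourself with allow_pickle=True. A minimal sketch, assuming the file sits in the default cache (note this yields the raw arrays, without the num_words filtering that imdb.load_data applies):

import os
import numpy as np

path = os.path.join(os.path.expanduser('~'), '.keras', 'datasets', 'imdb.npz')
with np.load(path, allow_pickle=True) as f:
    train_data, train_labels = f['x_train'], f['y_train']
    test_data, test_labels = f['x_test'], f['y_test']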

Step 3: A complete code example for binary classification of movie reviews

import keras
import numpy as np
# load data
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
print(train_labels[0]) #1
print(max([max(sequence) for sequence in train_data])) #9999
# Decode the indices back into words; this requires downloading imdb_word_index.json to C:/Users/<username>/.keras/datasets
# Link: https://pan.baidu.com/s/1kkmpXrr1tkFtg7D3LX_lcw  Extraction code: wzjw
word_index = imdb.get_word_index()  # dictionary mapping words to integer indices
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])  # invert it: map integer indices back to words
# indices are offset by 3 because 0, 1 and 2 are reserved for padding, start-of-sequence and unknown
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
#print(decoded_review)

# One-hot encode the lists, e.g. turn [3,5] into [0.0,0.0,0.0,1.0,0.0,1.0,0.0,...]
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.  # set the indices listed in sequence to 1 in results[i]
    return results
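# Quick sanity check of the encoding (toy input added for illustration,
# not part of the original post): indices 3 and 5 should become 1.0
demo = vectorize_sequences([[3, 5]], dimension=8)
print(demo)  # [[0. 0. 0. 1. 0. 1. 0. 0.]]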
# handle input data
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
#print(x_train[0]) #[0. 1. 1. ... 0. 0. 0.]
# handle output data
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
# set aside a validation set
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]

# build model
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
# train model
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_val, y_val))
history_dict = history.history
#print(history_dict.keys()) #dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
# Plot training loss, validation loss, training accuracy and validation accuracy
import matplotlib.pyplot as plt
# plot loss
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc)+1)
plt.plot(epochs, loss, 'bo', label='Training loss') #blue o
plt.plot(epochs, val_loss, 'b', label='Validation loss') #blue solid line
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# plot accuracy
plt.clf()  # clear the figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

----------

Here the validation set is used to determine the best number of epochs for training the NN (the training set itself fits the NN's weights); the curves give epochs=4.
Then train an NN rebuilt with this setting on the full training data, commenting out history and all the code after it:

model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
print(results)
print(model.predict(x_test))
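model.predict returns one probability per review; to turn these into hard 0/1 labels, threshold at 0.5. A minimal sketch reusing the variables above (pred_probs and pred_labels are names introduced here):

pred_probs = model.predict(x_test)  # shape (25000, 1), values in (0, 1)
pred_labels = (pred_probs[:, 0] > 0.5).astype('float32')
print(np.mean(pred_labels == y_test))  # fraction correct; should match results[1]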

------------

Results of further experiments (changing one variable at a time):

[0.29455984374523164, 0.88312]  # original architecture, three layers in total

[0.2833905682277679, 0.88576]  # two layers in total

[0.30949291754722597, 0.87984]  # 32 units per hidden layer

[0.08610797638118267, 0.88308]  # mse loss instead of binary_crossentropy (loss value not directly comparable)

[0.32080167996406556, 0.87764]  # tanh instead of relu

The architecture originally chosen is a reasonable fit.
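For reference, here is one of these variants spelled out; a minimal sketch of the two-layer version, assuming the same data pipeline as in the full example (model2 is a name introduced here):

model2 = models.Sequential()
model2.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model2.add(layers.Dense(1, activation='sigmoid'))
model2.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model2.fit(x_train, y_train, epochs=4, batch_size=512)
print(model2.evaluate(x_test, y_test))  # [loss, accuracy]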
