Official Example Explained 4.42 (imdb.py) - Keras Study Notes IV

IMDB sentiment classification dataset

imdb.py handles the IMDB sentiment dataset: it loads the data and retrieves the word index (each word's numeric ID in the dictionary).

Keras package file directory

Keras examples file directory

Terminology

IMDB (Internet Movie Database, IMDb for short) is an online database of information about films, actors, television programs, TV stars, and film production. See the official site for an introduction.

About the dataset

The dataset files used in this example are:

imdb.npz

imdb_word_index.json

Dataset download

 

The data in imdb.npz has the following content and format:

[list([1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 19193, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 10311, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 12118, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32])
 list([1, 194, 1153, 194, 8255, 78, 228, 5, 6, 1463, 4369, 5012, 134, 26, 4, 715, 8, 118, 1634, 14, 394, 20, 13, 119, 954, 189, 102, 5, 207, 110, 3103, 21, 14, 69, 188, 8, 30, 23, 7, 4, 249, 126, 93, 4, 114, 9, 2300, 1523, 5, 647, 4, 116, 9, 35, 8163, 4, 229, 9, 340, 1322, 4, 118, 9, 4, 130, 4901, 19, 4, 1002, 5, 89, 29, 952, 46, 37, 4, 455, 9, 45, 43, 38, 1543, 1905, 398, 4, 1649, 26, 6853, 5, 163, 11, 3215, 10156, 4, 1153, 9, 194, 775, 7, 8255, 11596, 349, 2637, 148, 605, 15358, 8003, 15, 123, 125, 68, 2, 6853, 15, 349, 165, 4362, 98, 5, 4, 228, 9, 43, 2, 1157, 15, 299, 120, 5, 120, 174, 11, 220, 175, 136, 50, 9, 4373, 228, 8255, 5, 2, 656, 245, 2350, 5, 4, 9837, 131, 152, 491, 18, 2, 32, 7464, 1212, 14, 9, 6, 371, 78, 22, 625, 64, 1382, 9, 8, 168, 145, 23, 4, 1690, 15, 16, 4, 1355, 5, 28, 6, 52, 154, 462, 33, 89, 78, 285, 16, 145, 95])
 list([1, 14, 47, 8, 30, 31, 7, 4, 249, 108, 7, 4, 5974, 54, 61, 369, 13, 71, 149, 14, 22, 112, 4, 2401, 311, 12, 16, 3711, 33, 75, 43, 1829, 296, 4, 86, 320, 35, 534, 19, 263, 4821, 1301, 4, 1873, 33, 89, 78, 12, 66, 16, 4, 360, 7, 4, 58, 316, 334, 11, 4, 1716, 43, 645, 662, 8, 257, 85, 1200, 42, 1228, 2578, 83, 68, 3912, 15, 36, 165, 1539, 278, 36, 69, 2, 780, 8, 106, 14, 6905, 1338, 18, 6, 22, 12, 215, 28, 610, 40, 6, 87, 326, 23, 2300, 21, 23, 22, 12, 272, 40, 57, 31, 11, 4, 22, 47, 6, 2307, 51, 9, 170, 23, 595, 116, 595, 1352, 13, 191, 79, 638, 89, 2, 14, 9, 8, 106, 607, 624, 35, 534, 6, 227, 7, 129, 113])
 ...
 list([1, 11, 6, 230, 245, 6401, 9, 6, 1225, 446, 2, 45, 2174, 84, 8322, 4007, 21, 4, 912, 84, 14532, 325, 725, 134, 15271, 1715, 84, 5, 36, 28, 57, 1099, 21, 8, 140, 8, 703, 5, 11656, 84, 56, 18, 1644, 14, 9, 31, 7, 4, 9406, 1209, 2295, 2, 1008, 18, 6, 20, 207, 110, 563, 12, 8, 2901, 17793, 8, 97, 6, 20, 53, 4767, 74, 4, 460, 364, 1273, 29, 270, 11, 960, 108, 45, 40, 29, 2961, 395, 11, 6, 4065, 500, 7, 14492, 89, 364, 70, 29, 140, 4, 64, 4780, 11, 4, 2678, 26, 178, 4, 529, 443, 17793, 5, 27, 710, 117, 2, 8123, 165, 47, 84, 37, 131, 818, 14, 595, 10, 10, 61, 1242, 1209, 10, 10, 288, 2260, 1702, 34, 2901, 17793, 4, 65, 496, 4, 231, 7, 790, 5, 6, 320, 234, 2766, 234, 1119, 1574, 7, 496, 4, 139, 929, 2901, 17793, 7750, 5, 4241, 18, 4, 8497, 13164, 250, 11, 1818, 7561, 4, 4217, 5408, 747, 1115, 372, 1890, 1006, 541, 9303, 7, 4, 59, 11027, 4, 3586, 2])
 list([1, 1446, 7079, 69, 72, 3305, 13, 610, 930, 8, 12, 582, 23, 5, 16, 484, 685, 54, 349, 11, 4120, 2959, 45, 58, 1466, 13, 197, 12, 16, 43, 23, 2, 5, 62, 30, 145, 402, 11, 4131, 51, 575, 32, 61, 369, 71, 66, 770, 12, 1054, 75, 100, 2198, 8, 4, 105, 37, 69, 147, 712, 75, 3543, 44, 257, 390, 5, 69, 263, 514, 105, 50, 286, 1814, 23, 4, 123, 13, 161, 40, 5, 421, 4, 116, 16, 897, 13, 2, 40, 319, 5872, 112, 6700, 11, 4803, 121, 25, 70, 3468, 4, 719, 3798, 13, 18, 31, 62, 40, 8, 7200, 4, 2, 7, 14, 123, 5, 942, 25, 8, 721, 12, 145, 5, 202, 12, 160, 580, 202, 12, 6, 52, 58, 11418, 92, 401, 728, 12, 39, 14, 251, 8, 15, 251, 5, 2, 12, 38, 84, 80, 124, 12, 9, 23])
 list([1, 17, 6, 194, 337, 7, 4, 204, 22, 45, 254, 8, 106, 14, 123, 4, 12815, 270, 14437, 5, 16923, 12255, 732, 2098, 101, 405, 39, 14, 1034, 4, 1310, 9, 115, 50, 305, 12, 47, 4, 168, 5, 235, 7, 38, 111, 699, 102, 7, 4, 4039, 9245, 9, 24, 6, 78, 1099, 17, 2345, 16553, 21, 27, 9685, 6139, 5, 2, 1603, 92, 1183, 4, 1310, 7, 4, 204, 42, 97, 90, 35, 221, 109, 29, 127, 27, 118, 8, 97, 12, 157, 21, 6789, 2, 9, 6, 66, 78, 1099, 4, 631, 1191, 5, 2642, 272, 191, 1070, 6, 7585, 8, 2197, 2, 10755, 544, 5, 383, 1271, 848, 1468, 12183, 497, 16876, 8, 1597, 8778, 19280, 21, 60, 27, 239, 9, 43, 8368, 209, 405, 10, 10, 12, 764, 40, 4, 248, 20, 12, 16, 5, 174, 1791, 72, 7, 51, 6, 1739, 22, 4, 204, 131, 9])] train sequences

Each list is one review (sentence), and each number in a list is the index of a word. How do you map an index back to its word? See the imdb_word_index.json file.
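
As a side note, the cached archive can be inspected directly with NumPy. Below is a minimal sketch, assuming imdb.npz has already been downloaded (the path is just an example); note that load_data() additionally shuffles the reviews and shifts the word indices by index_from, so the raw arrays will not match its output exactly.

import numpy as np

# allow_pickle=True is required with NumPy >= 1.16.3, because each array element
# is a Python list (reviews have varying lengths) stored in an object array.
with np.load('imdb.npz', allow_pickle=True) as f:
    x_train, y_train = f['x_train'], f['y_train']
    x_test, y_test = f['x_test'], f['y_test']

print(len(x_train), 'train sequences')   # 25000 reviews, each a list of word indices
print(y_train[:10])                      # labels: 1 = positive, 0 = negative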

Format of the imdb_word_index.json file:

{"fawn": 34701, "tsukino": 52006,..., "paget": 18509, "expands": 20597}

There are 88,584 words in total, stored as key-value pairs: the key is the word and the value is the word's index.

The more often a word occurs in the corpus, the smaller its index. For example, 'the' occurs most often and has index 1.
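
Putting the two files together: load_data() shifts every word index up by index_from (3 by default) and reserves 0, 1 and 2 for padding, start and out-of-vocabulary markers, so a review can be decoded back to text roughly as follows (a sketch; the <PAD>/<START>/<OOV> placeholder names are purely illustrative):

from keras.datasets import imdb

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=10000)

word_index = imdb.get_word_index()        # {"word": index, ...} as shown above
# Invert the mapping, accounting for the index_from=3 offset and reserved symbols.
index_to_word = {index + 3: word for word, index in word_index.items()}
index_to_word[0] = '<PAD>'
index_to_word[1] = '<START>'
index_to_word[2] = '<OOV>'

decoded = ' '.join(index_to_word.get(i, '?') for i in x_train[0])
print(decoded)   # e.g. "<START> this film was just brilliant ..."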

 

Annotated code

"""IMDB sentiment classification dataset.
IMDB情感分类数据集
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from ..utils.data_utils import get_file
from ..preprocessing.sequence import _remove_long_seq
import numpy as np
import json
import warnings


def load_data(path='imdb.npz', num_words=None, skip_top=0,
              maxlen=None, seed=113,
              start_char=1, oov_char=2, index_from=3, **kwargs):
    """Loads the IMDB dataset.
    加载IMDB数据集

    # Arguments
        path: where to cache the data (relative to `~/.keras/dataset`).
        path: 缓存数据位置(见‘~/.Kalas/DataSet’)。
        num_words: max number of words to include. Words are ranked
            by how often they occur (in the training set) and only
            the most frequent words are kept
        num_words: 包括的最大单词数。单词按它们出现的频次排列(在训练集中),只有最频繁的单词被保存下来。
        skip_top: skip the top N most frequently occurring words
            (which may not be informative).
        skip_top:跳过n个最经常出现的单词(这可能不是信息性(有价值,例如,a、of、in、the等)的)。
        maxlen: sequences longer than this will be filtered out.
        manlen:序列最大长度,超过该长度截断
        seed: random seed for sample shuffling.
        seed: 用于样品筛选(顺序调整)的随机种子。
        start_char: The start of a sequence will be marked with this character.
            Set to 1 because 0 is usually the padding character.
        start_char: 序列的开始将用字符来标记。设置为1,因为0通常是填充字符。
        oov_char: words that were cut out because of the `num_words`
            or `skip_top` limit will be replaced with this character.
        oov_char: 由于“num_words”或“skip_top”限制而被删掉的单词将被这个字符所取代。
        index_from: index actual words with this index and higher.
        index_from: 用这个索引和更高的索引实际单词。

    # Returns
    返回:
        Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
        数字元组:(x_train, y_train), (x_test, y_test)`.

    # Raises
    补充
        ValueError: in case `maxlen` is so low
            that no input sequence could be kept.
        ValueError:如果“maxlen”较少,没有输入序列可以保持。

    Note that the 'out of vocabulary' character is only used for
    words that were present in the training set but are not included
    because they're not making the `num_words` cut here.
    Words that were not seen in the training set but are in the test set
    have simply been skipped.
    请注意,“out of vocabulary”字符只用于训练集中存在的单词,但不包括,因为它们没有在这里剪切“num_words”。
    在训练集中没有看到但在测试集中的单词被简单地跳过。

    out of vocabulary,非推荐术语
    oov,非推荐术语
    """
    # Legacy support
    if 'nb_words' in kwargs:
        warnings.warn('The `nb_words` argument in `load_data` '
                      'has been renamed `num_words`.')
        num_words = kwargs.pop('nb_words')
    if kwargs:
        raise TypeError('Unrecognized keyword arguments: ' + str(kwargs))

    path = get_file(path,
                    origin='https://s3.amazonaws.com/text-datasets/imdb.npz',
                    file_hash='599dadb1135973df5b59232a0e9a887c')
    with np.load(path) as f:
        x_train, labels_train = f['x_train'], f['y_train']
        x_test, labels_test = f['x_test'], f['y_test']

    # Shuffle the training and test sets (labels follow the same permutation)
    np.random.seed(seed)
    indices = np.arange(len(x_train))
    np.random.shuffle(indices)
    x_train = x_train[indices]
    labels_train = labels_train[indices]

    indices = np.arange(len(x_test))
    np.random.shuffle(indices)
    x_test = x_test[indices]
    labels_test = labels_test[indices]

    # Concatenate train and test so the offsets and filters below apply to both
    xs = np.concatenate([x_train, x_test])
    labels = np.concatenate([labels_train, labels_test])

    # Prepend start_char and shift all word indices up by index_from
    if start_char is not None:
        xs = [[start_char] + [w + index_from for w in x] for x in xs]
    elif index_from:
        xs = [[w + index_from for w in x] for x in xs]

    if maxlen:
        xs, labels = _remove_long_seq(maxlen, xs, labels)
        if not xs:
            raise ValueError('After filtering for sequences shorter than maxlen=' +
                             str(maxlen) + ', no sequence was kept. '
                             'Increase maxlen.')
    if not num_words:
        num_words = max([max(x) for x in xs])

    # by convention, use 2 as the OOV word
    # reserve 'index_from' (=3 by default) characters:
    # 0 (padding), 1 (start), 2 (OOV)

    if oov_char is not None:
        xs = [[w if (skip_top <= w < num_words) else oov_char for w in x] for x in xs]
    else:
        xs = [[w for w in x if skip_top <= w < num_words] for x in xs]

    # Split back into the original train/test partitions
    idx = len(x_train)
    x_train, y_train = np.array(xs[:idx]), np.array(labels[:idx])
    x_test, y_test = np.array(xs[idx:]), np.array(labels[idx:])

    return (x_train, y_train), (x_test, y_test)


def get_word_index(path='imdb_word_index.json'):
    """Retrieves the dictionary mapping word indices back to words.
    检索字典单词映射并转换为单词

    # Arguments
    参数
        path: where to cache the data (relative to `~/.keras/dataset`).
        path: 缓存数据位置(见‘~/.Kalas/DataSet’)。

    # Returns
    返回
        The word index dictionary.
        词索引字典
    """
    path = get_file(path,
                    origin='https://s3.amazonaws.com/text-datasets/imdb_word_index.json',
                    file_hash='bfafd718b763782e994055a2d397834f')
    with open(path) as f:
        return json.load(f)
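
To make the oov_char / skip_top filtering at the end of load_data concrete, here is a toy illustration (made-up indices, not real data):

skip_top, num_words, oov_char = 2, 6, 2
seq = [1, 3, 5, 7, 2, 4]

# With an oov_char, out-of-range indices are replaced by it.
print([w if (skip_top <= w < num_words) else oov_char for w in seq])  # [2, 3, 5, 2, 2, 4]

# With oov_char=None, out-of-range indices are simply dropped.
print([w for w in seq if skip_top <= w < num_words])                  # [3, 5, 2, 4]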

Code execution
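
A minimal sketch of calling the two functions, with parameter values chosen only as examples:

from keras.datasets import imdb

# Keep only the 5000 most frequent words, skip the 20 most common ones,
# and drop reviews longer than 400 word indices.
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=5000,
                                                      skip_top=20,
                                                      maxlen=400)

print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')

word_index = imdb.get_word_index()
print(len(word_index))       # 88584 entries, word -> index
print(word_index['the'])     # 1, the most frequent word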

 

 

More about Keras

English: https://keras.io/

Chinese: http://keras-cn.readthedocs.io/en/latest/

Example downloads

https://github.com/keras-team/keras

https://github.com/keras-team/keras/tree/master/examples

Full project download

For readers without download credits, add QQ 452205574 to access the shared folder.

Includes: code, datasets (images), generated models, library installation files, etc.

 

 
