NLP - Text Classification (2)

1 Tasks

  1. Datasets
    Datasets: one Chinese and one English dataset

Chinese dataset: THUCNews
THUCNews subset: https://pan.baidu.com/s/1hugrfRu  password: qfud
English dataset: IMDB dataset (Sentiment Analysis)

  2. Download and explore the IMDB dataset

Refer to the official TensorFlow tutorial: Text classification with movie reviews (影评文本分类 | TensorFlow)
Kesci.com (科赛)

  3. Download and explore the THUCNews dataset

Refer to the dataset and preprocessing sections of the blog post: CNN character-level Chinese text classification with TensorFlow (CNN字符级中文文本分类-基于TensorFlow实现 - 一蓑烟雨 - CSDN博客)
Reference code: text-classification-cnn-rnn/cnews_loader.py at mas…

  4. Learn the basic concepts of recall, precision, ROC curve, AUC, and PR curve (a short scikit-learn sketch follows after the reference below)

Reference 1: Class imbalance in machine learning (2): ROC and PR curves (机器学习之类别不平衡问题 (2) ROC和PR曲线_慕课手记)
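As a minimal sketch of these metrics with scikit-learn (scikit-learn is assumed to be installed; y_true and y_score are made-up toy values, not taken from either dataset):

from sklearn.metrics import precision_score, recall_score, roc_curve, auc, precision_recall_curve

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                      # ground-truth labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.6, 0.45]    # predicted probabilities
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]       # hard predictions at threshold 0.5

print("precision:", precision_score(y_true, y_pred))
print("recall:", recall_score(y_true, y_pred))

fpr, tpr, _ = roc_curve(y_true, y_score)                 # points on the ROC curve
print("AUC:", auc(fpr, tpr))                             # area under the ROC curve

precision, recall, _ = precision_recall_curve(y_true, y_score)  # points on the PR curve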

2 Downloading and exploring the IMDB dataset

# Import packages
import tensorflow as tf
from tensorflow import keras

import numpy as np

# Import the IMDB dataset
imdb = keras.datasets.imdb

# Explore the dataset: load the train/test splits (keep the 10,000 most frequent words)
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

# Analyze the dataset: inspect the load_data documentation
help(imdb.load_data)

Help on function load_data in module tensorflow.python.keras.datasets.imdb:

load_data(path='imdb.npz', num_words=None, skip_top=0, maxlen=None, seed=113, start_char=1, oov_char=2, index_from=3, **kwargs)
    Loads the IMDB dataset.
    
    Arguments:
        path: where to cache the data (relative to `~/.keras/dataset`).
        num_words: max number of words to include. Words are ranked
            by how often they occur (in the training set) and only
            the most frequent words are kept
        skip_top: skip the top N most frequently occurring words
            (which may not be informative).
        maxlen: sequences longer than this will be filtered out.
        seed: random seed for sample shuffling.
        start_char: The start of a sequence will be marked with this character.
            Set to 1 because 0 is usually the padding character.
        oov_char: words that were cut out because of the `num_words`
            or `skip_top` limit will be replaced with this character.
        index_from: index actual words with this index and higher.
        **kwargs: Used for backwards compatibility.
    
    Returns:
        Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
    
    Raises:
        ValueError: in case `maxlen` is so low
            that no input sequence could be kept.
    
    Note that the 'out of vocabulary' character is only used for
    words that were present in the training set but are not included
    because they're not making the `num_words` cut here.
    Words that were not seen in the training set but are in the test set
    have simply been skipped.
print("Training shape: {}, labels: {}".format(train_data.shape, train_labels.shape))
Training shape: (25000,), labels: (25000,)
print(train_data[:5])
print(train_labels[:5])
[list([1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32])
 list([1, 194, 1153, 194, 8255, 78, 228, 5, 6, 1463, 4369, 5012, 134, 26, 4, 715, 8, 118, 1634, 14, 394, 20, 13, 119, 954, 189, 102, 5, 207, 110, 3103, 21, 14, 69, 188, 8, 30, 23, 7, 4, 249, 126, 93, 4, 114, 9, 2300, 1523, 5, 647, 4, 116, 9, 35, 8163, 4, 229, 9, 340, 1322, 4, 118, 9, 4, 130, 4901, 19, 4, 1002, 5, 89, 29, 952, 46, 37, 4, 455, 9, 45, 43, 38, 1543, 1905, 398, 4, 1649, 26, 6853, 5, 163, 11, 3215, 2, 4, 1153, 9, 194, 775, 7, 8255, 2, 349, 2637, 148, 605, 2, 8003, 15, 123, 125, 68, 2, 6853, 15, 349, 165, 4362, 98, 5, 4, 228, 9, 43, 2, 1157, 15, 299, 120, 5, 120, 174, 11, 220, 175, 136, 50, 9, 4373, 228, 8255, 5, 2, 656, 245, 2350, 5, 4, 9837, 131, 152, 491, 18, 2, 32, 7464, 1212, 14, 9, 6, 371, 78, 22, 625, 64, 1382, 9, 8, 168, 145, 23, 4, 1690, 15, 16, 4, 1355, 5, 28, 6, 52, 154, 462, 33, 89, 78, 285, 16, 145, 95])
 list([1, 14, 47, 8, 30, 31, 7, 4, 249, 108, 7, 4, 5974, 54, 61, 369, 13, 71, 149, 14, 22, 112, 4, 2401, 311, 12, 16, 3711, 33, 75, 43, 1829, 296, 4, 86, 320, 35, 534, 19, 263, 4821, 1301, 4, 1873, 33, 89, 78, 12, 66, 16, 4, 360, 7, 4, 58, 316, 334, 11, 4, 1716, 43, 645, 662, 8, 257, 85, 1200, 42, 1228, 2578, 83, 68, 3912, 15, 36, 165, 1539, 278, 36, 69, 2, 780, 8, 106, 14, 6905, 1338, 18, 6, 22, 12, 215, 28, 610, 40, 6, 87, 326, 23, 2300, 21, 23, 22, 12, 272, 40, 57, 31, 11, 4, 22, 47, 6, 2307, 51, 9, 170, 23, 595, 116, 595, 1352, 13, 191, 79, 638, 89, 2, 14, 9, 8, 106, 607, 624, 35, 534, 6, 227, 7, 129, 113])
 list([1, 4, 2, 2, 33, 2804, 4, 2040, 432, 111, 153, 103, 4, 1494, 13, 70, 131, 67, 11, 61, 2, 744, 35, 3715, 761, 61, 5766, 452, 9214, 4, 985, 7, 2, 59, 166, 4, 105, 216, 1239, 41, 1797, 9, 15, 7, 35, 744, 2413, 31, 8, 4, 687, 23, 4, 2, 7339, 6, 3693, 42, 38, 39, 121, 59, 456, 10, 10, 7, 265, 12, 575, 111, 153, 159, 59, 16, 1447, 21, 25, 586, 482, 39, 4, 96, 59, 716, 12, 4, 172, 65, 9, 579, 11, 6004, 4, 1615, 5, 2, 7, 5168, 17, 13, 7064, 12, 19, 6, 464, 31, 314, 11, 2, 6, 719, 605, 11, 8, 202, 27, 310, 4, 3772, 3501, 8, 2722, 58, 10, 10, 537, 2116, 180, 40, 14, 413, 173, 7, 263, 112, 37, 152, 377, 4, 537, 263, 846, 579, 178, 54, 75, 71, 476, 36, 413, 263, 2504, 182, 5, 17, 75, 2306, 922, 36, 279, 131, 2895, 17, 2867, 42, 17, 35, 921, 2, 192, 5, 1219, 3890, 19, 2, 217, 4122, 1710, 537, 2, 1236, 5, 736, 10, 10, 61, 403, 9, 2, 40, 61, 4494, 5, 27, 4494, 159, 90, 263, 2311, 4319, 309, 8, 178, 5, 82, 4319, 4, 65, 15, 9225, 145, 143, 5122, 12, 7039, 537, 746, 537, 537, 15, 7979, 4, 2, 594, 7, 5168, 94, 9096, 3987, 2, 11, 2, 4, 538, 7, 1795, 246, 2, 9, 2, 11, 635, 14, 9, 51, 408, 12, 94, 318, 1382, 12, 47, 6, 2683, 936, 5, 6307, 2, 19, 49, 7, 4, 1885, 2, 1118, 25, 80, 126, 842, 10, 10, 2, 2, 4726, 27, 4494, 11, 1550, 3633, 159, 27, 341, 29, 2733, 19, 4185, 173, 7, 90, 2, 8, 30, 11, 4, 1784, 86, 1117, 8, 3261, 46, 11, 2, 21, 29, 9, 2841, 23, 4, 1010, 2, 793, 6, 2, 1386, 1830, 10, 10, 246, 50, 9, 6, 2750, 1944, 746, 90, 29, 2, 8, 124, 4, 882, 4, 882, 496, 27, 2, 2213, 537, 121, 127, 1219, 130, 5, 29, 494, 8, 124, 4, 882, 496, 4, 341, 7, 27, 846, 10, 10, 29, 9, 1906, 8, 97, 6, 236, 2, 1311, 8, 4, 2, 7, 31, 7, 2, 91, 2, 3987, 70, 4, 882, 30, 579, 42, 9, 12, 32, 11, 537, 10, 10, 11, 14, 65, 44, 537, 75, 2, 1775, 3353, 2, 1846, 4, 2, 7, 154, 5, 4, 518, 53, 2, 2, 7, 3211, 882, 11, 399, 38, 75, 257, 3807, 19, 2, 17, 29, 456, 4, 65, 7, 27, 205, 113, 10, 10, 2, 4, 2, 2, 9, 242, 4, 91, 1202, 2, 5, 2070, 307, 22, 7, 5168, 126, 93, 40, 2, 13, 188, 1076, 3222, 19, 4, 2, 7, 2348, 537, 23, 53, 537, 21, 82, 40, 2, 13, 2, 14, 280, 13, 219, 4, 2, 431, 758, 859, 4, 953, 1052, 2, 7, 5991, 5, 94, 40, 25, 238, 60, 2, 4, 2, 804, 2, 7, 4, 9941, 132, 8, 67, 6, 22, 15, 9, 283, 8, 5168, 14, 31, 9, 242, 955, 48, 25, 279, 2, 23, 12, 1685, 195, 25, 238, 60, 796, 2, 4, 671, 7, 2804, 5, 4, 559, 154, 888, 7, 726, 50, 26, 49, 7008, 15, 566, 30, 579, 21, 64, 2574])
 list([1, 249, 1323, 7, 61, 113, 10, 10, 13, 1637, 14, 20, 56, 33, 2401, 18, 457, 88, 13, 2626, 1400, 45, 3171, 13, 70, 79, 49, 706, 919, 13, 16, 355, 340, 355, 1696, 96, 143, 4, 22, 32, 289, 7, 61, 369, 71, 2359, 5, 13, 16, 131, 2073, 249, 114, 249, 229, 249, 20, 13, 28, 126, 110, 13, 473, 8, 569, 61, 419, 56, 429, 6, 1513, 18, 35, 534, 95, 474, 570, 5, 25, 124, 138, 88, 12, 421, 1543, 52, 725, 6397, 61, 419, 11, 13, 1571, 15, 1543, 20, 11, 4, 2, 5, 296, 12, 3524, 5, 15, 421, 128, 74, 233, 334, 207, 126, 224, 12, 562, 298, 2167, 1272, 7, 2601, 5, 516, 988, 43, 8, 79, 120, 15, 595, 13, 784, 25, 3171, 18, 165, 170, 143, 19, 14, 5, 7224, 6, 226, 251, 7, 61, 113])]
[1 0 0 1 0]
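Since load_data was called with num_words=10000, every word index in the data should stay below 10000. A quick sanity check (a small added sketch; the exact maximum depends on the data):

# Check the largest word index actually present in the training data
max_index = max(max(seq) for seq in train_data)
print("largest word index in train_data:", max_index)   # expected to be below 10000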
# Lengths of the first two reviews
len(train_data[0]), len(train_data[1])

(218, 189)
# First three labels
print(train_labels[:3])

[1 0 0]
# Get the word -> integer index dictionary
word_index = imdb.get_word_index()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb_word_index.json
1646592/1641221 [==============================] - 0s 0us/step
# reverse_word_index is the inverted dictionary: {index: word}
reverse_word_index=dict([(value, key) for (key, value) in word_index.items()])
# Look at the words for the first 5 indices; note that indices start at 1
for i in range(5):
    print(reverse_word_index[i+1])
the
and
a
of
to
# Add these special tokens at the beginning, reserving the first few indices
# The previous step showed that indices start at 1, so shifting by +3 is correct: push every index back by 3, then add the special tokens
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3
# Convert the {word: index} dictionary into {index: word} form
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
for i in range(5):
    print(reverse_word_index[i])
<PAD>
<START>
<UNK>
<UNUSED>
the
# Define a function that converts an index sequence back into word text, joined with spaces
# Decode a review: text holds indices that need to be mapped back to words
def decode_review(text):
    # Map each index to its word; unknown indices fall back to '?'. Join everything into one string
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
text0=decode_review(train_data[0])
print(text0)
<START> this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert <UNK> is an amazing actor and now the same being director <UNK> father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for <UNK> and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also <UNK> to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the <UNK> list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all
# Standardize review lengths (pad/truncate)
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
                                                        value=word_index["<PAD>"],
                                                        padding='post',
                                                        maxlen=256)


test_data = keras.preprocessing.sequence.pad_sequences(test_data,
                                                       value=word_index["<PAD>"],
                                                       padding='post',
                                                       maxlen=256)
print(train_data[0])
[   1   14   22   16   43  530  973 1622 1385   65  458 4468   66 3941
    4  173   36  256    5   25  100   43  838  112   50  670    2    9
   35  480  284    5  150    4  172  112  167    2  336  385   39    4
  172 4536 1111   17  546   38   13  447    4  192   50   16    6  147
 2025   19   14   22    4 1920 4613  469    4   22   71   87   12   16
   43  530   38   76   15   13 1247    4   22   17  515   17   12   16
  626   18    2    5   62  386   12    8  316    8  106    5    4 2223
 5244   16  480   66 3785   33    4  130   12   16   38  619    5   25
  124   51   36  135   48   25 1415   33    6   22   12  215   28   77
   52    5   14  407   16   82    2    8    4  107  117 5952   15  256
    4    2    7 3766    5  723   36   71   43  530  476   26  400  317
   46    7    4    2 1029   13  104   88    4  381   15  297   98   32
 2071   56   26  141    6  194 7486   18    4  226   22   21  134  476
   26  480    5  144   30 5535   18   51   36   28  224   92   25  104
    4  226   65   16   38 1334   88   12   16  283    5   16 4472  113
  103   32   15   16 5345   19  178   32    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0]
len(train_data[0]), len(train_data[1])
(256, 256)
# The input dimension is the vocabulary size used for the movie reviews (10,000 words)
vocab_size = 10000

model = keras.Sequential()
# The first layer is an Embedding layer. It looks up the embedding vector for each word index in the integer-encoded vocabulary.
# These vectors are learned as the model trains and add a dimension to the output array, giving shape (batch, sequence, embedding).
model.add(keras.layers.Embedding(vocab_size, 16))
# The GlobalAveragePooling1D layer averages over the sequence dimension, returning a fixed-length output vector for each sample.
# This lets the model handle inputs of varying length in the simplest possible way.
model.add(keras.layers.GlobalAveragePooling1D())
# The fixed-length output vector is fed into a fully connected (Dense) layer with 16 hidden units.
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dropout(0.5))
# The final layer is densely connected to a single output node. With a sigmoid activation, the output is a float between 0 and 1 representing a probability or confidence level.
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))

model.summary()
WARNING:tensorflow:From F:\anaconda1\envs\baseline\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From F:\anaconda1\envs\baseline\lib\site-packages\tensorflow\python\keras\layers\core.py:143: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, None, 16)          160000    
_________________________________________________________________
global_average_pooling1d (Gl (None, 16)                0         
_________________________________________________________________
dense (Dense)                (None, 16)                272       
_________________________________________________________________
dropout (Dropout)            (None, 16)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 17        
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
_________________________________________________________________
# Specify the model's loss function and optimizer
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Build a validation set
x_val = train_data[:10000]
partial_x_train = train_data[10000:]

y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]

# Train the model
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)
Train on 15000 samples, validate on 10000 samples
WARNING:tensorflow:From F:\anaconda1\envs\baseline\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/40
15000/15000 [==============================] - 2s 122us/sample - loss: 0.6917 - acc: 0.5569 - val_loss: 0.6895 - val_acc: 0.7405
Epoch 2/40
15000/15000 [==============================] - 1s 57us/sample - loss: 0.6856 - acc: 0.6554 - val_loss: 0.6815 - val_acc: 0.7569
Epoch 3/40
15000/15000 [==============================] - 1s 56us/sample - loss: 0.6736 - acc: 0.7032 - val_loss: 0.6651 - val_acc: 0.7628
Epoch 4/40
15000/15000 [==============================] - 1s 59us/sample - loss: 0.6508 - acc: 0.7277 - val_loss: 0.6376 - val_acc: 0.7629
Epoch 5/40
15000/15000 [==============================] - 1s 75us/sample - loss: 0.6161 - acc: 0.7631 - val_loss: 0.6003 - val_acc: 0.7981
Epoch 6/40
15000/15000 [==============================] - 1s 57us/sample - loss: 0.5762 - acc: 0.7805 - val_loss: 0.5576 - val_acc: 0.8163
Epoch 7/40
15000/15000 [==============================] - 1s 72us/sample - loss: 0.5294 - acc: 0.8121 - val_loss: 0.5114 - val_acc: 0.8315
Epoch 8/40
15000/15000 [==============================] - 1s 63us/sample - loss: 0.4825 - acc: 0.8334 - val_loss: 0.4690 - val_acc: 0.8431
Epoch 9/40
15000/15000 [==============================] - 1s 58us/sample - loss: 0.4452 - acc: 0.8420 - val_loss: 0.4324 - val_acc: 0.8527
Epoch 10/40
15000/15000 [==============================] - 1s 58us/sample - loss: 0.4094 - acc: 0.8567 - val_loss: 0.4030 - val_acc: 0.8585
Epoch 11/40
15000/15000 [==============================] - 1s 54us/sample - loss: 0.3800 - acc: 0.8647 - val_loss: 0.3784 - val_acc: 0.8671
Epoch 12/40
15000/15000 [==============================] - 1s 53us/sample - loss: 0.3556 - acc: 0.8763 - val_loss: 0.3596 - val_acc: 0.8689
Epoch 13/40
15000/15000 [==============================] - 1s 55us/sample - loss: 0.3336 - acc: 0.8814 - val_loss: 0.3432 - val_acc: 0.8734
Epoch 14/40
15000/15000 [==============================] - 1s 55us/sample - loss: 0.3146 - acc: 0.8876 - val_loss: 0.3312 - val_acc: 0.8765
Epoch 15/40
15000/15000 [==============================] - 1s 55us/sample - loss: 0.2984 - acc: 0.8945 - val_loss: 0.3213 - val_acc: 0.8780
Epoch 16/40
15000/15000 [==============================] - 1s 58us/sample - loss: 0.2866 - acc: 0.8998 - val_loss: 0.3125 - val_acc: 0.8780
Epoch 17/40
15000/15000 [==============================] - 1s 55us/sample - loss: 0.2715 - acc: 0.9048 - val_loss: 0.3053 - val_acc: 0.8810
Epoch 18/40
15000/15000 [==============================] - 1s 57us/sample - loss: 0.2586 - acc: 0.9118 - val_loss: 0.3001 - val_acc: 0.8818
Epoch 19/40
15000/15000 [==============================] - 1s 56us/sample - loss: 0.2458 - acc: 0.9154 - val_loss: 0.2947 - val_acc: 0.8844
Epoch 20/40
15000/15000 [==============================] - 1s 59us/sample - loss: 0.2338 - acc: 0.9205 - val_loss: 0.2911 - val_acc: 0.8832
Epoch 21/40
15000/15000 [==============================] - 1s 59us/sample - loss: 0.2220 - acc: 0.9239 - val_loss: 0.2879 - val_acc: 0.8841
Epoch 22/40
15000/15000 [==============
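After training finishes, a natural next step (mirroring the TensorFlow tutorial this walkthrough follows) is to evaluate the model on the test set and plot the training history. A minimal sketch, assuming matplotlib is installed; model, history, test_data, and test_labels are the objects defined above:

# Evaluate on the held-out test set: returns [loss, accuracy]
results = model.evaluate(test_data, test_labels)
print(results)

# Plot training vs. validation loss from the History object returned by fit()
import matplotlib.pyplot as plt

history_dict = history.history
epochs = range(1, len(history_dict['loss']) + 1)

plt.plot(epochs, history_dict['loss'], 'bo', label='Training loss')
plt.plot(epochs, history_dict['val_loss'], 'b', label='Validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()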