Natural Language Processing

1 Exploring and Analyzing the IMDB Dataset

import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)

Download the IMDB dataset

imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

Explore the data

print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
len(train_data[0]), len(train_data[1])
train_data[0] 

Convert the integers back to words

# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()

# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3

reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
decode_review(train_data[0])
"<START> this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert <UNK> is an amazing actor and now the same being director <UNK> father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for <UNK> and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also <UNK> to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the <UNK> list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all"

Prepare the data

The reviews (arrays of integers) must be converted to tensors before they can be fed into the neural network. This conversion can be done in either of two ways:

One-hot encode the arrays, converting them into vectors of 0s and 1s. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. This vector is then fed to the first layer of the network, a Dense layer that can handle floating-point vector data. This approach is memory intensive, though, requiring a matrix of size num_words * num_reviews. (A minimal sketch of this approach follows, for illustration only.)
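For illustration, here is a minimal sketch of this first (multi-hot) approach; multi_hot_encode is a hypothetical helper name and is not used anywhere else in this tutorial:

import numpy as np

def multi_hot_encode(sequences, dimension=10000):
    # Start from an all-zero matrix of shape (num_reviews, dimension)
    results = np.zeros((len(sequences), dimension))
    for i, word_indices in enumerate(sequences):
        results[i, word_indices] = 1.0  # set the indices of the words that occur to 1
    return results

# multi_hot_encode([[3, 5]])[0] is a 10000-dimensional vector with 1s only at indices 3 and 5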

Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape max_length * num_reviews. We can use an Embedding layer capable of handling this shape as the first layer in the network.

In this tutorial, we'll use the second approach.

Since the reviews must all be the same length, we'll use the pad_sequences function to standardize the lengths:

train_data = keras.preprocessing.sequence.pad_sequences(train_data,
                                                        value=word_index["<PAD>"],
                                                        padding='post',
                                                        maxlen=256)

test_data = keras.preprocessing.sequence.pad_sequences(test_data,
                                                       value=word_index["<PAD>"],
                                                       padding='post',
                                                       maxlen=256)
# Length of the samples after padding
len(train_data[0]), len(train_data[1])
(256, 256)
print(train_data[0])
[   1   14   22   16   43  530  973 1622 1385   65  458 4468   66 3941    4
  173   36  256    5   25  100   43  838  112   50  670    2    9   35  480
  284    5  150    4  172  112  167    2  336  385   39    4  172 4536 1111
   17  546   38   13  447    4  192   50   16    6  147 2025   19   14   22
    4 1920 4613  469    4   22   71   87   12   16   43  530   38   76   15
   13 1247    4   22   17  515   17   12   16  626   18    2    5   62  386
   12    8  316    8  106    5    4 2223 5244   16  480   66 3785   33    4
  130   12   16   38  619    5   25  124   51   36  135   48   25 1415   33
    6   22   12  215   28   77   52    5   14  407   16   82    2    8    4
  107  117 5952   15  256    4    2    7 3766    5  723   36   71   43  530
  476   26  400  317   46    7    4    2 1029   13  104   88    4  381   15
  297   98   32 2071   56   26  141    6  194 7486   18    4  226   22   21
  134  476   26  480    5  144   30 5535   18   51   36   28  224   92   25
  104    4  226   65   16   38 1334   88   12   16  283    5   16 4472  113
  103   32   15   16 5345   19  178   32    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0]

Build the model

A neural network is created by stacking layers, which requires two main architectural decisions:

How many layers to use in the model?
How many hidden units to use for each layer?
In this example, the input data consists of word-index arrays, and the labels to predict are either 0 or 1. Let's build a model for this problem:

# input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000

model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, None, 16)          160000    
_________________________________________________________________
global_average_pooling1d (Gl (None, 16)                0         
_________________________________________________________________
dense (Dense)                (None, 16)                272       
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 17        
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
_________________________________________________________________

The layers are stacked sequentially to build the classifier:

The first layer is an Embedding layer. This layer looks up the embedding vector for each word-index in the integer-encoded vocabulary. These vectors are learned as the model trains, and they add a dimension to the output array; the resulting dimensions are (batch, sequence, embedding).
Next, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension (see the small numeric sketch after this list). This allows the model to handle input of variable length in the simplest way possible.
This fixed-length output vector is piped through a fully connected (Dense) layer with 16 hidden units.
The last layer is densely connected to a single output node. With the sigmoid activation function, the result is a float between 0 and 1, representing a probability or confidence level.
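To make the pooling step concrete, here is a small numeric sketch in plain numpy (not part of the model above) of the averaging that GlobalAveragePooling1D performs over the sequence axis:

import numpy as np

# A toy batch: 1 sample, sequence length 3, embedding dimension 2
x = np.array([[[1.0, 2.0],
               [3.0, 4.0],
               [5.0, 6.0]]])

# GlobalAveragePooling1D averages over the sequence axis (axis 1),
# collapsing (batch, sequence, embedding) to (batch, embedding)
print(x.mean(axis=1))  # [[3. 4.]]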
Hidden units
The model above has two intermediate or "hidden" layers between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer; in other words, it is the amount of freedom the network is allowed when learning an internal representation.

If a model has more hidden units (a higher-dimensional representation space) and/or more layers, the network can learn more complex representations. However, this makes the network more computationally expensive and may lead it to learn unwanted patterns, patterns that improve performance on the training data but not on the test data. This is called overfitting, and we'll explore it later.

Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the binary_crossentropy loss function.

This isn't the only choice of loss function; you could, for instance, choose mean_squared_error. But generally, binary_crossentropy is better for dealing with probabilities: it measures the "distance" between probability distributions, in our case between the ground-truth distribution and the predictions. A small numeric sketch follows.
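As a rough illustration of what binary_crossentropy measures, here is a minimal sketch in plain numpy, assuming the usual formula -[y*log(p) + (1-y)*log(1-p)] averaged over the samples (Keras computes the same quantity internally, with additional numerical safeguards):

import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Average of -[y*log(p) + (1-y)*log(1-p)] over the samples
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# A confident correct prediction gives a small loss, a confident wrong one a large loss
print(binary_crossentropy(np.array([1.0]), np.array([0.9])))  # ~0.105
print(binary_crossentropy(np.array([1.0]), np.array([0.1])))  # ~2.303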

Later, when we explore regression problems (such as predicting the price of a house), we'll see how to use another loss function called mean squared error.

Now, configure the model to use an optimizer and a loss function:

model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='binary_crossentropy',
              metrics=['accuracy'])

Create a validation set

When training, we want to check the accuracy of the model on data it hasn't seen before. Create a validation set by setting apart 10,000 examples from the original training data. (Why not use the testing set now? Our goal is to develop and tune the model using only the training data, then use the test data just once to evaluate accuracy.)

x_val = train_data[:10000]
partial_x_train = train_data[10000:]

y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]

Train the model

Train the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the x_train and y_train tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)
Train on 15000 samples, validate on 10000 samples
Epoch 1/40
15000/15000 [==============================] - 2s 127us/step - loss: 0.6940 - acc: 0.5108 - val_loss: 0.6912 - val_acc: 0.5194
Epoch 2/40
15000/15000 [==============================] - 1s 59us/step - loss: 0.6878 - acc: 0.5470 - val_loss: 0.6858 - val_acc: 0.5678
Epoch 3/40
15000/15000 [==============================] - 1s 61us/step - loss: 0.6823 - acc: 0.6019 - val_loss: 0.6809 - val_acc: 0.6327
Epoch 4/40
15000/15000 [==============================] - 1s 57us/step - loss: 0.6758 - acc: 0.6903 - val_loss: 0.6753 - val_acc: 0.5998
Epoch 5/40
15000/15000 [==============================] - 1s 60us/step - loss: 0.6674 - acc: 0.7321 - val_loss: 0.6651 - val_acc: 0.7369
Epoch 6/40
15000/15000 [==============================] - 1s 61us/step - loss: 0.6561 - acc: 0.7554 - val_loss: 0.6533 - val_acc: 0.7373
Epoch 7/40
15000/15000 [==============================] - 1s 60us/step - loss: 0.6410 - acc: 0.7662 - val_loss: 0.6380 - val_acc: 0.7556
Epoch 8/40
15000/15000 [==============================] - 1s 60us/step - loss: 0.6218 - acc: 0.7743 - val_loss: 0.6191 - val_acc: 0.7632
Epoch 9/40
15000/15000 [==============================] - 1s 60us/step - loss: 0.5992 - acc: 0.7799 - val_loss: 0.5963 - val_acc: 0.7712
Epoch 10/40
15000/15000 [==============================] - 1s 60us/step - loss: 0.5726 - acc: 0.7953 - val_loss: 0.5714 - val_acc: 0.7833
Epoch 11/40
15000/15000 [==============================] - 1s 60us/step - loss: 0.5444 - acc: 0.8006 - val_loss: 0.5484 - val_acc: 0.7858
Epoch 12/40
15000/15000 [==============================] - 1s 60us/step - loss: 0.5161 - acc: 0.8139 - val_loss: 0.5192 - val_acc: 0.8015
Epoch 13/40
15000/15000 [==============================] - 1s 61us/step - loss: 0.4862 - acc: 0.8305 - val_loss: 0.4946 - val_acc: 0.8149
Epoch 14/40
15000/15000 [==============================] - 1s 60us/step - loss: 0.4588 - acc: 0.8411 - val_loss: 0.4708 - val_acc: 0.8229
Epoch 15/40
15000/15000 [==============================] - 1s 61us/step - loss: 0.4330 - acc: 0.8495 - val_loss: 0.4482 - val_acc: 0.8293
Epoch 16/40
15000/15000 [==============================] - 1s 61us/step - loss: 0.4096 - acc: 0.8561 - val_loss: 0.4281 - val_acc: 0.8389
Epoch 17/40
15000/15000 [==============================] - 1s 63us/step - loss: 0.3871 - acc: 0.8673 - val_loss: 0.4104 - val_acc: 0.8455
Epoch 18/40
15000/15000 [==============================] - 1s 62us/step - loss: 0.3675 - acc: 0.8727 - val_loss: 0.3946 - val_acc: 0.8495
Epoch 19/40
15000/15000 [==============================] - 1s 63us/step - loss: 0.3501 - acc: 0.8785 - val_loss: 0.3811 - val_acc: 0.8532
Epoch 20/40
15000/15000 [==============================] - 1s 64us/step - loss: 0.3343 - acc: 0.8842 - val_loss: 0.3691 - val_acc: 0.8595
Epoch 21/40
15000/15000 [==============================] - 1s 63us/step - loss: 0.3201 - acc: 0.8881 - val_loss: 0.3587 - val_acc: 0.8623
Epoch 22/40
15000/15000 [==============================] - 1s 66us/step - loss: 0.3071 - acc: 0.8919 - val_loss: 0.3496 - val_acc: 0.8651
Epoch 23/40
15000/15000 [==============================] - 1s 63us/step - loss: 0.2956 - acc: 0.8962 - val_loss: 0.3421 - val_acc: 0.8656
Epoch 24/40
15000/15000 [==============================] - 1s 62us/step - loss: 0.2844 - acc: 0.8997 - val_loss: 0.3345 - val_acc: 0.8700
Epoch 25/40
15000/15000 [==============================] - 1s 65us/step - loss: 0.2744 - acc: 0.9037 - val_loss: 0.3282 - val_acc: 0.8715
Epoch 26/40
15000/15000 [==============================] - 1s 63us/step - loss: 0.2650 - acc: 0.9067 - val_loss: 0.3232 - val_acc: 0.8715
Epoch 27/40
15000/15000 [==============================] - 1s 62us/step - loss: 0.2568 - acc: 0.9089 - val_loss: 0.3179 - val_acc: 0.8744
Epoch 28/40
15000/15000 [==============================] - 1s 63us/step - loss: 0.2481 - acc: 0.9141 - val_loss: 0.3131 - val_acc: 0.8761
Epoch 29/40
15000/15000 [==============================] - 1s 65us/step - loss: 0.2406 - acc: 0.9169 - val_loss: 0.3096 - val_acc: 0.8768
Epoch 30/40
15000/15000 [==============================] - 1s 62us/step - loss: 0.2343 - acc: 0.9173 - val_loss: 0.3058 - val_acc: 0.8780
Epoch 31/40
15000/15000 [==============================] - 1s 62us/step - loss: 0.2266 - acc: 0.9211 - val_loss: 0.3031 - val_acc: 0.8784
Epoch 32/40
15000/15000 [==============================] - 1s 61us/step - loss: 0.2208 - acc: 0.9234 - val_loss: 0.3002 - val_acc: 0.8795
Epoch 33/40
15000/15000 [==============================] - 1s 67us/step - loss: 0.2139 - acc: 0.9253 - val_loss: 0.2977 - val_acc: 0.8813
Epoch 34/40
15000/15000 [==============================] - 1s 65us/step - loss: 0.2081 - acc: 0.9270 - val_loss: 0.2962 - val_acc: 0.8814
Epoch 35/40
15000/15000 [==============================] - 1s 62us/step - loss: 0.2031 - acc: 0.9285 - val_loss: 0.2940 - val_acc: 0.8823
Epoch 36/40
15000/15000 [==============================] - 1s 62us/step - loss: 0.1971 - acc: 0.9321 - val_loss: 0.2923 - val_acc: 0.8831
Epoch 37/40
15000/15000 [==============================] - 1s 62us/step - loss: 0.1921 - acc: 0.9338 - val_loss: 0.2910 - val_acc: 0.8834
Epoch 38/40
15000/15000 [==============================] - 1s 62us/step - loss: 0.1873 - acc: 0.9359 - val_loss: 0.2898 - val_acc: 0.8833
Epoch 39/40
15000/15000 [==============================] - 1s 62us/step - loss: 0.1821 - acc: 0.9389 - val_loss: 0.2887 - val_acc: 0.8839
Epoch 40/40
15000/15000 [==============================] - 1s 63us/step - loss: 0.1776 - acc: 0.9409 - val_loss: 0.2877 - val_acc: 0.8839

Evaluate the model

Let's see how the model performs. Two values are returned: loss (a number representing the error; lower is better) and accuracy.

results = model.evaluate(test_data, test_labels)

print(results)
25000/25000 [==============================] - 1s 30us/step
[0.3029762806415558, 0.87648000000000004]

Create a graph of accuracy and loss over time

model.fit() returns a History object, which contains a dictionary with everything that happened during training:

history_dict = history.history
history_dict.keys()
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.show()

(Figure: training and validation loss)

plt.clf()   # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']

plt.plot(epochs, acc_values, 'bo', label='Training acc')
plt.plot(epochs, val_acc_values, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()

(Figure: training and validation accuracy)

2 Exploring and Analyzing the THUCNews Dataset

Chinese dataset: THUCNews
THUCNews data subset: https://pan.baidu.com/s/1hugrfRu password: qfud

Read the data and tokenize

Using the test set as an example

import pandas as pd
import numpy as np
train_file = 'C:/Users/Administrator/Desktop/cnews/cnews.train.txt'
val_file = 'C:/Users/Administrator/Desktop/cnews/cnews.val.txt'
test_file  = 'C:/Users/Administrator/Desktop/cnews/cnews.test.txt'
# Test set
test_data = pd.read_csv(test_file,sep='\t',engine='python',names=['label','content'],encoding='UTF-8')
print(test_data.shape)
test_data.tail()
(10000, 2)
      label  content
9995  财经    近期18只偏股基金成立 将为股市新增300亿资金兴业有机增长混合基金23日发布公告称,基金募...
9996  财经    银华基金杨靖聊政策性主题投资机会实录新浪财经讯 银华和谐主题基金拟任基金经理助理杨靖于3月2...
9997  财经    首只基金投资信心指数问世本报讯 (记者吴敏)昨日,嘉实基金宣布推出“嘉实中国基金投资者信心指...
9998  财经    17只阳光私募3月份火速成立证券时报记者 方 丽本报讯 阳光私募产品迎来了发行高潮。WIND...
9999  财经    25日股票基金全线受挫 九成半基金跌逾1%全景网3月26日讯 周三开放式基金净值普降,股票型...
from multiprocessing import Pool, cpu_count
import re
import pkuseg 


remove = re.compile(r'[\s\d,。?!~:“”;,.:?"!~$%^&@#¥#*()()、|/]')  # whitespace, digits and punctuation to strip
# Helper: split a DataFrame across CPU cores and apply func to each chunk in parallel
def parallelize_dataframe(df, func):
    df_split = np.array_split(df, cpu_count())
    pool = Pool(cpu_count())
    df = pd.concat(pool.map(func, df_split))
    pool.close()
    pool.join()
    return df
seg = pkuseg.pkuseg()  # use the pkuseg tokenizer recently open-sourced by Peking University
def pku_cut(df):
    df['content'] = df['content'].apply(lambda x: re.sub(remove, '', str(x).strip()))  # remove punctuation
    df['content'] = df['content'].apply(lambda x: seg.cut(x))  # tokenize
    return df
test_data = parallelize_dataframe(test_data, pku_cut)
test_data.tail()
Note: the cell above fails with ModuleNotFoundError: No module named 'pkuseg' in this environment; the tokenizer must be installed first (for example with pip install pkuseg) before it can run.

3 Basic Concepts: Recall, Precision/Accuracy, ROC Curve, AUC, and the PR Curve

https://www.imooc.com/article/48072
Recall, accuracy, ROC curve, AUC, PR curve: https://blog.csdn.net/quiet_girl/article/details/70830796

Accuracy, precision, recall, and the F-score are the key metrics for picking the targets out of a mixed population. Let's first look at their definitions:

TP: a positive instance predicted as positive

FN: a positive instance predicted as negative

FP: a negative instance predicted as positive

TN: a negative instance predicted as negative

Accuracy = correctly predicted samples / total samples = (TP + TN) / total

Precision = positives predicted as positive / all samples predicted as positive = TP / (TP + FP)

Recall = positives predicted as positive / all actual positives = TP / (TP + FN)

F-score = 2 * precision * recall / (precision + recall) (the F-score is the harmonic mean of precision and recall)
The ROC (Receiver Operating Characteristic) curve plots the true positive rate (TPR) against the false positive rate (FPR); the area under the ROC curve is called the AUC.
The PR (Precision-Recall) curve plots precision against recall as the classification threshold varies.
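The following sketch computes these metrics on a toy set of labels and scores. It uses scikit-learn, which is not used elsewhere in these notes, purely for illustration:

import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_curve, auc, precision_recall_curve)

y_true  = np.array([1, 1, 1, 0, 0, 0, 1, 0])                   # actual classes
y_score = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1])   # predicted probabilities
y_pred  = (y_score >= 0.5).astype(int)                          # threshold at 0.5

print('accuracy :', accuracy_score(y_true, y_pred))    # (TP + TN) / total
print('precision:', precision_score(y_true, y_pred))   # TP / (TP + FP)
print('recall   :', recall_score(y_true, y_pred))      # TP / (TP + FN)
print('F1       :', f1_score(y_true, y_pred))          # harmonic mean of precision and recall

# ROC curve: true positive rate vs. false positive rate as the threshold varies
fpr, tpr, _ = roc_curve(y_true, y_score)
print('AUC      :', auc(fpr, tpr))

# PR curve: precision vs. recall as the threshold varies
precision, recall, _ = precision_recall_curve(y_true, y_score)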
