Deep Learning with Python Reading Notes, Chapter 6 Part-4: Using Pretrained Word Embeddings


From raw text to word embeddings


In this section, as before, we embed sentences into sequences of vectors, flatten them, and train a Dense layer on top.
This time, however, we will work with the raw IMDB text data instead of the pre-tokenized IMDB data that ships with Keras.

The raw IMDB dataset

After unpacking the downloaded archive:

  • aclImdb contains three entries: a train directory, a test directory, and a .DS_Store file (irrelevant; ignore it)
  • .DS_Store (Desktop Services Store) is a hidden file created by Apple's Mac OS X operating system to store custom attributes of a directory, such as the positions of file icons or the choice of background color
  • train and test have the same layout: a neg folder holding the negative reviews, a pos folder holding the positive reviews, and two .txt files listing the source URLs of the reviews, which we can ignore
  • inside neg and pos the data is stored one review per file
  • all reviews are stored in English (a quick layout check is sketched below)
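
Before processing anything, it is worth verifying this layout. Below is a minimal sketch (my own addition, assuming the same local path used in the listings that follow) that lists each split's folders and counts the review files:

import os

imdb_dir = r'E:\code\PythonDeep\DataSet\aclImdb\aclImdb'
for split in ['train', 'test']:
    split_dir = os.path.join(imdb_dir, split)
    print(split, os.listdir(split_dir))  # expect 'neg' and 'pos' among the entries
    for label_type in ['neg', 'pos']:
        n_files = len(os.listdir(os.path.join(split_dir, label_type)))
        print(' ', label_type, n_files, 'files')  # 12 500 reviews each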

6-8 Processing the labels of the raw IMDB data


# _*_ coding:utf-8 _*_
import os

imdb_dir = r'E:\code\PythonDeep\DataSet\aclImdb\aclImdb'
train_dir = os.path.join(imdb_dir, 'train')

labels = []
texts = []

# Iterate over the negative and positive review folders
for label_type in ['neg', 'pos']:
    dir_name = os.path.join(train_dir, label_type)
    for fname in os.listdir(dir_name):
        # Only process files with the .txt extension
        if fname[-4:] == '.txt':
            # Open with utf-8 encoding
            f = open(os.path.join(dir_name, fname), encoding='utf-8')
            texts.append(f.read())
            f.close()
            
            # neg: negative review -> label 0, pos: positive review -> label 1
            if label_type == 'neg':
                labels.append(0)
            else:
                labels.append(1)
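
A quick check (my own addition): the train split holds 12 500 negative and 12 500 positive reviews, so we should end up with 25 000 texts and as many labels:

print(len(texts), len(labels))   # expect: 25000 25000
print(labels[0], texts[0][:80])  # first label (0 = neg) and a snippet of that review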

6-9 Tokenizing the raw IMDB data


# Split the texts into a training set and a validation set
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np

# Cut off reviews after 100 words
maxlen = 100
# Train on 200 samples
training_samples = 200
# Validate on 10 000 samples
validation_samples = 10000
# Consider only the top 10 000 most common words in the dataset
max_words = 10000

tokenizer = Tokenizer(num_words = max_words)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)

# word_index maps each word to its integer index
word_index = tokenizer.word_index
print('Found %s unique tokens' % len(word_index))

data = pad_sequences(sequences, maxlen = maxlen)

labels = np.asarray(labels)
print('Shape of data tensor:', data.shape)
print('Shape of label tensor:', labels.shape)

# Split the data into a training set and a validation set.
# The samples are ordered: all negative reviews come first, then all
# positive ones, so the data must be shuffled first.
indices = np.arange(data.shape[0])
np.random.shuffle(indices)

# Apply the same shuffled permutation to the data and the labels
data = data[indices]
labels = labels[indices]

# Split the data
x_train = data[:training_samples]
y_train = labels[:training_samples]
x_val = data[training_samples: training_samples + validation_samples]
y_val = labels[training_samples: training_samples + validation_samples]
Found 88582 unique tokens
Shape of data tensor: (25000, 100)
Shape of label tensor: (25000,)
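
As a side note, here is a toy illustration (my own, not from the book) of what the Tokenizer is doing: fit_on_texts builds the word-to-index mapping, and texts_to_sequences converts each text into a list of those indices.

from keras.preprocessing.text import Tokenizer

toy = Tokenizer(num_words=10)
toy.fit_on_texts(['the movie was great', 'the movie was bad'])
print(toy.word_index)                                 # {'the': 1, 'movie': 2, 'was': 3, ...}
print(toy.texts_to_sequences(['the movie was bad']))  # [[1, 2, 3, 5]]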

6-10 Parsing the GloVe word-embedding file


glove_dir = r'E:\code\PythonDeep\DataSet\glove.6B'

embeddings_index = {}
f = open(os.path.join(glove_dir, 'glove.6B.100d.txt'), encoding='utf-8')

for line in f:
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype = 'float32')
    embeddings_index[word] = coefs

f.close()
print('Found %s word vectors.' % len(embeddings_index))
Found 400000 word vectors.
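
A small sanity check on the parsed index (my own addition): each value is a 100-dimensional float32 vector, and related words should have a relatively high cosine similarity.

v_good, v_great = embeddings_index['good'], embeddings_index['great']
cos = np.dot(v_good, v_great) / (np.linalg.norm(v_good) * np.linalg.norm(v_great))
print(v_good.shape, cos)  # (100,) and a similarity well above 0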

6-11 Preparing the GloVe word-embedding matrix


embedding_dim = 100

# Build a (max_words, embedding_dim) matrix in which row i holds the
# GloVe vector of the word with index i; words without a GloVe entry
# keep an all-zero row
embedding_matrix = np.zeros((max_words, embedding_dim))
for word, i in word_index.items():
    if i < max_words:
        embedding_vector = embeddings_index.get(word)
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector
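
Note that word_index indices start at 1, so row 0 of the matrix stays all-zero (it corresponds to padding). An optional coverage check (my own addition) counts how many of the top words actually received a GloVe vector:

hits = sum(1 for word, i in word_index.items()
           if i < max_words and embeddings_index.get(word) is not None)
print('%d of the top %d words have a GloVe vector' % (hits, max_words - 1))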

6-12 Model definition


from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

model = Sequential()

model.add(Embedding(max_words, embedding_dim, input_length = maxlen))
model.add(Flatten())
model.add(Dense(32, activation = 'relu'))
model.add(Dense(1, activation = 'sigmoid'))
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_1 (Embedding)      (None, 100, 100)          1000000   
_________________________________________________________________
flatten_1 (Flatten)          (None, 10000)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 32)                320032    
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 33        
=================================================================
Total params: 1,320,065
Trainable params: 1,320,065
Non-trainable params: 0
_________________________________________________________________
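
The parameter counts in the summary are easy to verify by hand (my own arithmetic, not from the book):

print(max_words * embedding_dim)         # Embedding: 10000 * 100 = 1 000 000
print(maxlen * embedding_dim * 32 + 32)  # Dense(32) on the 10 000 flattened inputs: 320 032
print(32 * 1 + 1)                        # Dense(1): 33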

6-13 Loading the pretrained word embeddings into the Embedding layer


# Load the GloVe matrix into the Embedding layer, i.e. the first layer
# of the model, and freeze it so training does not update its weights
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False
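
An equivalent alternative (hedged: this relies on the generic weights and trainable keyword arguments that Keras 2 layers accept at construction, which the book does not use here) would be to load and freeze the embeddings when the layer is created:

frozen_embedding = Embedding(max_words, embedding_dim, input_length=maxlen,
                             weights=[embedding_matrix], trainable=False)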

6-14 Training and evaluation


model.compile(optimizer = 'rmsprop', 
              loss = 'binary_crossentropy', 
              metrics = ['acc'])

history = model.fit(x_train, y_train, 
                    epochs = 10, 
                    batch_size = 32, 
                    validation_data = (x_val, y_val))


model.save_weights('pre_trained_glove_model.h5')
Train on 200 samples, validate on 10000 samples
Epoch 1/10
200/200 [==============================] - 1s 7ms/step - loss: 1.5309 - acc: 0.4750 - val_loss: 0.8434 - val_acc: 0.5012
Epoch 2/10
200/200 [==============================] - 1s 3ms/step - loss: 0.7233 - acc: 0.5750 - val_loss: 0.9720 - val_acc: 0.4992
Epoch 3/10
200/200 [==============================] - 1s 3ms/step - loss: 0.4580 - acc: 0.7600 - val_loss: 0.7131 - val_acc: 0.5138
Epoch 4/10
200/200 [==============================] - 1s 4ms/step - loss: 0.3095 - acc: 0.9400 - val_loss: 1.1953 - val_acc: 0.4992
Epoch 5/10
200/200 [==============================] - 1s 5ms/step - loss: 0.3209 - acc: 0.8400 - val_loss: 0.7511 - val_acc: 0.5143
Epoch 6/10
200/200 [==============================] - 1s 3ms/step - loss: 0.1658 - acc: 0.9750 - val_loss: 1.4996 - val_acc: 0.4993
Epoch 7/10
200/200 [==============================] - 0s 2ms/step - loss: 0.1842 - acc: 0.9150 - val_loss: 0.7853 - val_acc: 0.5067
Epoch 8/10
200/200 [==============================] - 0s 2ms/step - loss: 0.0881 - acc: 1.0000 - val_loss: 0.8033 - val_acc: 0.5060
Epoch 9/10
200/200 [==============================] - 0s 2ms/step - loss: 0.0656 - acc: 0.9900 - val_loss: 1.9176 - val_acc: 0.5008
Epoch 10/10
200/200 [==============================] - 0s 2ms/step - loss: 0.2751 - acc: 0.8600 - val_loss: 0.8504 - val_acc: 0.4987

6-15 Plotting the results


import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label = 'Training acc')
plt.plot(epochs, val_acc, 'b', label = 'Validation acc')
plt.title('Training and Validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label = 'Training loss')
plt.plot(epochs, val_loss, 'b', label = 'Validation loss')
plt.title('Training and Validation loss')
plt.legend()

plt.show()

[Figure: training and validation accuracy]
[Figure: training and validation loss]

With so few training samples, model performance depends heavily on which 200 samples we happened to pick. Let us now try training the same model without pretrained word embeddings.


6-16 Training the same model without pretrained word embeddings


from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length = maxlen))
model.add(Flatten())
model.add(Dense(32, activation = 'relu'))
model.add(Dense(1, activation = 'sigmoid'))
model.summary()

model.compile(optimizer = 'rmsprop', 
              loss = 'binary_crossentropy', 
              metrics = ['acc'])

history = model.fit(x_train, y_train, 
                    epochs = 10, 
                    batch_size = 32, 
                    validation_data = (x_val, y_val))
Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_3 (Embedding)      (None, 100, 100)          1000000   
_________________________________________________________________
flatten_3 (Flatten)          (None, 10000)             0         
_________________________________________________________________
dense_5 (Dense)              (None, 32)                320032    
_________________________________________________________________
dense_6 (Dense)              (None, 1)                 33        
=================================================================
Total params: 1,320,065
Trainable params: 1,320,065
Non-trainable params: 0
_________________________________________________________________


E:\develop_tools\Anaconda\envs\py36\lib\site-packages\tensorflow_core\python\framework\indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "


Train on 200 samples, validate on 10000 samples
Epoch 1/10
200/200 [==============================] - 1s 4ms/step - loss: 0.6986 - acc: 0.4250 - val_loss: 0.6939 - val_acc: 0.4977
Epoch 2/10
200/200 [==============================] - 1s 4ms/step - loss: 0.5316 - acc: 0.9800 - val_loss: 0.6970 - val_acc: 0.5016
Epoch 3/10
200/200 [==============================] - 1s 3ms/step - loss: 0.3392 - acc: 1.0000 - val_loss: 0.7182 - val_acc: 0.5014
Epoch 4/10
200/200 [==============================] - 1s 3ms/step - loss: 0.1660 - acc: 1.0000 - val_loss: 0.7230 - val_acc: 0.4997
Epoch 5/10
200/200 [==============================] - 1s 4ms/step - loss: 0.0786 - acc: 1.0000 - val_loss: 0.7209 - val_acc: 0.4995
Epoch 6/10
200/200 [==============================] - 1s 4ms/step - loss: 0.0396 - acc: 1.0000 - val_loss: 0.7300 - val_acc: 0.4999
Epoch 7/10
200/200 [==============================] - 1s 3ms/step - loss: 0.0216 - acc: 1.0000 - val_loss: 0.7387 - val_acc: 0.4971
Epoch 8/10
200/200 [==============================] - 1s 3ms/step - loss: 0.0121 - acc: 1.0000 - val_loss: 0.7782 - val_acc: 0.5000
Epoch 9/10
200/200 [==============================] - 1s 3ms/step - loss: 0.0073 - acc: 1.0000 - val_loss: 0.7635 - val_acc: 0.4962
Epoch 10/10
200/200 [==============================] - 1s 3ms/step - loss: 0.0043 - acc: 1.0000 - val_loss: 0.7799 - val_acc: 0.4988
# Plot the results
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()

[Figure: training and validation accuracy]
[Figure: training and validation loss]


As you can see, validation accuracy stalls at around 50%. In this small-sample setting, then, pretrained word embeddings outperform an embedding learned jointly with the task.

Next, let us increase the number of training samples and see whether this picture changes.
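
A minimal way to do that (my own sketch, reusing the shuffled data and labels arrays from listing 6-9) is to raise training_samples and re-split:

training_samples = 2000  # instead of 200
x_train = data[:training_samples]
y_train = labels[:training_samples]
x_val = data[training_samples: training_samples + validation_samples]
y_val = labels[training_samples: training_samples + validation_samples]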


6-17 Tokenizing the data of the test set


test_dir = os.path.join(imdb_dir, 'test')

labels = []
texts = []

for label_type in ['neg', 'pos']:
    dir_name = os.path.join(test_dir, label_type)
    for fname in sorted(os.listdir(dir_name)):
        if fname[-4:] == '.txt':
            f = open(os.path.join(dir_name, fname), encoding='utf-8')
            texts.append(f.read())
            f.close()

            # Append the label inside the .txt check, one label per review
            if label_type == 'neg':
                labels.append(0)
            else:
                labels.append(1)

sequences = tokenizer.texts_to_sequences(texts)
x_test = pad_sequences(sequences, maxlen = maxlen)
y_test = np.asarray(labels)
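
A quick shape check (my own addition) before evaluating:

print(x_test.shape, y_test.shape)  # expect (25000, 100) and (25000,)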

6-18 Evaluating the model on the test set


model.load_weights('pre_trained_glove_model.h5')
model.evaluate(x_test, y_test)
25000/25000 [==============================] - 1s 47us/step

[0.85545440117836, 0.5065199732780457]

evaluate returns the loss followed by the metrics passed to compile, so this corresponds to a test accuracy of roughly 50.7%, in line with the validation curves above.

Closing remarks

Note: the code in this post comes from Deep Learning with Python; it is posted here as electronic study notes, for learning and reference only. The author has run all of it successfully; if anything is missing, please contact the author.

Dear readers, since you have made it this far, please take a second to give this post a like; your support is the author's greatest motivation!
<(^-^)>
My knowledge is limited, so if there are any mistakes, corrections are sincerely welcome.
This post is intended only for learning and exchange, not for any commercial use; if any copyright issue arises, please contact the author promptly.
