Tensorflow (14): RNN (Recurrent Neural Networks)

Embeddings are the key to applying neural networks to NLP: they turn words into numbers.

Problem addressed: sequence problems.

Classic model: LSTM.

2 Embedding

In NLP, one-hot encoding is not widely used; dense embeddings are the common choice.

Difference:

One-hot: word -> index -> [0, 0, 1, ..., 0, 0]

Dense embedding: word -> index -> [1.2, 4.2, ...], initialized randomly and learned during training.

Handling variable-length input: padding -> [3, 2, 3, 5, 6] -> [3, 2, 3, 5, 6, 0, 0, 0, 0, 0], where 0 stands for unknown/padding; short sentences are padded and long sentences are truncated.
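A minimal sketch of padding and truncating variable-length id sequences with Keras (the example sequences and maxlen value are only illustrations):

from tensorflow import keras

seqs = [[3, 2, 3, 5, 6], [1, 4, 2, 5, 6, 7, 8, 9, 1, 2, 3, 4]]
# pad short sequences with 0 at the end, truncate sequences longer than maxlen
padded = keras.preprocessing.sequence.pad_sequences(
    seqs, maxlen=10, padding='post', truncating='post', value=0)
print(padded)
# [[3 2 3 5 6 0 0 0 0 0]
#  [1 4 2 5 6 7 8 9 1 2]]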

 

[Figure: how the embedding layer handles variable-length input]

When input lengths differ, the embedding layer relies on padding and truncation so that every input ends up with the same length; the resulting word vectors are then combined (see the sketch after the list below).

Drawbacks:

Information loss: the many padding positions act as noise, and combining the vectors discards word order and relative importance.

Too much wasted computation on padding positions.
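A minimal sketch of this pad-then-combine approach (the layer sizes and the choice of an average-pooling layer are illustrative assumptions, not taken from the text above): the embedding layer maps each padded id sequence to a sequence of vectors, and the pooling layer merges them into one vector, which is exactly where the order and noise problems come from.

from tensorflow import keras

vocab_size, embedding_dim, max_len = 10000, 16, 10
model = keras.models.Sequential([
    # (batch, max_len) ids -> (batch, max_len, embedding_dim) vectors
    keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_len),
    # merge all word vectors into one fixed-size vector, ignoring order
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.summary()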

3 Why do we need recurrent neural networks?

They solve sequence problems:

One-to-many: image captioning, describing an image with a sentence.

Many-to-one: text classification (e.g. sentiment analysis).

Many-to-many: machine translation, a non-real-time many-to-many problem where output is produced only after the whole sentence has been read.

Real-time many-to-many: video commentary.

4 Recurrent neural networks

Training is done with backpropagation through time (BPTT): the network is unrolled over the time steps and gradients flow back through every step.

Prediction: the output of the previous RNN step is fed back in as the input of the next step.
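A minimal numpy sketch of one recurrent step (the weight names W_xh, W_hh, b_h and the sizes below are illustrative, not from this post): at each step the new hidden state is computed from the current input and the previous state, h_t = tanh(x_t · W_xh + h_{t-1} · W_hh + b_h), and BPTT backpropagates through this chain of steps.

import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # one SimpleRNN update: the new state depends on the current input and the previous state
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# toy sizes, for illustration only
input_dim, hidden_dim, seq_len = 8, 16, 5
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(input_dim, hidden_dim))
W_hh = rng.normal(size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)
for x_t in rng.normal(size=(seq_len, input_dim)):
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)  # the state is carried across time steps
print(h.shape)  # (16,)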

5 Simple RNN text generation

import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf

from tensorflow import keras

import requests

def getWebPage(url):
    try:
        urlpage = requests.get(url)
    except IOError:
        print("IOError")
        return
    '''
    urlpage.text contains the raw text of the downloaded page
    '''
    WebPageDownload(urlpage.text)

def WebPageDownload(text):
    '''
    Save the downloaded page to shakespeare.txt
    '''
    with open("shakespeare.txt", 'w') as ff:
        ff.write(text)

getWebPage(url='https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')

input_filepath = "shakespeare.txt"
text = open(input_filepath, 'r').read()

print(len(text))
print(text[0:100])

# 1. generate the vocabulary
# 2. build the mapping char -> id
# 3. convert the data to id_data
# 4. define inputs/outputs: abcd -> bcd<eos>

vocab = sorted(set(text))
print(len(vocab))
print(vocab)

char2idx = {char:idx for idx, char in enumerate(vocab)}
print(char2idx)

idx2char = np.array(vocab)
print(idx2char)

# the text as a list of character ids
text_as_int = np.array([char2idx[c] for c in text])
print(text_as_int[0:10])
print(text[0:10])

def split_input_target(id_text):
    """
    abcde -> abcd, bcde
    """
    return id_text[0:-1], id_text[1:]

char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
seq_length = 100
# batch characters into sequences of length seq_length + 1
# drop_remainder: drop the last batch if it is not full
seq_dataset = char_dataset.batch(seq_length + 1,
                                 drop_remainder = True)
for ch_id in char_dataset.take(2):
    print(ch_id, idx2char[ch_id.numpy()])

for seq_id in seq_dataset.take(2):
    print(seq_id)
    print(repr(''.join(idx2char[seq_id.numpy()])))

# generate (input, target) pairs
seq_dataset = seq_dataset.map(split_input_target)

for item_input, item_output in seq_dataset.take(2):
    print(item_input.numpy())
    print(item_output.numpy())

batch_size = 64
buffer_size = 10000

seq_dataset = seq_dataset.shuffle(buffer_size).batch(
    batch_size, drop_remainder=True)

# to predict the probability of the next character we need the vocabulary size
vocab_size = len(vocab)
embedding_dim = 256
# width of the RNN (number of recurrent units)
rnn_units = 1024

def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = keras.models.Sequential([
        keras.layers.Embedding(vocab_size, embedding_dim,
                               batch_input_shape = [batch_size, None]),
        # return_sequences=True: return the output at every time step, not only the last one
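        # stateful=True: the last state of each batch is reused as the initial state of
        # the next batch, which is why a fixed batch_input_shape is given above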
        keras.layers.SimpleRNN(units = rnn_units,
                               stateful = True,
                               recurrent_initializer = 'glorot_uniform',
                               return_sequences = True),
        keras.layers.Dense(vocab_size),
    ])
    return model

model = build_model(
    vocab_size = vocab_size,
    embedding_dim = embedding_dim,
    rnn_units = rnn_units,
    batch_size = batch_size)

model.summary()

for input_example_batch, target_example_batch in seq_dataset.take(1):
    example_batch_predictions = model(input_example_batch)
    print(example_batch_predictions.shape)
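    # expected shape: (batch_size, seq_length, vocab_size), e.g. (64, 100, 65) here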

# two strategies for turning the predicted distribution into characters
# random sampling: draw from the distribution above, so many different sequences are possible
# greedy: always pick the most probable character, which yields only one sequence
sample_indices = tf.random.categorical(
    logits = example_batch_predictions[0], num_samples = 1)
print(sample_indices)
# (100, 65) -> (100, 1)
# tf.squeeze removes the last axis, giving shape (100,)
sample_indices = tf.squeeze(sample_indices, axis = -1)
print(sample_indices)
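# For comparison only (not used below): the greedy strategy picks the most likely
# character at every position and always produces the same sequence.
greedy_indices = tf.argmax(example_batch_predictions[0], axis=-1)
print(greedy_indices)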

print("Input: ", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Output: ", repr("".join(idx2char[target_example_batch[0]])))
print()
print("Predictions: ", repr("".join(idx2char[sample_indices])))

# custom loss: the model's last layer has no softmax, so the loss applies it to the logits (from_logits=True)
def loss(labels, logits):
    return keras.losses.sparse_categorical_crossentropy(
        labels, logits, from_logits=True)

model.compile(optimizer = 'adam', loss = loss)
example_loss = loss(target_example_batch, example_batch_predictions)
print(example_loss.shape)
print(example_loss.numpy().mean())
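# For reference only: the same mean loss can also be computed with the built-in loss
# object (purely illustrative; the model above is already compiled with the custom loss).
builtin_loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print(builtin_loss(target_example_batch, example_batch_predictions).numpy())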

output_dir = "./text_generation_checkpoints"
if not os.path.exists(output_dir):
    os.mkdir(output_dir)
checkpoint_prefix = os.path.join(output_dir, 'ckpt_{epoch}')
checkpoint_callback = keras.callbacks.ModelCheckpoint(
    filepath = checkpoint_prefix,
    save_weights_only = True)

epochs = 100
history = model.fit(seq_dataset, epochs = epochs,
                    callbacks = [checkpoint_callback])

model2 = build_model(vocab_size,
                     embedding_dim,
                     rnn_units,
                     batch_size = 1)
# only the weights were saved (save_weights_only=True), so load the latest checkpoint into a fresh model
model2.load_weights(tf.train.latest_checkpoint(output_dir))
model2.build(tf.TensorShape([1, None]))
# start ch sequence A,
# A -> model -> b
# A.append(b) -> B
# B(Ab) -> model -> c
# B.append(c) -> C
# C(Abc) -> model -> ...
model2.summary()

def generate_text(model, start_string, num_generate = 1000):
    input_eval = [char2idx[ch] for ch in start_string]
    input_eval = tf.expand_dims(input_eval, 0)
    
    text_generated = []
    model.reset_states()
    
    for _ in range(num_generate):
        # 1. model inference -> predictions
        # 2. sample -> ch -> text_generated.
        # 3. update input_eval
        
        # predictions : [batch_size, input_eval_len, vocab_size]
        predictions = model(input_eval)
        # predictions : [input_eval_len, vocab_size]
        predictions = tf.squeeze(predictions, 0)
        # predicted_ids: [input_eval_len, 1]
        # a b c -> b c d
        predicted_id = tf.random.categorical(
            predictions, num_samples = 1)[-1, 0].numpy()
        text_generated.append(idx2char[predicted_id])
        # s, x -> rnn -> s', y
        input_eval = tf.expand_dims([predicted_id], 0)
    return start_string + ''.join(text_generated)

new_text = generate_text(model2, "All: ")
print(new_text)

 
