[Code] LSTM IMDB Sentiment Classification - Cell Implementation - [TensorFlow 2]

This article first reviews a limitation of RNNs, namely the lack of a selective memory mechanism, then walks through the LSTM cell structure: the forget gate, update gate, and output gate. It then implements an LSTM model in TensorFlow for sentiment analysis on the IMDB dataset, covering data preprocessing, model construction, training, and evaluation. The model reaches a reasonable accuracy on both the training and test sets.

1. Limitations of RNNs

A plain RNN cannot selectively retain the useful information; it tries to remember everything indiscriminately, which wastes both computation and memory.

2. Cell Structure

f_t: forget gate

i_t: update (input) gate

o_t: output gate

(The cell-structure figure is omitted here; in that diagram, c_t is denoted g_t.)
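For reference, the standard LSTM cell equations, using the gate names above; keras.layers.LSTMCell used in the code below implements this same computation. W, U, b are the per-gate weights and biases, and ⊙ denotes element-wise multiplication:

$$
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
g_t &= \tanh(W_g x_t + U_g h_{t-1} + b_g) \\
c_t &= f_t \odot c_{t-1} + i_t \odot g_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$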

3. Computation

Viewed per cell, the input to each gate is a scalar: one dimension of that gate's pre-activation vector z.

Each z is a vector whose length equals the number of cells (hidden units), so one layer computes the gates for all cells at once.
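A minimal runnable sketch illustrating this with keras.layers.LSTMCell, the same cell class the model below uses (the units=4 and input-dim=10 shapes here are made up for illustration):

import tensorflow as tf

cell = tf.keras.layers.LSTMCell(4)            # a layer of 4 cells (hidden units)
x = tf.random.normal([2, 10])                 # batch of 2, input dimension 10
state = [tf.zeros([2, 4]), tf.zeros([2, 4])]  # initial [h, c], one slot per cell
out, state = cell(x, state)
print(out.shape)       # (2, 4) -- one output dimension per cell
print(state[0].shape)  # (2, 4) -- hidden state h
print(state[1].shape)  # (2, 4) -- cell state c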

4. Code

import os
import tensorflow as tf
import numpy as np
import tensorflow.keras as keras
from tensorflow.keras import layers, losses, optimizers, Sequential
#1----------------------------------------------------------------------
tf.random.set_seed(2)
np.random.seed(2)
# Set the TensorFlow C++ log level:
# 0 - info + warning + error + fatal
# 1 - warning + error + fatal
# 2 - error + fatal
# 3 - fatal only
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
# This code targets TensorFlow 2.x
assert tf.__version__.startswith('2.')
#2----------------------------------------------------------------------
# Hyperparameters
batch_size = 128        # samples per batch
total_words = 10000     # vocabulary size: keep the 10,000 most frequent words
max_review_len = 80     # every review is padded/truncated to this length
embedding_len = 100     # word-embedding dimension
#3----------------------------------------------------------------------
# Load the IMDB dataset (reviews come pre-encoded as word-index sequences)
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words=total_words)
print('Training set:\n', x_train, y_train)
print('Test set:\n', x_test, y_test)
print('One training sample:\n', x_train[0])
print('Label:\n', y_train[0])
#4----------------------------------------------------------------------
# Load the built-in vocabulary (word -> index)
word_index = keras.datasets.imdb.get_word_index()
print('-------- vocabulary --------')
for key, value in word_index.items():  # prints the full vocabulary; truncated in the output below
    print(key, value)
# Shift every index by 3 to make room for the special tokens
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index['<PAD>'] = 0
word_index['<START>'] = 1
word_index['<UNK>'] = 2
word_index['<UNUSED>'] = 3
# Reverse mapping (index -> word) used to decode a review back into text
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
print('Decoded training sample:')
print(decode_review(x_train[0]))
#5----------------------------------------------------------------------
# Pad and truncate every review to max_review_len
# (by default pad_sequences pads and truncates at the front of the sequence)
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_review_len)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_review_len)
db_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
db_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))
db_train = db_train.shuffle(1000)
db_test = db_test.shuffle(1000)
# drop_remainder=True keeps every batch at exactly batch_size,
# which the fixed-shape initial states in the model rely on
db_train = db_train.batch(batch_size, drop_remainder=True)
db_test = db_test.batch(batch_size, drop_remainder=True)
print('Training set shape:\n', x_train.shape)
print('Test set shape:\n', x_test.shape)
#6----------------------------------------------------------------------
class MyLSTM(keras.Model):
    def __init__(self, units):
        super(MyLSTM, self).__init__()
        # Initial [h, c] states for the two cells; their shapes are fixed to
        # batch_size, which is why the datasets use drop_remainder=True
        self.state0 = [tf.zeros([batch_size, units]), tf.zeros([batch_size, units])]
        self.state1 = [tf.zeros([batch_size, units]), tf.zeros([batch_size, units])]
        # [b, 80] -> [b, 80, 100]
        self.embedding = layers.Embedding(total_words, embedding_len, input_length=max_review_len)
        # Two stacked LSTM cells, stepped manually over the time axis
        self.rnn_cell0 = layers.LSTMCell(units, dropout=0.5)
        self.rnn_cell1 = layers.LSTMCell(units, dropout=0.5)
        # Classification head applied to the last time step's output
        self.outlayer = keras.Sequential([
            layers.Dense(units),
            layers.Dropout(rate=0.5),
            layers.ReLU(),
            layers.Dense(1)
        ])

    def call(self, inputs, training=None, mask=None):
        x = inputs               # [b, 80]
        x = self.embedding(x)    # [b, 80, 100]
        state0 = self.state0
        state1 = self.state1
        # tf.unstack needs an explicit axis: axis=1 unpacks the time dimension
        for word in tf.unstack(x, axis=1):          # word: [b, 100]
            out0, state0 = self.rnn_cell0(word, state0, training=training)
            out1, state1 = self.rnn_cell1(out0, state1, training=training)
        x = self.outlayer(out1, training=training)  # logits from the last time step
        prob = tf.sigmoid(x)                        # probability of positive sentiment
        return prob
#7----------------------------------------------------------------------
def main():
    units = 64
    epochs = 2
    model = MyLSTM(units)
    model.compile(optimizer=keras.optimizers.RMSprop(0.001),
                  loss=keras.losses.BinaryCrossentropy(),
                  metrics=['accuracy'])
    print('Training:')
    model.fit(db_train, epochs=epochs, validation_data=db_test)
    print('Evaluation:')
    model.evaluate(db_test)

if __name__ == '__main__':
    main()
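Not in the original post, but for comparison: the manual per-step cell loop in call() is roughly equivalent to stacking two built-in layers.LSTM layers, which manage the initial state and the time loop internally. A sketch under the same hyperparameters:

import tensorflow as tf
from tensorflow.keras import layers, Sequential

# Roughly equivalent model built from the high-level LSTM layer
# (hyperparameters match the manual-cell version above).
model = Sequential([
    layers.Embedding(10000, 100, input_length=80),
    layers.LSTM(64, dropout=0.5, return_sequences=True),  # pass all time steps to the next layer
    layers.LSTM(64, dropout=0.5),                         # keep only the last step's output
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])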

5. Output (only part of the vocabulary is shown)

geysers 52003
artbox 88582
cronyn 52004
hardboiled 52005
voorhees' 88583
35mm 16815
'l' 88584
paget 18509
expands 20597
Decoded training sample:
<START> this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert <UNK> is an amazing actor and now the same being director <UNK> father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for <UNK> and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also <UNK> to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the <UNK> list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all
Training set shape:
 (25000, 80)
Test set shape:
 (25000, 80)
Training:
Epoch 1/2
195/195 [==============================] - 62s 253ms/step - loss: 0.5063 - accuracy: 0.7400 - val_loss: 0.3664 - val_accuracy: 0.8371
Epoch 2/2
195/195 [==============================] - 60s 307ms/step - loss: 0.3504 - accuracy: 0.8565 - val_loss: 0.4014 - val_accuracy: 0.8213
Evaluation:
195/195 [==============================] - 10s 50ms/step - loss: 0.4012 - accuracy: 0.8214

Process finished with exit code 0
 
