Building Recurrent Neural Networks with Keras

I. Recurrent Neural Networks

1. Why a recurrent layer has two output forms: when building a recurrent neural network, several recurrent layers are usually stacked, so the output of one recurrent layer becomes the input of the next. A recurrent layer expects input of shape (samples, timesteps, input_dim), so every intermediate recurrent layer must return output in that same three-dimensional shape.

from keras.models import Sequential
from keras.layers import LSTM

model=Sequential()
model.add(LSTM(32,input_shape=(10,64)))
print(model.summary())
LSTM without return_sequences specified (it defaults to False), so only the last timestep's output is returned:
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm (LSTM)                  (None, 32)                12416     
=================================================================
Total params: 12,416
Trainable params: 12,416
Non-trainable params: 0
_________________________________________________________________
None
LSTM with return_sequences=True, so the full output sequence is returned:

model=Sequential()
model.add(LSTM(32,input_shape=(10,64),return_sequences=True))
print(model.summary())
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm (LSTM)                  (None, 10, 32)            12416     
=================================================================
Total params: 12,416
Trainable params: 12,416
Non-trainable params: 0
_________________________________________________________________
None
Stacking three LSTM layers: every layer except the last keeps return_sequences=True so that its output stays three-dimensional:

model=Sequential()
model.add(LSTM(32,input_shape=(10,64),return_sequences=True))
model.add(LSTM(10,return_sequences=True))
model.add(LSTM(3))
print(model.summary())
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm (LSTM)                  (None, 10, 32)            12416     
_________________________________________________________________
lstm_1 (LSTM)                (None, 10, 10)            1720      
_________________________________________________________________
lstm_2 (LSTM)                (None, 3)                 168       
=================================================================
Total params: 14,304
Trainable params: 14,304
Non-trainable params: 0
_________________________________________________________________
None

2. Embedding layer
Word embeddings: first the text is converted to integers; then a word embedding maps each word to a vector, which makes it easy to compute distances between words.

#Embedding layer: vocabulary size 1000, embedding dimension 64; the input words are packed into sequences, one sequence per sample, each sequence containing 10 words
from keras.layers import Embedding
model=Sequential()
model.add(Embedding(1000,64,input_length=10))
print(model.summary())
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, 10, 64)            64000     
=================================================================
Total params: 64,000
Trainable params: 64,000
Non-trainable params: 0
_________________________________________________________________
None
from keras.models import Sequential
#import the recurrent layers: SimpleRNN, LSTM (long short-term memory), GRU (gated recurrent unit)
from keras.layers import SimpleRNN,LSTM,GRU
'''
These three layer types are used in almost the same way: Keras treats every recurrent layer as one abstract class, so they share common properties and accept the same keyword arguments.
Apart from a few layer-specific parameters tied to each architecture, the common parameters are identical,
and every recurrent layer has the same input and output format.
Input: a 3-D tensor of shape (Batch_size, timesteps, input_dim): batch size, number of time steps, number of input features.
For example, training on 3200 one-dimensional values:
split into 100 batches, each batch has Batch_size=32;
if the first 3 values predict the 4th, then timesteps=3;
the input data is one-dimensional, so input_dim=1;
so the input shape of this model is (32,3,1).
'''
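To make the (Batch_size, timesteps, input_dim) layout concrete, here is a minimal sketch (the toy series and variable names are illustrative, not part of the original script) that slices a 1-D series into windows of 3 values, each paired with the 4th value as its target:

import numpy as np

#illustrative only: turn a 1-D series into (samples,timesteps,input_dim) windows
series=np.arange(100,dtype='float32')#a toy one-dimensional series
timesteps=3
#each window holds 3 consecutive values; the value right after the window is the target
X=np.array([series[i:i+timesteps] for i in range(len(series)-timesteps)])
y=series[timesteps:]
X=X.reshape(-1,timesteps,1)#(samples,timesteps=3,input_dim=1)
print(X.shape,y.shape)#(97, 3, 1) (97,)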
'''
When a recurrent layer is used as the first layer (the input), Batch_size does not need to be set in advance, but you must define:
the input dimension input_dim, e.g. 1
the output dimension output_dim, e.g. 6
the sliding-window length input_length, e.g. 3, i.e. the timesteps mentioned above
'''
#ways to declare the parameters when LSTM is the first layer of the network
# model=Sequential()
# model.add(LSTM(input_dim=1,output_dim=6,input_length=3))#legacy keyword style
# model.add(LSTM(6,input_dim=1,input_length=3))
# model.add(LSTM(6,input_shape=(3,1)))#recommended
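Because the three layer types share one interface, swapping one for another is a one-line change. A minimal sketch (the layer size is illustrative) that builds the same first layer with each of them:

#illustrative only: SimpleRNN, LSTM and GRU are drop-in replacements for one another
for Layer in (SimpleRNN,LSTM,GRU):
    m=Sequential()
    m.add(Layer(6,input_shape=(3,1)))#identical constructor arguments for all three
    print(Layer.__name__,m.output_shape)#(None, 6) in every case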
'''
Output of a recurrent layer:
return_sequences=True returns the whole sequence, a 3-D tensor of shape (samples, timesteps, output_dim)
return_sequences=False returns a single value, a 2-D tensor of shape (samples, output_dim); this is the default
'''
# model.add(LSTM(32,input_shape=(10,64),return_sequences=True))
# print(model.summary())

# model=Sequential()
# model.add(LSTM(32,input_shape=(10,64),return_sequences=True))
# model.add(LSTM(10,return_sequences=True))
# model.add(LSTM(3))
# print(model.summary())
'''
Embedding layer: for text problems
Word embedding (turning words into vectors) is the first step of any text task, so in Keras the Embedding layer can only be used as the first layer of a model.
Word embeddings: first convert the text to integers, then map each word to a vector, which makes it easy to compute distances between words.
Parameters to specify:
input_dim: size of the vocabulary entering the embedding layer, i.e. largest integer index + 1
output_dim: dimension of the word vectors, the size of the embedding space the words are mapped into
input_length: length of the input sequences, a fixed integer; required if the embedding layer is followed by Flatten and Dense layers
Output:
a 3-D tensor of shape (Batch_size, input_length, output_dim)
'''
# #Embedding layer: vocabulary size 1000, embedding dimension 64; input words are packed into sequences, one sequence per sample, each containing 10 words
# from keras.layers import Embedding
# model=Sequential()
# model.add(Embedding(1000,64,input_length=10))
# print(model.summary())

II. IMDB Text Data Example

1. Fully connected network

#IMDB text data: sentiment analysis of movie reviews, positive/negative
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense,Embedding,Flatten
from keras.datasets import imdb

max_features=20000
maxlen=80
batch_size=32

import numpy as np
data=np.load('imdb.npz',allow_pickle=True)#inspect a locally downloaded copy of the dataset
print(data.files)
print('loading data...')
(x_train,y_train),(x_test,y_test)=imdb.load_data(num_words=max_features)#keep only the 20000 most frequent words in the dataset
# x_train=data['x_train']
# y_train=data['y_train']
# x_test=data['x_test']
# y_test=data['y_test']
print(len(x_train),'train sequences')
print(len(x_test),'test sequences')
#pad/truncate the sequences to a fixed length; pad_sequences can truncate or pad a batch of sequences
print('pad sequences (samples x time)')
x_train=sequence.pad_sequences(x_train,maxlen=maxlen)
x_test=sequence.pad_sequences(x_test,maxlen=maxlen)
print('x_train shape:',x_train.shape)
print('x_test shape:',x_test.shape)
#fully connected network
print('build model...')
model=Sequential()
model.add(Embedding(max_features,128,input_length=maxlen))#20000 words x 128 dims = 2,560,000 embedding parameters
model.add(Flatten())#80*128=10240 features after flattening
model.add(Dense(250,activation='relu'))
model.add(Dense(1,activation='sigmoid'))#binary sentiment output; one neuron is enough
print(model.summary())
print('train model ...')
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.fit(x_train,y_train,batch_size=batch_size,epochs=15,validation_data=(x_test,y_test))
print('evaluate model')
score,acc=model.evaluate(x_test,y_test,batch_size=batch_size)
print('test score:',score)
print('test accuracy:',acc)
['x_test', 'x_train', 'y_train', 'y_test']
loading data...
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17465344/17464789 [==============================] - 18s 1us/step
25000 train sequences
25000 test sequences
pad sequences (samples x time)
x_train shape: (25000, 80)
x_test shape: (25000, 80)
build model...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, 80, 128)           2560000   
_________________________________________________________________
flatten (Flatten)            (None, 10240)             0         
_________________________________________________________________
dense (Dense)                (None, 250)               2560250   
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 251       
=================================================================
Total params: 5,120,501
Trainable params: 5,120,501
Non-trainable params: 0
_________________________________________________________________
None
train model ...
Epoch 1/15
782/782 [==============================] - 40s 51ms/step - loss: 0.4333 - accuracy: 0.7892 - val_loss: 0.3613 - val_accuracy: 0.8396
Epoch 2/15
782/782 [==============================] - 38s 49ms/step - loss: 0.0689 - accuracy: 0.9767 - val_loss: 0.6667 - val_accuracy: 0.8026
Epoch 3/15
782/782 [==============================] - 39s 50ms/step - loss: 0.0117 - accuracy: 0.9958 - val_loss: 0.9379 - val_accuracy: 0.8076
Epoch 4/15
782/782 [==============================] - 38s 49ms/step - loss: 0.0070 - accuracy: 0.9979 - val_loss: 1.1040 - val_accuracy: 0.7960
Epoch 5/15
782/782 [==============================] - 39s 50ms/step - loss: 0.0194 - accuracy: 0.9928 - val_loss: 0.9399 - val_accuracy: 0.8030
Epoch 6/15
782/782 [==============================] - 39s 50ms/step - loss: 0.0077 - accuracy: 0.9973 - val_loss:
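The log above shows the fully connected model overfitting quickly: training accuracy approaches 1.0 after two epochs while validation accuracy stalls around 0.80. For comparison, a minimal sketch of the recurrent variant of the same pipeline, replacing Flatten and the hidden Dense layer with an LSTM layer (the layer width 128 and the dropout rates are illustrative choices, not from the original script):

#sketch only: same data pipeline as above, with an LSTM instead of Flatten+Dense
from keras.models import Sequential
from keras.layers import Dense,Embedding,LSTM

model=Sequential()
model.add(Embedding(max_features,128,input_length=maxlen))
model.add(LSTM(128,dropout=0.2,recurrent_dropout=0.2))#returns only the last timestep's output
model.add(Dense(1,activation='sigmoid'))
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.fit(x_train,y_train,batch_size=batch_size,epochs=15,validation_data=(x_test,y_test))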