![e59e96bc89ce282a8af72eff503dacd8.png](https://img-blog.csdnimg.cn/img_convert/e59e96bc89ce282a8af72eff503dacd8.png)
Combining CNNs and RNNs to Process Long Sequences
The drawback of 1D CNNs on order-sensitive problems
When a 1D convnet processes sequence data, it is not very sensitive to the order of the time steps (beyond the local scale of its convolution windows). We can test this with the temperature-forecasting model from earlier. For the data preparation, see the previous posts:
Deep Learning Notes 26: RNN temperature-forecasting model — data preparation; Deep Learning Notes 27: RNN temperature-forecasting model — building the baseline model
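For reference, a minimal sketch of the timeseries generator that produces `train_gen` and `val_gen` in those posts. The names `lookback`, `delay`, `step`, and `float_data` follow the earlier articles; the random array here is only a stand-in for the real weather data, so this snippet runs on its own.

```python
import numpy as np

# Stand-in for the normalized weather data (14 features per time step).
float_data = np.random.randn(2000, 14)

def generator(data, lookback, delay, min_index, max_index,
              shuffle=False, batch_size=128, step=6):
    """Yield batches of (samples, targets) windows from a timeseries."""
    if max_index is None:
        max_index = len(data) - delay - 1
    i = min_index + lookback
    while True:
        if shuffle:
            rows = np.random.randint(min_index + lookback, max_index,
                                     size=batch_size)
        else:
            if i + batch_size >= max_index:
                i = min_index + lookback
            rows = np.arange(i, min(i + batch_size, max_index))
            i += len(rows)
        samples = np.zeros((len(rows), lookback // step, data.shape[-1]))
        targets = np.zeros((len(rows),))
        for j, row in enumerate(rows):
            indices = range(row - lookback, row, step)
            samples[j] = data[indices]
            targets[j] = data[row + delay][1]  # temperature column
        yield samples, targets

train_gen = generator(float_data, lookback=720, delay=144,
                      min_index=0, max_index=1500, shuffle=True)
x, y = next(train_gen)
# x has shape (batch_size, lookback // step, n_features)
```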
# Build the model
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop

model = Sequential()
model.add(layers.Conv1D(32, 5, activation='relu',
                        input_shape=(None, float_data.shape[-1])))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
model.add(layers.MaxPooling1D(3))
model.add(layers.Conv1D(32, 5, activation='relu'))
# Global max pooling over the time dimension
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d_4 (Conv1D)            (None, None, 32)          2272
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, None, 32)          0
_________________________________________________________________
conv1d_5 (Conv1D)            (None, None, 32)          5152
_________________________________________________________________
max_pooling1d_4 (MaxPooling1 (None, None, 32)          0
_________________________________________________________________
conv1d_6 (Conv1D)            (None, None, 32)          5152
_________________________________________________________________
global_max_pooling1d_2 (Glob (None, 32)                0
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 33
=================================================================
Total params: 12,609
Trainable params: 12,609
Non-trainable params: 0
_________________________________________________________________
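The Param # column of the summary can be checked by hand: a Conv1D layer has (kernel_size × input_channels + 1) × filters parameters (the +1 is one bias per filter). With 14 input features, the counts above follow directly:

```python
def conv1d_params(kernel_size, in_channels, filters):
    # kernel weights per filter plus one bias per filter
    return (kernel_size * in_channels + 1) * filters

print(conv1d_params(5, 14, 32))  # first Conv1D: 2272
print(conv1d_params(5, 32, 32))  # second and third Conv1D: 5152 each
print(32 * 1 + 1)                # Dense(1): 32 weights + 1 bias = 33
print(2272 + 5152 + 5152 + 33)   # total: 12609
```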
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(train_gen,
                              steps_per_epoch=500,
                              epochs=20,
                              validation_data=val_gen,
                              validation_steps=1000)
Epoch 1/20
500/500 [==============================] - 14s 27ms/step - loss: 0.3628 - val_loss: 0.4323
Epoch 2/20
500/500 [==============================] - 13s 27ms/step - loss: 0.3398 - val_loss: 0.4596
Epoch 3/20
500/500 [
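As in the earlier posts in this series, the training and validation loss can be plotted from the `history` object returned by `fit_generator`. The small dictionary below is a stand-in (using the first two epochs from the log above) so the snippet runs on its own; after training, use `history.history` instead.

```python
import matplotlib
matplotlib.use('Agg')  # render without a display
import matplotlib.pyplot as plt

# Stand-in for history.history; replace with the real object after training.
history_dict = {'loss': [0.3628, 0.3398], 'val_loss': [0.4323, 0.4596]}

epochs = range(1, len(history_dict['loss']) + 1)
plt.plot(epochs, history_dict['loss'], 'bo', label='Training loss')
plt.plot(epochs, history_dict['val_loss'], 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('MAE loss')
plt.legend()
plt.savefig('loss.png')
```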