
Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow (2nd Edition), Chapter 15: Processing Sequences Using RNNs and CNNs

Recurrent neural networks (RNNs) are generally used for processing sequences.

0. Importing the Required Libraries

import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib as mpl
from matplotlib import pyplot as plt

for i in (tf, np, mpl):
    print("{}: {}".format(i.__name__, i.__version__))

Output:

tensorflow: 2.2.0
numpy: 1.17.4
matplotlib: 3.1.2

1. Recurrent Neurons and Layers

1.1 Structure of a Recurrent Neural Network

A recurrent neuron receives the input x(t) at time step t as well as its own output from the previous time step, y(t-1), so each recurrent neuron has two sets of weights, wx and wy, corresponding to x(t) and y(t-1) respectively. For a whole mini-batch (or all samples at once), the layer's outputs are given by:

Y(t) = φ(X(t) Wx + Y(t-1) Wy + b)

     = φ([X(t) Y(t-1)] W + b)

where:

  1. Y(t) is an m×l matrix, where l is the number of neurons
  2. X(t) is an m×n matrix, where m is the number of samples and n the number of features
  3. Wx is an n×l matrix
  4. Wy is an l×l matrix
  5. b is a vector of size l
  6. Wx and Wy are usually concatenated vertically into a single (n+l)×l weight matrix W
  7. X(t) and Y(t-1) are usually concatenated horizontally into an m×(n+l) matrix (see the NumPy sketch below)
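
To make the shapes concrete, here is a minimal NumPy sketch of one forward step of a recurrent layer (the names are illustrative, not the book's code):

import numpy as np

m, n, l = 4, 3, 5                      # samples, features, neurons
X_t = np.random.rand(m, n)             # input X(t)
Y_prev = np.zeros((m, l))              # previous output Y(t-1)
W_x = np.random.randn(n, l)
W_y = np.random.randn(l, l)
b = np.zeros(l)

Y_t = np.tanh(X_t @ W_x + Y_prev @ W_y + b)        # first form

W = np.vstack([W_x, W_y])                          # (n+l, l)
Y_t2 = np.tanh(np.hstack([X_t, Y_prev]) @ W + b)   # concatenated form
assert np.allclose(Y_t, Y_t2)

Both forms produce the same m×l output, confirming the equivalence of the two equations above.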

1.2 Memory Cells

Because a recurrent neuron's output at time step t is a function of all the inputs from previous time steps, the network can be said to have a form of memory.

1.3 Input and Output Sequences

An RNN can simultaneously take a sequence of inputs and produce a sequence of outputs. This kind of sequence-to-sequence network is very useful for forecasting time series; machine translation (e.g., between Chinese and English) is another example.

If you feed the network a sequence but ignore the intermediate outputs, keeping only the final output, you get a sequence-to-vector network. For example, feed in a movie review and output a sentiment score. Such a model can also serve as an encoder.

If you feed the network a single vector and let it output a sequence, you get a vector-to-sequence network. For example, input an image and output a caption. Such a model can also serve as a decoder. (A Keras sketch of all three topologies follows.)
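
As an illustrative sketch (layer sizes are arbitrary, not the book's code), the three topologies map to Keras as follows; RepeatVector is one common way to feed a single vector in as a sequence:

from tensorflow import keras

# Sequence-to-sequence: one output per time step
seq_to_seq = keras.Sequential([
    keras.layers.SimpleRNN(20, return_sequences=True, input_shape=[None, 1])
])

# Sequence-to-vector: keep only the last output (the default)
seq_to_vec = keras.Sequential([
    keras.layers.SimpleRNN(20, input_shape=[None, 1])
])

# Vector-to-sequence: repeat the input vector, then output a sequence
vec_to_seq = keras.Sequential([
    keras.layers.RepeatVector(10, input_shape=[8]),   # 8-dim vector -> 10 steps
    keras.layers.SimpleRNN(20, return_sequences=True)
])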

2. Training RNNs

To train an RNN, you unroll it through time and then apply ordinary backpropagation. This strategy is called backpropagation through time (BPTT).

Note: the loss function of an RNN may ignore some of the outputs; for example, in a sequence-to-vector network only the output of the last time step is used, as in the sketch below.
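
As a minimal illustration of the idea (not the book's code), one can unroll a small cell manually inside a GradientTape; the gradients then flow back through every time step:

import tensorflow as tf

cell = tf.keras.layers.SimpleRNNCell(4)
X = tf.random.normal([8, 5, 3])     # [batch, time steps, features]
y = tf.random.normal([8, 4])

with tf.GradientTape() as tape:
    states = [tf.zeros([8, 4])]
    for t in range(5):              # unroll through time
        output, states = cell(X[:, t], states)
    loss = tf.reduce_mean((output - y) ** 2)   # loss on the last output only

grads = tape.gradient(loss, cell.trainable_variables)  # BPTT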

3. Forecasting a Time Series

3.1 Generating Time Series Data

def generate_time_series(batch_size, n_steps):
    freq1, freq2, offsets1, offsets2 = np.random.rand(4, batch_size, 1)
    time = np.linspace(0, 1, n_steps)
    series = 0.5 * np.sin((time - offsets1) * (freq1 * 10 + 10))  #   wave 1
    series += 0.2 * np.sin((time - offsets2) * (freq2 * 20 + 20)) # + wave 2
    series += 0.1 * (np.random.rand(batch_size, n_steps) - 0.5)   # + noise
    return series[..., np.newaxis].astype(np.float32)

np.random.seed(42)

n_steps = 50
series = generate_time_series(10000, n_steps + 1)
X_train, y_train = series[:7000, :n_steps], series[:7000, -1]
X_valid, y_valid = series[7000:9000, :n_steps], series[7000:9000, -1]
X_test, y_test = series[9000:, :n_steps], series[9000:, -1]

X_train.shape, y_train.shape

Output:

((7000, 50, 1), (7000, 1))

def plot_series(series, y=None, y_pred=None, x_label="$t$", y_label="$x(t)$"):
    plt.plot(series, ".-")
    if y is not None:
        plt.plot(n_steps, y, "bx", markersize=10)
    if y_pred is not None:
        plt.plot(n_steps, y_pred, "ro")
    plt.grid(True)
    if x_label:
        plt.xlabel(x_label, fontsize=16)
    if y_label:
        plt.ylabel(y_label, fontsize=16, rotation=0)
    plt.hlines(0, 0, 100, linewidth=1)
    plt.axis([0, n_steps + 1, -1, 1])

fig, axes = plt.subplots(nrows=1, ncols=3, sharey=True, figsize=(12, 4))
for col in range(3):
    plt.sca(axes[col])
    plot_series(X_valid[col, :, 0], y_valid[col, 0],
                y_label=("$x(t)$" if col==0 else None))

plt.tight_layout()
plt.show()

Output: (plot of three sample series from the validation set)

3.2 Baseline Metrics

A simple baseline is naive forecasting: predict that the next value will equal the last observed value:

y_pred = X_valid[:, -1]
np.mean(keras.losses.mean_squared_error(y_valid, y_pred))

Output:

0.020211367

plot_series(X_valid[0, :, 0], y_valid[0, 0], y_pred[0, 0])
plt.show()

Output: (plot of the first validation series with the naive forecast)

Using a linear model for prediction:

np.random.seed(42)
tf.random.set_seed(42)

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=[50,1]),
    tf.keras.layers.Dense(1)
])

model.compile(loss="mse", optimizer="adam")

history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))

Output:

Epoch 1/20
219/219 [==============================] - 1s 2ms/step - loss: 0.1001 - val_loss: 0.0545
Epoch 2/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0379 - val_loss: 0.0266
Epoch 3/20
219/219 [==============================] - 1s 2ms/step - loss: 0.0202 - val_loss: 0.0157
Epoch 4/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0131 - val_loss: 0.0116
Epoch 5/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0103 - val_loss: 0.0098
Epoch 6/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0089 - val_loss: 0.0087
Epoch 7/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0080 - val_loss: 0.0079
Epoch 8/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0073 - val_loss: 0.0071
Epoch 9/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0066 - val_loss: 0.0066
Epoch 10/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0061 - val_loss: 0.0062
Epoch 11/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0057 - val_loss: 0.0057
Epoch 12/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0054 - val_loss: 0.0055
Epoch 13/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0052 - val_loss: 0.0052
Epoch 14/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0049 - val_loss: 0.0049
Epoch 15/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0048 - val_loss: 0.0048
Epoch 16/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0046 - val_loss: 0.0048
Epoch 17/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0045 - val_loss: 0.0045
Epoch 18/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0044 - val_loss: 0.0044
Epoch 19/20
219/219 [==============================] - 0s 2ms/step - loss: 0.0043 - val_loss: 0.0043
Epoch 20/20
219/219 [==============================] - 1s 2ms/step - loss: 0.0042 - val_loss: 0.0042

model.evaluate(X_valid, y_valid)

Output:

63/63 [==============================] - 0s 1ms/step - loss: 0.0042

0.004168085753917694

def plot_learning_curves(loss, val_loss):
    plt.plot(np.arange(len(loss)) + 0.5, loss, "b.-", label="Training loss")
    plt.plot(np.arange(len(val_loss)) + 1, val_loss, "r.-", label="Validation loss")
    plt.gca().xaxis.set_major_locator(mpl.ticker.MaxNLocator(integer=True))
    plt.axis([1, 20, 0, 0.05])
    plt.legend(fontsize=14)
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    plt.grid(True)

plot_learning_curves(history.history["loss"], history.history["val_loss"])
plt.show()

Output: (learning-curve plot)

y_pred = model.predict(X_valid)
plot_series(X_valid[0, :, 0], y_valid[0, 0], y_pred[0, 0])

plt.show()

Output: (plot of the series and the linear model's forecast)

3.3 Implementing a Simple RNN

np.random.seed(42)
tf.random.set_seed(42)

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(1, input_shape=[None,1])
])

model.compile(loss="mse", optimizer=tf.keras.optimizers.Adam(lr=0.005))

history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))

Output:

Epoch 1/20
219/219 [==============================] - 9s 41ms/step - loss: 0.0967 - val_loss: 0.0489
Epoch 2/20
219/219 [==============================] - 9s 40ms/step - loss: 0.0369 - val_loss: 0.0296
Epoch 3/20
219/219 [==============================] - 9s 40ms/step - loss: 0.0253 - val_loss: 0.0218
Epoch 4/20
219/219 [==============================] - 9s 40ms/step - loss: 0.0198 - val_loss: 0.0177
Epoch 5/20
219/219 [==============================] - 9s 41ms/step - loss: 0.0166 - val_loss: 0.0151
Epoch 6/20
219/219 [==============================] - 9s 41ms/step - loss: 0.0146 - val_loss: 0.0134
Epoch 7/20
219/219 [==============================] - 9s 40ms/step - loss: 0.0132 - val_loss: 0.0123
Epoch 8/20
219/219 [==============================] - 9s 41ms/step - loss: 0.0124 - val_loss: 0.0116
Epoch 9/20
219/219 [==============================] - 9s 41ms/step - loss: 0.0118 - val_loss: 0.0112
Epoch 10/20
219/219 [==============================] - 9s 41ms/step - loss: 0.0116 - val_loss: 0.0110
Epoch 11/20
219/219 [==============================] - 9s 40ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 12/20
219/219 [==============================] - 9s 41ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 13/20
219/219 [==============================] - 9s 42ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 14/20
219/219 [==============================] - 9s 40ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 15/20
219/219 [==============================] - 9s 40ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 16/20
219/219 [==============================] - 9s 40ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 17/20
219/219 [==============================] - 9s 41ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 18/20
219/219 [==============================] - 9s 40ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 19/20
219/219 [==============================] - 9s 42ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 20/20
219/219 [==============================] - 9s 41ms/step - loss: 0.0114 - val_loss: 0.0109

model.summary()

Output:

Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
simple_rnn_2 (SimpleRNN)     (None, 1)                 3         
=================================================================
Total params: 3
Trainable params: 3
Non-trainable params: 0
_________________________________________________________________
model.evaluate(X_valid, y_valid)

Output:

63/63 [==============================] - 0s 6ms/step - loss: 0.0109

0.010881561785936356

plt.figure(figsize=(12,5))

plt.subplot(121)
plot_learning_curves(history.history["loss"], history.history["val_loss"])

plt.subplot(122)
y_pred = model.predict(X_valid)
plot_series(X_valid[0, :, 0], y_valid[0, 0], y_pred[0, 0])

plt.tight_layout()
plt.show()

Output: (learning curves and a sample forecast)

As the summary above shows, this RNN has just one layer with a single neuron, for a total of three parameters: wx, wy, and b. Because an RNN can process sequences of any length, there is no need to specify the input length (hence input_shape=[None, 1]).

The SimpleRNN layer uses the hyperbolic tangent (tanh) activation function by default.

By default, recurrent layers in Keras output only the final time step's result; to keep the output at every time step, set return_sequences=True (demonstrated below).

After 20 epochs of training, this simple RNN's loss only drops to about 0.01, which is not even as good as the linear model! This is likely because the RNN is too simple: with only 3 parameters it cannot fit the data well.
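
To see what return_sequences changes, here is a quick shape check (an illustrative sketch, not the book's code; the layer sizes are arbitrary):

import numpy as np
from tensorflow import keras

X = np.random.rand(2, 50, 1).astype(np.float32)   # [batch, time steps, features]

last_only = keras.layers.SimpleRNN(20)                         # default behavior
all_steps = keras.layers.SimpleRNN(20, return_sequences=True)

print(last_only(X).shape)   # (2, 20): only the last time step's output
print(all_steps(X).shape)   # (2, 50, 20): one output per time step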

3.4 Deep RNNs

np.random.seed(42)
tf.random.set_seed(42)

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(20, return_sequences=True, input_shape=[None,1]),
    tf.keras.layers.SimpleRNN(20, return_sequences=True),
    tf.keras.layers.SimpleRNN(1)
])

model.compile(loss="mse", optimizer="adam")

history = model.fit(X_train, y_train, epochs=50, validation_data=(X_valid, y_valid))

Output:

Epoch 1/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0492 - val_loss: 0.0090
Epoch 2/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0070 - val_loss: 0.0065
Epoch 3/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0053 - val_loss: 0.0045
Epoch 4/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0045 - val_loss: 0.0040
Epoch 5/50
219/219 [==============================] - 26s 117ms/step - loss: 0.0042 - val_loss: 0.0040
Epoch 6/50
219/219 [==============================] - 27s 123ms/step - loss: 0.0038 - val_loss: 0.0036
Epoch 7/50
219/219 [==============================] - 27s 121ms/step - loss: 0.0038 - val_loss: 0.0040
Epoch 8/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0037 - val_loss: 0.0033
Epoch 9/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0036 - val_loss: 0.0032
Epoch 10/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0035 - val_loss: 0.0031
Epoch 11/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0034 - val_loss: 0.0030
Epoch 12/50
219/219 [==============================] - 26s 117ms/step - loss: 0.0033 - val_loss: 0.0031
Epoch 13/50
219/219 [==============================] - 26s 121ms/step - loss: 0.0034 - val_loss: 0.0031
Epoch 14/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0033 - val_loss: 0.0032
Epoch 15/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0034 - val_loss: 0.0033
Epoch 16/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0035 - val_loss: 0.0030
Epoch 17/50
219/219 [==============================] - 26s 117ms/step - loss: 0.0033 - val_loss: 0.0029
Epoch 18/50
219/219 [==============================] - 25s 116ms/step - loss: 0.0033 - val_loss: 0.0030
Epoch 19/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0032 - val_loss: 0.0029
Epoch 20/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0032 - val_loss: 0.0029
Epoch 21/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0032 - val_loss: 0.0036
Epoch 22/50
219/219 [==============================] - 26s 120ms/step - loss: 0.0032 - val_loss: 0.0029
Epoch 23/50
219/219 [==============================] - 26s 117ms/step - loss: 0.0031 - val_loss: 0.0028
Epoch 24/50
219/219 [==============================] - 25s 116ms/step - loss: 0.0031 - val_loss: 0.0031
Epoch 25/50
219/219 [==============================] - 25s 116ms/step - loss: 0.0031 - val_loss: 0.0028
Epoch 26/50
219/219 [==============================] - 25s 116ms/step - loss: 0.0031 - val_loss: 0.0028
Epoch 27/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0030 - val_loss: 0.0028
Epoch 28/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0030 - val_loss: 0.0031
Epoch 29/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0030 - val_loss: 0.0031
Epoch 30/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0030 - val_loss: 0.0028
Epoch 31/50
219/219 [==============================] - 26s 121ms/step - loss: 0.0030 - val_loss: 0.0027
Epoch 32/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0030 - val_loss: 0.0026
Epoch 33/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0029 - val_loss: 0.0027
Epoch 34/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0030 - val_loss: 0.0026
Epoch 35/50
219/219 [==============================] - 26s 120ms/step - loss: 0.0029 - val_loss: 0.0028
Epoch 36/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0029 - val_loss: 0.0026
Epoch 37/50
219/219 [==============================] - 26s 120ms/step - loss: 0.0029 - val_loss: 0.0029
Epoch 38/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0028 - val_loss: 0.0027
Epoch 39/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0029 - val_loss: 0.0025
Epoch 40/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0027 - val_loss: 0.0029
Epoch 41/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0028 - val_loss: 0.0026
Epoch 42/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0029 - val_loss: 0.0027
Epoch 43/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0027 - val_loss: 0.0027
Epoch 44/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0027 - val_loss: 0.0026
Epoch 45/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0026 - val_loss: 0.0023
Epoch 46/50
219/219 [==============================] - 26s 119ms/step - loss: 0.0026 - val_loss: 0.0026
Epoch 47/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0026 - val_loss: 0.0027
Epoch 48/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0026 - val_loss: 0.0024
Epoch 49/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0025 - val_loss: 0.0024
Epoch 50/50
219/219 [==============================] - 26s 118ms/step - loss: 0.0025 - val_loss: 0.0023

model.evaluate(X_valid, y_valid)

Output:

63/63 [==============================] - 1s 14ms/step - loss: 0.0023

0.0022553822491317987

plt.figure(figsize=(12,5))

plt.subplot(121)
plot_learning_curves(history.history["loss"], history.history["val_loss"])

plt.subplot(122)
y_pred = model.predict(X_valid)
plot_series(X_valid[0, :, 0], y_valid[0, 0], y_pred[0, 0])

plt.tight_layout()
plt.show()

Output: (learning curves and a sample forecast)

As the output above shows, the deeper RNN performs much better.

Replacing the last recurrent layer with a Dense layer is more appropriate, since it allows more choices for the activation function:

np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([
    keras.layers.SimpleRNN(20, return_sequences=True, input_shape=[None, 1]),
    keras.layers.SimpleRNN(20),
    keras.layers.Dense(1)
])

model.compile(loss="mse", optimizer="adam")

history = model.fit(X_train, y_train, epochs=50, validation_data=(X_valid, y_valid))

Output:

Epoch 1/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0232 - val_loss: 0.0052
Epoch 2/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0043 - val_loss: 0.0036
Epoch 3/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0035 - val_loss: 0.0031
Epoch 4/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0033 - val_loss: 0.0033
Epoch 5/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0033 - val_loss: 0.0034
Epoch 6/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0031 - val_loss: 0.0029
Epoch 7/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0031 - val_loss: 0.0034
Epoch 8/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0032 - val_loss: 0.0028
Epoch 9/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0031 - val_loss: 0.0028
Epoch 10/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0030 - val_loss: 0.0029
Epoch 11/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0029 - val_loss: 0.0027
Epoch 12/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0030 - val_loss: 0.0031
Epoch 13/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0030 - val_loss: 0.0031
Epoch 14/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0030 - val_loss: 0.0030
Epoch 15/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0030 - val_loss: 0.0030
Epoch 16/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0030 - val_loss: 0.0027
Epoch 17/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0029 - val_loss: 0.0028
Epoch 18/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0030 - val_loss: 0.0027
Epoch 19/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0029 - val_loss: 0.0028
Epoch 20/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0029 - val_loss: 0.0026
Epoch 21/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0029 - val_loss: 0.0033
Epoch 22/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0029 - val_loss: 0.0028
Epoch 23/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0028 - val_loss: 0.0027
Epoch 24/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0029 - val_loss: 0.0030
Epoch 25/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0028 - val_loss: 0.0026
Epoch 26/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0028 - val_loss: 0.0026
Epoch 27/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0028 - val_loss: 0.0028
Epoch 28/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0028 - val_loss: 0.0029
Epoch 29/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0028 - val_loss: 0.0027
Epoch 30/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0028 - val_loss: 0.0027
Epoch 31/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0028 - val_loss: 0.0028
Epoch 32/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0028 - val_loss: 0.0026
Epoch 33/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0027 - val_loss: 0.0025
Epoch 34/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0027 - val_loss: 0.0025
Epoch 35/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0027 - val_loss: 0.0029
Epoch 36/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0027 - val_loss: 0.0025
Epoch 37/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0027 - val_loss: 0.0026
Epoch 38/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0027 - val_loss: 0.0028
Epoch 39/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0027 - val_loss: 0.0025
Epoch 40/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0026 - val_loss: 0.0026
Epoch 41/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0027 - val_loss: 0.0027
Epoch 42/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0028 - val_loss: 0.0028
Epoch 43/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0027 - val_loss: 0.0027
Epoch 44/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0026 - val_loss: 0.0026
Epoch 45/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0027 - val_loss: 0.0025
Epoch 46/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0027 - val_loss: 0.0028
Epoch 47/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0027 - val_loss: 0.0028
Epoch 48/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0026 - val_loss: 0.0025
Epoch 49/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0026 - val_loss: 0.0026
Epoch 50/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0026 - val_loss: 0.0025

model.evaluate(X_valid, y_valid)

Output:

63/63 [==============================] - 1s 10ms/step - loss: 0.0025

0.002489288104698062

plot_learning_curves(history.history["loss"], history.history["val_loss"])
plt.show()

Output: (learning-curve plot)

3.5 Forecasting Several Time Steps Ahead

To forecast 10 time steps ahead, one option is to use the trained model to predict the next value, append it to the inputs, and repeat 10 times:

np.random.seed(43) # not 42, as it would give the first series in the train set

series = generate_time_series(1, n_steps + 10)
X_new, Y_new = series[:, :n_steps], series[:, n_steps:]
X = X_new
for step_ahead in range(10):
    y_pred_one = model.predict(X[:, step_ahead:])[:, np.newaxis, :]
    X = np.concatenate([X, y_pred_one], axis=1)

Y_pred = X[:, n_steps:]

Y_pred.shape

Output:

(1, 10, 1)

def plot_multiple_forecasts(X, Y, Y_pred):
    n_steps = X.shape[1]
    ahead = Y.shape[1]
    plot_series(X[0, :, 0])
    plt.plot(np.arange(n_steps, n_steps + ahead), Y[0, :, 0], "ro-", label="Actual")
    plt.plot(np.arange(n_steps, n_steps + ahead), Y_pred[0, :, 0], "bx-", label="Forecast", markersize=10)
    plt.axis([0, n_steps + ahead, -1, 1])
    plt.legend(fontsize=14)

plot_multiple_forecasts(X_new, Y_new, Y_pred)

plt.show()

Output: (plot of the 10-step-ahead forecast)

As the output above shows, the further ahead the prediction, the less accurate it becomes and the more it drifts, presumably because errors accumulate.

Another approach is to train an RNN that predicts all 10 next values at once:

  • 1. Using a linear model

First regenerate the series with 10 extra time steps; for reference, the code below also evaluates the previous RNN applied iteratively, and a naive persistence forecast, before training the linear model.

np.random.seed(42)

n_steps = 50
series = generate_time_series(10000, n_steps+10)
X_train, y_train = series[:7000, :n_steps], series[:7000, -10:, 0]
X_valid, y_valid = series[7000:9000, :n_steps], series[7000:9000, -10:, 0]
X_test, y_test = series[9000:, :n_steps], series[9000:, -10:, 0]

X = X_valid
for step_ahead in range(10):
    y_pred_one = model.predict(X)[:, np.newaxis, :]
    X = np.concatenate([X, y_pred_one], axis=1)
    
Y_pred = X[:, n_steps:, 0]
Y_pred.shape

Output:

(2000, 10)

np.mean(tf.keras.metrics.mean_squared_error(y_valid, Y_pred))

Output:

0.025756942

Y_naive_pred = y_valid[:, -1:]
np.mean(tf.keras.metrics.mean_squared_error(y_valid, Y_naive_pred))

Output:

0.22278848

np.random.seed(42)
tf.random.set_seed(42)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=[50,1]),
    tf.keras.layers.Dense(10)
])

model.compile(loss="mse", optimizer="adam")

history = model.fit(X_train, y_train, epochs=50, validation_data=(X_valid, y_valid))

Output:

Epoch 1/50
219/219 [==============================] - 1s 2ms/step - loss: 0.1343 - val_loss: 0.0606
Epoch 2/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0496 - val_loss: 0.0425
Epoch 3/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0385 - val_loss: 0.0353
Epoch 4/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0331 - val_loss: 0.0311
Epoch 5/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0298 - val_loss: 0.0283
Epoch 6/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0273 - val_loss: 0.0264
Epoch 7/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0256 - val_loss: 0.0249
Epoch 8/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0244 - val_loss: 0.0237
Epoch 9/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0234 - val_loss: 0.0229
Epoch 10/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0227 - val_loss: 0.0222
Epoch 11/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0220 - val_loss: 0.0216
Epoch 12/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0215 - val_loss: 0.0212
Epoch 13/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0210 - val_loss: 0.0208
Epoch 14/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0207 - val_loss: 0.0207
Epoch 15/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0203 - val_loss: 0.0202
Epoch 16/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0200 - val_loss: 0.0199
Epoch 17/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0197 - val_loss: 0.0195
Epoch 18/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0193 - val_loss: 0.0192
Epoch 19/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0191 - val_loss: 0.0189
Epoch 20/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0188 - val_loss: 0.0187
Epoch 21/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0186 - val_loss: 0.0187
Epoch 22/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0184 - val_loss: 0.0183
Epoch 23/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0181 - val_loss: 0.0182
Epoch 24/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0180 - val_loss: 0.0180
Epoch 25/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0178 - val_loss: 0.0176
Epoch 26/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0176 - val_loss: 0.0177
Epoch 27/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0175 - val_loss: 0.0175
Epoch 28/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0173 - val_loss: 0.0173
Epoch 29/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0172 - val_loss: 0.0172
Epoch 30/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0171 - val_loss: 0.0171
Epoch 31/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0169 - val_loss: 0.0170
Epoch 32/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0169 - val_loss: 0.0169
Epoch 33/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0167 - val_loss: 0.0167
Epoch 34/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0166 - val_loss: 0.0167
Epoch 35/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0166 - val_loss: 0.0167
Epoch 36/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0165 - val_loss: 0.0166
Epoch 37/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0164 - val_loss: 0.0164
Epoch 38/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0163 - val_loss: 0.0167
Epoch 39/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0163 - val_loss: 0.0166
Epoch 40/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0162 - val_loss: 0.0163
Epoch 41/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0162 - val_loss: 0.0163
Epoch 42/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0161 - val_loss: 0.0162
Epoch 43/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0161 - val_loss: 0.0162
Epoch 44/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0160 - val_loss: 0.0162
Epoch 45/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0160 - val_loss: 0.0162
Epoch 46/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0160 - val_loss: 0.0163
Epoch 47/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0159 - val_loss: 0.0162
Epoch 48/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0159 - val_loss: 0.0161
Epoch 49/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0158 - val_loss: 0.0160
Epoch 50/50
219/219 [==============================] - 0s 2ms/step - loss: 0.0158 - val_loss: 0.0160

  • 2. Using an RNN model

np.random.seed(42)
tf.random.set_seed(42)

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(20, return_sequences=True, input_shape=[None,1]),
    tf.keras.layers.SimpleRNN(20),
    tf.keras.layers.Dense(10)
])

model.compile(loss="mse", optimizer="adam")

history = model.fit(X_train, y_train, epochs=50, validation_data=(X_valid, y_valid))

Output:

Epoch 1/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0669 - val_loss: 0.0317
Epoch 2/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0265 - val_loss: 0.0200
Epoch 3/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0183 - val_loss: 0.0160
Epoch 4/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0155 - val_loss: 0.0144
Epoch 5/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0139 - val_loss: 0.0118
Epoch 6/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0128 - val_loss: 0.0112
Epoch 7/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0122 - val_loss: 0.0110
Epoch 8/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0115 - val_loss: 0.0103
Epoch 9/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0111 - val_loss: 0.0112
Epoch 10/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0110 - val_loss: 0.0100
Epoch 11/50
219/219 [==============================] - 18s 83ms/step - loss: 0.0108 - val_loss: 0.0103
Epoch 12/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0102 - val_loss: 0.0096
Epoch 13/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0104 - val_loss: 0.0100
Epoch 14/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0098 - val_loss: 0.0103
Epoch 15/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0095 - val_loss: 0.0107
Epoch 16/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0092 - val_loss: 0.0089
Epoch 17/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0094 - val_loss: 0.0111
Epoch 18/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0095 - val_loss: 0.0094
Epoch 19/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0093 - val_loss: 0.0083
Epoch 20/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0094 - val_loss: 0.0085
Epoch 21/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0088 - val_loss: 0.0080
Epoch 22/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0086 - val_loss: 0.0084
Epoch 23/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0085 - val_loss: 0.0086
Epoch 24/50
219/219 [==============================] - 18s 82ms/step - loss: 0.0085 - val_loss: 0.0079
Epoch 25/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0088 - val_loss: 0.0096
Epoch 26/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0089 - val_loss: 0.0087
Epoch 27/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0085 - val_loss: 0.0085
Epoch 28/50
219/219 [==============================] - 17s 76ms/step - loss: 0.0085 - val_loss: 0.0085
Epoch 29/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0083 - val_loss: 0.0083
Epoch 30/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0086 - val_loss: 0.0081
Epoch 31/50
219/219 [==============================] - 17s 76ms/step - loss: 0.0086 - val_loss: 0.0084
Epoch 32/50
219/219 [==============================] - 17s 76ms/step - loss: 0.0086 - val_loss: 0.0094
Epoch 33/50
219/219 [==============================] - 17s 76ms/step - loss: 0.0080 - val_loss: 0.0086
Epoch 34/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0082 - val_loss: 0.0078
Epoch 35/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0083 - val_loss: 0.0074
Epoch 36/50
219/219 [==============================] - 17s 76ms/step - loss: 0.0081 - val_loss: 0.0081
Epoch 37/50
219/219 [==============================] - 17s 77ms/step - loss: 0.0081 - val_loss: 0.0075
Epoch 38/50
219/219 [==============================] - 17s 77ms/step - loss: 0.0082 - val_loss: 0.0080
Epoch 39/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0086 - val_loss: 0.0117
Epoch 40/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0081 - val_loss: 0.0116
Epoch 41/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0079 - val_loss: 0.0074
Epoch 42/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0081 - val_loss: 0.0084
Epoch 43/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0083 - val_loss: 0.0085
Epoch 44/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0080 - val_loss: 0.0088
Epoch 45/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0079 - val_loss: 0.0080
Epoch 46/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0078 - val_loss: 0.0078
Epoch 47/50
219/219 [==============================] - 17s 77ms/step - loss: 0.0079 - val_loss: 0.0077
Epoch 48/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0080 - val_loss: 0.0071
Epoch 49/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0080 - val_loss: 0.0076
Epoch 50/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0081 - val_loss: 0.0072

np.random.seed(43)

series = generate_time_series(1, 50 + 10)
X_new, Y_new = series[:, :50, :], series[:, -10:, :]
Y_pred = model.predict(X_new)[..., np.newaxis]

plot_multiple_forecasts(X_new, Y_new, Y_pred)
plt.show()

Output: (plot of the 10-step forecast)

The method above uses time steps 0-49 to forecast the values at time steps 50-59.

  • 3. Using a TimeDistributed layer

With a TimeDistributed layer, the model predicts the next 10 values at every time step: time step 0 predicts values 1 to 10, time step 1 predicts values 2 to 11, and so on, so the last time step predicts values 50 to 59.

np.random.seed(42)

n_steps = 50
series = generate_time_series(10000, n_steps + 10)
X_train = series[:7000, :n_steps]
X_valid = series[7000:9000, :n_steps]
X_test = series[9000:, :n_steps]
Y = np.empty((10000, n_steps, 10))  # each target: the next 10 values at every time step
for step_ahead in range(1, 10 + 1):
    Y[..., step_ahead - 1] = series[..., step_ahead:step_ahead + n_steps, 0]
Y_train = Y[:7000]
Y_valid = Y[7000:9000]
Y_test = Y[9000:]

X_train.shape, Y_train.shape

Output:

((7000, 50, 1), (7000, 50, 10))

np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([
    keras.layers.SimpleRNN(20, return_sequences=True, input_shape=[None, 1]),
    keras.layers.SimpleRNN(20, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(10))
])

def last_time_step_mse(Y_true, Y_pred):
    # Evaluate the MSE only on the outputs of the last time step
    return keras.metrics.mean_squared_error(Y_true[:, -1], Y_pred[:, -1])

model.compile(loss="mse", optimizer=keras.optimizers.Adam(lr=0.01), metrics=[last_time_step_mse])
history = model.fit(X_train, Y_train, epochs=50, validation_data=(X_valid, Y_valid))

Output:

Epoch 1/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0508 - last_time_step_mse: 0.0400 - val_loss: 0.0429 - val_last_time_step_mse: 0.0324
Epoch 2/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0395 - last_time_step_mse: 0.0283 - val_loss: 0.0351 - val_last_time_step_mse: 0.0243
Epoch 3/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0326 - last_time_step_mse: 0.0213 - val_loss: 0.0302 - val_last_time_step_mse: 0.0186
Epoch 4/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0295 - last_time_step_mse: 0.0182 - val_loss: 0.0277 - val_last_time_step_mse: 0.0156
Epoch 5/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0274 - last_time_step_mse: 0.0155 - val_loss: 0.0265 - val_last_time_step_mse: 0.0160
Epoch 6/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0252 - last_time_step_mse: 0.0128 - val_loss: 0.0254 - val_last_time_step_mse: 0.0133
Epoch 7/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0232 - last_time_step_mse: 0.0104 - val_loss: 0.0231 - val_last_time_step_mse: 0.0100
Epoch 8/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0218 - last_time_step_mse: 0.0089 - val_loss: 0.0214 - val_last_time_step_mse: 0.0086
Epoch 9/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0214 - last_time_step_mse: 0.0087 - val_loss: 0.0207 - val_last_time_step_mse: 0.0078
Epoch 10/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0210 - last_time_step_mse: 0.0084 - val_loss: 0.0215 - val_last_time_step_mse: 0.0092
Epoch 11/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0204 - last_time_step_mse: 0.0081 - val_loss: 0.0207 - val_last_time_step_mse: 0.0087
Epoch 12/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0201 - last_time_step_mse: 0.0079 - val_loss: 0.0190 - val_last_time_step_mse: 0.0067
Epoch 13/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0197 - last_time_step_mse: 0.0075 - val_loss: 0.0211 - val_last_time_step_mse: 0.0084
Epoch 14/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0193 - last_time_step_mse: 0.0071 - val_loss: 0.0189 - val_last_time_step_mse: 0.0065
Epoch 15/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0195 - last_time_step_mse: 0.0074 - val_loss: 0.0183 - val_last_time_step_mse: 0.0067
Epoch 16/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0188 - last_time_step_mse: 0.0067 - val_loss: 0.0193 - val_last_time_step_mse: 0.0091
Epoch 17/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0188 - last_time_step_mse: 0.0068 - val_loss: 0.0192 - val_last_time_step_mse: 0.0080
Epoch 18/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0186 - last_time_step_mse: 0.0070 - val_loss: 0.0188 - val_last_time_step_mse: 0.0070
Epoch 19/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0186 - last_time_step_mse: 0.0068 - val_loss: 0.0173 - val_last_time_step_mse: 0.0057
Epoch 20/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0183 - last_time_step_mse: 0.0068 - val_loss: 0.0200 - val_last_time_step_mse: 0.0090
Epoch 21/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0182 - last_time_step_mse: 0.0068 - val_loss: 0.0191 - val_last_time_step_mse: 0.0074
Epoch 22/50
219/219 [==============================] - 18s 82ms/step - loss: 0.0185 - last_time_step_mse: 0.0070 - val_loss: 0.0173 - val_last_time_step_mse: 0.0057
Epoch 23/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0183 - last_time_step_mse: 0.0069 - val_loss: 0.0179 - val_last_time_step_mse: 0.0067
Epoch 24/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0180 - last_time_step_mse: 0.0067 - val_loss: 0.0182 - val_last_time_step_mse: 0.0071
Epoch 25/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0180 - last_time_step_mse: 0.0066 - val_loss: 0.0173 - val_last_time_step_mse: 0.0061
Epoch 26/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0178 - last_time_step_mse: 0.0063 - val_loss: 0.0182 - val_last_time_step_mse: 0.0075
Epoch 27/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0178 - last_time_step_mse: 0.0065 - val_loss: 0.0180 - val_last_time_step_mse: 0.0069
Epoch 28/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0177 - last_time_step_mse: 0.0064 - val_loss: 0.0169 - val_last_time_step_mse: 0.0059
Epoch 29/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0175 - last_time_step_mse: 0.0062 - val_loss: 0.0180 - val_last_time_step_mse: 0.0069
Epoch 30/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0178 - last_time_step_mse: 0.0066 - val_loss: 0.0171 - val_last_time_step_mse: 0.0058
Epoch 31/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0176 - last_time_step_mse: 0.0064 - val_loss: 0.0168 - val_last_time_step_mse: 0.0055
Epoch 32/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0174 - last_time_step_mse: 0.0061 - val_loss: 0.0174 - val_last_time_step_mse: 0.0068
Epoch 33/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0175 - last_time_step_mse: 0.0063 - val_loss: 0.0181 - val_last_time_step_mse: 0.0068
Epoch 34/50
219/219 [==============================] - 18s 81ms/step - loss: 0.0180 - last_time_step_mse: 0.0068 - val_loss: 0.0171 - val_last_time_step_mse: 0.0063
Epoch 35/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0177 - last_time_step_mse: 0.0067 - val_loss: 0.0186 - val_last_time_step_mse: 0.0078
Epoch 36/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0174 - last_time_step_mse: 0.0063 - val_loss: 0.0169 - val_last_time_step_mse: 0.0061
Epoch 37/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0173 - last_time_step_mse: 0.0063 - val_loss: 0.0173 - val_last_time_step_mse: 0.0070
Epoch 38/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0171 - last_time_step_mse: 0.0060 - val_loss: 0.0172 - val_last_time_step_mse: 0.0062
Epoch 39/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0170 - last_time_step_mse: 0.0059 - val_loss: 0.0175 - val_last_time_step_mse: 0.0075
Epoch 40/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0173 - last_time_step_mse: 0.0063 - val_loss: 0.0174 - val_last_time_step_mse: 0.0059
Epoch 41/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0170 - last_time_step_mse: 0.0060 - val_loss: 0.0167 - val_last_time_step_mse: 0.0060
Epoch 42/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0170 - last_time_step_mse: 0.0060 - val_loss: 0.0170 - val_last_time_step_mse: 0.0060
Epoch 43/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0170 - last_time_step_mse: 0.0060 - val_loss: 0.0176 - val_last_time_step_mse: 0.0068
Epoch 44/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0173 - last_time_step_mse: 0.0062 - val_loss: 0.0164 - val_last_time_step_mse: 0.0055
Epoch 45/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0169 - last_time_step_mse: 0.0058 - val_loss: 0.0166 - val_last_time_step_mse: 0.0057
Epoch 46/50
219/219 [==============================] - 17s 78ms/step - loss: 0.0173 - last_time_step_mse: 0.0062 - val_loss: 0.0160 - val_last_time_step_mse: 0.0051
Epoch 47/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0169 - last_time_step_mse: 0.0060 - val_loss: 0.0177 - val_last_time_step_mse: 0.0066
Epoch 48/50
219/219 [==============================] - 17s 80ms/step - loss: 0.0170 - last_time_step_mse: 0.0061 - val_loss: 0.0171 - val_last_time_step_mse: 0.0060
Epoch 49/50
219/219 [==============================] - 17s 79ms/step - loss: 0.0168 - last_time_step_mse: 0.0060 - val_loss: 0.0166 - val_last_time_step_mse: 0.0053
Epoch 50/50
219/219 [==============================] - 18s 80ms/step - loss: 0.0169 - last_time_step_mse: 0.0059 - val_loss: 0.0174 - val_last_time_step_mse: 0.0073

np.random.seed(43)

series = generate_time_series(1, 50 + 10)
X_new, Y_new = series[:, :50, :], series[:, 50:, :]
Y_pred = model.predict(X_new)[:, -1][..., np.newaxis]

plot_multiple_forecasts(X_new, Y_new, Y_pred)
plt.show()

Output: (plot of the 10-step forecast)

4. Handling Long Sequences

4.1 Fighting the Unstable Gradients Problem

Many of the tricks used to fight unstable gradients in deep nets can also be applied to RNNs: good parameter initialization, better optimizers, dropout, and so on. However, nonsaturating activation functions (e.g., ReLU) do not help here and may even make training more unstable. Batch Normalization is also of limited use with RNNs: it cannot be applied across time steps, only vertically between recurrent layers.

Layer Normalization: normalizes across the feature dimension rather than the batch dimension, so it can be used at every time step. For comparison, the model below first inserts BatchNormalization between the recurrent layers:

np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([
    keras.layers.SimpleRNN(20, return_sequences=True, input_shape=[None, 1]),
    keras.layers.BatchNormalization(),
    keras.layers.SimpleRNN(20, return_sequences=True),
    keras.layers.BatchNormalization(),
    keras.layers.TimeDistributed(keras.layers.Dense(10))
])

model.compile(loss="mse", optimizer="adam", metrics=[last_time_step_mse])
history = model.fit(X_train, Y_train, epochs=20, validation_data=(X_valid, Y_valid))

Output:

Epoch 1/20
219/219 [==============================] - 18s 82ms/step - loss: 0.1929 - last_time_step_mse: 0.1902 - val_loss: 0.0877 - val_last_time_step_mse: 0.0832
Epoch 2/20
219/219 [==============================] - 18s 80ms/step - loss: 0.0537 - last_time_step_mse: 0.0449 - val_loss: 0.0549 - val_last_time_step_mse: 0.0462
Epoch 3/20
219/219 [==============================] - 17s 79ms/step - loss: 0.0471 - last_time_step_mse: 0.0375 - val_loss: 0.0451 - val_last_time_step_mse: 0.0358
Epoch 4/20
219/219 [==============================] - 17s 79ms/step - loss: 0.0437 - last_time_step_mse: 0.0337 - val_loss: 0.0418 - val_last_time_step_mse: 0.0314
Epoch 5/20
219/219 [==============================] - 17s 80ms/step - loss: 0.0409 - last_time_step_mse: 0.0306 - val_loss: 0.0391 - val_last_time_step_mse: 0.0287
Epoch 6/20
219/219 [==============================] - 18s 82ms/step - loss: 0.0385 - last_time_step_mse: 0.0275 - val_loss: 0.0379 - val_last_time_step_mse: 0.0273
Epoch 7/20
219/219 [==============================] - 18s 82ms/step - loss: 0.0366 - last_time_step_mse: 0.0254 - val_loss: 0.0367 - val_last_time_step_mse: 0.0248
Epoch 8/20
219/219 [==============================] - 18s 82ms/step - loss: 0.0349 - last_time_step_mse: 0.0235 - val_loss: 0.0363 - val_last_time_step_mse: 0.0249
Epoch 9/20
219/219 [==============================] - 18s 81ms/step - loss: 0.0338 - last_time_step_mse: 0.0221 - val_loss: 0.0332 - val_last_time_step_mse: 0.0208
Epoch 10/20
219/219 [==============================] - 18s 82ms/step - loss: 0.0329 - last_time_step_mse: 0.0214 - val_loss: 0.0335 - val_last_time_step_mse: 0.0214
Epoch 11/20
219/219 [==============================] - 18s 81ms/step - loss: 0.0322 - last_time_step_mse: 0.0206 - val_loss: 0.0323 - val_last_time_step_mse: 0.0203
Epoch 12/20
219/219 [==============================] - 18s 80ms/step - loss: 0.0316 - last_time_step_mse: 0.0198 - val_loss: 0.0333 - val_last_time_step_mse: 0.0210
Epoch 13/20
219/219 [==============================] - 18s 81ms/step - loss: 0.0310 - last_time_step_mse: 0.0191 - val_loss: 0.0310 - val_last_time_step_mse: 0.0187
Epoch 14/20
219/219 [==============================] - 18s 81ms/step - loss: 0.0305 - last_time_step_mse: 0.0186 - val_loss: 0.0310 - val_last_time_step_mse: 0.0189
Epoch 15/20
219/219 [==============================] - 18s 80ms/step - loss: 0.0302 - last_time_step_mse: 0.0182 - val_loss: 0.0298 - val_last_time_step_mse: 0.0178
Epoch 16/20
219/219 [==============================] - 18s 82ms/step - loss: 0.0296 - last_time_step_mse: 0.0176 - val_loss: 0.0293 - val_last_time_step_mse: 0.0174
Epoch 17/20
219/219 [==============================] - 18s 81ms/step - loss: 0.0293 - last_time_step_mse: 0.0172 - val_loss: 0.0315 - val_last_time_step_mse: 0.0200
Epoch 18/20
219/219 [==============================] - 18s 82ms/step - loss: 0.0289 - last_time_step_mse: 0.0168 - val_loss: 0.0295 - val_last_time_step_mse: 0.0174
Epoch 19/20
219/219 [==============================] - 18s 81ms/step - loss: 0.0286 - last_time_step_mse: 0.0168 - val_loss: 0.0290 - val_last_time_step_mse: 0.0163
Epoch 20/20
219/219 [==============================] - 17s 80ms/step - loss: 0.0281 - last_time_step_mse: 0.0161 - val_loss: 0.0288 - val_last_time_step_mse: 0.0164
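
Layer normalization, by contrast, normalizes each sample across its feature dimension, so it behaves the same way at every time step. A quick illustrative check (not the book's code):

import tensorflow as tf
from tensorflow import keras

x = tf.random.normal([2, 5, 10])          # [batch, time steps, features]
ln = keras.layers.LayerNormalization()    # normalizes over the last (feature) axis
y = ln(x)
print(tf.reduce_mean(y, axis=-1)[0, 0].numpy())  # ~0: each time step is centered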

A custom memory cell with layer normalization:

from tensorflow.keras.layers import LayerNormalization

class LNSimpleRNNCell(keras.layers.Layer):
    def __init__(self, units, activation="tanh", **kwargs):
        super().__init__(**kwargs)
        self.state_size = units
        self.output_size = units
        # Linear part of a SimpleRNN cell; the activation is applied
        # after layer normalization instead
        self.simple_rnn_cell = keras.layers.SimpleRNNCell(units,
                                                          activation=None)
        self.layer_norm = LayerNormalization()
        self.activation = keras.activations.get(activation)
    def get_initial_state(self, inputs=None, batch_size=None, dtype=None):
        if inputs is not None:
            batch_size = tf.shape(inputs)[0]
            dtype = inputs.dtype
        return [tf.zeros([batch_size, self.state_size], dtype=dtype)]
    def call(self, inputs, states):
        outputs, new_states = self.simple_rnn_cell(inputs, states)
        # Normalize, then activate; the result serves as both the
        # output and the new hidden state
        norm_outputs = self.activation(self.layer_norm(outputs))
        return norm_outputs, [norm_outputs]
    
np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([
    keras.layers.RNN(LNSimpleRNNCell(20), return_sequences=True,
                     input_shape=[None, 1]),
    keras.layers.RNN(LNSimpleRNNCell(20), return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(10))
])

model.compile(loss="mse", optimizer="adam", metrics=[last_time_step_mse])

history = model.fit(X_train, Y_train, epochs=20, validation_data=(X_valid, Y_valid))

Output:

Epoch 1/20
219/219 [==============================] - 52s 237ms/step - loss: 0.1574 - last_time_step_mse: 0.1493 - val_loss: 0.0702 - val_last_time_step_mse: 0.0557
Epoch 2/20
219/219 [==============================] - 52s 237ms/step - loss: 0.0619 - last_time_step_mse: 0.0453 - val_loss: 0.0554 - val_last_time_step_mse: 0.0373
Epoch 3/20
219/219 [==============================] - 54s 246ms/step - loss: 0.0527 - last_time_step_mse: 0.0348 - val_loss: 0.0496 - val_last_time_step_mse: 0.0330
Epoch 4/20
219/219 [==============================] - 54s 246ms/step - loss: 0.0472 - last_time_step_mse: 0.0307 - val_loss: 0.0447 - val_last_time_step_mse: 0.0289
Epoch 5/20
219/219 [==============================] - 54s 246ms/step - loss: 0.0431 - last_time_step_mse: 0.0276 - val_loss: 0.0412 - val_last_time_step_mse: 0.0244
Epoch 6/20
219/219 [==============================] - 52s 238ms/step - loss: 0.0397 - last_time_step_mse: 0.0239 - val_loss: 0.0380 - val_last_time_step_mse: 0.0221
Epoch 7/20
219/219 [==============================] - 53s 240ms/step - loss: 0.0373 - last_time_step_mse: 0.0220 - val_loss: 0.0354 - val_last_time_step_mse: 0.0201
Epoch 8/20
219/219 [==============================] - 53s 242ms/step - loss: 0.0350 - last_time_step_mse: 0.0200 - val_loss: 0.0340 - val_last_time_step_mse: 0.0182
Epoch 9/20
219/219 [==============================] - 53s 241ms/step - loss: 0.0342 - last_time_step_mse: 0.0188 - val_loss: 0.0330 - val_last_time_step_mse: 0.0181
Epoch 10/20
219/219 [==============================] - 52s 238ms/step - loss: 0.0324 - last_time_step_mse: 0.0178 - val_loss: 0.0316 - val_last_time_step_mse: 0.0169
Epoch 11/20
219/219 [==============================] - 52s 239ms/step - loss: 0.0314 - last_time_step_mse: 0.0174 - val_loss: 0.0304 - val_last_time_step_mse: 0.0160
Epoch 12/20
219/219 [==============================] - 53s 241ms/step - loss: 0.0304 - last_time_step_mse: 0.0165 - val_loss: 0.0295 - val_last_time_step_mse: 0.0152
Epoch 13/20
219/219 [==============================] - 52s 240ms/step - loss: 0.0296 - last_time_step_mse: 0.0160 - val_loss: 0.0293 - val_last_time_step_mse: 0.0154
Epoch 14/20
219/219 [==============================] - 53s 243ms/step - loss: 0.0292 - last_time_step_mse: 0.0154 - val_loss: 0.0287 - val_last_time_step_mse: 0.0155
Epoch 15/20
219/219 [==============================] - 53s 243ms/step - loss: 0.0287 - last_time_step_mse: 0.0154 - val_loss: 0.0282 - val_last_time_step_mse: 0.0150
Epoch 16/20
219/219 [==============================] - 52s 239ms/step - loss: 0.0284 - last_time_step_mse: 0.0154 - val_loss: 0.0275 - val_last_time_step_mse: 0.0146
Epoch 17/20
219/219 [==============================] - 53s 243ms/step - loss: 0.0276 - last_time_step_mse: 0.0148 - val_loss: 0.0268 - val_last_time_step_mse: 0.0135
Epoch 18/20
219/219 [==============================] - 53s 241ms/step - loss: 0.0270 - last_time_step_mse: 0.0142 - val_loss: 0.0267 - val_last_time_step_mse: 0.0141
Epoch 19/20
219/219 [==============================] - 53s 241ms/step - loss: 0.0267 - last_time_step_mse: 0.0140 - val_loss: 0.0257 - val_last_time_step_mse: 0.0127
Epoch 20/20
219/219 [==============================] - 53s 242ms/step - loss: 0.0261 - last_time_step_mse: 0.0135 - val_loss: 0.0252 - val_last_time_step_mse: 0.0123

A custom RNN layer:

class MyRNN(keras.layers.Layer):
    def __init__(self, cell, return_sequences=False, **kwargs):
        super().__init__(**kwargs)
        self.cell = cell
        self.return_sequences = return_sequences
        # Use the cell's own initial-state method if it provides one
        self.get_initial_state = getattr(
            self.cell, "get_initial_state", self.fallback_initial_state)
    def fallback_initial_state(self, inputs):
        batch_size = tf.shape(inputs)[0]
        return [tf.zeros([batch_size, self.cell.state_size], dtype=inputs.dtype)]
    @tf.function
    def call(self, inputs):
        states = self.get_initial_state(inputs)
        batch_size = tf.shape(inputs)[0]
        n_steps = tf.shape(inputs)[1]
        if self.return_sequences:
            sequences = tf.TensorArray(inputs.dtype, size=n_steps)
        outputs = tf.zeros(shape=[batch_size, self.cell.output_size], dtype=inputs.dtype)
        for step in tf.range(n_steps):
            # Feed one time step to the cell, carrying the states forward
            outputs, states = self.cell(inputs[:, step], states)
            if self.return_sequences:
                sequences = sequences.write(step, outputs)
        if self.return_sequences:
            # TensorArray.stack() is time-major; transpose back to batch-major
            return tf.transpose(sequences.stack(), [1, 0, 2])
        else:
            return outputs
        
np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([
    MyRNN(LNSimpleRNNCell(20), return_sequences=True,
          input_shape=[None, 1]),
    MyRNN(LNSimpleRNNCell(20), return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(10))
])

model.compile(loss="mse", optimizer="adam", metrics=[last_time_step_mse])

history = model.fit(X_train, Y_train, epochs=20, validation_data=(X_valid, Y_valid))

Output:

Epoch 1/20
219/219 [==============================] - 54s 249ms/step - loss: 0.1949 - last_time_step_mse: 0.1919 - val_loss: 0.0772 - val_last_time_step_mse: 0.0723
Epoch 2/20
219/219 [==============================] - 54s 249ms/step - loss: 0.0693 - last_time_step_mse: 0.0628 - val_loss: 0.0632 - val_last_time_step_mse: 0.0570
Epoch 3/20
219/219 [==============================] - 56s 254ms/step - loss: 0.0591 - last_time_step_mse: 0.0511 - val_loss: 0.0540 - val_last_time_step_mse: 0.0433
Epoch 4/20
219/219 [==============================] - 54s 248ms/step - loss: 0.0508 - last_time_step_mse: 0.0394 - val_loss: 0.0478 - val_last_time_step_mse: 0.0359
Epoch 5/20
219/219 [==============================] - 54s 249ms/step - loss: 0.0460 - last_time_step_mse: 0.0343 - val_loss: 0.0441 - val_last_time_step_mse: 0.0325
Epoch 6/20
219/219 [==============================] - 54s 248ms/step - loss: 0.0420 - last_time_step_mse: 0.0295 - val_loss: 0.0398 - val_last_time_step_mse: 0.0263
Epoch 7/20
219/219 [==============================] - 54s 249ms/step - loss: 0.0381 - last_time_step_mse: 0.0249 - val_loss: 0.0357 - val_last_time_step_mse: 0.0221
Epoch 8/20
219/219 [==============================] - 54s 248ms/step - loss: 0.0350 - last_time_step_mse: 0.0215 - val_loss: 0.0334 - val_last_time_step_mse: 0.0203
Epoch 9/20
219/219 [==============================] - 54s 249ms/step - loss: 0.0329 - last_time_step_mse: 0.0194 - val_loss: 0.0320 - val_last_time_step_mse: 0.0178
Epoch 10/20
219/219 [==============================] - 54s 247ms/step - loss: 0.0313 - last_time_step_mse: 0.0178 - val_loss: 0.0303 - val_last_time_step_mse: 0.0164
Epoch 11/20
219/219 [==============================] - 54s 248ms/step - loss: 0.0302 - last_time_step_mse: 0.0168 - val_loss: 0.0294 - val_last_time_step_mse: 0.0161
Epoch 12/20
219/219 [==============================] - 58s 263ms/step - loss: 0.0294 - last_time_step_mse: 0.0164 - val_loss: 0.0288 - val_last_time_step_mse: 0.0158
Epoch 13/20
219/219 [==============================] - 54s 249ms/step - loss: 0.0287 - last_time_step_mse: 0.0157 - val_loss: 0.0282 - val_last_time_step_mse: 0.0155
Epoch 14/20
219/219 [==============================] - 54s 247ms/step - loss: 0.0281 - last_time_step_mse: 0.0151 - val_loss: 0.0278 - val_last_time_step_mse: 0.0154
Epoch 15/20
219/219 [==============================] - 55s 249ms/step - loss: 0.0277 - last_time_step_mse: 0.0148 - val_loss: 0.0273 - val_last_time_step_mse: 0.0141
Epoch 16/20
219/219 [==============================] - 54s 248ms/step - loss: 0.0273 - last_time_step_mse: 0.0146 - val_loss: 0.0271 - val_last_time_step_mse: 0.0142
Epoch 17/20
219/219 [==============================] - 54s 248ms/step - loss: 0.0269 - last_time_step_mse: 0.0141 - val_loss: 0.0266 - val_last_time_step_mse: 0.0134
Epoch 18/20
219/219 [==============================] - 54s 246ms/step - loss: 0.0265 - last_time_step_mse: 0.0138 - val_loss: 0.0262 - val_last_time_step_mse: 0.0134
Epoch 19/20
219/219 [==============================] - 54s 248ms/step - loss: 0.0262 - last_time_step_mse: 0.0135 - val_loss: 0.0264 - val_last_time_step_mse: 0.0134
Epoch 20/20
219/219 [==============================] - 54s 248ms/step - loss: 0.0258 - last_time_step_mse: 0.0131 - val_loss: 0.0258 - val_last_time_step_mse: 0.0130

Every recurrent layer and cell provided by Keras, except keras.layers.RNN, exposes dropout and recurrent_dropout hyperparameters: the former applies dropout to the inputs at each time step, the latter to the hidden state between time steps.
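As a minimal sketch (the 0.2 rates are illustrative, not from the text), the earlier stacked-RNN setup with both kinds of dropout might look like this:

model = keras.models.Sequential([
    keras.layers.SimpleRNN(20, return_sequences=True,
                           dropout=0.2,            # dropout on the inputs
                           recurrent_dropout=0.2,  # dropout on the hidden state
                           input_shape=[None, 1]),
    keras.layers.SimpleRNN(20, return_sequences=True,
                           dropout=0.2, recurrent_dropout=0.2),
    keras.layers.TimeDistributed(keras.layers.Dense(10))
])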

4.2 Tackling the Short-Term Memory Problem

Because of the transformations the data goes through as it traverses an RNN, some information is lost at every time step; after a while, the RNN's state contains virtually no trace of the first inputs. To tackle this problem, cells with long-term memory were introduced, the most popular being the LSTM cell.

4.3 LSTM Cells

The LSTM (Long Short-Term Memory) cell was proposed by Sepp Hochreiter and Jürgen Schmidhuber in 1997. It converges faster during training and can detect long-term dependencies in the data. The LSTM cell's architecture is shown below:

  1. The key idea of the LSTM is that the network can learn what to store in the long-term state, what to throw away, and what to read from it.
  2. c(t) is the long-term state: it first goes through the forget gate, dropping some memories, then the addition operation adds new memories selected by the input gate. The result is sent straight out to the next time step; a copy is also passed through the tanh function and filtered by the output gate, producing the output y(t), which is also kept as the new short-term state h(t).
  3. At every time step, some memories are dropped from c(t) and some are added.
  4. The input x(t) and the previous short-term state h(t-1) are fed to four different fully connected layers, each with its own role. The main layer is the one that outputs g(t); the input gate keeps only the most important parts of g(t) and drops the rest.
  5. The other three layers are gate controllers. They all use the logistic (sigmoid) activation function, so their outputs range from 0 to 1 and feed into element-wise multiplications: an output of 0 closes the gate, while 1 opens it.

The LSTM computations are as follows (reconstructed here in the notation of the earlier formulas, with ⊗ denoting element-wise multiplication):

i(t) = σ(X(t) Wxi + h(t−1) Whi + bi)
f(t) = σ(X(t) Wxf + h(t−1) Whf + bf)
o(t) = σ(X(t) Wxo + h(t−1) Who + bo)
g(t) = tanh(X(t) Wxg + h(t−1) Whg + bg)
c(t) = f(t) ⊗ c(t−1) + i(t) ⊗ g(t)
y(t) = h(t) = o(t) ⊗ tanh(c(t))

where:

  1. Wxi, Wxf, Wxo, Wxg are the weight matrices for the input x(t)
  2. Whi, Whf, Who, Whg are the weight matrices for the previous short-term state h(t-1)
  3. bi, bf, bo, bg are the bias terms of the four layers. Note: TensorFlow initializes bf to a vector of 1s so that the cell does not forget everything at the start of training.
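To make the equations concrete, here is a minimal NumPy sketch of a single LSTM time step (the function and the W/b dictionary names are hypothetical, for illustration only):

import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    sigma = lambda v: 1 / (1 + np.exp(-v))               # logistic activation
    i = sigma(x @ W["xi"] + h_prev @ W["hi"] + b["i"])   # input gate
    f = sigma(x @ W["xf"] + h_prev @ W["hf"] + b["f"])   # forget gate
    o = sigma(x @ W["xo"] + h_prev @ W["ho"] + b["o"])   # output gate
    g = np.tanh(x @ W["xg"] + h_prev @ W["hg"] + b["g"]) # candidate memories
    c = f * c_prev + i * g               # update the long-term state
    y = h = o * np.tanh(c)               # output = new short-term state
    return y, h, c

In practice you simply use the built-in keras.layers.LSTM layer: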
np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([
    keras.layers.LSTM(20, return_sequences=True, input_shape=[None, 1]),
    keras.layers.LSTM(20, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(10))
])

model.compile(loss="mse", optimizer="adam", metrics=[last_time_step_mse])

history = model.fit(X_train, Y_train, epochs=20, validation_data=(X_valid, Y_valid))

Output:

Epoch 1/20
219/219 [==============================] - 3s 14ms/step - loss: 0.0760 - last_time_step_mse: 0.0615 - val_loss: 0.0554 - val_last_time_step_mse: 0.0364
Epoch 2/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0480 - last_time_step_mse: 0.0283 - val_loss: 0.0427 - val_last_time_step_mse: 0.0222
Epoch 3/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0391 - last_time_step_mse: 0.0181 - val_loss: 0.0367 - val_last_time_step_mse: 0.0157
Epoch 4/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0350 - last_time_step_mse: 0.0151 - val_loss: 0.0334 - val_last_time_step_mse: 0.0132
Epoch 5/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0325 - last_time_step_mse: 0.0133 - val_loss: 0.0314 - val_last_time_step_mse: 0.0121
Epoch 6/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0308 - last_time_step_mse: 0.0122 - val_loss: 0.0298 - val_last_time_step_mse: 0.0112
Epoch 7/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0297 - last_time_step_mse: 0.0118 - val_loss: 0.0291 - val_last_time_step_mse: 0.0120
Epoch 8/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0286 - last_time_step_mse: 0.0109 - val_loss: 0.0278 - val_last_time_step_mse: 0.0099
Epoch 9/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0280 - last_time_step_mse: 0.0108 - val_loss: 0.0278 - val_last_time_step_mse: 0.0113
Epoch 10/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0273 - last_time_step_mse: 0.0105 - val_loss: 0.0268 - val_last_time_step_mse: 0.0101
Epoch 11/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0269 - last_time_step_mse: 0.0102 - val_loss: 0.0263 - val_last_time_step_mse: 0.0096
Epoch 12/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0264 - last_time_step_mse: 0.0101 - val_loss: 0.0263 - val_last_time_step_mse: 0.0105
Epoch 13/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0259 - last_time_step_mse: 0.0097 - val_loss: 0.0257 - val_last_time_step_mse: 0.0100
Epoch 14/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0257 - last_time_step_mse: 0.0096 - val_loss: 0.0252 - val_last_time_step_mse: 0.0091
Epoch 15/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0253 - last_time_step_mse: 0.0095 - val_loss: 0.0251 - val_last_time_step_mse: 0.0092
Epoch 16/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0251 - last_time_step_mse: 0.0095 - val_loss: 0.0248 - val_last_time_step_mse: 0.0089
Epoch 17/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0248 - last_time_step_mse: 0.0094 - val_loss: 0.0248 - val_last_time_step_mse: 0.0098
Epoch 18/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0245 - last_time_step_mse: 0.0093 - val_loss: 0.0246 - val_last_time_step_mse: 0.0091
Epoch 19/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0242 - last_time_step_mse: 0.0091 - val_loss: 0.0238 - val_last_time_step_mse: 0.0085
Epoch 20/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0239 - last_time_step_mse: 0.0089 - val_loss: 0.0238 - val_last_time_step_mse: 0.0086
model.evaluate(X_valid, Y_valid)

Output:

63/63 [==============================] - 0s 6ms/step - loss: 0.0238 - last_time_step_mse: 0.0086

[0.023788688704371452, 0.00856080837547779]
plot_learning_curves(history.history["loss"], history.history["val_loss"])
plt.show()

Output: (learning-curves plot)

np.random.seed(43)

series = generate_time_series(1, 50 + 10)
X_new, Y_new = series[:, :50, :], series[:, 50:, :]
Y_pred = model.predict(X_new)[:, -1][..., np.newaxis]

plot_multiple_forecasts(X_new, Y_new, Y_pred)
plt.show()

Output: (forecast plot)

4.4 Peephole Connections

In the basic LSTM cell described above, the gate controllers look only at the input x(t) and the previous short-term state h(t-1). It may be a good idea to also let them peek at the long-term state c(t-1): this extra connection is called a peephole connection.

Peephole connections improve performance in most cases, but not always, and there is no clear rule for which tasks benefit from them and which do not; you have to try it on your own task.

In Keras, the LSTM layer is based on keras.layers.LSTMCell, which does not support peepholes. However, the experimental tf.keras.experimental.PeepholeLSTMCell does: you can create a keras.layers.RNN layer and pass a PeepholeLSTMCell to its constructor.
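For example, a minimal sketch (assuming a TF 2.x version where this experimental cell is still available):

model = keras.models.Sequential([
    keras.layers.RNN(tf.keras.experimental.PeepholeLSTMCell(20),
                     return_sequences=True, input_shape=[None, 1]),
    keras.layers.RNN(tf.keras.experimental.PeepholeLSTMCell(20),
                     return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(10))
])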

4.5 GRU Cells

The GRU (Gated Recurrent Unit) cell is one of the most popular variants of the LSTM cell. It was proposed by Kyunghyun Cho et al. in 2014.

The GRU cell is a simplified version of the LSTM cell. Its architecture is shown below:

As shown in the structure diagram above, the GRU cell differs from the LSTM cell in the following ways:

  1. Both state vectors of the LSTM are merged into a single vector h(t).
  2. A single gate controller z(t) controls both the forget gate and the input gate. If z(t) outputs 1, the forget gate is open and the input gate is closed (1 − 1 = 0); if it outputs 0, the opposite happens. In other words, whenever a memory must be stored, the location where it will be stored is erased first.
  3. There is no output gate; the full state vector is output at every time step.
  4. A new gate controller r(t) controls which part of the previous state h(t-1) is shown to the main layer g(t).

The GRU computations are as follows (reconstructed in the same notation):

z(t) = σ(X(t) Wxz + h(t−1) Whz + bz)
r(t) = σ(X(t) Wxr + h(t−1) Whr + br)
g(t) = tanh(X(t) Wxg + (r(t) ⊗ h(t−1)) Whg + bg)
h(t) = z(t) ⊗ h(t−1) + (1 − z(t)) ⊗ g(t)
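Again as a minimal NumPy sketch of one GRU time step (hypothetical names, for illustration only):

import numpy as np

def gru_step(x, h_prev, W, b):
    sigma = lambda v: 1 / (1 + np.exp(-v))
    z = sigma(x @ W["xz"] + h_prev @ W["hz"] + b["z"])          # update gate (forget + input)
    r = sigma(x @ W["xr"] + h_prev @ W["hr"] + b["r"])          # reset gate
    g = np.tanh(x @ W["xg"] + (r * h_prev) @ W["hg"] + b["g"])  # candidate state
    h = z * h_prev + (1 - z) * g                                # new state = output
    return h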

Keras provides the keras.layers.GRU layer:

np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([
    keras.layers.GRU(20, return_sequences=True, input_shape=[None, 1]),
    keras.layers.GRU(20, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(10))
])

model.compile(loss="mse", optimizer="adam", metrics=[last_time_step_mse])
history = model.fit(X_train, Y_train, epochs=20, validation_data=(X_valid, Y_valid))

Output:

Epoch 1/20
219/219 [==============================] - 3s 14ms/step - loss: 0.0738 - last_time_step_mse: 0.0655 - val_loss: 0.0538 - val_last_time_step_mse: 0.0450
Epoch 2/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0476 - last_time_step_mse: 0.0367 - val_loss: 0.0441 - val_last_time_step_mse: 0.0326
Epoch 3/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0417 - last_time_step_mse: 0.0301 - val_loss: 0.0390 - val_last_time_step_mse: 0.0275
Epoch 4/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0368 - last_time_step_mse: 0.0243 - val_loss: 0.0339 - val_last_time_step_mse: 0.0202
Epoch 5/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0326 - last_time_step_mse: 0.0180 - val_loss: 0.0312 - val_last_time_step_mse: 0.0164
Epoch 6/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0306 - last_time_step_mse: 0.0155 - val_loss: 0.0294 - val_last_time_step_mse: 0.0143
Epoch 7/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0295 - last_time_step_mse: 0.0145 - val_loss: 0.0300 - val_last_time_step_mse: 0.0162
Epoch 8/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0283 - last_time_step_mse: 0.0135 - val_loss: 0.0278 - val_last_time_step_mse: 0.0130
Epoch 9/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0276 - last_time_step_mse: 0.0130 - val_loss: 0.0273 - val_last_time_step_mse: 0.0127
Epoch 10/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0269 - last_time_step_mse: 0.0125 - val_loss: 0.0264 - val_last_time_step_mse: 0.0121
Epoch 11/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0265 - last_time_step_mse: 0.0121 - val_loss: 0.0268 - val_last_time_step_mse: 0.0135
Epoch 12/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0263 - last_time_step_mse: 0.0123 - val_loss: 0.0261 - val_last_time_step_mse: 0.0123
Epoch 13/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0258 - last_time_step_mse: 0.0116 - val_loss: 0.0254 - val_last_time_step_mse: 0.0116
Epoch 14/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0256 - last_time_step_mse: 0.0117 - val_loss: 0.0254 - val_last_time_step_mse: 0.0116
Epoch 15/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0253 - last_time_step_mse: 0.0114 - val_loss: 0.0250 - val_last_time_step_mse: 0.0112
Epoch 16/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0251 - last_time_step_mse: 0.0114 - val_loss: 0.0250 - val_last_time_step_mse: 0.0114
Epoch 17/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0248 - last_time_step_mse: 0.0112 - val_loss: 0.0249 - val_last_time_step_mse: 0.0118
Epoch 18/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0245 - last_time_step_mse: 0.0110 - val_loss: 0.0244 - val_last_time_step_mse: 0.0108
Epoch 19/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0243 - last_time_step_mse: 0.0108 - val_loss: 0.0240 - val_last_time_step_mse: 0.0105
Epoch 20/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0240 - last_time_step_mse: 0.0106 - val_loss: 0.0238 - val_last_time_step_mse: 0.0103
model.evaluate(X_valid, Y_valid)

Output:

63/63 [==============================] - 0s 5ms/step - loss: 0.0238 - last_time_step_mse: 0.0103

[0.023785501718521118, 0.0102628068998456]
plot_learning_curves(history.history["loss"], history.history["val_loss"])
plt.show()

Output: (learning-curves plot)

np.random.seed(43)

series = generate_time_series(1, 50 + 10)
X_new, Y_new = series[:, :50, :], series[:, 50:, :]
Y_pred = model.predict(X_new)[:, -1][..., np.newaxis]

plot_multiple_forecasts(X_new, Y_new, Y_pred)
plt.show()

Output: (forecast plot)

LSTM and GRU cells are one of the main reasons behind the success of RNNs.

Although LSTM and GRU cells can handle much longer sequences than simple RNN cells, they still have fairly limited short-term memory and have a hard time learning patterns in sequences of 100 time steps or more, such as audio samples, long time series, or long sentences. One way to tackle this is to shorten the input sequences, for example by using 1D convolutional layers.

4.6 Using 1D Convolutional Layers to Process Sequences

A 1D convolutional layer slides several kernels across a sequence; each kernel produces a different 1D feature map and learns to detect a single short sequential pattern.

A 1D conv layer with kernel size 4, stride 2, and VALID padding (no zero padding):

              |-----2-----|     |-----5---...------|     |-----23----|
        |-----1-----|     |-----4-----|   ...      |-----22----|
  |-----0----|      |-----3-----|     |---...|-----21----|
X: 0  1  2  3  4  5  6  7  8  9  10 11 12 ... 42 43 44 45 46 47 48 49
Y: 1  2  3  4  5  6  7  8  9  10 11 12 13 ... 43 44 45 46 47 48 49 50
  /10 11 12 13 14 15 16 17 18 19 20 21 22 ... 52 53 54 55 56 57 58 59

Output:

X:     0/3   2/5   4/7   6/9   8/11 10/13 .../43 42/45 44/47 46/49
Y:     4/13  6/15  8/17 10/19 12/21 14/23 .../53 46/55 48/57 50/59
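As a quick sanity check (a sketch using the same configuration as the model's first layer below), VALID padding with kernel size 4 and stride 2 maps 50 input steps to (50 − 4) // 2 + 1 = 24 output steps, which is why the targets are downsampled with Y_train[:, 3::2]:

conv = keras.layers.Conv1D(filters=20, kernel_size=4, strides=2, padding="valid")
print(conv(tf.zeros([1, 50, 1])).shape)  # (1, 24, 20)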
np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([
    keras.layers.Conv1D(filters=20, kernel_size=4, strides=2, padding="valid",
                        input_shape=[None, 1]),
    keras.layers.GRU(20, return_sequences=True),
    keras.layers.GRU(20, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(10))
])

model.compile(loss="mse", optimizer="adam", metrics=[last_time_step_mse])
history = model.fit(X_train, Y_train[:, 3::2], epochs=20, validation_data=(X_valid, Y_valid[:, 3::2]))

Output:

Epoch 1/20
219/219 [==============================] - 3s 13ms/step - loss: 0.0681 - last_time_step_mse: 0.0601 - val_loss: 0.0477 - val_last_time_step_mse: 0.0396
Epoch 2/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0414 - last_time_step_mse: 0.0340 - val_loss: 0.0367 - val_last_time_step_mse: 0.0285
Epoch 3/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0338 - last_time_step_mse: 0.0257 - val_loss: 0.0307 - val_last_time_step_mse: 0.0218
Epoch 4/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0282 - last_time_step_mse: 0.0184 - val_loss: 0.0259 - val_last_time_step_mse: 0.0152
Epoch 5/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0249 - last_time_step_mse: 0.0143 - val_loss: 0.0246 - val_last_time_step_mse: 0.0141
Epoch 6/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0234 - last_time_step_mse: 0.0125 - val_loss: 0.0227 - val_last_time_step_mse: 0.0115
Epoch 7/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0226 - last_time_step_mse: 0.0117 - val_loss: 0.0225 - val_last_time_step_mse: 0.0116
Epoch 8/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0220 - last_time_step_mse: 0.0111 - val_loss: 0.0216 - val_last_time_step_mse: 0.0105
Epoch 9/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0216 - last_time_step_mse: 0.0108 - val_loss: 0.0217 - val_last_time_step_mse: 0.0109
Epoch 10/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0213 - last_time_step_mse: 0.0106 - val_loss: 0.0210 - val_last_time_step_mse: 0.0102
Epoch 11/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0210 - last_time_step_mse: 0.0102 - val_loss: 0.0208 - val_last_time_step_mse: 0.0100
Epoch 12/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0208 - last_time_step_mse: 0.0102 - val_loss: 0.0208 - val_last_time_step_mse: 0.0102
Epoch 13/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0205 - last_time_step_mse: 0.0098 - val_loss: 0.0206 - val_last_time_step_mse: 0.0101
Epoch 14/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0204 - last_time_step_mse: 0.0099 - val_loss: 0.0204 - val_last_time_step_mse: 0.0099
Epoch 15/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0202 - last_time_step_mse: 0.0097 - val_loss: 0.0199 - val_last_time_step_mse: 0.0093
Epoch 16/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0200 - last_time_step_mse: 0.0097 - val_loss: 0.0201 - val_last_time_step_mse: 0.0095
Epoch 17/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0196 - last_time_step_mse: 0.0093 - val_loss: 0.0197 - val_last_time_step_mse: 0.0091
Epoch 18/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0194 - last_time_step_mse: 0.0090 - val_loss: 0.0192 - val_last_time_step_mse: 0.0086
Epoch 19/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0190 - last_time_step_mse: 0.0088 - val_loss: 0.0188 - val_last_time_step_mse: 0.0084
Epoch 20/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0186 - last_time_step_mse: 0.0083 - val_loss: 0.0184 - val_last_time_step_mse: 0.0080

As the output above shows, adding the 1D convolutional layer to downsample the sequence gives the best validation error so far.

4.7 WaveNet

In a 2016 paper, Aaron van den Oord and other DeepMind researchers introduced WaveNet. They stacked 1D convolutional layers, doubling the dilation rate at every layer: the first layer sees just two time steps at a time, the next sees four, the next eight, and so on. This way, the lower layers learn short-term patterns while the higher layers learn long-term patterns. Thanks to the doubling dilation rates, WaveNet can process extremely long sequences very efficiently.
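A quick back-of-the-envelope check (a sketch): with kernel_size=2, each causal conv layer with dilation rate d extends the receptive field by (kernel_size − 1) × d = d time steps, so the stack used below covers 1 + 2 × (1 + 2 + 4 + 8) = 31 time steps:

rates = (1, 2, 4, 8) * 2
receptive_field = 1 + sum(rates)  # each layer adds (kernel_size - 1) * rate steps
print(receptive_field)            # 31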

np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential()
model.add(keras.layers.InputLayer(input_shape=[None, 1]))
for rate in (1, 2, 4, 8) * 2:
    model.add(keras.layers.Conv1D(filters=20, kernel_size=2, padding="causal",
                                  activation="relu", dilation_rate=rate))
model.add(keras.layers.Conv1D(filters=10, kernel_size=1))

model.compile(loss="mse", optimizer="adam", metrics=[last_time_step_mse])

history = model.fit(X_train, Y_train, epochs=20, validation_data=(X_valid, Y_valid))

Output:

Epoch 1/20
219/219 [==============================] - 3s 12ms/step - loss: 0.0668 - last_time_step_mse: 0.0543 - val_loss: 0.0365 - val_last_time_step_mse: 0.0230
Epoch 2/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0323 - last_time_step_mse: 0.0192 - val_loss: 0.0294 - val_last_time_step_mse: 0.0166
Epoch 3/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0283 - last_time_step_mse: 0.0156 - val_loss: 0.0269 - val_last_time_step_mse: 0.0145
Epoch 4/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0261 - last_time_step_mse: 0.0136 - val_loss: 0.0254 - val_last_time_step_mse: 0.0130
Epoch 5/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0248 - last_time_step_mse: 0.0124 - val_loss: 0.0245 - val_last_time_step_mse: 0.0122
Epoch 6/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0240 - last_time_step_mse: 0.0117 - val_loss: 0.0233 - val_last_time_step_mse: 0.0108
Epoch 7/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0233 - last_time_step_mse: 0.0112 - val_loss: 0.0229 - val_last_time_step_mse: 0.0108
Epoch 8/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0227 - last_time_step_mse: 0.0106 - val_loss: 0.0227 - val_last_time_step_mse: 0.0105
Epoch 9/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0222 - last_time_step_mse: 0.0101 - val_loss: 0.0222 - val_last_time_step_mse: 0.0104
Epoch 10/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0219 - last_time_step_mse: 0.0100 - val_loss: 0.0213 - val_last_time_step_mse: 0.0091
Epoch 11/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0213 - last_time_step_mse: 0.0094 - val_loss: 0.0210 - val_last_time_step_mse: 0.0092
Epoch 12/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0210 - last_time_step_mse: 0.0091 - val_loss: 0.0215 - val_last_time_step_mse: 0.0101
Epoch 13/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0207 - last_time_step_mse: 0.0088 - val_loss: 0.0202 - val_last_time_step_mse: 0.0082
Epoch 14/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0204 - last_time_step_mse: 0.0085 - val_loss: 0.0202 - val_last_time_step_mse: 0.0084
Epoch 15/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0201 - last_time_step_mse: 0.0082 - val_loss: 0.0198 - val_last_time_step_mse: 0.0080
Epoch 16/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0199 - last_time_step_mse: 0.0082 - val_loss: 0.0197 - val_last_time_step_mse: 0.0081
Epoch 17/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0197 - last_time_step_mse: 0.0079 - val_loss: 0.0194 - val_last_time_step_mse: 0.0078
Epoch 18/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0193 - last_time_step_mse: 0.0075 - val_loss: 0.0192 - val_last_time_step_mse: 0.0076
Epoch 19/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0192 - last_time_step_mse: 0.0075 - val_loss: 0.0188 - val_last_time_step_mse: 0.0072
Epoch 20/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0190 - last_time_step_mse: 0.0073 - val_loss: 0.0187 - val_last_time_step_mse: 0.0072
class GatedActivationUnit(keras.layers.Layer):
    def __init__(self, activation="tanh", **kwargs):
        super().__init__(**kwargs)
        self.activation = keras.activations.get(activation)
    def call(self, inputs):
        # split the channels in two: one half for the filter, one half for the gate
        n_filters = inputs.shape[-1] // 2
        linear_output = inputs[..., :n_filters]
        gate = keras.activations.sigmoid(inputs[..., n_filters:])
        # z = tanh(filter) * sigmoid(gate), as in the WaveNet paper
        # (the original snippet applied self.activation twice by mistake)
        return self.activation(linear_output) * gate

def wavenet_residual_block(inputs, n_filters, dilation_rate):
    z = keras.layers.Conv1D(2 * n_filters, kernel_size=2, padding="causal",
                            dilation_rate=dilation_rate)(inputs)
    z = GatedActivationUnit()(z)
    z = keras.layers.Conv1D(n_filters, kernel_size=1)(z)
    # return the residual output and the skip connection
    return keras.layers.Add()([z, inputs]), z

keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)

n_layers_per_block = 3 # 10 in the paper
n_blocks = 1 # 3 in the paper
n_filters = 32 # 128 in the paper
n_outputs = 10 # 256 in the paper

inputs = keras.layers.Input(shape=[None, 1])
z = keras.layers.Conv1D(n_filters, kernel_size=2, padding="causal")(inputs)
skip_to_last = []
for dilation_rate in [2**i for i in range(n_layers_per_block)] * n_blocks:
    z, skip = wavenet_residual_block(z, n_filters, dilation_rate)
    skip_to_last.append(skip)
z = keras.activations.relu(keras.layers.Add()(skip_to_last))
z = keras.layers.Conv1D(n_filters, kernel_size=1, activation="relu")(z)
Y_proba = keras.layers.Conv1D(n_outputs, kernel_size=1, activation="softmax")(z)

model = keras.models.Model(inputs=[inputs], outputs=[Y_proba])

model.compile(loss="mse", optimizer="adam", metrics=[last_time_step_mse])

history = model.fit(X_train, Y_train, epochs=2, validation_data=(X_valid, Y_valid))

Output:

Epoch 1/2
219/219 [==============================] - 3s 12ms/step - loss: 0.1300 - last_time_step_mse: 0.1260 - val_loss: 0.1229 - val_last_time_step_mse: 0.1199
Epoch 2/2
219/219 [==============================] - 2s 10ms/step - loss: 0.1222 - last_time_step_mse: 0.1178 - val_loss: 0.1217 - val_last_time_step_mse: 0.1189
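Note: this closer-to-the-paper model ends with a softmax over n_outputs channels, mirroring the paper's categorical output over quantized audio values, yet it is still trained with MSE on real-valued targets. That mismatch, plus training for only 2 epochs, presumably explains why the loss looks much worse than the earlier models; the numbers are not directly comparable.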
