2020-12-27 Andrew Ng - C5 Sequence Models - W1 Recurrent Neural Networks (Programming Assignment 1: Building your Recurrent Neural Network - Step by Step)


Original link
If it does not open, you can also paste the link into https://nbviewer.jupyter.org to view it.

Welcome to the first assignment of Course 5, Sequence Models. In this assignment, you will implement your first recurrent neural network in numpy.

Recurrent neural networks (RNNs) are very effective for natural language processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden-layer activations that get passed from one time step to the next. This allows a unidirectional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future.

Notation:

  • Superscript $[l]$ denotes an object associated with the $l^{th}$ layer. Example:
    • $a^{[4]}$ is the activation of the $4^{th}$ layer.
    • $W^{[5]}$ and $b^{[5]}$ are the parameters of the $5^{th}$ layer.
  • Superscript $(i)$ denotes an object associated with the $i^{th}$ example. Example: $x^{(i)}$ is the $i^{th}$ training example.
  • Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time step. Example:
    • $x^{\langle t \rangle}$ is the input x at the $t^{th}$ time step.
    • $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ time step of example $i$.
  • Subscript $i$ denotes the $i^{th}$ entry of a vector. Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.

We assume that you are already familiar with numpy and/or have completed the previous courses. Let's get started!

First, import the libraries:

import numpy as np
from rnn_utils import *
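
The helper functions softmax and sigmoid come from rnn_utils, which ships with the assignment and is not reproduced here. As a rough idea of what they do, here is a minimal sketch of equivalent helpers (an assumption about rnn_utils, not its actual contents):

import numpy as np

def softmax(x):
    # column-wise softmax for an array of shape (n_y, m); subtracting the max is for numerical stability
    e_x = np.exp(x - np.max(x, axis=0, keepdims=True))
    return e_x / np.sum(e_x, axis=0, keepdims=True)

def sigmoid(x):
    # element-wise logistic sigmoid
    return 1 / (1 + np.exp(-x))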

1 - Forward propagation for the basic Recurrent Neural Network

Later this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$.
[Figure 1: Basic RNN model]
Here is how you can implement an RNN:

  • Implement the calculations needed for one time step of the RNN.
  • Implement a loop over $T_x$ time steps in order to process the inputs one at a time.

1-1 RNN cell

A recurrent neural network can be seen as the repetition of a single cell. You will first implement the computations for a single time step. The figure below describes the operations of an RNN cell at a single time step.
[Figure 2: Basic RNN cell]
Exercise: Implement the RNN cell described in the figure above.

Instructions:

  1. Compute the hidden state activation with the tanh function: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$
  2. Using the new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = softmax(W_{ya} a^{\langle t \rangle} + b_y)$. We provide the function softmax for you.
  3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in a cache.
  4. Return $a^{\langle t \rangle}$, $y^{\langle t \rangle}$ and the cache.

We will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ has dimensions $(n_x, m)$, and $a^{\langle t \rangle}$ has dimensions $(n_a, m)$.
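
For example, the following quick check (with arbitrary dimension values) shows how these shapes line up, and how the bias of shape $(n_a, 1)$ broadcasts across the $m$ columns:

import numpy as np

n_x, n_a, m = 3, 5, 10                       # illustrative sizes only
xt = np.random.randn(n_x, m)                 # one time step of input
a_prev = np.random.randn(n_a, m)             # previous hidden state
Wax = np.random.randn(n_a, n_x)
Waa = np.random.randn(n_a, n_a)
ba = np.random.randn(n_a, 1)                 # broadcasts over the m examples

a_next = np.tanh(np.dot(Waa, a_prev) + np.dot(Wax, xt) + ba)
print(a_next.shape)                          # (5, 10), i.e. (n_a, m)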

The implementation is as follows:

# GRADED FUNCTION: rnn_cell_forward

def rnn_cell_forward(xt, a_prev, parameters):
    """
    Implements a single forward step of the RNN-cell as described in Figure (2)
    根据图2实现RNN单元的单步前向传播

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    时间步“t”输入的数据,维度为(n_x, m)

    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    时间步“t - 1”的隐藏隐藏状态,维度为(n_a, m)

    parameters -- python dictionary containing:
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        矩阵,输入乘以权重,维度为(n_a, n_x)

                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        矩阵,隐藏状态乘以权重,维度为(n_a, n_a)

                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        矩阵,隐藏状态与输出相关的权重矩阵,维度为(n_y, n_a)

                        ba --  Bias, numpy array of shape (n_a, 1)
                        偏置,维度为(n_a, 1)

                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
                        偏置,隐藏状态与输出相关的偏置,维度为(n_y, 1)

    Returns:
    a_next -- next hidden state, of shape (n_a, m) 下一个隐藏状态,维度为(n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m) 在时间步“t”的预测,维度为(n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
    反向传播需要的元组,包含了(a_next, a_prev, xt, parameters)

    """
    
    # Retrieve parameters from "parameters"
    # 从“parameters”获取参数
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    bu = parameters["by"]
    
    
    ### START CODE HERE ### (≈2 lines)
    # 使用上面的公式计算下一个激活值
    # compute next activation state using the formula given above
    a_next = np.tanh(np.dot(Waa, a_prev) + np.dot(Wax, xt) + ba)

    # compute output of the current cell using the formula given above
    # 使用上面的公式计算当前单元的输出
    yt_pred = softmax(np.dot(Wya, a_next) + by)
    ### END CODE HERE ###

    # 保存反向传播需要的值
    cache = (a_next, a_prev, xt, parameters)
    # store values you need for backward propagation in cache
    
    return a_next, yt_pred, cache

Let's test it:

np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}

a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)

Result:

a_next[4] =  [ 0.59584544  0.18141802  0.61311866  0.99808218  0.85016201  0.99980978
 -0.18887155  0.99815551  0.6531151   0.82872037]
a_next.shape =  (5, 10)
yt_pred[1] = [0.9888161  0.01682021 0.21140899 0.36817467 0.98988387 0.88945212
 0.36920224 0.9966312  0.9982559  0.17746526]
yt_pred.shape =  (2, 10)

1-2 RNN forward pass

You can see an RNN as the repetition of the cell you've just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time step's input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time step.
[Figure 3: Basic RNN unrolled over T_x time steps]

Exercise: Implement the forward propagation of the RNN described in the figure above.

Instructions:

  • Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.
  • Initialize the "next" hidden state as $a_0$ (the initial hidden state).
  • Start looping over each time step, with incremental index $t$:
    • Update the "next" hidden state and the cache by running rnn_cell_forward.
    • Store the "next" hidden state in $a$ (at the $t^{th}$ position).
    • Store the prediction in y.
    • Append the cache to the list of caches.
  • Return $a$, $y$ and caches.

Implementation:

# GRADED FUNCTION: rnn_forward

def rnn_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network described in Figure (3).
    根据图3来实现循环神经网络的前向传播

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x). 输入的全部数据,维度为(n_x, m, T_x)
    a0 -- Initial hidden state, of shape (n_a, m) 初始化隐藏状态,维度为 (n_a, m)
    parameters -- python dictionary containing:
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
	        矩阵,输入乘以权重,维度为(n_a, n_x)

                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
	        矩阵,隐藏状态乘以权重,维度为(n_a, n_a)

                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
	        矩阵,隐藏状态与输出相关的权重矩阵,维度为(n_y, n_a)

                        ba --  Bias numpy array of shape (n_a, 1) 偏置,维度为(n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
	        偏置,隐藏状态与输出相关的偏置,维度为(n_y, 1)

    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    所有时间步的隐藏状态,维度为(n_a, m, T_x)

    y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    所有时间步的预测,维度为(n_y, m, T_x)

    caches -- tuple of values needed for the backward pass, contains (list of caches, x)
    为反向传播的保存的元组,维度为(【列表类型】cache, x)"""
    
    # Initialize "caches" which will contain the list of all caches
    caches = []

    # Retrieve dimensions from shapes of x and Wya
    n_x, m, T_x = x.shape
    n_y, n_a = parameters["Wya"].shape

    ### START CODE HERE ###

    # initialize "a" and "y_pred" with zeros (≈2 lines)
    a = np.zeros([n_a, m, T_x])
    y_pred = np.zeros([n_y, m, T_x])

    # Initialize a_next (≈1 line)
    a_next = a0

    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, compute the prediction, get the cache (≈1 line)
        a_next, yt_pred, cache = rnn_cell_forward(x[:, :, t], a_next, parameters)

        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:, :, t] = a_next

        # Save the value of the prediction in y_pred (≈1 line)
        y_pred[:, :, t] = yt_pred

        # Append "cache" to "caches" (≈1 line)
        caches.append(cache)

    ### END CODE HERE ###

    # store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y_pred, caches

Let's test it:

np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}

a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))

Result:

a[4][1] =  [-0.99999375  0.77911235 -0.99861469 -0.99833267]
a.shape =  (5, 10, 4)
y_pred[1][3] = [0.79560373 0.86224861 0.11118257 0.81515947]
y_pred.shape =  (2, 10, 4)
caches[1][1][3] = [-1.1425182  -0.34934272 -0.20889423  0.58662319]
len(caches) =  2
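
One way to read this $(n_y, m, T_x)$ layout: y_pred[:, i, t] holds the softmax probabilities for example i at time step t, so each such column sums to (approximately) 1. For instance, assuming the test variables above are still defined:

print(y_pred[:, 0, 2])          # class probabilities for example 0 at time step t = 2
print(y_pred[:, 0, 2].sum())    # ~1.0, since each column is a softmax output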

Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. This will work well enough for some applications, but it suffers from vanishing gradients. So it works best when each output $y^{\langle t \rangle}$ is predicted mainly from "local" context (meaning information from inputs $x^{\langle t' \rangle}$ where $t'$ is not too far from $t$).

In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many time steps.

2 - Long Short-Term Memory (LSTM) network

The following figure shows the operations of an LSTM cell.
[Figure 4: LSTM cell]
Similar to the RNN example above, you will start by implementing the LSTM cell for a single time step. Then you can iteratively call it from inside a for loop to have it process an input with $T_x$ time steps.

2-0 About the gates

  1. Forget gate

For the sake of illustration, let's assume we are reading words in a piece of text, and want to use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need a way to get rid of our previously stored memory value of the singular/plural state.

In the LSTM, the forget gate lets us do this:
$$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f) \tag{1}$$
Here, $W_f$ are the weights that govern the forget gate's behavior. We concatenate $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$.

The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. Thus,

  • If one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0), the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$.
  • If one of the values is 1, the LSTM will keep the corresponding information in $c^{\langle t-1 \rangle}$.
  2. Update gate

Once we forget that the subject being discussed is singular, we need a way to update it to reflect that the new subject is now plural. Here is the formula for the update gate:
$$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_u) \tag{2}$$

Similar to the forget gate, here $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. It will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$ in order to compute $c^{\langle t \rangle}$.

  3. Updating the cell

To update the new subject, we need to create a new vector of numbers that we can add to our previous cell state. The equation we use is:
$$\tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c) \tag{3}$$
Finally, the new cell state is:
$$c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle} \ast c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} \ast \tilde{c}^{\langle t \rangle} \tag{4}$$

  4. Output gate

To decide which outputs we will use, we use the following two formulas:
$$\Gamma_o^{\langle t \rangle} = \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o) \tag{5}$$
$$a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle} \ast \tanh(c^{\langle t \rangle}) \tag{6}$$
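
To make the gating arithmetic concrete, here is a tiny hand-made example of equation (4) (the numbers are arbitrary): a forget-gate value near 0 erases the corresponding component of the old cell state, while an update-gate value near 1 writes in the candidate value.

import numpy as np

c_prev = np.array([[2.0], [-1.0], [0.5]])    # previous cell state (3 units, 1 example)
cct    = np.array([[0.3], [0.8], [-0.6]])    # candidate value c~<t>
ft     = np.array([[0.05], [0.9], [0.5]])    # forget gate: ~0 forgets, ~1 keeps
it     = np.array([[0.95], [0.1], [0.5]])    # update gate: ~1 writes the candidate

c_next = ft * c_prev + it * cct              # element-wise, equation (4)
print(c_next.ravel())                        # [ 0.385 -0.82  -0.05 ]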

2-1 LSTM cell

Exercise: Implement the LSTM cell described in the figure above.
Instructions:
1. Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$
2. Compute the six formulas from the previous section. You can use sigmoid() (provided in rnn_utils) and np.tanh().
3. Compute the prediction $y^{\langle t \rangle}$. You can use softmax() (provided in rnn_utils).

Implementation:

# GRADED FUNCTION: lstm_cell_forward

def lstm_cell_forward(xt, a_prev, c_prev, parameters):
    """
    Implement a single forward step of the LSTM-cell as described in Figure (4)
    根据上图实现一个LSTM单元的前向传播。

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    在时间步“t”输入的数据,维度为(n_x, m)

    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    上一个时间步“t-1”的隐藏状态,维度为(n_a, m)

    c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
    上一个时间步“t-1”的记忆状态,维度为(n_a, m)

    parameters -- python dictionary containing:
                        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
	        遗忘门的权值,维度为(n_a, n_a + n_x)

                        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
	        遗忘门的偏置,维度为(n_a, 1)

                        Wi -- Weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
	        更新门的权值,维度为(n_a, n_a + n_x)

                        bi -- Bias of the save gate, numpy array of shape (n_a, 1)
	        更新门的偏置,维度为(n_a, 1)

                        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
	        第一个“tanh”的权值,维度为(n_a, n_a + n_x)

                        bc --  Bias of the first "tanh", numpy array of shape (n_a, 1)
	        第一个“tanh”的偏置,维度为(n_a, n_a + n_x)

                        Wo -- Weight matrix of the focus gate, numpy array of shape (n_a, n_a + n_x)
	        输出门的权值,维度为(n_a, n_a + n_x)

                        bo --  Bias of the focus gate, numpy array of shape (n_a, 1)
	        输出门的偏置,维度为(n_a, 1)

                        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
	        隐藏状态与输出相关的权值,维度为(n_y, n_a)

                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
	        隐藏状态与输出相关的偏置,维度为(n_y, 1)
                        
    Returns:
    a_next -- next hidden state, of shape (n_a, m) 下一个隐藏状态,维度为(n_a, m)
    c_next -- next memory state, of shape (n_a, m) 下一个记忆状态,维度为(n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    在时间步“t”的预测,维度为(n_y, m)

    cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)
    包含了反向传播所需要的参数,包含了(a_next, c_next, a_prev, c_prev, xt, parameters)
    
    Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilda),
          c stands for the memory value
    ft/it/ot表示遗忘/更新/输出门,cct表示候选值(c tilda),c表示记忆值。
    """

    # Retrieve parameters from "parameters"
    Wf = parameters["Wf"]
    bf = parameters["bf"]
    Wi = parameters["Wi"]
    bi = parameters["bi"]
    Wc = parameters["Wc"]
    bc = parameters["bc"]
    Wo = parameters["Wo"]
    bo = parameters["bo"]
    Wy = parameters["Wy"]
    by = parameters["by"]

    # Retrieve dimensions from shapes of xt and Wy
    n_x, m = xt.shape
    n_y, n_a = Wy.shape

    ### START CODE HERE ###
    # Concatenate a_prev and xt (≈3 lines)
    concat = np.zeros([n_a + n_x, m])
    concat[: n_a, :] = a_prev
    concat[n_a :, :] = xt

    # Compute values for ft, it, cct, c_next, ot, a_next using the formulas given in figure (4) (≈6 lines)
    ft = sigmoid(np.dot(Wf, concat) + bf)    # forget gate, equation (1)
    it = sigmoid(np.dot(Wi, concat) + bi)    # update gate, equation (2)
    cct = np.tanh(np.dot(Wc, concat) + bc)   # candidate value, equation (3)
    c_next = ft * c_prev + it * cct          # new cell state, equation (4)
    ot = sigmoid(np.dot(Wo, concat) + bo)    # output gate, equation (5)
    a_next = ot * np.tanh(c_next)            # new hidden state, equation (6)

    # Compute prediction of the LSTM cell (≈1 line)
    yt_pred = softmax(np.dot(Wy, a_next) + by)
    ### END CODE HERE ###

    # store values needed for backward propagation in cache
    cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)

    return a_next, c_next, yt_pred, cache

Let's test it:

np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)

parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}

a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", c_next.shape)
print("c_next[2] = ", c_next[2])
print("c_next.shape = ", c_next.shape)
print("yt[1] =", yt[1])
print("yt.shape = ", yt.shape)
print("cache[1][3] =", cache[1][3])
print("len(cache) = ", len(cache))

Result:

a_next[4] =  [-0.66408471  0.0036921   0.02088357  0.22834167 -0.85575339  0.00138482
  0.76566531  0.34631421 -0.00215674  0.43827275]
a_next.shape =  (5, 10)
c_next[2] =  [ 0.63267805  1.00570849  0.35504474  0.20690913 -1.64566718  0.11832942
  0.76449811 -0.0981561  -0.74348425 -0.26810932]
c_next.shape =  (5, 10)
yt[1] = [0.88515863 0.2693483  0.35881369 0.26375541 0.9845904  0.46704976
 0.01810876 0.21934384 0.55723129 0.14129154]
yt.shape =  (2, 10)
cache[1][3] = [-0.16263996  1.03729328  0.72938082 -0.54101719  0.02752074 -0.30821874
  0.07651101 -1.03752894  1.41219977 -0.37647422]
len(cache) =  10

2-2 Forward pass for LSTM

You have implemented one step of an LSTM. Now you can iterate it over $T_x$ inputs using a for loop.
[Figure: LSTM over multiple time steps]
Exercise: Implement lstm_forward() to run an LSTM over $T_x$ time steps.
Note: $c^{\langle 0 \rangle}$ is initialized with zeros.

Implementation:

# GRADED FUNCTION: lstm_forward

def lstm_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (3).
    根据上图来实现LSTM单元组成的的循环神经网络

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    所有时间步的输入数据,维度为(n_x, m, T_x)

    a0 -- Initial hidden state, of shape (n_a, m) 初始化隐藏状态,维度为(n_a, m)

    parameters -- python dictionary containing:
                        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
	        遗忘门的权值,维度为(n_a, n_a + n_x)

                        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
 	        遗忘门的偏置,维度为(n_a, 1)

                        Wi -- Weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
	        更新门的权值,维度为(n_a, n_a + n_x)

                        bi -- Bias of the save gate, numpy array of shape (n_a, 1)
	        更新门的偏置,维度为(n_a, 1)

                        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
	        第一个“tanh”的权值,维度为(n_a, n_a + n_x)

                        bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
	        第一个“tanh”的偏置,维度为(n_a, n_a + n_x)

                        Wo -- Weight matrix of the focus gate, numpy array of shape (n_a, n_a + n_x)
	        输出门的权值,维度为(n_a, n_a + n_x)

                        bo -- Bias of the focus gate, numpy array of shape (n_a, 1)
	        输出门的偏置,维度为(n_a, 1)

                        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
	        隐藏状态与输出相关的权值,维度为(n_y, n_a)

                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
	        隐藏状态与输出相关的偏置,维度为(n_y, 1)
                        
    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    所有时间步的隐藏状态,维度为(n_a, m, T_x)

    y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    所有时间步的预测值,维度为(n_y, m, T_x)

    caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
    为反向传播的保存的元组,维度为(【列表类型】cache, x)"""

    # Initialize "caches", which will track the list of all the caches # 初始化“caches”
    caches = []
    
    ### START CODE HERE ###
    # Retrieve dimensions from shapes of xt and Wy (≈2 lines)
    # 获取 xt 与 Wy 的维度信息
    n_x, m, T_x = x.shape
    n_y, n_a = parameters['Wy'].shape
    
    # initialize "a", "c" and "y" with zeros (≈3 lines)# 使用0来初始化“a”、“c”、“y”
    a = np.zeros([n_a, m, T_x])
    c = np.zeros([n_a, m, T_x])
    y = np.zeros([n_y, m, T_x])
    
    # Initialize a_next and c_next (≈2 lines)
    # 初始化“a_next”、“c_next”
    a_next = a0
    c_next = np.zeros([n_a, m])
    
    # loop over all time-steps  # 遍历所有的时间步
    for t in range(T_x):
        # Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
        # 更新下一个隐藏状态,下一个记忆状态,计算预测值,获取cache
        a_next, c_next, yt, cache = lstm_cell_forward(x[:, :, t], a_next, c_next, parameters)

        # Save the value of the new "next" hidden state in a (≈1 line)
        # 保存新的下一个隐藏状态到变量a中
        a[:,:,t] = a_next

        # Save the value of the prediction in y (≈1 line)
        # 保存预测值到变量y中
        y[:,:,t] = yt

        # Save the value of the next cell state (≈1 line)
        # 保存下一个单元状态到变量c中
        c[:,:,t]  = c_next

        # Append the cache into caches (≈1 line)# 把cache添加到caches中
        caches.append(cache)
        
    ### END CODE HERE ###
    
    # store values needed for backward propagation in cache
    # 保存反向传播需要的参数
    caches = (caches, x)

    return a, y, c, caches

Let's test it:

np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)

parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}

a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1[1]] =", caches[1][1][1])
print("c[1][2][1]", c[1][2][1])
print("len(caches) = ", len(caches))

Result:

a[4][3][6] =  0.17211776753291672
a.shape =  (5, 10, 7)
y[1][4][3] = 0.7879502769093494
y.shape =  (2, 10, 7)
caches[1][1[1]] = [ 0.82797464  0.23009474  0.76201118 -0.22232814 -0.20075807  0.18656139
  0.41005165]
c[1][2][1] -0.8555449167181981
len(caches) =  2

3 - Backpropagation in recurrent neural networks (OPTIONAL)

In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If, however, you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional section.

In an earlier course, when you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost in order to update the parameters. Similarly, in recurrent neural networks you can compute the derivatives with respect to the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below.

3-1 Basic RNN backward pass

We will start by computing the backward pass for the basic RNN cell.
[Figure: RNN cell backward pass]

The figure above shows the backward pass of the RNN cell. Just as in a fully connected neural network, the derivative of the cost function $J$ backpropagates through the RNN by following the chain rule. The chain rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}}, \frac{\partial J}{\partial W_{aa}}, \frac{\partial J}{\partial b_a})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$.

Deriving the one-step backward function:

To compute rnn_cell_backward you need to compute the following equations. It is a good exercise to derive them by hand.

The derivative of $\tanh$ is $1 - \tanh(x)^2$ (see the proof linked in the original post). Note that $\text{sech}(x)^2 = 1 - \tanh(x)^2$.
Similarly, for $(\frac{\partial J}{\partial W_{ax}}, \frac{\partial J}{\partial W_{aa}}, \frac{\partial J}{\partial b_a})$, the derivative of $\tanh(u)$ is $(1 - \tanh(u)^2)du$.

The final two equations in the figure also follow the same rule and are derived using the $\tanh$ derivative. Note that the operations are arranged so that the dimensions match up.
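
As a quick, optional sanity check (not part of the graded code), you can verify the derivative of tanh numerically with a central difference:

import numpy as np

x = np.array([-2.0, -0.5, 0.0, 0.7, 1.5])
eps = 1e-6
numeric  = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)   # central-difference estimate
analytic = 1 - np.tanh(x) ** 2                                 # closed-form derivative
print(np.max(np.abs(numeric - analytic)))                      # ~1e-10, the two agree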

The implementation of the one-step backward pass, rnn_cell_backward, is as follows:

def rnn_cell_backward(da_next, cache):
    """
    Implements the backward pass for the RNN-cell (single time-step).
    实现基本的RNN单元的单步反向传播

    Arguments:
    da_next -- Gradient of loss with respect to next hidden state
    下一个隐藏状态的损失的梯度。

    cache -- python dictionary containing useful values (output of rnn_step_forward())
    字典类型,rnn_step_forward()的输出

    Returns:
    gradients -- python dictionary containing:
                        dx -- Gradients of input data, of shape (n_x, m) 输入数据的梯度,维度为(n_x, m)
                        da_prev -- Gradients of previous hidden state, of shape (n_a, m)
	        上一隐藏层的隐藏状态,维度为(n_a, m)

                        dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
	        输入到隐藏状态的权重的梯度,维度为(n_a, n_x)

                        dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
	        隐藏状态到隐藏状态的权重的梯度,维度为(n_a, n_a)

                        dba -- Gradients of bias vector, of shape (n_a, 1)
	        偏置向量的梯度,维度为(n_a, 1)
    """
    
    # Retrieve values from cache # 获取cache 的值
    (a_next, a_prev, xt, parameters) = cache
    
    # Retrieve values from parameters # 从 parameters 中获取参数
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]

    ### START CODE HERE ###
    # 计算tanh相对于a_next的梯度.
    # compute the gradient of tanh with respect to a_next (≈1 line)
    dtanh = (1 - np.square(a_next)) * da_next

    # compute the gradient of the loss with respect to Wax (≈2 lines)
    # 计算关于Wax损失的梯度
    dxt = np.dot(Wax.T, dtanh)
    dWax = np.dot(dtanh, xt.T)

    # compute the gradient with respect to Waa (≈2 lines)
    # 计算关于Waa损失的梯度
    da_prev = np.dot(Waa.T, dtanh)
    dWaa = np.dot(dtanh, a_prev.T)

    # compute the gradient with respect to b (≈1 line)
    # 计算关于b损失的梯度
    dba = np.sum(dtanh, keepdims=True, axis=-1)

    ### END CODE HERE ###
    
    # Store the gradients in a python dictionary
    # 保存这些梯度到字典内
    gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
    
    return gradients

Let's test it:

np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}

a_next, yt, cache = rnn_cell_forward(xt, a_prev, parameters)

da_next = np.random.randn(5,10)
gradients = rnn_cell_backward(da_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)

Result:

gradients["dxt"][1][2] = -0.4605641030588796
gradients["dxt"].shape = (3, 10)
gradients["da_prev"][2][3] = 0.08429686538067718
gradients["da_prev"].shape = (5, 10)
gradients["dWax"][3][1] = 0.3930818739219303
gradients["dWax"].shape = (5, 3)
gradients["dWaa"][1][2] = -0.2848395578696067
gradients["dWaa"].shape = (5, 5)
gradients["dba"][4] = [0.80517166]
gradients["dba"].shape = (5, 1)

Backward pass through the RNN

Computing the derivative of the cost with respect to $a^{\langle t \rangle}$ at every time step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN cell. To do so, you need to iterate through all the time steps starting at the end, and at each step you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.

Instructions:
Implement the rnn_backward function. Initialize the return variables with zeros first, then loop through all the time steps while calling rnn_cell_backward at each time step, and update the other variables accordingly.

The implementation is as follows:

def rnn_backward(da, caches):
    """
    Implement the backward pass for a RNN over an entire sequence of input data.
    在整个输入数据序列上实现RNN的反向传播

    Arguments:
    da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
    所有隐藏状态的梯度,维度为(n_a, m, T_x)

    caches -- tuple containing information from the forward pass (rnn_forward)
    包含向前传播的信息的元组
    
    Returns:
    gradients -- python dictionary containing:
                        dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
	        关于输入数据的梯度,维度为(n_x, m, T_x)

                        da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
	        关于初始化隐藏状态的梯度,维度为(n_a, m)

                        dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
	        关于输入权重的梯度,维度为(n_a, n_x)

                        dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)
	        关于隐藏状态的权值的梯度,维度为(n_a, n_a)

                        dba -- Gradient w.r.t the bias, of shape (n_a, 1)
	        关于偏置的梯度,维度为(n_a, 1)
    """
        
    ### START CODE HERE ###
    caches, x = caches

    # Retrieve values from the first cache (t=1) of caches (≈2 lines)
    a1, a0, x1, parameters = caches[0]

    # Retrieve dimensions from da's and x1's shapes (≈2 lines)
    n_a, m, T_x = da.shape
    n_x, m = x1.shape

    # initialize the gradients with the right sizes (≈6 lines)
    dx   = np.zeros([n_x, m, T_x])
    dWax = np.zeros([n_a, n_x])
    dWaa = np.zeros([n_a, n_a])
    dba  = np.zeros([n_a, 1])
    da0  = np.zeros([n_a, m])
    da_prevt = np.zeros([n_a, m])

    # Loop through all the time steps
    for t in reversed(range(T_x)):
        # Compute gradients at time step t. Choose wisely the "da_next" and the "cache" to use in the backward propagation step. (≈1 line)
        gradients = rnn_cell_backward(da[:, :, t] + da_prevt, caches[t])

        # Retrieve derivatives from gradients (≈1 line)
        dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]

        # Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
        dx[:, :, t] = dxt
        dWax += dWaxt
        dWaa += dWaat
        dba  += dbat

    # Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
    da0 = da_prevt
    ### END CODE HERE ###

    # Store the gradients in a python dictionary
    gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa, "dba": dba}
    
    return gradients

Let's test it:

np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a, y, caches = rnn_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = rnn_backward(da, caches)

print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)

Result:

gradients["dx"][1][2] = [-2.07101689 -0.59255627  0.02466855  0.01483317]
gradients["dx"].shape = (3, 10, 4)
gradients["da0"][2][3] = -0.31494237512664996
gradients["da0"].shape = (5, 10)
gradients["dWax"][3][1] = 11.264104496527777
gradients["dWax"].shape = (5, 3)
gradients["dWaa"][1][2] = 2.303333126579893
gradients["dWaa"].shape = (5, 5)
gradients["dba"][4] = [-0.74747722]
gradients["dba"].shape = (5, 1)

3-2 LSTM backward pass

3-2-1 One-step backward pass

The LSTM backward pass is slightly more complicated than the forward pass. We have provided all the equations for the LSTM backward pass below. (If you enjoy calculus exercises, feel free to try deriving these from scratch yourself.)

3-2-2 Gate derivatives

$$d\Gamma_o^{\langle t \rangle} = da_{next} \ast \tanh(c_{next}) \ast \Gamma_o^{\langle t \rangle} \ast (1 - \Gamma_o^{\langle t \rangle}) \tag{7}$$

$$d\tilde{c}^{\langle t \rangle} = \left( dc_{next} \ast \Gamma_u^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} \ast (1 - \tanh(c_{next})^2) \ast \Gamma_u^{\langle t \rangle} \ast da_{next} \right) \ast (1 - \tanh(\tilde{c}^{\langle t \rangle})^2) \tag{8}$$

$$d\Gamma_u^{\langle t \rangle} = \left( dc_{next} \ast \tilde{c}^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} \ast (1 - \tanh(c_{next})^2) \ast \tilde{c}^{\langle t \rangle} \ast da_{next} \right) \ast \Gamma_u^{\langle t \rangle} \ast (1 - \Gamma_u^{\langle t \rangle}) \tag{9}$$

$$d\Gamma_f^{\langle t \rangle} = \left( dc_{next} \ast c_{prev} + \Gamma_o^{\langle t \rangle} \ast (1 - \tanh(c_{next})^2) \ast c_{prev} \ast da_{next} \right) \ast \Gamma_f^{\langle t \rangle} \ast (1 - \Gamma_f^{\langle t \rangle}) \tag{10}$$

3-2-3 Parameter derivatives

$$dW_f = d\Gamma_f^{\langle t \rangle} \ast \begin{pmatrix} a_{prev} \\ x_t \end{pmatrix}^T \tag{11}$$
$$dW_u = d\Gamma_u^{\langle t \rangle} \ast \begin{pmatrix} a_{prev} \\ x_t \end{pmatrix}^T \tag{12}$$
$$dW_c = d\tilde{c}^{\langle t \rangle} \ast \begin{pmatrix} a_{prev} \\ x_t \end{pmatrix}^T \tag{13}$$
$$dW_o = d\Gamma_o^{\langle t \rangle} \ast \begin{pmatrix} a_{prev} \\ x_t \end{pmatrix}^T \tag{14}$$
To calculate $db_f, db_u, db_c, db_o$, you just need to sum over the horizontal axis (axis=1) of $d\Gamma_f^{\langle t \rangle}, d\Gamma_u^{\langle t \rangle}, d\tilde{c}^{\langle t \rangle}, d\Gamma_o^{\langle t \rangle}$ respectively. Note that you should use the keepdims=True option.
Finally, you will compute the derivatives with respect to the previous hidden state, the previous memory state, and the input.
$$da_{prev} = W_f^T \ast d\Gamma_f^{\langle t \rangle} + W_u^T \ast d\Gamma_u^{\langle t \rangle} + W_c^T \ast d\tilde{c}^{\langle t \rangle} + W_o^T \ast d\Gamma_o^{\langle t \rangle} \tag{15}$$

Here, the weights in equation (15) are the first $n_a$ columns of each matrix (i.e. $W_f = W_f[:, :n_a]$ etc.).
$$dc_{prev} = dc_{next} \ast \Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} \ast (1 - \tanh(c_{next})^2) \ast \Gamma_f^{\langle t \rangle} \ast da_{next} \tag{16}$$

$$dx^{\langle t \rangle} = W_f^T \ast d\Gamma_f^{\langle t \rangle} + W_u^T \ast d\Gamma_u^{\langle t \rangle} + W_c^T \ast d\tilde{c}^{\langle t \rangle} + W_o^T \ast d\Gamma_o^{\langle t \rangle} \tag{17}$$
where the weights in equation (17) are the columns from $n_a$ to the end (i.e. $W_f = W_f[:, n_a:]$ etc.).

Exercise: Implement equations (7)-(17) in the function lstm_cell_backward.

Implementation:

def lstm_cell_backward(da_next, dc_next, cache):
    """
    Implement the backward pass for the LSTM-cell (single time-step).
    实现LSTM的单步反向传播

    Arguments:
    da_next -- Gradients of next hidden state, of shape (n_a, m)
    下一个隐藏状态的梯度,维度为(n_a, m)

    dc_next -- Gradients of next cell state, of shape (n_a, m)
    下一个单元状态的梯度,维度为(n_a, m)

    cache -- cache storing information from the forward pass 来自前向传播的一些参数

    Returns:
    gradients -- python dictionary containing:
                        dxt -- Gradient of input data at time-step t, of shape (n_x, m)
	        输入数据的梯度,维度为(n_x, m)

                        da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
	        先前的隐藏状态的梯度,维度为(n_a, m)

                        dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m, T_x)
	        先前的记忆状态的梯度,维度为(n_a, m, T_x)

                        dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
	        遗忘门的权值的梯度,维度为(n_a, n_a + n_x)

                        dWi -- Gradient w.r.t. the weight matrix of the input gate, numpy array of shape (n_a, n_a + n_x)
		更新门的权值的梯度,维度为(n_a, n_a + n_x)

                        dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
		第一个“tanh”的权值的梯度,维度为(n_a, n_a + n_x)

                        dWo -- Gradient w.r.t. the weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
		输出门的权值的梯度,维度为(n_a, n_a + n_x)

                        dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
		遗忘门的偏置的梯度,维度为(n_a, 1)

                        dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
		更新门的偏置的梯度,维度为(n_a, 1)

                        dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
		第一个“tanh”的偏置的梯度,维度为(n_a, n_a + n_x)

                        dbo -- Gradient w.r.t. biases of the save gate, of shape (n_a, 1)
		输出门的偏置的梯度,维度为(n_a, 1)
    """

    # Retrieve information from "cache"
    (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache

    ### START CODE HERE ###
    # Retrieve dimensions from xt's and a_next's shape (≈2 lines)
    n_x, m = xt.shape
    n_a, m = a_next.shape

    # Compute gate-related derivatives using equations (7) to (10) (≈4 lines)
    dot = da_next * np.tanh(c_next) * ot * (1 - ot)
    dcct = (dc_next * it + ot * (1 - np.square(np.tanh(c_next))) * it * da_next) * (1 - np.square(cct))
    dit = (dc_next * cct + ot * (1 - np.square(np.tanh(c_next))) * cct * da_next) * it * (1 - it)
    dft = (dc_next * c_prev + ot * (1 - np.square(np.tanh(c_next))) * c_prev * da_next) * ft * (1 - ft)

    # Compute parameter-related derivatives using equations (11)-(14) (≈8 lines)
    concat = np.concatenate((a_prev, xt), axis=0).T
    dWf = np.dot(dft, concat)
    dWi = np.dot(dit, concat)
    dWc = np.dot(dcct, concat)
    dWo = np.dot(dot, concat)
    dbf = np.sum(dft, axis=1, keepdims=True)
    dbi = np.sum(dit, axis=1, keepdims=True)
    dbc = np.sum(dcct, axis=1, keepdims=True)
    dbo = np.sum(dot, axis=1, keepdims=True)

    # Compute derivatives w.r.t previous hidden state, previous memory state and input, using equations (15)-(17) (≈3 lines)
    da_prev = np.dot(parameters["Wf"][:, :n_a].T, dft) + np.dot(parameters["Wc"][:, :n_a].T, dcct) + np.dot(parameters["Wi"][:, :n_a].T, dit) + np.dot(parameters["Wo"][:, :n_a].T, dot)
    dc_prev = dc_next * ft + ot * (1 - np.square(np.tanh(c_next))) * ft * da_next
    dxt = np.dot(parameters["Wf"][:, n_a:].T, dft) + np.dot(parameters["Wc"][:, n_a:].T, dcct) + np.dot(parameters["Wi"][:, n_a:].T, dit) + np.dot(parameters["Wo"][:, n_a:].T, dot)
    ### END CODE HERE ###

    # Save gradients in dictionary
    gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
                "dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}

    return gradients

Let's test it:

np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)

parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}

a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)

da_next = np.random.randn(5,10)
dc_next = np.random.randn(5,10)
gradients = lstm_cell_backward(da_next, dc_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dc_prev\"][2][3] =", gradients["dc_prev"][2][3])
print("gradients[\"dc_prev\"].shape =", gradients["dc_prev"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)

Result:

gradients["dxt"][1][2] = -0.4605641030588796
gradients["dxt"].shape = (3, 10)
gradients["da_prev"][2][3] = 0.08429686538067718
gradients["da_prev"].shape = (5, 10)
gradients["dWax"][3][1] = 0.3930818739219303
gradients["dWax"].shape = (5, 3)
gradients["dWaa"][1][2] = -0.2848395578696067
gradients["dWaa"].shape = (5, 5)
gradients["dba"][4] = [0.80517166]
gradients["dba"].shape = (5, 1)
gradients["dx"][1][2] = [-2.07101689 -0.59255627  0.02466855  0.01483317]
gradients["dx"].shape = (3, 10, 4)
gradients["da0"][2][3] = -0.31494237512664996
gradients["da0"].shape = (5, 10)
gradients["dWax"][3][1] = 11.264104496527777
gradients["dWax"].shape = (5, 3)
gradients["dWaa"][1][2] = 2.303333126579893
gradients["dWaa"].shape = (5, 5)
gradients["dba"][4] = [-0.74747722]
gradients["dba"].shape = (5, 1)
gradients["dxt"][1][2] = 3.2305591151091884
gradients["dxt"].shape = (3, 10)
gradients["da_prev"][2][3] = -0.06396214197109241
gradients["da_prev"].shape = (5, 10)
gradients["dc_prev"][2][3] = 0.7975220387970015
gradients["dc_prev"].shape = (5, 10)
gradients["dWf"][3][1] = -0.1479548381644968
gradients["dWf"].shape = (5, 8)
gradients["dWi"][1][2] = 1.0574980552259903
gradients["dWi"].shape = (5, 8)
gradients["dWc"][3][1] = 2.3045621636876668
gradients["dWc"].shape = (5, 8)
gradients["dWo"][1][2] = 0.3313115952892109
gradients["dWo"].shape = (5, 8)
gradients["dbf"][4] = [0.18864637]
gradients["dbf"].shape = (5, 1)
gradients["dbi"][4] = [-0.40142491]
gradients["dbi"].shape = (5, 1)
gradients["dbc"][4] = [0.25587763]
gradients["dbc"].shape = (5, 1)
gradients["dbo"][4] = [0.13893342]
gradients["dbo"].shape = (5, 1)

3-3 Backward pass through the LSTM RNN

This part is very similar to the rnn_backward function you implemented above.

  • You will first create variables of the same dimensions as the return variables.
  • You will then iterate over all the time steps starting from the end, and call the one-step function you implemented for the LSTM at each iteration.
  • You will then update the parameters by summing them individually.
  • Finally return a dictionary with the new gradients.

Instructions: Implement the lstm_backward function. Create a for loop starting from $T_x$ and going backward. At each step call lstm_cell_backward and update your old gradients by adding the new gradients to them. Note that dxt is not updated but is stored.

Implementation:

def lstm_backward(da, caches):
    
    """
    Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).
    实现LSTM网络的反向传播

    Arguments:
    da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
    关于隐藏状态的梯度,维度为(n_a, m, T_x)

    dc -- Gradients w.r.t the memory states, numpy-array of shape (n_a, m, T_x)
	关于记忆状态的梯度,维度为(n_a, m, T_x)

    caches -- cache storing information from the forward pass (lstm_forward)
    前向传播保存的信息

    Returns:
    gradients -- python dictionary containing:
                        dx -- Gradient of inputs, of shape (n_x, m, T_x)
		输入数据的梯度,维度为(n_x, m,T_x)

                        da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
		先前的隐藏状态的梯度,维度为(n_a, m)

                        dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
		遗忘门的权值的梯度,维度为(n_a, n_a + n_x)

                        dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
		更新门的权值的梯度,维度为(n_a, n_a + n_x)

                        dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
		记忆门的权值的梯度,维度为(n_a, n_a + n_x)

                        dWo -- Gradient w.r.t. the weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
		输出门的权值的梯度,维度为(n_a, n_a + n_x)

                        dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
		遗忘门的偏置的梯度,维度为(n_a, 1)

                        dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
		更新门的偏置的梯度,维度为(n_a, 1)

                        dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
		记忆门的偏置的梯度,维度为(n_a, n_a + n_x)

                        dbo -- Gradient w.r.t. biases of the save gate, of shape (n_a, 1)
		输出门的偏置的梯度,维度为(n_a, 1)
    """

    # Retrieve values from the first cache (t=1) of caches.
    (caches, x) = caches
    (a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]

    ### START CODE HERE ###
    # Retrieve dimensions from da's and x1's shapes (≈2 lines)
    n_a, m, T_x = da.shape
    n_x, m = x1.shape

    # initialize the gradients with the right sizes (≈12 lines)
    dx = np.zeros([n_x, m, T_x])
    da0 = np.zeros([n_a, m])
    da_prevt = np.zeros([n_a, m])
    dc_prevt = np.zeros([n_a, m])
    dWf = np.zeros([n_a, n_a + n_x])
    dWi = np.zeros([n_a, n_a + n_x])
    dWc = np.zeros([n_a, n_a + n_x])
    dWo = np.zeros([n_a, n_a + n_x])
    dbf = np.zeros([n_a, 1])
    dbi = np.zeros([n_a, 1])
    dbc = np.zeros([n_a, 1])
    dbo = np.zeros([n_a, 1])

    # loop back over the whole sequence
    for t in reversed(range(T_x)):
        # Compute all gradients using lstm_cell_backward
        gradients = lstm_cell_backward(da[:, :, t] + da_prevt, dc_prevt, caches[t])

        # Propagate the hidden-state and cell-state gradients to the previous time step
        da_prevt, dc_prevt = gradients['da_prev'], gradients["dc_prev"]

        # Store or add the gradient to the parameters' previous step's gradient
        dx[:, :, t] = gradients['dxt']
        dWf = dWf + gradients['dWf']
        dWi = dWi + gradients['dWi']
        dWc = dWc + gradients['dWc']
        dWo = dWo + gradients['dWo']
        dbf = dbf + gradients['dbf']
        dbi = dbi + gradients['dbi']
        dbc = dbc + gradients['dbc']
        dbo = dbo + gradients['dbo']

    # Set the first activation's gradient to the backpropagated gradient da_prev.
    da0 = da_prevt

    ### END CODE HERE ###

    # Store the gradients in a python dictionary
    gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
                "dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
    
    return gradients

Let's test it:

np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)

parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}

a, y, c, caches = lstm_forward(x, a0, parameters)

da = np.random.randn(5, 10, 4)
gradients = lstm_backward(da, caches)

print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)

Result:

gradients["dx"][1][2] = [-0.00173313  0.08287442 -0.30545663 -0.43281115]
gradients["dx"].shape = (3, 10, 4)
gradients["da0"][2][3] = -0.09591150195400465
gradients["da0"].shape = (5, 10)
gradients["dWf"][3][1] = -0.06981985612744009
gradients["dWf"].shape = (5, 8)
gradients["dWi"][1][2] = 0.10237182024854771
gradients["dWi"].shape = (5, 8)
gradients["dWc"][3][1] = -0.062498379492745226
gradients["dWc"].shape = (5, 8)
gradients["dWo"][1][2] = 0.04843891314443013
gradients["dWo"].shape = (5, 8)
gradients["dbf"][4] = [-0.0565788]
gradients["dbf"].shape = (5, 1)
gradients["dbi"][4] = [-0.15399065]
gradients["dbi"].shape = (5, 1)
gradients["dbc"][4] = [-0.29691142]
gradients["dbc"].shape = (5, 1)
gradients["dbo"][4] = [-0.29798344]
gradients["dbo"].shape = (5, 1)

Congratulations on completing this assignment. You now understand how recurrent neural networks work!

Let's continue to the next exercise, where you will build a character-level language model using an RNN.

4 - Full code

Link
