Building Your Recurrent Neural Network, Step by Step

Welcome to the first assignment of Course 5! In this assignment, you will build your first Recurrent Neural Network in numpy.
For the complete code, see the download link below.
Recurrent Neural Networks (RNNs) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future.

Notation:

  • Superscript $[l]$ denotes an object associated with the $l^{th}$ layer.

    • Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.
  • Superscript $(i)$ denotes an object associated with the $i^{th}$ example.

    • Example: $x^{(i)}$ is the $i^{th}$ training example.
  • Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step.

    • Example: $x^{\langle t \rangle}$ is the input $x$ at the $t^{th}$ time-step. $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ time-step of example $i$.
  • Subscript $i$ denotes the $i^{th}$ entry of a vector.

    • Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.

We assume that you are already familiar with numpy and/or have completed the previous courses. Let's get started!

Let's first import all the packages that you will need during this assignment.

import numpy as np
from rnn_utils import *

Download link for rnn_utils.py

1. Forward propagation for the basic Recurrent Neural Network

Later this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$.
Figure 1: Basic RNN model

Here's how you can implement an RNN:

Steps:

  1. Implement the calculations needed for one time-step of the RNN.
  2. Implement a loop over $T_x$ time-steps in order to process the inputs one at a time.

Let's go!

1.1 RNN cell

A Recurrent Neural Network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell.
Figure 2: Basic RNN cell. $x^{\langle t \rangle}$ is the current input, $a^{\langle t-1 \rangle}$ is the hidden state from the previous time-step (carrying information from the past), and $a^{\langle t \rangle}$ is the current hidden state, which is passed to the next cell and is also used to compute the current prediction $\hat{y}^{\langle t \rangle}$.

Exercise: Implement the RNN cell described in Figure 2.

Instructions:

  1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.
  2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = \mathrm{softmax}(W_{ya} a^{\langle t \rangle} + b_y)$. We provide the function softmax.
  3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in a cache.
  4. Return $a^{\langle t \rangle}$, $\hat{y}^{\langle t \rangle}$ and the cache.

We will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ has dimensions $(n_x, m)$, and $a^{\langle t \rangle}$ has dimensions $(n_a, m)$.

# GRADED FUNCTION: rnn_cell_forward

def rnn_cell_forward(xt, a_prev, parameters):
    """
    Implements a single forward step of the RNN-cell as described in Figure (2)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        ba --  Bias, numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
    """
    
    # Retrieve parameters from "parameters"
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]
    
    ### START CODE HERE ### (≈2 lines)
    # compute next activation state using the formula given above
    a_next = np.tanh(np.dot(Waa, a_prev) + np.dot(Wax, xt) + ba)
    # compute output of the current cell using the formula given above
    yt_pred = softmax(np.dot(Wya, a_next) + by)   
    ### END CODE HERE ###
    
    # store values you need for backward propagation in cache
    cache = (a_next, a_prev, xt, parameters)
    
    return a_next, yt_pred, cache
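
The following quick check is not part of the graded notebook; it is a minimal sketch of how rnn_cell_forward can be called, with shapes (n_x=3, n_a=5, n_y=2, m=10) chosen purely for illustration. It assumes numpy and the softmax function from rnn_utils have been imported as above.

np.random.seed(1)
xt = np.random.randn(3, 10)               # input at time t: (n_x, m)
a_prev = np.random.randn(5, 10)           # previous hidden state: (n_a, m)
parameters = {"Waa": np.random.randn(5, 5),
              "Wax": np.random.randn(5, 3),
              "Wya": np.random.randn(2, 5),
              "ba": np.random.randn(5, 1),
              "by": np.random.randn(2, 1)}
a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next.shape =", a_next.shape)     # expected (5, 10)
print("yt_pred.shape =", yt_pred.shape)   # expected (2, 10)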

1.2 RNN forward pass

You can see an RNN as the repetition of the cell you have just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time-step's input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time-step.
Figure 3: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$.

Exercise: Code the forward propagation of the RNN described in Figure 3.

Instructions:

  1. Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.
  2. Initialize the "next" hidden state as $a_0$ (the initial hidden state).
  3. Start looping over each time step, with incremental index $t$:
    • Update the "next" hidden state and the cache by running rnn_cell_forward
    • Store the "next" hidden state in $a$ (at the $t^{th}$ position)
    • Store the prediction in $y$
    • Append the cache to the list of caches
  4. Return $a$, $y$ and caches

# GRADED FUNCTION: rnn_forward

def rnn_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network described in Figure (3).

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        ba --  Bias numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of caches, x)
    """
    
    # Initialize "caches" which will contain the list of all caches
    caches = []
    
    # Retrieve dimensions from shapes of x and Wy
    n_x, m, T_x = x.shape
    n_y, n_a = parameters["Wya"].shape
    
    ### START CODE HERE ###
    
    # initialize "a" and "y" with zeros (≈2 lines)
    a = np.zeros([n_a, m, T_x])
    y_pred = np.zeros([n_y, m, T_x])
    
    # Initialize a_next (≈1 line)
    a_next = a0
    
    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, compute the prediction, get the cache (≈1 line)
        a_next, yt_pred, cache = rnn_cell_forward(x[:, :, t], a_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:,:,t] = a_next
        # Save the value of the prediction in y (≈1 line)
        y_pred[:,:,t] = yt_pred
        # Append "cache" to "caches" (≈1 line)
        caches.append(cache)
        
    ### END CODE HERE ###
    
    # store values needed for backward propagation in cache
    caches = (caches, x)
    
    return a, y_pred, caches
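
As with the single cell, here is a minimal, illustrative way (shapes n_x=3, n_a=5, n_y=2, m=10, T_x=4 are arbitrary) to exercise rnn_forward on random data:

np.random.seed(1)
x = np.random.randn(3, 10, 4)             # (n_x, m, T_x)
a0 = np.random.randn(5, 10)               # initial hidden state
parameters = {"Waa": np.random.randn(5, 5),
              "Wax": np.random.randn(5, 3),
              "Wya": np.random.randn(2, 5),
              "ba": np.random.randn(5, 1),
              "by": np.random.randn(2, 1)}
a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a.shape =", a.shape)               # expected (5, 10, 4)
print("y_pred.shape =", y_pred.shape)     # expected (2, 10, 4)
print("len(caches) =", len(caches))       # 2: (list of per-step caches, x)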

2 - Long Short-Term Memory (LSTM) network

The following figure shows the operations of an LSTM cell.
Figure 4: LSTM cell. At every time-step it tracks and updates the "cell state", or memory variable, $c^{\langle t \rangle}$, which is distinct from $a^{\langle t \rangle}$.

Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input of $T_x$ time-steps.

About the gates

- Forget gate

For the sake of this illustration, let's assume we are reading words in a piece of text and want to use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need a way to get rid of the previously stored memory value of the singular/plural state. In an LSTM, the forget gate lets us do this:

$$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)\tag{1}$$

Here, $W_f$ are the weights that govern the forget gate's behavior. We concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ into $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$. The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. So if one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0), it means the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$. If one of the values is 1, the information is kept.

- Update gate

Once we forget that the subject being discussed is singular, we need a way to update the state to reflect that the new subject is now plural. Here is the formula for the update gate:

$$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_u)\tag{2}$$

Similar to the forget gate, $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. It will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$ in order to compute $c^{\langle t \rangle}$.

- Updating the cell

To update the new subject, we need to create a new vector of numbers that we can add to our previous cell state. The equation we use is:

$$\tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)\tag{3}$$

Finally, the new cell state is:

$$c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle} * c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} * \tilde{c}^{\langle t \rangle} \tag{4}$$

- Output gate

To decide which outputs we will use, we will use the following two formulas:

$$\Gamma_o^{\langle t \rangle} = \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)\tag{5}$$
$$a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle} * \tanh(c^{\langle t \rangle})\tag{6}$$

In equation 5 you decide what to output using a sigmoid function, and in equation 6 you multiply that by the $\tanh$ of the new cell state.
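
To make the gate mechanism concrete, here is a minimal numpy sketch (not part of the assignment) of a single forget gate acting on a toy cell state. The sizes n_a = 2, n_x = 3, m = 1 and the locally defined sigmoid are assumptions chosen only for illustration.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

n_a, n_x, m = 2, 3, 1                          # toy sizes, illustration only
a_prev = np.random.randn(n_a, m)               # previous hidden state
xt = np.random.randn(n_x, m)                   # current input
c_prev = np.random.randn(n_a, m)               # previous cell state

Wf = np.random.randn(n_a, n_a + n_x)           # gate weights act on [a_prev; x_t]
bf = np.zeros((n_a, 1))

concat = np.concatenate((a_prev, xt), axis=0)  # shape (n_a + n_x, m)
gamma_f = sigmoid(np.dot(Wf, concat) + bf)     # forget gate, values in (0, 1)
kept = gamma_f * c_prev                        # element-wise: ~0 erases, ~1 keeps

Entries of gamma_f near 0 wipe the corresponding component of the cell state, while entries near 1 preserve it, which is exactly the behavior described above.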

2.1 - LSTM cell

Exercise: Implement the LSTM cell described in Figure 4.

Instructions:

  1. Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$
  2. Compute all of formulas 1-6. You can use sigmoid() (provided) and np.tanh().
  3. Compute the prediction $y^{\langle t \rangle}$. You can use softmax() (provided).

# GRADED FUNCTION: lstm_cell_forward

def lstm_cell_forward(xt, a_prev, c_prev, parameters):
    """
    Implement a single forward step of the LSTM-cell as described in Figure (4)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
                        Wi -- Weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
                        bi -- Bias of the save gate, numpy array of shape (n_a, 1)
                        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
                        bc --  Bias of the first "tanh", numpy array of shape (n_a, 1)
                        Wo -- Weight matrix of the focus gate, numpy array of shape (n_a, n_a + n_x)
                        bo --  Bias of the focus gate, numpy array of shape (n_a, 1)
                        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
                        
    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    c_next -- next memory state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)
    
    Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilda),
          c stands for the memory value
    """

    # Retrieve parameters from "parameters"
    Wf = parameters["Wf"]
    bf = parameters["bf"]
    Wi = parameters["Wi"]
    bi = parameters["bi"]
    Wc = parameters["Wc"]
    bc = parameters["bc"]
    Wo = parameters["Wo"]
    bo = parameters["bo"]
    Wy = parameters["Wy"]
    by = parameters["by"]
    
    # Retrieve dimensions from shapes of xt and Wy
    n_x, m = xt.shape
    n_y, n_a = Wy.shape

    ### START CODE HERE ###
    # Concatenate a_prev and xt (≈3 lines)
    concat = np.zeros([n_a + n_x, m])
    concat[: n_a, :] = a_prev
    concat[n_a :, :] = xt

    # Compute values for ft, it, cct, c_next, ot, a_next using the formulas given figure (4) (≈6 lines)
    ft = sigmoid(np.dot(Wf, concat) + bf)
    it = sigmoid(np.dot(Wi, concat) + bi)
    cct = np.tanh(np.dot(Wc, concat) + bc)
    c_next = ft * c_prev + it * cct
    ot = sigmoid(np.dot(Wo, concat) + bo)
    a_next = ot * np.tanh(c_next)
    
    # Compute prediction of the LSTM cell (≈1 line)
    yt_pred = softmax(np.dot(Wy, a_next) + by)
    ### END CODE HERE ###

    # store values needed for backward propagation in cache
    cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)

    return a_next, c_next, yt_pred, cache
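
As a quick, illustrative check (shapes n_x=3, n_a=5, n_y=2, m=10 are arbitrary), lstm_cell_forward can be exercised with random data; this assumes sigmoid and softmax from rnn_utils are available.

np.random.seed(1)
xt = np.random.randn(3, 10)
a_prev = np.random.randn(5, 10)
c_prev = np.random.randn(5, 10)
parameters = {"Wf": np.random.randn(5, 5 + 3), "bf": np.random.randn(5, 1),
              "Wi": np.random.randn(5, 5 + 3), "bi": np.random.randn(5, 1),
              "Wo": np.random.randn(5, 5 + 3), "bo": np.random.randn(5, 1),
              "Wc": np.random.randn(5, 5 + 3), "bc": np.random.randn(5, 1),
              "Wy": np.random.randn(2, 5),     "by": np.random.randn(2, 1)}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next.shape =", a_next.shape)    # expected (5, 10)
print("c_next.shape =", c_next.shape)    # expected (5, 10)
print("yt.shape =", yt.shape)            # expected (2, 10)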

2.2 - Forward pass for LSTM

Now that you have implemented one step of an LSTM, you can iterate it with a for-loop to process a sequence of $T_x$ inputs.
Figure 4: LSTM over multiple time-steps.

Exercise: Implement lstm_forward() to run an LSTM over $T_x$ time-steps.

Note: $c^{\langle 0 \rangle}$ is initialized with zeros.

# GRADED FUNCTION: lstm_forward

def lstm_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (4).

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
                        Wi -- Weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
                        bi -- Bias of the save gate, numpy array of shape (n_a, 1)
                        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
                        bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
                        Wo -- Weight matrix of the focus gate, numpy array of shape (n_a, n_a + n_x)
                        bo -- Bias of the focus gate, numpy array of shape (n_a, 1)
                        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
                        
    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
    """

    # Initialize "caches", which will track the list of all the caches
    caches = []
    
    ### START CODE HERE ###
    # Retrieve dimensions from shapes of xt and Wy (≈2 lines)
    n_x, m, T_x = x.shape
    n_y, n_a = parameters['Wy'].shape
    
    # initialize "a", "c" and "y" with zeros (≈3 lines)
    a = np.zeros([n_a, m, T_x])
    c = np.zeros([n_a, m, T_x])
    y = np.zeros([n_y, m, T_x])
    
    # Initialize a_next and c_next (≈2 lines)
    a_next = a0
    c_next = np.zeros([n_a, m])
    
    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
        a_next, c_next, yt, cache = lstm_cell_forward(x[:, :, t], a_next, c_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:,:,t] = a_next
        # Save the value of the prediction in y (≈1 line)
        y[:,:,t] = yt
        # Save the value of the next cell state (≈1 line)
        c[:,:,t]  = c_next
        # Append the cache into caches (≈1 line)
        caches.append(cache)
        
    ### END CODE HERE ###
    
    # store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y, c, caches
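
And a similar illustrative check for lstm_forward (shapes n_x=3, n_a=5, n_y=2, m=10, T_x=7 chosen arbitrarily):

np.random.seed(1)
x = np.random.randn(3, 10, 7)             # (n_x, m, T_x)
a0 = np.random.randn(5, 10)
parameters = {"Wf": np.random.randn(5, 8), "bf": np.random.randn(5, 1),
              "Wi": np.random.randn(5, 8), "bi": np.random.randn(5, 1),
              "Wo": np.random.randn(5, 8), "bo": np.random.randn(5, 1),
              "Wc": np.random.randn(5, 8), "bc": np.random.randn(5, 1),
              "Wy": np.random.randn(2, 5), "by": np.random.randn(2, 1)}
a, y, c, caches = lstm_forward(x, a0, parameters)
print("a.shape =", a.shape)               # expected (5, 10, 7)
print("y.shape =", y.shape)               # expected (2, 10, 7)
print("c.shape =", c.shape)               # expected (5, 10, 7)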

3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)

In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If, however, you are comfortable with calculus and want to see the details of backprop in RNNs, you can work through this optional part of the notebook.

In an earlier course, you implemented a simple (fully connected) neural network and used backpropagation to compute the derivatives of the cost with respect to the parameters in order to update them. Similarly, in recurrent neural networks you can compute the derivatives of the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture, but we present them briefly below.

3.1 - Basic RNN backward pass

We will start by computing the backward pass for the basic RNN cell.
Figure 5: The RNN cell's backward pass. Just as in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the RNN by following the chain rule from calculus. The chain rule is used to compute $\left(\frac{\partial J}{\partial W_{ax}}, \frac{\partial J}{\partial W_{aa}}, \frac{\partial J}{\partial b_a}\right)$ in order to update the parameters $(W_{ax}, W_{aa}, b_a)$.

Deriving the one-step backward functions:

To compute rnn_cell_backward you need to compute the following equations. It is a good exercise to derive them by hand.

The derivative of $\tanh$ is $1-\tanh(x)^2$; the complete proof can be found in any calculus reference. Note that $\operatorname{sech}(x)^2 = 1 - \tanh(x)^2$.

Similarly, for $\frac{\partial a^{\langle t \rangle}}{\partial W_{ax}}$, $\frac{\partial a^{\langle t \rangle}}{\partial W_{aa}}$ and $\frac{\partial a^{\langle t \rangle}}{\partial b_a}$, the derivative of $\tanh(u)$ is $(1-\tanh(u)^2)du$.

The final two equations also follow the same rule and are derived using the $\tanh$ derivative. Note that the terms are arranged so that the matrix dimensions match.
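
As a quick sanity check on the derivative claim above (purely illustrative, not part of the assignment), a central finite difference can be compared against $1-\tanh(x)^2$ at an arbitrary point:

import numpy as np

x, eps = 0.7, 1e-6                                            # arbitrary test point and step size
analytic = 1 - np.tanh(x) ** 2                                # claimed derivative of tanh
numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)   # central difference approximation
print(analytic, numeric)                                      # the two values agree to ~1e-10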

def rnn_cell_backward(da_next, cache):
    """
    Implements the backward pass for the RNN-cell (single time-step).

    Arguments:
    da_next -- Gradient of loss with respect to next hidden state
    cache -- tuple of values from the forward pass (output of rnn_cell_forward())

    Returns:
    gradients -- python dictionary containing:
                        dx -- Gradients of input data, of shape (n_x, m)
                        da_prev -- Gradients of previous hidden state, of shape (n_a, m)
                        dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
                        dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
                        dba -- Gradients of bias vector, of shape (n_a, 1)
    """
    
    # Retrieve values from cache
    (a_next, a_prev, xt, parameters) = cache
    
    # Retrieve values from parameters
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]

    ### START CODE HERE ###
    # compute the gradient of tanh with respect to a_next (≈1 line)
    dtanh = (1-a_next * a_next) * da_next  

    # compute the gradient of the loss with respect to Wax (≈2 lines)
    dxt = np.dot(Wax.T,dtanh)
    dWax = np.dot(dtanh, xt.T)

    # compute the gradient with respect to Waa (≈2 lines)
    da_prev = np.dot(Waa.T,dtanh)
    dWaa = np.dot(dtanh, a_prev.T)

    # compute the gradient with respect to b (≈1 line)
    dba = np.sum(dtanh, keepdims=True, axis=-1)

    ### END CODE HERE ###
    
    # Store the gradients in a python dictionary
    gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
    
    return gradients
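
A minimal way (illustrative shapes only) to exercise rnn_cell_backward is to run one forward step first and feed a random upstream gradient into the backward step:

np.random.seed(1)
xt = np.random.randn(3, 10)
a_prev = np.random.randn(5, 10)
parameters = {"Wax": np.random.randn(5, 3), "Waa": np.random.randn(5, 5),
              "Wya": np.random.randn(2, 5), "ba": np.random.randn(5, 1),
              "by": np.random.randn(2, 1)}
a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
da_next = np.random.randn(5, 10)                  # pretend upstream gradient w.r.t. a_next
gradients = rnn_cell_backward(da_next, cache)
print("dxt.shape =", gradients["dxt"].shape)      # expected (3, 10)
print("dWax.shape =", gradients["dWax"].shape)    # expected (5, 3)
print("dWaa.shape =", gradients["dWaa"].shape)    # expected (5, 5)
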
Backward pass through the RNN

Computing the gradient of the cost with respect to $a^{\langle t \rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN cell. To do so, you need to iterate through all the time steps starting at the end, and at each step you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and store $dx$.

Instructions:

Implement the rnn_backward function. First initialize the return variables with zeros, then loop through all the time steps while calling rnn_cell_backward at each time-step, updating the other variables accordingly.

def rnn_backward(da, caches):
    """
    Implement the backward pass for a RNN over an entire sequence of input data.

    Arguments:
    da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
    caches -- tuple containing information from the forward pass (rnn_forward)
    
    Returns:
    gradients -- python dictionary containing:
                        dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
                        da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
                        dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
                        dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)
                        dba -- Gradient w.r.t the bias, of shape (n_a, 1)
    """
        
    ### START CODE HERE ###
    
    # Retrieve values from the first cache (t=1) of caches (≈2 lines)
    (caches, x) = caches
    (a1, a0, x1, parameters) = caches[0]
    
    # Retrieve dimensions from da's and x1's shapes (≈2 lines)
    n_a, m, T_x = da.shape
    n_x, m = x1.shape
    
    # initialize the gradients with the right sizes (≈6 lines)
    dx = np.zeros([n_x, m, T_x])
    dWax = np.zeros([n_a, n_x])
    dWaa = np.zeros([n_a, n_a])
    dba = np.zeros([n_a, 1])
    da0 = np.zeros([n_a, m])
    da_prevt = np.zeros([n_a, m])
    
    # Loop through all the time steps
    for t in reversed(range(T_x)):
        # Compute gradients at time step t. Choose wisely the "da_next" and the "cache" to use in the backward propagation step. (≈1 line)
        gradients = rnn_cell_backward(da[:, :, t] + da_prevt, caches[t])
        # Retrieve derivatives from gradients (≈ 1 line)
        dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
        # Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
        dx[:, :, t] = dxt
        dWax += dWaxt
        dWaa += dWaat
        dba += dbat
        
    # Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line) 
    da0 = da_prevt
    ### END CODE HERE ###

    # Store the gradients in a python dictionary
    gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa,"dba": dba}
    
    return gradients
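
As with the cell-level function, rnn_backward can be sketched end-to-end on random data (shapes chosen for illustration); the upstream gradient da would normally come from the loss layer:

np.random.seed(1)
x = np.random.randn(3, 10, 4)
a0 = np.random.randn(5, 10)
parameters = {"Wax": np.random.randn(5, 3), "Waa": np.random.randn(5, 5),
              "Wya": np.random.randn(2, 5), "ba": np.random.randn(5, 1),
              "by": np.random.randn(2, 1)}
a, y_pred, caches = rnn_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)                 # pretend gradient w.r.t. every hidden state
gradients = rnn_backward(da, caches)
print("dx.shape =", gradients["dx"].shape)     # expected (3, 10, 4)
print("da0.shape =", gradients["da0"].shape)   # expected (5, 10)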

3.2 - LSTM backward pass

3.2.1 One Step backward

The LSTM backward pass is slightly more complicated than the forward pass. We have provided all the equations for the LSTM backward pass below. (If you enjoy calculus exercises, feel free to try deriving them from scratch yourself.)

3.2.2 gate derivatives

$$d\Gamma_o^{\langle t \rangle} = da_{next} * \tanh(c_{next}) * \Gamma_o^{\langle t \rangle} * (1-\Gamma_o^{\langle t \rangle})\tag{7}$$

$$d\tilde c^{\langle t \rangle} = \left(dc_{next} * \Gamma_u^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \Gamma_u^{\langle t \rangle} * da_{next}\right) * \left(1-(\tilde c^{\langle t \rangle})^2\right) \tag{8}$$

$$d\Gamma_u^{\langle t \rangle} = \left(dc_{next} * \tilde c^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \tilde c^{\langle t \rangle} * da_{next}\right) * \Gamma_u^{\langle t \rangle} * (1-\Gamma_u^{\langle t \rangle})\tag{9}$$

$$d\Gamma_f^{\langle t \rangle} = \left(dc_{next} * c_{prev} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * c_{prev} * da_{next}\right) * \Gamma_f^{\langle t \rangle} * (1-\Gamma_f^{\langle t \rangle})\tag{10}$$

3.2.3 parameter derivatives

$$dW_f = d\Gamma_f^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t \end{pmatrix}^T \tag{11}$$
$$dW_u = d\Gamma_u^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t \end{pmatrix}^T \tag{12}$$
$$dW_c = d\tilde c^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t \end{pmatrix}^T \tag{13}$$
$$dW_o = d\Gamma_o^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t \end{pmatrix}^T \tag{14}$$

To calculate $db_f, db_u, db_c, db_o$ you just need to sum across the horizontal axis (axis=1) on $d\Gamma_f^{\langle t \rangle}$, $d\Gamma_u^{\langle t \rangle}$, $d\tilde c^{\langle t \rangle}$ and $d\Gamma_o^{\langle t \rangle}$ respectively. Note that you should use the keepdims=True option.

Finally, you will compute the derivatives with respect to the previous hidden state, the previous memory state, and the input.

$$da_{prev} = W_f^T * d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle} + W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle} \tag{15}$$
Here, the weights in equation 15 are the first $n_a$ columns of each matrix (i.e. $W_f = W_f[:, :n_a]$ etc.).

$$dc_{prev} = dc_{next} * \Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1-\tanh(c_{next})^2) * \Gamma_f^{\langle t \rangle} * da_{next} \tag{16}$$
$$dx^{\langle t \rangle} = W_f^T * d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle} + W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle}\tag{17}$$
The weights in equation 17 are the columns from $n_a$ to the end (i.e. $W_f = W_f[:, n_a:]$ etc.).

Exercise: Implement lstm_cell_backward by coding equations 7-17 above. Good luck!

def lstm_cell_backward(da_next, dc_next, cache):
    """
    Implement the backward pass for the LSTM-cell (single time-step).

    Arguments:
    da_next -- Gradients of next hidden state, of shape (n_a, m)
    dc_next -- Gradients of next cell state, of shape (n_a, m)
    cache -- cache storing information from the forward pass

    Returns:
    gradients -- python dictionary containing:
                        dxt -- Gradient of input data at time-step t, of shape (n_x, m)
                        da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
                        dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m)
                        dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        dWi -- Gradient w.r.t. the weight matrix of the input gate, numpy array of shape (n_a, n_a + n_x)
                        dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
                        dWo -- Gradient w.r.t. the weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
                        dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
                        dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
                        dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
                        dbo -- Gradient w.r.t. biases of the save gate, of shape (n_a, 1)
    """

    # Retrieve information from "cache"
    (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
    
    ### START CODE HERE ###
    # Retrieve dimensions from xt's and a_next's shape (≈2 lines)
    n_x, m = xt.shape
    n_a, m = a_next.shape
    
    # Compute gates related derivatives, you can find their values can be found by looking carefully at equations (7) to (10) (≈4 lines)
    dot = da_next * np.tanh(c_next) * ot * (1 - ot)
    dcct = (dc_next * it + ot * (1 - np.square(np.tanh(c_next))) * it * da_next) * (1 - np.square(cct))
    dit = (dc_next * cct + ot * (1 - np.square(np.tanh(c_next))) * cct * da_next) * it * (1 - it)
    dft = (dc_next * c_prev + ot * (1 - np.square(np.tanh(c_next))) * c_prev * da_next) * ft * (1 - ft)
    
    # Compute parameters related derivatives. Use equations (11)-(14) (≈8 lines)
    concat = np.concatenate((a_prev, xt), axis=0).T
    dWf = np.dot(dft, concat)
    dWi = np.dot(dit, concat)
    dWc = np.dot(dcct, concat)
    dWo = np.dot(dot, concat)
    dbf = np.sum(dft,axis=1,keepdims=True)  
    dbi = np.sum(dit,axis=1,keepdims=True)  
    dbc = np.sum(dcct,axis=1,keepdims=True)  
    dbo = np.sum(dot,axis=1,keepdims=True)  

    # Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (15)-(17). (≈3 lines)
    da_prev = np.dot(parameters["Wf"][:, :n_a].T, dft) + np.dot(parameters["Wc"][:, :n_a].T, dcct) + np.dot(parameters["Wi"][:, :n_a].T, dit) + np.dot(parameters["Wo"][:, :n_a].T, dot)
    dc_prev = dc_next*ft+ot*(1-np.square(np.tanh(c_next)))*ft*da_next
    dxt = np.dot(parameters["Wf"][:, n_a:].T, dft) + np.dot(parameters["Wc"][:, n_a:].T, dcct) + np.dot(parameters["Wi"][:, n_a:].T, dit) + np.dot(parameters["Wo"][:, n_a:].T, dot)
    ### END CODE HERE ###
    
    # Save gradients in dictionary
    gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
                "dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}

    return gradients
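
The same kind of illustrative smoke test works for lstm_cell_backward: run one forward step, then feed random upstream gradients for both the hidden state and the cell state (shapes are arbitrary):

np.random.seed(1)
xt = np.random.randn(3, 10)
a_prev = np.random.randn(5, 10)
c_prev = np.random.randn(5, 10)
parameters = {"Wf": np.random.randn(5, 8), "bf": np.random.randn(5, 1),
              "Wi": np.random.randn(5, 8), "bi": np.random.randn(5, 1),
              "Wo": np.random.randn(5, 8), "bo": np.random.randn(5, 1),
              "Wc": np.random.randn(5, 8), "bc": np.random.randn(5, 1),
              "Wy": np.random.randn(2, 5), "by": np.random.randn(2, 1)}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
da_next = np.random.randn(5, 10)
dc_next = np.random.randn(5, 10)
gradients = lstm_cell_backward(da_next, dc_next, cache)
print("dxt.shape =", gradients["dxt"].shape)    # expected (3, 10)
print("dWf.shape =", gradients["dWf"].shape)    # expected (5, 8)
print("dbf.shape =", gradients["dbf"].shape)    # expected (5, 1)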

3.3 Backward pass through the LSTM RNN

This part is very similar to the rnn_backward function you implemented above. You will first create variables of the same dimensions as your return variables. You will then iterate over all the time steps starting from the end, calling the one-step LSTM backward function you implemented at each iteration. You will then update the parameter gradients by summing them individually. Finally, you return a dictionary with the new gradients.

Instructions: Implement the lstm_backward function. Create a for-loop starting from $T_x$ and going backward. At each step call lstm_cell_backward and update your old gradients by adding the new gradients to them. Note that dxt is not accumulated but stored.

def lstm_backward(da, caches):
    
    """
    Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).

    Arguments:
    da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
    dc -- Gradients w.r.t the memory states, numpy-array of shape (n_a, m, T_x)
    caches -- cache storing information from the forward pass (lstm_forward)

    Returns:
    gradients -- python dictionary containing:
                        dx -- Gradient of inputs, of shape (n_x, m, T_x)
                        da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
                        dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
                        dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
                        dWo -- Gradient w.r.t. the weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
                        dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
                        dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
                        dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
                        dbo -- Gradient w.r.t. biases of the save gate, of shape (n_a, 1)
    """

    # Retrieve values from the first cache (t=1) of caches.
    (caches, x) = caches
    (a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
    
    ### START CODE HERE ###
    # Retrieve dimensions from da's and x1's shapes (≈2 lines)
    n_a, m, T_x = da.shape
    n_x, m = x1.shape
    
    # initialize the gradients with the right sizes (≈12 lines)
    dx = np.zeros([n_x, m, T_x])
    da0 = np.zeros([n_a, m])
    da_prevt = np.zeros([n_a, m])
    dc_prevt = np.zeros([n_a, m])
    dWf = np.zeros([n_a, n_a + n_x])
    dWi = np.zeros([n_a, n_a + n_x])
    dWc = np.zeros([n_a, n_a + n_x])
    dWo = np.zeros([n_a, n_a + n_x])
    dbf = np.zeros([n_a, 1])
    dbi = np.zeros([n_a, 1])
    dbc = np.zeros([n_a, 1])
    dbo = np.zeros([n_a, 1])
    
    # loop back over the whole sequence
    for t in reversed(range(T_x)):
        # Compute all gradients using lstm_cell_backward
        gradients = lstm_cell_backward(da[:, :, t] + da_prevt, dc_prevt, caches[t])
        # Retrieve the gradients flowing into the previous hidden and cell states (≈1 line)
        da_prevt, dc_prevt = gradients['da_prev'], gradients["dc_prev"]
        # Store or add the gradient to the parameters' previous step's gradient
        dx[:,:,t] = gradients['dxt']
        dWf = dWf+gradients['dWf']
        dWi = dWi+gradients['dWi']
        dWc = dWc+gradients['dWc']
        dWo = dWo+gradients['dWo']
        dbf = dbf+gradients['dbf']
        dbi = dbi+gradients['dbi']
        dbc = dbc+gradients['dbc']
        dbo = dbo+gradients['dbo']
    # Set the first activation's gradient to the backpropagated gradient da_prev.
    da0 = gradients['da_prev']
    
    ### END CODE HERE ###

    # Store the gradients in a python dictionary
    gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
                "dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
    
    return gradients
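
Finally, a sketch (shapes illustrative only) of calling lstm_backward on the caches produced by lstm_forward:

np.random.seed(1)
x = np.random.randn(3, 10, 7)
a0 = np.random.randn(5, 10)
parameters = {"Wf": np.random.randn(5, 8), "bf": np.random.randn(5, 1),
              "Wi": np.random.randn(5, 8), "bi": np.random.randn(5, 1),
              "Wo": np.random.randn(5, 8), "bo": np.random.randn(5, 1),
              "Wc": np.random.randn(5, 8), "bc": np.random.randn(5, 1),
              "Wy": np.random.randn(2, 5), "by": np.random.randn(2, 1)}
a, y, c, caches = lstm_forward(x, a0, parameters)
da = np.random.randn(5, 10, 7)                 # pretend gradient w.r.t. every hidden state
gradients = lstm_backward(da, caches)
print("dx.shape =", gradients["dx"].shape)     # expected (3, 10, 7)
print("da0.shape =", gradients["da0"].shape)   # expected (5, 10)
print("dWf.shape =", gradients["dWf"].shape)   # expected (5, 8)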

Congratulations!

Congratulations on completing this assignment. You now understand how recurrent neural networks work!

Let's go on to the next exercise, where you'll use an RNN to build a character-level language model.
