Andrew Ng Deep Learning Course 2, Week 1 Programming Assignment: Gradient Checking (3)

In this programming assignment, you will use gradient checking to make sure that backpropagation in a deep learning model is implemented correctly. You will first learn how gradient checking works, then apply it in the 1-dimensional and N-dimensional cases to verify that the gradients of a fraud-detection model are computed correctly.

Gradient Checking

Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.

You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.

But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".

Let's do it!

# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector

1) How does gradient checking work?

Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.

Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.

Let's look back at the definition of a derivative (or gradient):

$$\frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2\varepsilon} \tag{1}$$

If you're not familiar with the "$\lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really, really small."

We know the following:

  • $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
  • You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.

Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
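To make equation (1) concrete before diving into the assignment, here is a minimal sketch (not part of the graded notebook; the cost $J(\theta) = \theta^2$ and the value $\varepsilon = 10^{-7}$ are illustrative choices) that compares the centered-difference approximation with the derivative computed by hand:

def J(theta):
    # An example cost we can differentiate by hand: dJ/dtheta = 2 * theta
    return theta ** 2

theta, epsilon = 3.0, 1e-7

# Centered-difference approximation from equation (1)
gradapprox = (J(theta + epsilon) - J(theta - epsilon)) / (2 * epsilon)
grad_by_hand = 2 * theta

print(gradapprox)                      # approximately 6.0
print(abs(gradapprox - grad_by_hand))  # very small, limited only by floating-point roundoff

The two values agree to many decimal places, and that agreement (or its absence) is exactly what gradient checking measures.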

2) 1-dimensional gradient checking

Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.

You will implement code to compute $J(\cdot)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.

[Figure 1: 1D linear model]

Figure 1 shows the key computation steps: first start with $x$, then evaluate the function $J(x)$ ("forward propagation"), then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").

Exercise: implement "forward propagation" and "backward propagation" for this simple function. That is, compute both $J(\cdot)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.

# GRADED FUNCTION: forward_propagation

def forward_propagation(x, theta):
    """
    Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
    
    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well
    
    Returns:
    J -- the value of function J, computed using the formula J(theta) = theta * x
    """
    
    ### START CODE HERE ### (approx. 1 line)
    J = theta * x
    ### END CODE HERE ###
    
    return J

 

x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
Expected output: J = 8

Exercise: Now, implement the "backward propagation" step (derivative computation) of Figure 1: compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. You should get $\frac{\partial J}{\partial \theta} = x$.
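A sketch of what this might look like, following the same pattern as forward_propagation above:

# GRADED FUNCTION: backward_propagation

def backward_propagation(x, theta):
    """
    Computes the derivative of J with respect to theta (see Figure 1).
    
    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well
    
    Returns:
    dtheta -- the gradient of the cost with respect to theta
    """
    
    ### START CODE HERE ### (approx. 1 line)
    dtheta = x  # d(theta * x)/dtheta = x
    ### END CODE HERE ###
    
    return dtheta

x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))

Expected output: dtheta = 2

With both functions in place, the 1D gradient check itself is just equation (1) plus a relative-difference test. The helper below is a minimal sketch of the idea (the name gradient_check_1d is mine, not the notebook's; the graded gradient_check function computes the same quantity using np.linalg.norm):

def gradient_check_1d(x, theta, epsilon=1e-7):
    # Approximate the gradient with the centered difference from equation (1)
    J_plus = forward_propagation(x, theta + epsilon)
    J_minus = forward_propagation(x, theta - epsilon)
    gradapprox = (J_plus - J_minus) / (2 * epsilon)
    
    # Analytic gradient from backpropagation
    grad = backward_propagation(x, theta)
    
    # Relative difference: values below about 1e-7 indicate agreement
    return abs(grad - gradapprox) / (abs(grad) + abs(gradapprox))

print(gradient_check_1d(2, 4))  # expect a value very close to 0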
