Andrew Ng's Deep Learning course, Class 2 Week 1, assignments 1 & 2 & 3 — study notes

These assignments are fairly easy overall. The slightly trickier part is the gradient check (grad_check); my code is below:

import numpy as np
# dictionary_to_vector, vector_to_dictionary, gradients_to_vector and
# forward_propagation_n come from the assignment's helper files.


def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
    """
    Checks whether backward_propagation_n correctly computes the gradient of the cost
    output by forward_propagation_n.

    Arguments:
    parameters -- python dictionary containing the parameters "W1", "b1", "W2", "b2", "W3", "b3"
    gradients -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters
    X -- input data
    Y -- true "labels"
    epsilon -- tiny shift to the input used to compute the approximated gradient with formula (1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """

    # Set-up variables
    parameters_values, _ = dictionary_to_vector(parameters)
    grad = gradients_to_vector(gradients)
    num_parameters = parameters_values.shape[0]
    J_plus = np.zeros((num_parameters, 1))
    J_minus = np.zeros((num_parameters, 1))
    gradapprox = np.zeros((num_parameters, 1))


    # Compute gradapprox
    for i in range(num_parameters):

        # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
        # "_" is used because forward_propagation_n returns two values but we only care about the first one.
        ### START CODE HERE ### (approx. 3 lines)
        new_theta_plus = np.copy(parameters_values)
        new_theta_plus[i, 0] = new_theta_plus[i, 0] + epsilon
        J_plus[i, 0], _ = forward_propagation_n(X, Y, vector_to_dictionary(new_theta_plus))
        ### END CODE HERE ###

        # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
        ### START CODE HERE ### (approx. 3 lines)
        new_theta_minus = np.copy(parameters_values)
        new_theta_minus[i, 0] = new_theta_minus[i, 0] - epsilon
        J_minus[i, 0], _ = forward_propagation_n(X, Y, vector_to_dictionary(new_theta_minus))
        ### END CODE HERE ###

        # Compute gradapprox[i]
        ### START CODE HERE ### (approx. 1 line)
        gradapprox[i, 0] = (J_plus[i, 0] - J_minus[i, 0]) / (2 * epsilon)
        ### END CODE HERE ###

    # Compare gradapprox to the backward propagation gradients by computing the difference.
    ### START CODE HERE ### (approx. 1 line)
    numerator = np.linalg.norm(grad - gradapprox)                    # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # Step 2'
    difference = numerator / denominator                             # Step 3'
    ### END CODE HERE ###

    if difference > 1.2e-7:
        print(
            "\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
    else:
        print(
            "\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")

    return difference
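
For reference, "formula (1)" and "difference (2)" mentioned in the docstring are the two-sided gradient check used throughout this assignment (written here in LaTeX):

\frac{\partial J}{\partial \theta_i} \approx \frac{J(\theta_1, \ldots, \theta_i + \varepsilon, \ldots) - J(\theta_1, \ldots, \theta_i - \varepsilon, \ldots)}{2\varepsilon} \tag{1}

\mathrm{difference} = \frac{\lVert \mathrm{grad} - \mathrm{gradapprox} \rVert_2}{\lVert \mathrm{grad} \rVert_2 + \lVert \mathrm{gradapprox} \rVert_2} \tag{2}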

Running this produces a difference of about 0.28. After fixing the two errors that were deliberately planted in back_propagation, the difference becomes:

1.1890417878779317e-07

At first, even after fixing the two bugs in back_propagation, I still got the "difference too large" warning. After searching online I found that this difference value is in fact correct; everyone simply raised the threshold, so I also changed it to 1.2e-7 (the code provided with the assignment uses 1e-7). The difference is on the same order of magnitude as epsilon, so the error is not exactly small, but it should still be within an acceptable range.
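
A tiny hand-made example makes it easy to see that the two-sided formula itself behaves as expected (toy_forward / toy_backward below are my own made-up functions, not part of the assignment): a correct analytic gradient yields a difference well below 1e-7, while a deliberately wrong entry blows it up to roughly 0.1. The assignment's much deeper network presumably accumulates more rounding error, which would explain why its difference lands near 1e-7 even once the gradients are correct.

import numpy as np

def toy_forward(theta):
    # J(theta) = theta1^2 + 3 * theta2
    return theta[0, 0] ** 2 + 3.0 * theta[1, 0]

def toy_backward(theta, bug=False):
    # Analytic gradient [2 * theta1, 3]; bug=True plants a wrong entry,
    # mimicking the errors deliberately placed in the assignment's back_propagation.
    grad = np.array([[2.0 * theta[0, 0]], [3.0]])
    if bug:
        grad[1, 0] = 4.0
    return grad

def toy_gradient_check(theta, grad, epsilon=1e-7):
    gradapprox = np.zeros_like(theta)
    for i in range(theta.shape[0]):
        theta_plus, theta_minus = np.copy(theta), np.copy(theta)
        theta_plus[i, 0] += epsilon
        theta_minus[i, 0] -= epsilon
        gradapprox[i, 0] = (toy_forward(theta_plus) - toy_forward(theta_minus)) / (2 * epsilon)
    return np.linalg.norm(grad - gradapprox) / (np.linalg.norm(grad) + np.linalg.norm(gradapprox))

theta = np.array([[1.5], [-0.3]])
print(toy_gradient_check(theta, toy_backward(theta)))             # tiny, around 1e-9 or smaller
print(toy_gradient_check(theta, toy_backward(theta, bug=True)))   # large, roughly 0.1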

Another puzzling thing is that the same code gives a difference of 1.189... on my machine, while other people get 1.188.... I verified this in both PyCharm and Jupyter Notebook, so it does not seem to be related to the IDE or interpreter. Could it really depend on the CPU?
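
My best guess is that this is ordinary floating-point behaviour rather than a bug: the sums inside np.dot and np.linalg.norm are evaluated in an order that depends on the NumPy build, the underlying BLAS library and the CPU's vector instructions, and floating-point addition is not associative, so the last few digits can legitimately differ from machine to machine. A quick illustration:

import numpy as np

# Floating-point addition is not associative, so a different summation order
# gives a different rounding in the last bits.
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False

# Summing the same numbers with two different orders/algorithms:
# np.sum uses pairwise summation, Python's built-in sum adds left to right.
rng = np.random.RandomState(0)
x = rng.randn(100000)
print(np.sum(x), sum(x.tolist()))   # may differ in the trailing digits

So getting 1.189...e-07 instead of 1.188...e-07 is most likely just a different rounding path, not a real problem with the code.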
