Coursera - Andrew Ng - Deep Learning - Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization - Week 1 - Programming Assignment

Based on Andrew Ng's Deep Learning specialization on Coursera, this post discusses how initialization (zero, random, and He initialization) affects a neural network, explains why hyperparameter tuning and regularization matter, and covers the implementation of L2 regularization and dropout. Working through the programming assignment, it highlights how to avoid overfitting and vanishing gradients during training, and what to watch out for when using regularization and dropout.

 

 

This post covers:

Coursera, Andrew Ng's Deep Learning specialization,

Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization

Week 1: Practical aspects of Deep Learning

The assignment has three parts: Initialization, L2 Regularization, and Gradient Checking.

These are my programming-assignment notes and mistake log.

 

Initialization

This is the first assignment of "Improving Deep Neural Networks".

By completing this assignment you will:

- Understand that different regularization methods could help your model.

- Implement dropout and see it work on data.

- Recognize that a model without regularization gives you a better accuracy on the training set but not necessarily on the test set.

- Understand that you could use both dropout and regularization on your model.
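Since the objectives above mention implementing dropout, here is a minimal sketch of the inverted-dropout forward step. The function name, `keep_prob` value, and seed are my own choices for illustration, not taken from the assignment:

```python
import numpy as np

def dropout_forward(A, keep_prob=0.8, seed=None):
    """Inverted dropout on an activation matrix A (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Keep each unit with probability keep_prob
    D = rng.random(A.shape) < keep_prob
    A = A * D            # shut down the dropped units
    A = A / keep_prob    # scale up so the expected activation is unchanged
    return A, D

A = np.ones((3, 4))
A_drop, D = dropout_forward(A, keep_prob=0.5, seed=0)
```

The division by `keep_prob` (the "inverted" part) means no rescaling is needed at test time, when dropout is simply turned off.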

 

2 - Zero initialization

Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. (All-zeros initialization produces symmetric weights: it fails to "break symmetry", so every unit in a layer computes the same function.)

 

My attempt:

for l in range(1, L):
    ### START CODE HERE ### (≈ 2 lines of code)
    parameters['W' + str(l)] = np.random.rand(np.shape(layers_dims[L], layers_dims[L-1]))
    parameters['b' + str(l)] = np.random.rand(np.shape(layers_dims[L], 1))
    ### END CODE HERE ###
return parameters

Correct:

parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))

Reflection:

1. Use the lowercase loop variable l inside the loop, not the uppercase L (the total number of layers) from outside it.

2. Usage of np.zeros: pass the dimensions directly as a single tuple.
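Putting the corrected lines together, the full initializer might look like this (a sketch following the assignment's parameter-dictionary convention, where `layers_dims` includes the input layer):

```python
import numpy as np

def initialize_parameters_zeros(layers_dims):
    """All-zeros initialization (sketch of the assignment's API)."""
    parameters = {}
    L = len(layers_dims)  # number of layers, including the input layer
    for l in range(1, L):
        # np.zeros takes the shape as a single tuple argument
        parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
    return parameters

params = initialize_parameters_zeros([3, 2, 1])
# Every weight is identical, so gradient descent updates every unit in a
# layer the same way: symmetry is never broken and the network cannot learn.
```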

3 - Random initialization

To break symmetry, let's initialize the weights randomly.

My attempt:

parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10
parameters['b' + str(l)] = np.zeros(layers_dims[l], 1)

Correct:

parameters['b' + str(l)] = np.zeros((layers_dims[l],1))

Reflection:

np.zeros((shape)): the shape must be passed as one tuple, e.g. np.zeros((n, 1)), not np.zeros(n, 1).
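The *10 scaling in this exercise is deliberately too large (it produces saturated activations and slow learning); the assignment's last part swaps it for He initialization, which scales each layer's weights by sqrt(2 / n_prev). A sketch under that assumption (the seed value is my own choice):

```python
import numpy as np

def initialize_parameters_he(layers_dims, seed=3):
    """He initialization: scale randn by sqrt(2 / fan_in) per layer (sketch)."""
    np.random.seed(seed)
    parameters = {}
    L = len(layers_dims)
    for l in range(1, L):
        # Small, variance-controlled weights instead of the huge *10 factor
        parameters['W' + str(l)] = (np.random.randn(layers_dims[l], layers_dims[l-1])
                                    * np.sqrt(2.0 / layers_dims[l-1]))
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
    return parameters

params = initialize_parameters_he([100, 50, 1])
```

The sqrt(2 / n_prev) factor keeps the variance of each layer's pre-activations roughly constant for ReLU units, which is what prevents vanishing or exploding signals in deeper networks.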
