Coursera | Andrew Ng (02-week-1-1.13): Gradient Checking

This series only adds personal study notes and supplementary derivations on top of the original course material; if there are any mistakes, corrections are welcome. Having studied Andrew Ng's course, I organized it into text to make review and look-up easier. Since I have been working on my English, the series is primarily in English, and I suggest readers also treat the English as primary and the Chinese as an aid, as preparation for reading academic papers in related fields later on. - ZJ

Coursera course | deeplearning.ai | NetEase Cloud Classroom (网易云课堂)


Please credit the author and source when reposting: ZJ, WeChat official account 「SelfImprovementLab」

Zhihu: https://zhuanlan.zhihu.com/c_147249273

CSDN: http://blog.csdn.net/junjun_zhao/article/details/79081573


1.13 Gradient Checking

(Subtitle source: NetEase Cloud Classroom 网易云课堂)


Gradient checking is a technique that's helped me save tons of time, and helped me find bugs in my implementations of back propagation many times. Let's see how you could use it too, to debug or to verify that your implementation of backprop is correct. So your neural network will have some set of parameters, W[1], b[1], and so on up to W[L], b[L]. To implement gradient checking, the first thing you should do is take all your parameters and reshape them into a giant vector θ. So what you should do is take each W, which is a matrix, and reshape it into a vector. You take all of these W's and b's, reshape them into vectors, and then concatenate all of them, so that you have a giant vector θ, a giant parameter vector θ. So whereas the cost function J was a function of the W's and b's, you now have the cost function J being just a function of θ.



Next, with the dW's and db's ordered the same way as the W's and b's, you can take dW[1], db[1], and so on, and concatenate them into a big giant vector dθ of the same dimension as θ. So, same as before, you reshape dW[1], which is a matrix, into a vector; db[1] is already a vector; and so on up to dW[L], since all of the dW's are matrices. Remember, dW[1] has the same dimension as W[1], and db[1] has the same dimension as b[1]. So after the same sort of reshaping and concatenation operation, you can reshape all of these derivatives into a giant vector dθ, which has the same dimension as θ. So the question is now: is dθ the gradient, or the slope, of the cost function J? Here's how you implement gradient checking, which is often abbreviated to "grad check".


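As a rough NumPy sketch of this reshape-and-concatenate step (not the course's code: the dictionary keys "W1", "b1", ... and the helper name `dictionary_to_vector` are assumptions made here for illustration):

```python
import numpy as np

def dictionary_to_vector(params, L, prefix=""):
    """Reshape each matrix/vector in params into a column and concatenate them,
    in a fixed layer-by-layer order, into one giant column vector."""
    pieces = []
    for l in range(1, L + 1):
        pieces.append(params[prefix + "W" + str(l)].reshape(-1, 1))  # matrix -> column vector
        pieces.append(params[prefix + "b" + str(l)].reshape(-1, 1))  # already a vector; reshaped for consistency
    return np.concatenate(pieces, axis=0)

# theta  = dictionary_to_vector(parameters, L)              # from W[1], b[1], ..., W[L], b[L]
# dtheta = dictionary_to_vector(gradients, L, prefix="d")   # from dW[1], db[1], ..., dW[L], db[L]
```

Because the same ordering is used for the parameters and the gradients, the resulting θ and dθ line up component by component.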

So first, remember that J is now a function of the giant parameter vector θ, so J expands to a function of θ1, θ2, θ3, and so on, whatever the dimension of this giant parameter vector θ is. To implement grad check, what you're going to do is implement a loop so that for each i, that is, for each component of θ, you compute dθ_approx[i] using a two-sided difference: dθ_approx[i] = ( J(θ1, θ2, ..., θi + ε, ...) - J(θ1, θ2, ..., θi - ε, ...) ) / (2ε). So you nudge θi up by ε and keep everything else the same; and because we're taking a two-sided difference, you do the same on the other side with θi - ε, leaving all the other elements of θ alone; and then you divide by 2ε. And what we saw from the previous video is that this should be approximately equal to dθ[i], which is supposed to be the partial derivative of J with respect to θi, if dθ[i] is the derivative of the cost function J. So you compute this for every value of i, and at the end you end up with two vectors: dθ_approx, which has the same dimension as dθ, and both of these in turn have the same dimension as θ. What you want to do is check whether these two vectors are approximately equal to each other.


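A minimal sketch of that loop is below, assuming the cost can be evaluated as a Python function J(theta) that takes the giant vector directly; in a real network you would first map theta back into the individual W's and b's before running forward propagation. The function name `compute_dtheta_approx` is made up for this note.

```python
import numpy as np

def compute_dtheta_approx(J, theta, epsilon=1e-7):
    """Two-sided finite-difference approximation of the gradient of J at theta."""
    dtheta_approx = np.zeros_like(theta)
    for i in range(theta.shape[0]):
        theta_plus = np.copy(theta)
        theta_plus[i] += epsilon            # nudge theta_i up by epsilon, everything else unchanged
        theta_minus = np.copy(theta)
        theta_minus[i] -= epsilon           # nudge theta_i down by epsilon
        dtheta_approx[i] = (J(theta_plus) - J(theta_minus)) / (2 * epsilon)
    return dtheta_approx
```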

So, in detail, how do you define whether or not two vectors are reasonably close to each other? What I do is the following: I compute the Euclidean distance between these two vectors, ||dθ_approx - dθ||_2, the L2 norm of their difference. Notice there's no square on top: this is the square root of the sum of the squares of the element-wise differences, which is the Euclidean distance. And then, to normalize by the lengths of these vectors, divide by ||dθ_approx||_2 + ||dθ||_2, the sum of the Euclidean lengths of these vectors. The role of the denominator is just in case any of these vectors is really small or really large; the denominator turns this formula into a ratio. When I implement this in practice, I use ε equal to maybe 10^-7. With ε in this range, if you find that this formula gives you a value like 10^-7 or smaller, then that's great: it means your derivative approximation is very likely correct; that's just a very small value. If it's maybe in the range of 10^-5, I would take a careful look. Maybe this is okay, but I might double-check the components of this vector and make sure that none of the components is too large; if some of the components of this difference are very large, then maybe you have a bug somewhere. And if this formula gives a value in the range of 10^-3, then I would worry; I would be much more concerned that maybe there's a bug somewhere. You should really be getting values much smaller than 10^-3; if it is any bigger than 10^-3, then I would be quite concerned, and I would seriously worry about whether or not there might be a bug. In that case, you should look at the individual components of θ to see if there's a specific value of i for which dθ_approx[i] is very different from dθ[i], and use that to try to track down whether or not some of your derivative computations might be incorrect.


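The ratio itself is a one-line NumPy expression; the thresholds in the comments simply restate the rules of thumb above. This is an illustrative sketch, and the name `relative_difference` is not from the course.

```python
import numpy as np

def relative_difference(dtheta_approx, dtheta):
    """||dtheta_approx - dtheta||_2 / (||dtheta_approx||_2 + ||dtheta||_2)."""
    numerator = np.linalg.norm(dtheta_approx - dtheta)
    denominator = np.linalg.norm(dtheta_approx) + np.linalg.norm(dtheta)
    return numerator / denominator

# diff < 1e-7 : great, the backprop gradient is very likely correct
# diff ~ 1e-5 : take a careful look at the largest components of the difference
# diff > 1e-3 : worry; there is quite possibly a bug somewhere
```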

And after some amount of debugging, if it finally ends up being this kind of very small value, then you probably have a correct implementation. So when implementing a neural network, what often happens is that I'll implement forward prop, implement backprop, and then I might find that this grad check gives a relatively big value. Then I'll suspect that there must be a bug, and go in and debug, debug, debug. And after debugging for a while, if I find that it passes grad check with a small value, then I can be much more confident that it's correct. So you now know how gradient checking works. It has helped me find lots of bugs in my implementations of neural nets, and I hope it'll help you too. In the next video, I want to share with you some tips, or some notes, on how to actually implement gradient checking. Let's go on to the next video.



Summary of key points:

Gradient checking

We now use the method from the previous section to carry out gradient checking.

Concatenating the parameters

Because the neural network contains a large number of parameters W[1], b[1], ..., W[L], b[L], to do gradient checking we need to connect all of these parameters and reshape them into one big vector θ.

Apply the same operation to dW[1], db[1], ..., dW[L], db[L] to obtain dθ.


Performing gradient checking

For each component i of θ, compute the two-sided difference approximation:

$$d\theta_{\text{approx}}[i] = \frac{J(\theta_1, \ldots, \theta_i + \varepsilon, \ldots) - J(\theta_1, \ldots, \theta_i - \varepsilon, \ldots)}{2\varepsilon}$$

Then check whether dθ_approx and dθ are close to each other.

The criterion is:

||dθapproxdθ||2||dθapprox||2+||dθ||2

where "‖·‖₂" denotes the Euclidean (L2) norm: take the sum of the squares of the element-wise differences, then the square root, which gives the Euclidean distance.
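To see the whole procedure end to end, here is a self-contained toy check on the cost J(θ) = Σ θ_i², whose exact gradient is 2θ; this is only an illustrative sketch, not the course's assignment code.

```python
import numpy as np

def J(theta):
    """Toy cost: J(theta) = sum of theta_i^2, so the exact gradient is 2 * theta."""
    return float(np.sum(theta ** 2))

np.random.seed(0)
theta = np.random.randn(5, 1)            # pretend this is the flattened parameter vector
dtheta = 2 * theta                        # analytic ("backprop") gradient for this toy cost
epsilon = 1e-7

# Two-sided difference approximation, one component at a time
dtheta_approx = np.zeros_like(theta)
for i in range(theta.shape[0]):
    theta_plus, theta_minus = np.copy(theta), np.copy(theta)
    theta_plus[i] += epsilon
    theta_minus[i] -= epsilon
    dtheta_approx[i] = (J(theta_plus) - J(theta_minus)) / (2 * epsilon)

# Normalized Euclidean distance between the two gradient vectors
diff = np.linalg.norm(dtheta_approx - dtheta) / (np.linalg.norm(dtheta_approx) + np.linalg.norm(dtheta))
print(diff)  # typically far below the 1e-7 "looks great" threshold for this toy cost
```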

References:

[1] 大树先生. Distilled notes on Andrew Ng's Coursera Deep Learning course DeepLearning.ai (2-1): Practical Aspects of Deep Learning.


