Deep Learning: Deep Feedforward Neural Networks (5)

This post takes a closer look at the back-propagation algorithm for deep feedforward neural networks, which is used to compute the gradient of the cost function with respect to the parameters. It also introduces computational graphs, a precise language for describing back-propagation, and the chain rule of calculus used to differentiate composed functions.

Back-Propagation and Other Differentiation Algorithms

  • Forward propagation: The inputs $x$ provide the initial information that then propagates up to the hidden units at each layer and finally produces $\hat{y}$.
  • Back-propagation: allows the information from the cost to then flow backwards through the network, in order to compute the gradient.
  • The term back-propagation is often misunderstood as meaning the whole learning algorithm for multi-layer neural networks. Actually, back-propagation refers only to the method for computing the gradient, while another algorithm, such as stochastic gradient descent, is used to perform learning using this gradient.
  • In learning algorithms, the gradient we most often require is the gradient of the cost function with respect to the parameters, $\nabla_{\theta} J(\theta)$. A minimal sketch of one forward pass, one backward pass, and a gradient step is given after this list.
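
As a concrete illustration of this division of labour, here is a minimal NumPy sketch (not taken from the text; the single ReLU hidden layer, the squared-error cost, and names such as `W1`, `W2`, and `lr` are illustrative assumptions): forward propagation produces $\hat{y}$ and the cost, back-propagation computes the gradient, and a separate gradient-descent step then uses that gradient.

```python
import numpy as np

# Illustrative sketch: one hidden layer, squared-error cost.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 1))          # input
t = rng.normal(size=(1, 1))          # target
W1 = rng.normal(size=(4, 3)) * 0.1   # parameters theta = {W1, W2}
W2 = rng.normal(size=(1, 4)) * 0.1

# Forward propagation: information flows from x up to y_hat and the cost J.
h = np.maximum(0, W1 @ x)            # hidden units (ReLU)
y_hat = W2 @ h
J = 0.5 * np.sum((y_hat - t) ** 2)   # cost

# Back-propagation: the cost's gradient flows backwards to the parameters.
dJ_dyhat = y_hat - t
dJ_dW2 = dJ_dyhat @ h.T
dJ_dh = W2.T @ dJ_dyhat
dJ_dW1 = (dJ_dh * (h > 0)) @ x.T     # gradient gated by the ReLU

# A separate learning algorithm (here, one plain gradient step) uses the gradient.
lr = 0.1
W1 -= lr * dJ_dW1
W2 -= lr * dJ_dW2
```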

Computational Graphs

To describe the back-propagation algorithm more precisely, it is helpful to have a more precise computational graph language.

  • Here, we use each node in the graph to indicate a variable. The variable may be a scalar, vector, matrix, tensor, or even a variable of another type.
  • An operation is a simple function of one or more variables. Our graph language is accompanied by a set of allowable operations. Functions more complicated than the operations in this set may be described by composing many operations together; a small example of such a composition follows this list.
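
For instance (a sketch of my own, not one of the book's figures), the function $\hat{y} = \sigma(x^{\top} w + b)$ can be described in this graph language by introducing one node per intermediate variable, each produced by a single allowable operation:

```python
import numpy as np

# Each intermediate variable below is one node of the graph;
# each line applies one simple operation to existing nodes.
x = np.array([1.0, 2.0])
w = np.array([0.5, -0.3])
b = 0.1

u1 = x @ w                            # node u1: dot-product operation on x, w
u2 = u1 + b                           # node u2: addition operation on u1, b
y_hat = 1.0 / (1.0 + np.exp(-u2))     # node y_hat: logistic sigmoid of u2
```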

Chain Rule of Calculus

The chain rule of calculus (not to be confused with the chain rule of probability) is used to compute the derivatives of functions formed by composing other functions whose derivatives are known. Back-propagation is an algorithm that computes the chain rule, with a specific order of operations that is highly efficient.
Let $x$ be a real number, and let $f$ and $g$ both be functions mapping from a real number to a real number. Suppose that $y = g(x)$ and $z = f(g(x)) = f(y)$. Then the chain rule states that

$$\frac{dz}{dx} = \frac{dz}{dy}\,\frac{dy}{dx}.$$
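
A quick numerical check of this identity (the functions $g(x) = x^2$ and $f(y) = \sin y$ are an illustrative choice of mine, not from the text):

```python
import math

# y = g(x) = x**2, z = f(y) = sin(y), so dz/dx = cos(x**2) * 2*x.
x = 1.3
y = x ** 2
dz_dy = math.cos(y)
dy_dx = 2 * x
analytic = dz_dy * dy_dx

# Finite-difference approximation of dz/dx for comparison.
eps = 1e-6
numeric = (math.sin((x + eps) ** 2) - math.sin((x - eps) ** 2)) / (2 * eps)
print(analytic, numeric)   # the two values agree to several decimal places
```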

We can generalize this beyond the scalar case. Suppose that $x \in \mathbb{R}^m$, $y \in \mathbb{R}^n$, $g$ maps from $\mathbb{R}^m$ to $\mathbb{R}^n$, and $f$ maps from $\mathbb{R}^n$ to $\mathbb{R}$. If $y = g(x)$ and $z = f(y)$, then

$$\frac{\partial z}{\partial x_i} = \sum_j \frac{\partial z}{\partial y_j}\,\frac{\partial y_j}{\partial x_i}.$$

In vector notation, this may be equivalently written as
$$\nabla_{x} z = \left(\frac{\partial y}{\partial x}\right)^{\top} \nabla_{y} z,$$

where $\frac{\partial y}{\partial x}$ is the $n \times m$ Jacobian matrix of $g$.
From this we see that the gradient of a variable $x$ can be obtained by multiplying a Jacobian matrix $\frac{\partial y}{\partial x}$ by a gradient $\nabla_{y} z$.
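
A small NumPy sketch of this identity (the choices $g(x) = Ax$, whose Jacobian is simply $A$, and $f(y) = \tfrac{1}{2}\lVert y \rVert^2$, whose gradient is $y$, are my own illustrative assumptions):

```python
import numpy as np

# g: R^m -> R^n with g(x) = A @ x, so dy/dx is the n x m matrix A.
# f: R^n -> R with f(y) = 0.5 * ||y||^2, so grad_y z = y.
m, n = 3, 2
rng = np.random.default_rng(1)
A = rng.normal(size=(n, m))
x = rng.normal(size=(m,))

y = A @ x                          # forward: y = g(x)
grad_y_z = y                       # gradient of z with respect to y
jacobian = A                       # dy/dx, shape (n, m)

grad_x_z = jacobian.T @ grad_y_z   # (dy/dx)^T @ grad_y z, shape (m,)

# Check against the directly derived formula grad_x z = A.T @ A @ x.
assert np.allclose(grad_x_z, A.T @ A @ x)
```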
