Automatic differentiation

I came across this while reading about algorithms; reposting it here as a reference note.


From Wikipedia, the free encyclopedia

In mathematics and computer algebra, automatic differentiation (AD), also called algorithmic differentiation or computational differentiation,[1][2] is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, and accurate to working precision.

Automatic differentiation is not symbolic differentiation, nor numerical differentiation (the method of finite differences).

Figure 1: How automatic differentiation relates to symbolic differentiation

These classical methods run into problems: symbolic differentiation is slow and faces the difficulty of converting a computer program into a single mathematical expression, while numerical differentiation introduces round-off errors in the discretization process and suffers from cancellation. Both classical methods have problems with calculating higher derivatives, where complexity and errors increase. Finally, both classical methods are slow at computing the partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems.


The chain rule, forward and reverse accumulation

Fundamental to AD is the decomposition of differentials provided by the chain rule. For the simple composition f(x) = g(h(x)) the chain rule gives

\frac{df}{dx} = \frac{dg}{dh} \frac{dh}{dx}

Usually, two distinct modes of AD are presented, forward accumulation (or forward mode) and reverse accumulation (or reverse mode). Forward accumulation specifies that one traverses the chain rule from right to left (that is, first one computes dh/dx and then dg/dh), while reverse accumulation has the traversal from left to right.
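
For example, for the deeper composition f(x) = g(h(k(x))), the two modes evaluate the same product of derivatives in opposite orders:

\frac{df}{dx} = \frac{dg}{dh}\left(\frac{dh}{dk}\frac{dk}{dx}\right) (forward accumulation)

\frac{df}{dx} = \left(\frac{dg}{dh}\frac{dh}{dk}\right)\frac{dk}{dx} (reverse accumulation)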

Figure 2: Example of forward accumulation with computational graph

Forward accumulation

Forward accumulation automatic differentiation is the easiest to understand and to implement. The function f(x_1,x_2) = x_1 x_2 + \sin(x_1) is interpreted (by a computer or human programmer) as the sequence of elementary operations on the work variables w_i, and an AD tool for forward accumulation adds the corresponding operations on the second component of the augmented arithmetic.

Original code statements          Added statements for derivatives
w_1 = x_1                         w'_1 = 1 (seed)
w_2 = x_2                         w'_2 = 0 (seed)
w_3 = w_1 w_2                     w'_3 = w'_1 w_2 + w_1 w'_2 = 1 \cdot x_2 + x_1 \cdot 0 = x_2
w_4 = \sin(w_1)                   w'_4 = \cos(w_1) w'_1 = \cos(x_1) \cdot 1
w_5 = w_3 + w_4                   w'_5 = w'_3 + w'_4 = x_2 + \cos(x_1)

The derivative computation for f(x_1,x_2) = x_1 x_2 + \sin(x_1) needs to be seeded in order to distinguish between the derivative with respect to x_1 or x_2. The table above seeds the computation with w'_1=1 and w'_2=0 and we see that this results in x_2 + \cos(x_1) which is the derivative with respect to x_1. Note that although the table displays the symbolic derivative, in the computer it is always the evaluated (numeric) value that is stored. Figure 2 represents the above statements in a computational graph.
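
A minimal sketch of the table above in Python (the function and variable names are illustrative, not taken from any particular AD tool):

import math

def f_forward(x1, x2, dx1, dx2):
    # Each primal statement w_i is paired with a tangent statement dw_i.
    w1, dw1 = x1, dx1                            # w_1 = x_1, seeded with dx1
    w2, dw2 = x2, dx2                            # w_2 = x_2, seeded with dx2
    w3, dw3 = w1 * w2, dw1 * w2 + w1 * dw2       # product rule
    w4, dw4 = math.sin(w1), math.cos(w1) * dw1   # chain rule for sin
    w5, dw5 = w3 + w4, dw3 + dw4                 # sum rule
    return w5, dw5

# Seeding (dx1, dx2) = (1, 0) yields df/dx1 = x2 + cos(x1);
# seeding (0, 1) yields df/dx2 = x1.
print(f_forward(2.0, 3.0, 1.0, 0.0))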

In order to compute the gradient of this example function, that is \partial f/\partial x_1 and \partial f / \partial x_2, two sweeps over the computational graph are needed, first with the seeds w'_1 = 1 and w'_2 = 0, then with w'_1 = 0 and w'_2 = 1.

The computational complexity of one sweep of forward accumulation is proportional to the complexity of the original code.

Forward accumulation is superior to reverse accumulation for functions f:\mathbb{R} \rightarrow \mathbb{R}^m with m \gg 1 as only one sweep is necessary, compared to m sweeps for reverse accumulation.

Figure 3: Example of reverse accumulation with computational graph

Reverse accumulation

Reverse accumulation traverses the chain rule from left to right, or in the case of the computational graph in Figure 3, from top to bottom. The example function is real-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed in order to calculate the (two-component) gradient. This is only half the work when compared to forward accumulation, but reverse accumulation requires the storage of some of the work variables w_i, which may represent a significant memory issue.

The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint; a unary function y=f(x) in the primal causes x'=f'(x) y' in the adjoint; etc.
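
A minimal sketch of such a hand-written adjoint sweep for the example f(x_1,x_2) = x_1 x_2 + \sin(x_1) (illustrative Python, not the output of any particular tool): the forward pass records the work variables, and the backward pass propagates adjoints bw_i = \partial f/\partial w_i against the primal flow.

import math

def f_reverse(x1, x2):
    # Primal (forward) pass: record the work variables.
    w1, w2 = x1, x2
    w3 = w1 * w2
    w4 = math.sin(w1)
    w5 = w3 + w4
    # Adjoint (backward) pass: a single sweep yields the whole gradient.
    bw5 = 1.0                   # seed: df/dw5
    bw3 = bw5                   # addition in the primal -> fanout in the adjoint
    bw4 = bw5
    bw1 = bw4 * math.cos(w1)    # unary y = sin(x) in the primal -> x' = cos(x) y'
    bw2 = bw3 * w1              # dw3/dw2 = w1
    bw1 += bw3 * w2             # fanout of w1 in the primal -> addition in the adjoint
    return w5, (bw1, bw2)       # f and (df/dx1, df/dx2)

# One sweep returns both partials: (x2 + cos(x1), x1).
print(f_reverse(2.0, 3.0))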

Reverse accumulation is superior to forward accumulation for functions f:\mathbb{R}^n \rightarrow \mathbb{R} with n \gg 1, where forward accumulation requires roughly n times as much work.

Backpropagation of errors in multilayer perceptrons, a technique used in machine learning, is a special case of reverse mode AD.

Jacobian computation

The Jacobian J of f:\mathbb{R}^n \rightarrow \mathbb{R}^m is an m \times n matrix. The Jacobian can be computed using n sweeps of forward accumulation, of which each sweep can yield a column vector of the Jacobian, or with m sweeps of reverse accumulation, of which each sweep can yield a row vector of the Jacobian.
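
As a sketch, given any forward-mode routine f_tangent(x, seed) that returns (f(x), J·seed) for vector inputs (a hypothetical interface, in the spirit of f_forward above), the full Jacobian can be assembled column by column:

def jacobian_forward(f_tangent, x, m):
    # n forward sweeps: seeding with the i-th unit vector yields
    # the i-th column of the m-by-n Jacobian.
    n = len(x)
    J = [[0.0] * n for _ in range(m)]
    for i in range(n):
        seed = [1.0 if j == i else 0.0 for j in range(n)]
        _, dy = f_tangent(x, seed)     # dy = J @ seed, the i-th column
        for k in range(m):
            J[k][i] = dy[k]
    return J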

Beyond forward and reverse accumulation

Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of F:\mathbb{R}^n \rightarrow \mathbb{R}^m with a minimum number of arithmetic operations is known as the "optimal Jacobian accumulation" (OJA) problem. OJA is NP-complete.[3] Central to this proof is the idea that there may exist algebraic dependences between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent.

Automatic differentiation using dual numbers

Forward mode automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number which will represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra. The augmented algebra is the algebra of dual numbers. Computer programs often implement this using the complex number representation.

Replace every number \,x with the number x + x'\varepsilon, where x' is a real number, but \varepsilon is nothing but a symbol with the property \varepsilon^2=0. Using only this, we get for the regular arithmetic

(x + x'\varepsilon) + (y + y'\varepsilon) = x + y + (x' + y')\varepsilon
(x + x'\varepsilon) \cdot (y + y'\varepsilon) = xy + xy'\varepsilon + yx'\varepsilon + x'y'\varepsilon^2 = xy + (x y' + yx')\varepsilon

and likewise for subtraction and division.
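
A minimal sketch of these two rules as Python operator overloading (the class name Dual is illustrative):

class Dual:
    def __init__(self, x, dx):
        self.x, self.dx = x, dx      # represents x + dx * eps
    def __add__(self, other):
        return Dual(self.x + other.x, self.dx + other.dx)
    def __mul__(self, other):
        # eps^2 = 0 eliminates the dx * other.dx term.
        return Dual(self.x * other.x, self.x * other.dx + self.dx * other.x)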

Now, we may calculate polynomials in this augmented arithmetic. If P(x) = p_0 + p_1 x + p_2x^2 + \cdots + p_n x^n, then

P(x + x'\varepsilon) = p_0 + p_1(x + x'\varepsilon) + \cdots + p_n (x + x'\varepsilon)^n
 = p_0 + p_1 x + \cdots + p_n x^n + \left(p_1 + 2p_2 x + \cdots + n p_n x^{n-1}\right) x'\varepsilon
 = P(x) + P^{(1)}(x)\, x'\varepsilon

where P^{(1)} denotes the derivative of P with respect to its first argument, and x', called a seed, can be chosen arbitrarily.
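
For instance, for P(x) = x^2 with seed x' = 1,

P(x + \varepsilon) = (x + \varepsilon)^2 = x^2 + 2x\varepsilon + \varepsilon^2 = x^2 + 2x\varepsilon,

recovering P(x) = x^2 in the first component and P^{(1)}(x) = 2x in the second.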

The new arithmetic consists of ordered pairs, elements written \langle x, x' \rangle, with ordinary arithmetic on the first component and first-order differentiation arithmetic on the second component, as described above. Extending the above results on polynomials to analytic functions, we obtain the basic arithmetic and some standard functions for the new arithmetic:

\langle u,u'\rangle +\langle v,v'\rangle = \langle u+v, u'+v' \rangle
\langle u,u'\rangle -\langle v,v'\rangle = \langle u-v, u'-v' \rangle
\langle u,u'\rangle *\langle v,v'\rangle = \langle u v, u'v+uv' \rangle
\langle u,u'\rangle /\langle v,v'\rangle = \left\langle \frac{u}{v}, \frac{u'v-uv'}{v^2} \right\rangle \quad ( v\ne 0)
\sin\langle u,u'\rangle = \langle \sin(u) , u' \cos(u) \rangle
\cos\langle u,u'\rangle = \langle \cos(u) , -u' \sin(u) \rangle
\exp\langle u,u'\rangle = \langle \exp u , u' \exp u \rangle
\log\langle u,u'\rangle = \langle \log(u) , u'/u \rangle \quad (u>0)
\langle u,u'\rangle^k = \langle u^k , k u^{k-1} u' \rangle \quad (u \ne 0)
\left| \langle u,u'\rangle \right| = \langle \left| u \right| , u' \operatorname{sign}(u) \rangle \quad (u \ne 0)

and in general for the primitive function g,

g(\langle u,u' \rangle , \langle v,v' \rangle ) = \langle g(u,v) , g_u(u,v) u' + g_v(u,v) v' \rangle

where g_u and g_v are the derivatives of g with respect to its first and second arguments, respectively.

When a binary basic arithmetic operation is applied to mixed arguments—the pair \langle u, u' \rangle and the real number c—the real number is first lifted to \langle c, 0 \rangle. The derivative of a function f : \mathbb{R}\rightarrow\mathbb{R} at the point x_0 is now found by calculating f(\langle x_0, 1 \rangle) using the above arithmetic, which gives \langle f ( x_0 ) , f' ( x_0 ) \rangle as the result.
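
Putting the pieces together, here is a self-contained sketch (extending the minimal Dual class above with sin and with lifting of real constants; all names are illustrative) that computes \partial f/\partial x_1 of the earlier example f(x_1,x_2) = x_1 x_2 + \sin(x_1):

import math

class Dual:
    def __init__(self, x, dx=0.0):
        self.x, self.dx = x, dx
    @staticmethod
    def lift(v):
        # A real constant c is lifted to <c, 0>.
        return v if isinstance(v, Dual) else Dual(v)
    def __add__(self, other):
        other = Dual.lift(other)
        return Dual(self.x + other.x, self.dx + other.dx)
    __radd__ = __add__
    def __mul__(self, other):
        other = Dual.lift(other)
        return Dual(self.x * other.x, self.x * other.dx + self.dx * other.x)
    __rmul__ = __mul__

def sin(u):
    u = Dual.lift(u)
    return Dual(math.sin(u.x), u.dx * math.cos(u.x))

def f(x1, x2):
    return x1 * x2 + sin(x1)

# Evaluate f(<2, 1>, <3, 0>): the seed 1 on x1 selects df/dx1.
y = f(Dual(2.0, 1.0), Dual(3.0, 0.0))
print(y.x, y.dx)   # f(2, 3) and df/dx1 = x2 + cos(x1) = 3 + cos(2)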

Vector arguments and functions

Multivariate functions can be handled with the same efficiency and mechanisms as univariate functions by adopting a directional derivative operator, which finds the directional derivative y' \in \mathbb{R}^m of f:\mathbb{R}^n\rightarrow\mathbb{R}^m at x \in \mathbb{R}^n in the direction x' \in \mathbb{R}^n by calculating (\langle y_1,y'_1\rangle, \ldots, \langle y_m,y'_m\rangle) = f(\langle x_1,x'_1\rangle, \ldots, \langle x_n,x'_n\rangle) using the same arithmetic as above.
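
In matrix terms, seeding with the direction x' makes a single sweep return the Jacobian-vector product y' = J_f(x)\, x'.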

Higher order differentials

The above arithmetic can be generalized in the natural way to calculate parts of the second-order and higher derivatives. However, the arithmetic rules quickly grow complicated: complexity is quadratic in the highest derivative degree. Instead, truncated Taylor series arithmetic is used. This is possible because the Taylor summands in a Taylor series of a function are products of known coefficients and derivatives of the function. Currently, there exist efficient Hessian automatic differentiation methods that calculate the entire Hessian matrix with a single forward and reverse accumulation. There also exist a number of specialized methods for calculating large sparse Hessian matrices.
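
As one concrete illustration of the generalization, rather than of the truncated Taylor machinery itself: if the components of the minimal Dual class above are allowed to be Dual values themselves (the nested, or "hyper-dual", trick; the code below is an illustrative sketch), a second derivative falls out of two levels of the same first-order rules.

class Dual:
    def __init__(self, x, dx):
        self.x, self.dx = x, dx
    def __add__(self, other):
        return Dual(self.x + other.x, self.dx + other.dx)
    def __mul__(self, other):
        return Dual(self.x * other.x, self.x * other.dx + self.dx * other.x)

def f(x):
    return x * x * x          # f(x) = x^3

x0 = 2.0
# The outer dual differentiates the inner one.
x = Dual(Dual(x0, 1.0), Dual(1.0, 0.0))
y = f(x)
print(y.x.x)    # f(2)   = 8
print(y.x.dx)   # f'(2)  = 3 * x0**2 = 12
print(y.dx.dx)  # f''(2) = 6 * x0    = 12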

Implementation

Forward-mode AD is implemented by a nonstandard interpretation of the program in which real numbers are replaced by dual numbers, constants are lifted to dual numbers with a zero epsilon coefficient, and the numeric primitives are lifted to operate on dual numbers. This nonstandard interpretation is generally implemented using one of two strategies: source code transformation or operator overloading.

Source code transformation (SCT)

Figure 4: Example of how source code transformation could work

The source code for a function is replaced by an automatically generated source code that includes statements for calculating the derivatives interleaved with the original instructions.

Source code transformation can be implemented for all programming languages, and it also makes compile-time optimization easier for the compiler. However, the implementation of the AD tool itself is more difficult.
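
A sketch of the idea in Python (hand-written here; an actual SCT tool, such as TAPENADE for Fortran or C, emits analogous code automatically with its own naming conventions):

import math

# Original source:
def f(x1, x2):
    return x1 * x2 + math.sin(x1)

# What a transformation tool might generate (names are illustrative):
def f_d(x1, dx1, x2, dx2):
    w3 = x1 * x2
    dw3 = dx1 * x2 + x1 * dx2        # derivative statement for the product
    w4 = math.sin(x1)
    dw4 = math.cos(x1) * dx1         # derivative statement for sin
    return w3 + w4, dw3 + dw4        # f and its directional derivative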

Operator overloading (OO)

Figure 5: Example of how operator overloading could work

Operator overloading is a possibility for source code written in a language supporting it. Objects for real numbers and elementary mathematical operations must be overloaded to cater for the augmented arithmetic depicted above. This requires no change in the form or sequence of operations in the original source code for the function to be differentiated, but often requires changes in basic data types for numbers and vectors to support overloading and often also involves the insertion of special flagging operations.

Operator overloading for forward accumulation is easy to implement, and also possible for reverse accumulation. However, current compilers lag behind in optimizing the code when compared to forward accumulation.

Software

  • C/C++
Package | License | Approach | Brief Info
ADC Version 4.0 | nonfree | OO |
ADIC | free for noncommercial | SCT | forward mode
ADMB | BSD | SCT+OO |
ADNumber | dual license | OO | arbitrary order forward/reverse
ADOL-C | CPL 1.0 or GPL 2.0 | OO | arbitrary order forward/reverse, part of COIN-OR
AMPL | free for students | SCT |
FADBAD++ | free for noncommercial | OO | uses operator new
CasADi | LGPL | OO/SCT | forward/reverse modes, matrix-valued atomic operations
ceres-solver | BSD | OO | a portable C++ library for modeling and solving large, complicated nonlinear least squares problems
CppAD | EPL 1.0 or GPL 3.0 | OO | arbitrary order forward/reverse, AD<Base> for arbitrary Base including AD<Other_Base>, part of COIN-OR; can also produce C source code using the CppADCodeGen library
OpenAD | depends on components | SCT |
Sacado | GNU GPL | OO | part of the Trilinos collection, forward/reverse modes
Stan | BSD | OO | estimates Bayesian statistical models using Hamiltonian Monte Carlo
TAPENADE | free for noncommercial | SCT |
CTaylor | free | OO | truncated Taylor series, multi-variable, high performance; calculates and stores only potentially nonzero derivatives; computes higher-order derivatives, with the order increasing under matching operations up to a maximum order (a parameter); example source code and executable available for performance testing
  • Fortran
Package | License | Approach | Brief Info
ADF Version 4.0 | nonfree | OO |
ADIFOR | free for noncommercial | SCT |
AUTO_DERIV | free for noncommercial | OO |
OpenAD | depends on components | SCT |
TAPENADE | free for noncommercial | SCT |
  • Matlab
Package | License | Approach | Brief Info
AD for MATLAB | GNU GPL | OO | forward (1st & 2nd derivative; uses MEX files & Windows DLLs)
Adiff | BSD | OO | forward (1st derivative)
MAD | proprietary | OO |
ADiMat | ? | SCT | forward (1st & 2nd derivative) & reverse (1st)
  • Python
Package | License | Approach | Brief Info
ad | BSD | OO | first and second order, reverse accumulation, transparent on-the-fly calculations, basic NumPy support, written in pure Python
FuncDesigner | BSD | OO | uses NumPy arrays and SciPy sparse matrices; also solves linear/nonlinear/ODE systems and performs numerical optimization via OpenOpt
ScientificPython | CeCILL | OO | see the modules Scientific.Functions.FirstDerivatives and Scientific.Functions.Derivatives
pycppad | BSD | OO | arbitrary order forward/reverse, implemented as a wrapper for CppAD including AD<double> and AD< AD<double> >
pyadolc | BSD | OO | wrapper for ADOL-C, hence arbitrary order derivatives in the (combined) forward/reverse mode of AD; supports sparsity pattern propagation and sparse derivative computations
uncertainties | BSD | OO | first-order derivatives, reverse mode, transparent calculations
algopy | BSD | OO | same approach as pyadolc and thus compatible; can differentiate through numerical linear algebra functions such as the matrix-matrix product, solution of linear systems, and QR and Cholesky decompositions
pyderiv | GNU GPL | OO | automatic differentiation and (co)variance calculation
CasADi | LGPL | OO/SCT | Python front-end to CasADi; forward/reverse modes, matrix-valued atomic operations
  • .NET
Package | License | Approach | Brief Info
AutoDiff | GNU GPL | OO | automatic differentiation with C# operator overloading
FuncLib | MIT | OO | automatic differentiation and numerical optimization; operator overloading, unlimited order of differentiation, compilation to IL code for very fast evaluation
  • Haskell
Package | License | Approach | Brief Info
ad | BSD | OO | forward mode (1st derivative or arbitrary order derivatives via lazy lists and sparse tries), reverse mode, and combined forward-on-reverse Hessians; uses quantification to let the implementation choose appropriate modes automatically and to prevent perturbation/sensitivity confusion at compile time
fad | BSD | OO | forward mode (lazy list); quantification prevents perturbation confusion at compile time
rad | BSD | OO | reverse mode (subsumed by 'ad'); quantification prevents sensitivity confusion at compile time
  • Octave
Package | License | Approach | Brief Info
CasADi | LGPL | OO/SCT | Octave front-end to CasADi; forward/reverse modes, matrix-valued atomic operations
  • Java
Package | License | Approach | Brief Info
JAutoDiff | - | OO | provides a framework to compute derivatives of functions on arbitrary types of field using generics; coded in 100% pure Java
Apache Commons Math | Apache License v2 | OO | an implementation of the extension to Rall's numbers described in Dan Kalman's paper[4]

References

  1. ^ Neidinger, Richard D. (2010). "Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming". SIAM Review 52 (3): 545–563.
  2. ^ http://www.ec-securehost.com/SIAM/SE24.html
  3. ^ Naumann, Uwe (April 2008). "Optimal Jacobian accumulation is NP-complete". Mathematical Programming 112 (2): 427–441. doi:10.1007/s10107-006-0042-z.
  4. ^ Kalman, Dan (June 2002). "Doubly Recursive Multivariate Automatic Differentiation". Mathematics Magazine 75 (3): 187–202.
