In calculus, the Leibniz integral rule for differentiation under the integral sign, named after Gottfried Leibniz, states that for an integral of the form
$$\int_{a(x)}^{b(x)} f(x,t)\,dt,$$

where $-\infty < a(x), b(x) < \infty$ and the integrands are functions dependent on $x$, the derivative of this integral is expressible as

$$\frac{d}{dx}\left(\int_{a(x)}^{b(x)} f(x,t)\,dt\right) = f\big(x,b(x)\big)\cdot\frac{d}{dx}b(x) - f\big(x,a(x)\big)\cdot\frac{d}{dx}a(x) + \int_{a(x)}^{b(x)}\frac{\partial}{\partial x} f(x,t)\,dt,$$

where the partial derivative $\tfrac{\partial}{\partial x}$ indicates that inside the integral, only the variation of $f(x,t)$ with $x$ is considered in taking the derivative.[1]
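The general rule lends itself to a quick numerical sanity check. The sketch below (with an illustrative choice of $f$, $a$, and $b$ that is not from the text) compares a central-difference derivative of the integral against the right-hand side of the formula, using a simple Simpson quadrature:

```python
import math

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Illustrative choices (not from the text): f(x,t) = sin(x*t),
# a(x) = x, b(x) = x**2, evaluated at x = 1.3.
f  = lambda x, t: math.sin(x * t)
fx = lambda x, t: t * math.cos(x * t)   # partial derivative of f w.r.t. x
a, da = lambda x: x,    lambda x: 1.0
b, db = lambda x: x**2, lambda x: 2 * x

def F(x):
    return simpson(lambda t: f(x, t), a(x), b(x))

x, h = 1.3, 1e-5
lhs = (F(x + h) - F(x - h)) / (2 * h)            # d/dx of the integral
rhs = (f(x, b(x)) * db(x) - f(x, a(x)) * da(x)   # boundary terms
       + simpson(lambda t: fx(x, t), a(x), b(x)))  # integral of the partial
print(abs(lhs - rhs))  # agreement to numerical precision
```

The two sides agree up to the discretization error of the quadrature and the finite difference, which is many orders of magnitude below either value here.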
In the special case where the functions $a(x)$ and $b(x)$ are constants $a(x)=a$ and $b(x)=b$ with values that do not depend on $x$, this simplifies to:

$$\frac{d}{dx}\left(\int_{a}^{b} f(x,t)\,dt\right) = \int_{a}^{b}\frac{\partial}{\partial x} f(x,t)\,dt.$$
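For fixed limits, the claim is simply that differentiating the integral equals integrating the partial derivative. A minimal numeric sketch (the function $e^{xt}$ on $[0,1]$ is an illustrative choice, not from the text):

```python
import math

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Illustrative choice (not from the text): f(x,t) = exp(x*t) on fixed [0, 1].
F = lambda x: simpson(lambda t: math.exp(x * t), 0.0, 1.0)

x, h = 0.7, 1e-5
lhs = (F(x + h) - F(x - h)) / (2 * h)                   # differentiate the integral
rhs = simpson(lambda t: t * math.exp(x * t), 0.0, 1.0)  # integrate the partial derivative
print(abs(lhs - rhs))
```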
If $a(x)=a$ is constant and $b(x)=x$, which is another common situation (for example, in the proof of Cauchy's repeated integration formula), the Leibniz integral rule becomes:

$$\frac{d}{dx}\left(\int_{a}^{x} f(x,t)\,dt\right) = f\big(x,x\big) + \int_{a}^{x}\frac{\partial}{\partial x} f(x,t)\,dt.$$
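In this case the only boundary contribution is $f(x,x)$ from the moving upper limit. A sketch with an illustrative integrand (not from the text), where both sides can also be checked by hand since $\int_a^x x t^2\,dt = x(x^3-a^3)/3$:

```python
def simpson(g, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Illustrative choice (not from the text): f(x,t) = x*t**2 with a = 0.5.
f  = lambda x, t: x * t * t
fx = lambda x, t: t * t          # partial derivative of f w.r.t. x
a0 = 0.5

F = lambda x: simpson(lambda t: f(x, t), a0, x)

x, h = 1.2, 1e-5
lhs = (F(x + h) - F(x - h)) / (2 * h)
rhs = f(x, x) + simpson(lambda t: fx(x, t), a0, x)  # f(x,x) from the moving upper limit
print(abs(lhs - rhs))
```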
This important result may, under certain conditions, be used to interchange the integral and partial differential operators, and is particularly useful in the differentiation of integral transforms. One example is the moment-generating function in probability theory, a variation of the Laplace transform, which can be differentiated to generate the moments of a random variable. Whether Leibniz's integral rule applies is essentially a question about the interchange of limits.
General form: differentiation under the integral sign
Theorem — Let $f(x,t)$ be a function such that both $f(x,t)$ and its partial derivative $f_x(x,t)$ are continuous in $t$ and $x$ in some region of the $xt$-plane, including $a(x)\leq t\leq b(x)$, $x_0\leq x\leq x_1$. Also suppose that the functions $a(x)$ and $b(x)$ are both continuous and both have continuous derivatives for $x_0\leq x\leq x_1$. Then, for $x_0\leq x\leq x_1$,
$$\frac{d}{dx}\left(\int_{a(x)}^{b(x)} f(x,t)\,dt\right) = f\big(x,b(x)\big)\cdot\frac{d}{dx}b(x) - f\big(x,a(x)\big)\cdot\frac{d}{dx}a(x) + \int_{a(x)}^{b(x)}\frac{\partial}{\partial x} f(x,t)\,dt.$$
Stronger versions of the theorem only require that the partial derivative exist almost everywhere, and not that it be continuous.[2] This formula is the general form of the Leibniz integral rule and can be derived using the fundamental theorem of calculus. The (first) fundamental theorem of calculus is just the particular case of the above formula where $a(x)=a\in\mathbb{R}$, $b(x)=x$, and $f(x,t)=f(t)$.
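That special case is easy to see numerically: when $f$ does not depend on $x$, the integral term and the lower-boundary term vanish, leaving $F'(x)=f(x)$. A sketch with an illustrative integrand (not from the text):

```python
import math

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Illustrative choice (not from the text): f(t) = cos(t), a = 0.
# With f independent of x, the Leibniz formula reduces to F'(x) = f(x).
F = lambda x: simpson(math.cos, 0.0, x)

x, h = 0.9, 1e-5
lhs = (F(x + h) - F(x - h)) / (2 * h)
print(abs(lhs - math.cos(x)))
```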
If both upper and lower limits are taken as constants, then the formula takes the shape of an operator equation:
$$\mathcal{I}_t\,\partial_x = \partial_x\,\mathcal{I}_t,$$

where $\partial_x$ is the partial derivative with respect to $x$ and $\mathcal{I}_t$ is the integral operator with respect to $t$ over a fixed interval. That is, it is related to the symmetry of second derivatives, but involving integrals as well as derivatives. This case is also known as the Leibniz integral rule.
The following three basic theorems on the interchange of limits are essentially equivalent:
the interchange of a derivative and an integral (differentiation under the integral sign; i.e., Leibniz integral rule);
the change of order of partial derivatives;
the change of order of integration (integration under the integral sign; i.e., Fubini’s theorem).
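The third of these, Fubini's theorem, can be illustrated the same way: for a well-behaved integrand on a rectangle, the two iterated integrals agree. A sketch with an illustrative integrand (not from the text) on $[0,1]\times[0,1]$:

```python
import math

def simpson(g, a, b, n=200):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Illustrative choice (not from the text): f(x,t) = x * exp(-x*t) on [0,1]^2.
f = lambda x, t: x * math.exp(-x * t)

# Integrate in t first, then x -- and in the opposite order.
dt_first = simpson(lambda x: simpson(lambda t: f(x, t), 0.0, 1.0), 0.0, 1.0)
dx_first = simpson(lambda t: simpson(lambda x: f(x, t), 0.0, 1.0), 0.0, 1.0)
print(abs(dt_first - dx_first))
```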