Matrix Differentiation in Machine Learning

Preface

When I first studied machine learning, derivatives with respect to matrices or vectors always gave me a headache. Later a teacher told me to memorize a few standard identities and apply them as templates; other tutorials advocate committing to a row (numerator) or column (denominator) layout. To me these approaches only scratch the surface and never truly resolve matrix differentiation. So I turned to tensor analysis and worked out what I believe is a general rule for matrix derivatives: once everything is reduced to operations on matrix elements, tensor contraction becomes easy to follow, and matrix derivatives fall out naturally. It is admittedly a rather metaphysical way of computing, but I am personally quite fond of the method I derived.

Since vectors in linear algebra and statistical learning are conventionally column vectors, all vectors in this article are column vectors.

Differentials of Vector-Valued Maps (Derivative of a Vector with Respect to a Vector)

Given a vector-valued map $\boldsymbol{f}:\mathbb{R}^m\rightarrow \mathbb{R}^n$.

$\boldsymbol{f}$ is said to be differentiable at $\boldsymbol{x}$ if $\exists\, \mathscr{A}\in \mathrm{Hom}\left(\mathbb{R}^m,\mathbb{R}^n\right)$, s.t. $\boldsymbol{f}\left(\boldsymbol{x}+\Delta \boldsymbol{x}\right)-\boldsymbol{f}\left(\boldsymbol{x}\right)=\mathscr{A}\left(\boldsymbol{x}\right)\left(\Delta \boldsymbol{x}\right)+o\left(\Delta \boldsymbol{x}\right)$.

Remark: $\mathrm{Hom}\left(\mathbb{R}^m,\mathbb{R}^n\right)$ denotes the set of all linear maps from $\mathbb{R}^m$ to $\mathbb{R}^n$. $\mathscr{A}\left(\boldsymbol{x}\right)$ depends on $\boldsymbol{x}$: the differential may be different at each point.

Since $\mathscr{A}$ is a linear map, we have

$$
\begin{split}
\mathscr{A} \left(\begin{matrix} \Delta {x}^1 \\ \vdots \\ \Delta {x}^m \end{matrix}\right) &=\mathscr{A} \left(\begin{matrix} \Delta {x}^1 \\ \vdots \\ 0 \end{matrix}\right)+\cdots+\mathscr{A} \left(\begin{matrix} 0 \\ \vdots \\ \Delta {x}^m \end{matrix}\right) \\
&= \Delta {x}^1\mathscr{A} \left(\begin{matrix} 1 \\ \vdots \\ 0 \end{matrix}\right)+\cdots+\Delta {x}^m\mathscr{A} \left(\begin{matrix} 0 \\ \vdots \\ 1 \end{matrix}\right) \\
&= \left[\mathscr{A} \left(\begin{matrix} 1 \\ \vdots \\ 0 \end{matrix}\right)\cdots\mathscr{A} \left(\begin{matrix} 0 \\ \vdots \\ 1 \end{matrix}\right)\right] \left(\begin{matrix} \Delta {x}^1 \\ \vdots \\ \Delta {x}^m \end{matrix}\right) \\
&= \boldsymbol{A} \Delta \boldsymbol{x}\simeq \left(\begin{matrix} \Delta {f}^1 \\ \vdots \\ \Delta {f}^n \end{matrix}\right)
\end{split}
$$

where the matrix $\boldsymbol{A}\in\mathbb{R}^{n\times m}$ is the representation of the linear map $\mathscr{A}$ in the canonical bases.

Remark: Many references explain partial derivatives and multivariable differentials from a many-to-one perspective, but starting from the one-to-many perspective of linear maps yields deeper and more fundamental results.

Let $\boldsymbol{i}_i$ denote the $i$-th canonical basis vector, i.e., $\left[0,\cdots, 1, \cdots ,0\right]^\top$.

Consider $\mathscr{A}\boldsymbol{i}_i$; we have

$$
\boldsymbol{f}\left(\boldsymbol{x}+ \lambda \boldsymbol{i}_i\right)-\boldsymbol{f}\left(\boldsymbol{x}\right) =\lambda\mathscr{A}\boldsymbol{i}_i + o\left(\lambda\right)
$$

$$
\lim_{\lambda \rightarrow 0} \frac{\boldsymbol{f}\left(\boldsymbol{x}+ \lambda \boldsymbol{i}_i\right)-\boldsymbol{f}\left(\boldsymbol{x}\right)}{\lambda} =\mathscr{A}\boldsymbol{i}_i =: \frac{\partial \boldsymbol{f}}{\partial x^i} =\left(\frac{\partial{f}^1}{\partial x^i},\cdots, \frac{\partial{f}^n}{\partial x^i}\right)^\top
$$

Remark: A vector-valued (multivariate) independent variable in the denominator is hard to handle directly, so we process it one coordinate at a time; the result is the tangent vector of a curve. With $\boldsymbol{x}$ held fixed, $\boldsymbol{f}\left(\boldsymbol{x}+ \lambda \boldsymbol{i}_i\right)$ is precisely the image of the $x^i$ coordinate line, a curve parameterized by $\lambda$. A vector-valued (multivariate) dependent variable in the numerator is easy to handle, because the derivative of each component of $\boldsymbol{f}$ with respect to $x^i$ is already computable in single-variable calculus.

$$
\mathbb{R}^{n\times m}\ni\boldsymbol{A}=\left(\mathscr{A}\boldsymbol{i}_1,\cdots ,\mathscr{A}\boldsymbol{i}_m \right)=\left(\frac{\partial\boldsymbol{f}}{\partial x^1},\cdots, \frac{\partial \boldsymbol{f}}{\partial x^m}\right)=:D\boldsymbol{f}
$$

where $D\boldsymbol{f}$ is called the Jacobian matrix.
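To make the definition concrete, here is a minimal numpy sketch that assembles the Jacobian column by column, exactly as the limit above suggests. The map `f`, the test point `x0`, and the step size `eps` are illustrative choices of mine, not from the derivation:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate Df at x column by column:
    column i is (f(x + eps*i_i) - f(x)) / eps, mirroring the limit above."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = 1.0                                  # canonical basis vector i_i
        J[:, i] = (np.asarray(f(x + eps * e)) - fx) / eps
    return J

# Example map f: R^2 -> R^3 (an arbitrary choice for illustration)
f = lambda x: np.array([x[0] ** 2, x[0] * x[1], np.sin(x[1])])
x0 = np.array([1.0, 2.0])
print(numerical_jacobian(f, x0))
# Analytic Jacobian at x0 for comparison:
# [[2*x0[0], 0], [x0[1], x0[0]], [0, cos(x0[1])]]
```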

Differential of a Composition of Vector-Valued Functions

$\boldsymbol{f}:\mathbb{R}^m\rightarrow \mathbb{R}^n,\ \boldsymbol{x}\mapsto \boldsymbol{y}$.

$\boldsymbol{g}:\mathbb{R}^n\rightarrow \mathbb{R}^p$.

$$
\boldsymbol{f}\left(\boldsymbol{x}+\Delta \boldsymbol{x}\right)-\boldsymbol{f}\left(\boldsymbol{x}\right)=D\boldsymbol{f}\left(\boldsymbol{x}\right)\left(\Delta \boldsymbol{x}\right)+o\left(\Delta \boldsymbol{x}\right)
$$

$$
\boldsymbol{g}\left(\boldsymbol{y}+\Delta \boldsymbol{y}\right)-\boldsymbol{g}\left(\boldsymbol{y}\right) =D\boldsymbol{g}\left(\boldsymbol{y}\right)\left(\Delta \boldsymbol{y}\right)+o\left(\Delta \boldsymbol{y}\right)
$$

where $\boldsymbol{y} = \boldsymbol{f}\left(\boldsymbol{x}\right)$.

Setting $\Delta\boldsymbol{y}=\boldsymbol{f}\left(\boldsymbol{x}+\Delta \boldsymbol{x}\right)-\boldsymbol{f}\left(\boldsymbol{x}\right)$,

$$
\begin{split}
\boldsymbol{g}\left(\boldsymbol{y}+\Delta \boldsymbol{y}\right) &=\boldsymbol{g}\left[\boldsymbol{y}+\boldsymbol{f}\left(\boldsymbol{x}+\Delta \boldsymbol{x}\right)-\boldsymbol{f}\left(\boldsymbol{x}\right)\right] \\
&=\boldsymbol{g}\left[\boldsymbol{y}+D\boldsymbol{f}\left(\boldsymbol{x}\right)\left(\Delta \boldsymbol{x}\right)+o\left(\Delta \boldsymbol{x}\right)\right] \\
&=\boldsymbol{g}\left(\boldsymbol{y}\right) +D\boldsymbol{g}\left(\boldsymbol{y}\right)D\boldsymbol{f}\left(\boldsymbol{x}\right) \left(\Delta \boldsymbol{x}\right)+o\left(\Delta \boldsymbol{x}\right)
\end{split}
$$

so $D\left(\boldsymbol{g}\circ\boldsymbol{f}\right)\left(\boldsymbol{x}\right)=D\boldsymbol{g}\left(\boldsymbol{y}\right)D\boldsymbol{f}\left(\boldsymbol{x}\right)$: Jacobians of compositions multiply.

Remark: We see that the differential $D\boldsymbol{f}$ is itself a linear map with the same domain and codomain as $\boldsymbol{f}$; in essence, $D\boldsymbol{f}$ is the linearization of $\boldsymbol{f}$.
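As a quick numerical sanity check of $D\left(\boldsymbol{g}\circ\boldsymbol{f}\right)\left(\boldsymbol{x}\right)=D\boldsymbol{g}\left(\boldsymbol{y}\right)D\boldsymbol{f}\left(\boldsymbol{x}\right)$, the sketch below compares the Jacobian of the composite against the product of Jacobians, reusing the `numerical_jacobian` helper from the previous sketch; both maps are arbitrary examples of mine:

```python
import numpy as np
# numerical_jacobian is the helper defined in the previous sketch

f = lambda x: np.array([x[0] + x[1] ** 2, x[0] * x[1]])      # R^2 -> R^2
g = lambda y: np.array([np.exp(y[0]), y[0] * y[1], y[1]])    # R^2 -> R^3

x0 = np.array([0.5, -1.0])
lhs = numerical_jacobian(lambda x: g(f(x)), x0)                 # D(g∘f)(x0), 3x2
rhs = numerical_jacobian(g, f(x0)) @ numerical_jacobian(f, x0)  # Dg(y0) Df(x0)
print(np.allclose(lhs, rhs, atol=1e-4))                         # True
```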

General Differentiation Procedure

In machine learning one typically needs the derivative of a scalar $R$ with respect to a matrix $\boldsymbol{X}\in \mathbb{R}^{I\times J}$ or a vector $\boldsymbol{x}\in \mathbb{R}^{I\times 1}$, and the derivative is laid out with the same dimensions as the original matrix or vector. The guiding principle is to differentiate element-wise and then recognize the result as matrix products.

Operating on matrices via einsum (Einstein summation) greatly simplifies the differentiation.
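For instance, a trace like the ones differentiated below is a single einsum call whose subscript string spells out exactly which indices are contracted. A minimal sketch (dimensions and seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
I, J = 4, 3
A = rng.standard_normal((J, I))        # A ∈ R^{J×I}
X = rng.standard_normal((I, J))        # X ∈ R^{I×J}

# Tr(AX) = A_r^{·s} X_s^{·r}: the repeated indices r and s are contracted.
t_einsum = np.einsum('rs,sr->', A, X)
t_matmul = np.trace(A @ X)
print(np.isclose(t_einsum, t_matmul))  # True
```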

Write the element in row $i$ and column $j$ of a matrix $\boldsymbol{X}$ as $X_{i}^{\cdot j}$; for the transpose, $\left(X^\top\right)_{i}^{\cdot j}=X_{j}^{\cdot i}$.

A vector, however, has only one dimension, so transposition merely moves its index up or down: $\left(x^\top\right)_i = x^i$ and $\left(x^\top\right)^i = x_i$.

Given a map $f:\mathbb{R}^{I\times J} \rightarrow \mathbb{R}$ and writing $\frac{\partial f}{\partial \boldsymbol{X}}=\boldsymbol{D}$, we have

$$
\frac{\partial f}{\partial X_{i}^{\cdot j}}=D_{i}^{\cdot j}
$$

Given a map $\boldsymbol{X}:\mathbb{R} \rightarrow \mathbb{R}^{I\times J},\ t\mapsto \boldsymbol{X}$ and writing $\frac{\partial \boldsymbol{X}}{\partial t}=\boldsymbol{D}$, we have

$$
\frac{\partial X_{i}^{\cdot j}}{\partial t}=D_{i}^{\cdot j}
$$

Examples

Proposition:
Given $\boldsymbol{X}\in \mathbb{R}^{I\times J}$ and $\boldsymbol{A}\in \mathbb{R}^{J\times I}$,

$$
\frac{\partial \operatorname{Tr}\left(\boldsymbol{AX}\right)}{\partial\boldsymbol{X}} =\boldsymbol{A}^\top \in \mathbb{R}^{I\times J}
$$

Proof: $\operatorname{Tr}\left(\boldsymbol{AX}\right) = A_{r}^{\cdot s}X_{s}^{\cdot r}$, so

$$
\frac{\partial \operatorname{Tr}\left(\boldsymbol{AX}\right)}{\partial X_{i}^{\cdot j}} = \delta _{sr}^{ij} A_{r}^{\cdot s}=A_{j}^{\cdot i}=\left(A^\top\right)_{i}^{\cdot j} \qquad\square
$$

Proposition:
Given $\boldsymbol{X}\in \mathbb{R}^{I\times J}$ and $\boldsymbol{A}\in \mathbb{R}^{I\times I}$,

$$
\frac{\partial \operatorname{Tr}\left(\boldsymbol{X}^\top \boldsymbol{AX}\right)}{\partial\boldsymbol{X}} =\boldsymbol{AX}+\boldsymbol{A}^\top\boldsymbol{X} \in \mathbb{R}^{I\times J}
$$

Proof: $\operatorname{Tr}\left(\boldsymbol{X}^\top \boldsymbol{AX}\right)= X_{s}^{\cdot r}A_{s}^{\cdot t}X_{t}^{\cdot r}$, so

$$
\begin{split}
\frac{\partial \operatorname{Tr}\left(\boldsymbol{X}^\top \boldsymbol{AX}\right)}{\partial X_{i}^{\cdot j}} &=\delta_{sr}^{ij}A_{s}^{\cdot t}X_{t}^{\cdot r} +X_{s}^{\cdot r}A_{s}^{\cdot t}\delta_{tr}^{ij} \\
&=A_{i}^{\cdot t}X_{t}^{\cdot j} +X_{s}^{\cdot j}A_{s}^{\cdot i} =A_{i}^{\cdot t}X_{t}^{\cdot j} +\left(A^\top\right)_{i}^{\cdot s}X_{s}^{\cdot j} \\
&=\left(AX\right)_{i}^{\cdot j}+\left(A^\top X\right)_{i}^{\cdot j} \qquad\square
\end{split}
$$

Remark: When manipulating matrix elements, keeping track of whether each index is a row or column index, and of the index order, is enough to determine both transposition and the order of the factors. In particular, when differentiating with respect to $X_i^{\cdot j}$, the final expression's first lower index must be $i$ and its last upper index must be $j$.
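The same spot check works here, reusing `matrix_grad` from the earlier sketch (again with arbitrary shapes and seed):

```python
import numpy as np
# matrix_grad is the helper defined in the previous sketch

rng = np.random.default_rng(2)
I, J = 4, 3
A = rng.standard_normal((I, I))        # A need not be symmetric
X = rng.standard_normal((I, J))

G = matrix_grad(lambda X: np.trace(X.T @ A @ X), X)
print(np.allclose(G, A @ X + A.T @ X, atol=1e-5))  # True
```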

When more complicated compositions are involved, the following theorem is needed.

Theorem:

Given two matrices $\boldsymbol{X}\in \mathbb{R}^{I\times J}$ and $\boldsymbol{Y}\in \mathbb{R}^{M\times N}$ and a function $f\left(\boldsymbol{Y}\left(\boldsymbol{X}\right)\right)\in \mathbb{R}$,

$$
\frac{\partial f}{\partial X_{i}^{\cdot j}} =\frac{\partial f}{\partial Y_{k}^{\cdot l}} \frac{\partial Y_{k}^{\cdot l}}{\partial X_{i}^{\cdot j}}
$$

Remark: With the Einstein summation convention this formula is a sum over $k$ and $l$; it is just the multivariable chain rule, with each $Y_{k}^{\cdot l}$ viewed as a function of the $X_{i}^{\cdot j}$. Beginners often go wrong by cancelling the partial differentials directly; cancellation is only legitimate once the partials are assembled into a total differential, i.e., written out as a sum.
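The theorem can also be verified numerically. The sketch below is my own example with $\boldsymbol{Y}\left(\boldsymbol{X}\right)=\boldsymbol{X}\boldsymbol{X}^\top$ and $f\left(\boldsymbol{Y}\right)=\operatorname{Tr}\left(\boldsymbol{Y}^2\right)$ (arbitrary choices), reusing `matrix_grad`; it builds the fourth-order tensor $\partial Y_k^{\cdot l}/\partial X_i^{\cdot j}$ explicitly and contracts it with $\partial f/\partial Y_k^{\cdot l}$ via einsum:

```python
import numpy as np
# matrix_grad is the helper defined in an earlier sketch

rng = np.random.default_rng(3)
I, J = 3, 2
X = rng.standard_normal((I, J))

Y_of_X = lambda X: X @ X.T                 # Y(X) ∈ R^{I×I}
f_of_Y = lambda Y: np.trace(Y @ Y)         # scalar f(Y)

# Left side: differentiate the composite f(Y(X)) directly.
lhs = matrix_grad(lambda X: f_of_Y(Y_of_X(X)), X)

# Right side: contract ∂f/∂Y_k^{·l} with the 4th-order tensor ∂Y_k^{·l}/∂X_i^{·j}.
dfdY = matrix_grad(f_of_Y, Y_of_X(X))      # shape (I, I)
dYdX = np.zeros((I, I, I, J))              # [k, l, i, j] = ∂Y_kl / ∂X_ij
eps = 1e-6
for i in range(I):
    for j in range(J):
        E = np.zeros_like(X)
        E[i, j] = eps
        dYdX[:, :, i, j] = (Y_of_X(X + E) - Y_of_X(X - E)) / (2 * eps)
rhs = np.einsum('kl,klij->ij', dfdY, dYdX)  # sum over k and l
print(np.allclose(lhs, rhs, atol=1e-4))     # True
```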

Proposition:

Given $\boldsymbol{X}\in \mathbb{R}^{I\times I}$ invertible,

$$
\frac{\partial \boldsymbol{X}^{-1}}{\partial X_{i}^{\cdot j}}= -\boldsymbol{X}^{-1}\frac{\partial \boldsymbol{X}}{\partial X_{i}^{\cdot j}}\boldsymbol{X}^{-1}
$$

$$
\frac{\partial \left(\boldsymbol{X}^{-1}\right)_{k}^{\cdot l}}{\partial X_{i}^{\cdot j}}= -\left(\boldsymbol{X}^{-1}\right)_{k}^{\cdot i}\left(\boldsymbol{X}^{-1}\right)_{j}^{\cdot l}
$$

Proof: Starting from the fact that $\boldsymbol{X}^{-1}\boldsymbol{X}=\boldsymbol{I}$ and differentiating both sides,

$$
\frac{\partial \boldsymbol{X}^{-1}}{\partial X_{i}^{\cdot j}}\boldsymbol{X} +\boldsymbol{X}^{-1}\frac{\partial \boldsymbol{X}}{\partial X_{i}^{\cdot j}}=\boldsymbol{0}
$$

Right-multiplying by $\boldsymbol{X}^{-1}$ gives the first identity. For the element-wise form,

$$
\begin{split}
\frac{\partial \left(\boldsymbol{X}^{-1}\right)_{k}^{\cdot l}}{\partial X_{i}^{\cdot j}} &=\left(-\boldsymbol{X}^{-1}\frac{\partial \boldsymbol{X}}{\partial X_{i}^{\cdot j}}\boldsymbol{X}^{-1}\right)_{k}^{\cdot l} \\
&=-\left(\boldsymbol{X}^{-1}\right)_{k}^{\cdot s} \left(\frac{\partial \boldsymbol{X}}{\partial X_{i}^{\cdot j}}\right)_{s}^{\cdot t} \left(\boldsymbol{X}^{-1}\right)_{t}^{\cdot l} \\
&=-\left(\boldsymbol{X}^{-1}\right)_{k}^{\cdot s} \frac{\partial X_{s}^{\cdot t}}{\partial X_{i}^{\cdot j}} \left(\boldsymbol{X}^{-1}\right)_{t}^{\cdot l} \\
&=-\left(\boldsymbol{X}^{-1}\right)_{k}^{\cdot s} \delta^{st}_{ij} \left(\boldsymbol{X}^{-1}\right)_{t}^{\cdot l} =-\left(\boldsymbol{X}^{-1}\right)_{k}^{\cdot i} \left(\boldsymbol{X}^{-1}\right)_{j}^{\cdot l}
\end{split}
$$

$\square$

Remark: Here we used the fact that taking an element of the derivative is the same as taking the derivative of the element. Note that $\frac{\partial \left(\boldsymbol{X}^{-1}\right)_{k}^{\cdot l}}{\partial X_{i}^{\cdot j}}$ belongs to a matrix-by-matrix derivative, so the full derivative is a fourth-order tensor.
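A numerical illustration of both forms (a sketch; the matrix, the perturbed entry $(i,j)$, and the step size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
X = rng.standard_normal((n, n)) + n * np.eye(n)   # shift keeps X invertible
Xinv = np.linalg.inv(X)

i, j, eps = 1, 2, 1e-6
E = np.zeros((n, n))
E[i, j] = eps                                     # eps * (∂X/∂X_i^{·j})

numeric = (np.linalg.inv(X + E) - Xinv) / eps     # ∂(X^{-1})/∂X_i^{·j}, approx.
analytic = -Xinv @ (E / eps) @ Xinv               # -X^{-1} (∂X/∂X_i^{·j}) X^{-1}
print(np.allclose(numeric, analytic, atol=1e-4))  # True

# Entry (k, l) matches the element form -(X^{-1})_k^{·i} (X^{-1})_j^{·l}:
k, l = 0, 3
print(np.isclose(analytic[k, l], -Xinv[k, i] * Xinv[j, l]))  # True
```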

Proposition:
Given $\boldsymbol{X}\in \mathbb{R}^{I\times I}$ invertible and conformable matrices $\boldsymbol{A}$ and $\boldsymbol{B}$,

$$
\frac{\partial \operatorname{Tr}\left(\boldsymbol{AX}^{-1}\boldsymbol{B}\right)}{\partial \boldsymbol{X}}= -\boldsymbol{X}^{-\top}\boldsymbol{A}^\top\boldsymbol{B}^\top\boldsymbol{X}^{-\top}
$$

Proof:

$$
\frac{\partial A_{r}^{\cdot s}\left(X^{-1}\right)_{s}^{\cdot t}B_{t}^{\cdot r}}{\partial X_{i}^{\cdot j}} =-A_{r}^{\cdot s}\left(X^{-1}\right)_{s}^{\cdot i}\left(X^{-1}\right)_{j}^{\cdot t}B_{t}^{\cdot r} =-\left(X^{-\top}\right)_{i}^{\cdot s}\left(A^{\top}\right)_{s}^{\cdot r}\left(B^{\top}\right)_{r}^{\cdot t} \left(X^{-\top}\right)_{t}^{\cdot j}
$$

$\square$
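A finite-difference check of this result, reusing `matrix_grad` (shapes and seed are arbitrary; the identity shift keeps $\boldsymbol{X}$ safely invertible):

```python
import numpy as np
# matrix_grad is the helper defined in an earlier sketch

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
X = rng.standard_normal((n, n)) + n * np.eye(n)   # keep X safely invertible

G = matrix_grad(lambda X: np.trace(A @ np.linalg.inv(X) @ B), X)
Xit = np.linalg.inv(X).T                          # X^{-⊤}
print(np.allclose(G, -Xit @ A.T @ B.T @ Xit, atol=1e-4))  # True
```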

Proposition:

Given $\boldsymbol{X}\in \mathbb{R}^{I\times I}$ invertible,

$$
\frac{\partial \det\boldsymbol{X}}{\partial \boldsymbol{X}}=\det\boldsymbol{X}\cdot\boldsymbol{X}^{-\top}
$$

$$
\frac{\partial \log \left|\det\boldsymbol{X}\right|}{\partial \boldsymbol{X}}=\boldsymbol{X}^{-\top}
$$

Proof:

Note that the Einstein summation convention is not used in this proposition. Expanding $\det\boldsymbol{X}$ along the $i$-th row,

$$
\det\boldsymbol{X}=\sum_{j}X_{ij}A_{ij}
$$

where $A_{ij}$ is the cofactor of $X_{ij}$. Let $\boldsymbol{X}^*$ be the adjugate matrix of $\boldsymbol{X}$; then $\boldsymbol{X}^{-1}=\frac{1}{\det\boldsymbol{X}}\boldsymbol{X}^*$ and $\left(\boldsymbol{X}^*\right)^\top=\boldsymbol{A}$, the matrix of cofactors.

Since the cofactor $A_{ij}$ does not involve $X_{ij}$,

$$
\frac{\partial \det\boldsymbol{X}}{\partial X_{ij}} =A_{ij}\ \Rightarrow\ \frac{\partial \det\boldsymbol{X}}{\partial \boldsymbol{X}}=\boldsymbol{A}= \left(\boldsymbol{X}^*\right)^\top =\det\boldsymbol{X}\cdot\boldsymbol{X}^{-\top}
$$

For the second identity, the chain rule gives $\frac{\partial \log\left|\det\boldsymbol{X}\right|}{\partial \boldsymbol{X}}=\frac{1}{\det\boldsymbol{X}}\cdot\det\boldsymbol{X}\cdot\boldsymbol{X}^{-\top}=\boldsymbol{X}^{-\top}$. $\square$
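Both determinant identities can be spot-checked the same way, again reusing `matrix_grad` (the identity shift is only there to keep $\det\boldsymbol{X}$ away from zero; all values are arbitrary):

```python
import numpy as np
# matrix_grad is the helper defined in an earlier sketch

rng = np.random.default_rng(6)
n = 4
X = rng.standard_normal((n, n)) + n * np.eye(n)   # keep det X away from zero

detX = np.linalg.det(X)
Xit = np.linalg.inv(X).T

G_det = matrix_grad(lambda X: np.linalg.det(X), X)
print(np.allclose(G_det, detX * Xit, atol=1e-3))  # ∂ det X / ∂X = det X · X^{-⊤}

G_log = matrix_grad(lambda X: np.log(abs(np.linalg.det(X))), X)
print(np.allclose(G_log, Xit, atol=1e-5))         # ∂ log|det X| / ∂X = X^{-⊤}
```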

Proposition:

Given $\boldsymbol{A},\boldsymbol{X},\boldsymbol{B}\in \mathbb{R}^{I\times I}$ with $\boldsymbol{X}$ invertible,

$$
\frac{\partial \det\left(\boldsymbol{AXB}\right)}{\partial \boldsymbol{X}}=\det\left(\boldsymbol{AXB}\right)\cdot\boldsymbol{X}^{-\top}
$$

Proof:

$$
\frac{\partial \det\left(\boldsymbol{AXB}\right)}{\partial X_i^{\cdot j}}= \frac{\partial \det\left(\boldsymbol{AXB}\right)}{\partial \left(\boldsymbol{AXB}\right)_k^{\cdot l}} \frac{\partial \left(\boldsymbol{AXB}\right)_k^{\cdot l}}{\partial X_i^{\cdot j}} =\dots
$$
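Even with the proof left as an exercise, the claimed result can be checked numerically (a sketch reusing `matrix_grad`, with arbitrary square $\boldsymbol{A}$, $\boldsymbol{B}$, $\boldsymbol{X}$ of my choosing):

```python
import numpy as np
# matrix_grad is the helper defined in an earlier sketch

rng = np.random.default_rng(7)
n = 3
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, n)) + n * np.eye(n)
X = rng.standard_normal((n, n)) + n * np.eye(n)

G = matrix_grad(lambda X: np.linalg.det(A @ X @ B), X)
target = np.linalg.det(A @ X @ B) * np.linalg.inv(X).T
print(np.allclose(G, target, atol=1e-2))  # True (entries can be large here)
```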
