LiDAR SLAM Back-End Optimization: Jacobian Matrix Derivation

Jacobian Matrix

In the EKF system, the maintained state consists of three parts: the IMU state, the camera states, and the lidar states, expressed as follows:

$$X_{imu}=\begin{bmatrix}{}^nq^T_b & b^T_g & {}^nv^T_b & b^T_a & {}^np^T_b \end{bmatrix}$$

$$X_{cam}=\begin{bmatrix}{}^Gq^T_{c_0} & {}^Gp^T_{c_0} \end{bmatrix}$$

$$X_{lidar}=\begin{bmatrix}{}^Gq^T_L & {}^Gp^T_L \end{bmatrix}$$

Note: $q$ denotes attitude and $p$ denotes position. $G$ denotes the global (Earth-Centered, Earth-Fixed) frame and $n$ denotes the navigation frame; the subscripts $b$, $c_0$, and $L$ refer to the IMU body, camera, and lidar frames respectively.

We stack the three state variables above into a single total state variable:

$$X_{old}=\begin{bmatrix}X_{imu} & (X_{cam})_N & (X_{lidar})_M \end{bmatrix}$$

So far we have the initial state variables of the whole system. Next, let's expand the state.

When a new lidar scan arrives, we need to augment $X$ as follows:

$$X_{new}=\begin{bmatrix} X_{imu} & (X_{cam})_N & (X_{lidar})_M & X_{lidar}^{new} \end{bmatrix}=\begin{bmatrix} X_{old} & X_{lidar}^{new} \end{bmatrix}$$
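As a minimal sketch of this bookkeeping (not the original implementation; the 15- and 6-dimensional error-state sizes are an assumption), the state can be kept as a flat vector and a zero block appended for the new lidar pose:

```python
import numpy as np

# Assumed error-state dimensions:
# IMU: 15 = attitude(3) + gyro bias(3) + velocity(3) + accel bias(3) + position(3)
# each camera / lidar pose: 6 = attitude(3) + position(3)
IMU_DIM, POSE_DIM = 15, 6

def augment_state(x_old: np.ndarray) -> np.ndarray:
    """Append a new (zero) lidar pose error block to the error-state vector."""
    return np.concatenate([x_old, np.zeros(POSE_DIM)])

# Example: IMU + N=2 camera poses + M=1 lidar pose, then add one more lidar pose.
x_old = np.zeros(IMU_DIM + 2 * POSE_DIM + 1 * POSE_DIM)
x_new = augment_state(x_old)
print(x_old.shape, "->", x_new.shape)   # (33,) -> (39,)
```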

For the covariance matrix, this can be pictured as in the figure below, where the green blocks are non-zero, the blue blocks are zero matrices, and the orange blocks are the entries to be computed when the lidar state of frame $k+1$ is added.
(Figure: state augmentation after adding a new lidar frame)

The expanded covariance is:

$$P_{new}=\begin{bmatrix}P_{old} & P_{OL_{k+1}} \\ P_{L_{k+1}O} & P_{L_{k+1}L_{k+1}} \end{bmatrix}$$

📌 Let's recall some facts about the Jacobian matrix. For simplicity, take a one-dimensional function as an example. Suppose we have a curve $y=f(x,t)$ and a point $(x_0,y_0)$ on it, i.e. $y_0=f(x_0,t_0)$.

Suppose we want the value of a point on the curve at time $t_1$ that is very close to $(x_0,y_0)$, and we know its X coordinate is $x_1$. If the function at $t_1$ is very similar to the one at $t_0$, the Y coordinate can be approximated by a first-order expansion:

$$f(x_1,t_1)=y_0+\frac{df(x,t_0)}{dx}\Big|_{x=x_0}(x_1-x_0)$$
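A quick numerical check of this first-order approximation, using an arbitrary toy function:

```python
import numpy as np

# f(x, t) = sin(x) + 0.1*t, so df/dx = cos(x); (x1, t1) is close to (x0, t0).
f = lambda x, t: np.sin(x) + 0.1 * t
x0, t0 = 1.0, 0.0
x1, t1 = 1.05, 0.001

y0 = f(x0, t0)
approx = y0 + np.cos(x0) * (x1 - x0)   # y0 + df/dx|_{x=x0} * (x1 - x0)
exact = f(x1, t1)
print(exact, approx)                   # they differ only at second order
```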

For n-dimensional functions we use the Jacobian matrix $J$ to collect the partial derivatives, and we get:

$$f(X_1,t_1)=f(X_0,t_0)+J(X_1-X_0)$$

From the law of covariance propagation, we can draw the following conclusions:

$$D_{X_1X_1}=JD_{X_0X_0}J^T$$

$$D_{X_0X_1}=D_{X_0X_0}J^T$$

$$D_{X_1X_0}=JD_{X_0X_0}$$

So we can write:

$$P_{new}=\begin{bmatrix}P_{old} & P_{OL_{k+1}} \\ P_{L_{k+1}O} & P_{L_{k+1}L_{k+1}} \end{bmatrix}
=\begin{bmatrix}P_{old} & P_{old}J^T \\ JP_{old} & JP_{old}J^T \end{bmatrix}
=\begin{bmatrix}I \\ J\end{bmatrix}P_{old}\begin{bmatrix}I & J^T\end{bmatrix}$$
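A small numerical sketch of this augmentation step (the dimensions are assumed and $J$ is random here, purely to exercise the block algebra):

```python
import numpy as np

rng = np.random.default_rng(0)
n_old, n_new = 27, 6                     # e.g. IMU(15) + one camera pose(6) + one lidar pose(6); new lidar pose is 6

A = rng.standard_normal((n_old, n_old))
P_old = A @ A.T                          # any symmetric positive-definite covariance
J = rng.standard_normal((n_new, n_old))  # Jacobian of the new lidar state w.r.t. X_old

# P_new = [I; J] P_old [I  J^T]
IJ = np.vstack([np.eye(n_old), J])
P_new = IJ @ P_old @ IJ.T

# The same result, written block by block:
P_blocks = np.block([[P_old,     P_old @ J.T],
                     [J @ P_old, J @ P_old @ J.T]])
print(np.allclose(P_new, P_blocks))      # True
```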

Clearly, $J$ is the Jacobian of the newly added lidar state $X_{lidar}^{new}$ with respect to the previous state $X_{old}$. Mathematically, $X_{lidar}^{new}$ depends only on the IMU state.
First, write the position and attitude of the lidar in terms of the IMU state:

$$R({}^Gq_L)=R({}^Gq_n)R({}^nq_b)R^b_L$$

$${}^Gp_L=R({}^Gq_b)\,{}^bp_L+{}^Gp_b=R({}^Gq_n)R({}^nq_b)\,{}^bp_L+{}^Gp_b$$
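A sketch of composing the lidar pose from the IMU pose and the body-to-lidar extrinsics, assuming the convention that $R(q)$ maps vectors from the child frame into the parent frame (variable names here are illustrative):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def lidar_pose(R_G_n, R_n_b, G_p_b, R_b_L, b_p_L):
    """Compose the lidar pose in G from the IMU pose and the body->lidar extrinsics.

    R_G_n: rotation n -> G, R_n_b: rotation b -> n, R_b_L: rotation L -> b
    G_p_b: IMU position in G, b_p_L: lidar position in the body frame
    """
    R_G_b = R_G_n @ R_n_b
    R_G_L = R_G_b @ R_b_L                 # R(^G q_L) = R(^G q_n) R(^n q_b) R^b_L
    G_p_L = R_G_b @ b_p_L + G_p_b         # ^G p_L   = R(^G q_b) ^b p_L + ^G p_b
    return R_G_L, G_p_L

# Toy example with arbitrary rotations/translations.
R_G_n = Rotation.from_euler("z", 30, degrees=True).as_matrix()
R_n_b = Rotation.from_euler("xyz", [5, -3, 10], degrees=True).as_matrix()
R_b_L = Rotation.from_euler("y", 90, degrees=True).as_matrix()
R_G_L, G_p_L = lidar_pose(R_G_n, R_n_b, np.array([1.0, 2.0, 0.0]),
                          R_b_L, np.array([0.1, 0.0, 0.2]))
```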

We find the partial derivatives by adding perturbation terms. For the position:

$${}^Gp_L+\delta^Gp_L=R({}^Gq_n)(I-[\delta\theta]_\times)R({}^nq_b)\,{}^bp_L+{}^Gp_b+\delta^Gp_b$$

We can get the following formula:

$$\delta^Gp_L=-R({}^Gq_n)[\delta\theta]_\times R({}^nq_b)\,{}^bp_L+\delta^Gp_b=R({}^Gq_n)[R({}^nq_b)\,{}^bp_L]_\times\delta\theta+\delta^Gp_b$$

Obviously,

$$\frac{\partial\,\delta^Gp_L}{\partial\,\delta\theta}=R({}^Gq_n)[R({}^nq_b)\,{}^bp_L]_\times$$

$$\frac{\partial\,\delta^Gp_L}{\partial\,\delta^Gp_b}=I$$
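These two blocks can be sanity-checked with a finite-difference test (a sketch with arbitrary rotations; the $(I-[\delta\theta]_\times)$ perturbation convention above is used directly):

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def random_rotation(rng):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return Q * np.sign(np.linalg.det(Q))        # force det = +1

rng = np.random.default_rng(1)
R_G_n, R_n_b = random_rotation(rng), random_rotation(rng)
b_p_L = np.array([0.1, -0.05, 0.2])             # lidar lever arm in the body frame
G_p_b = np.array([1.0, 2.0, 0.5])

def perturbed_G_p_L(dtheta, d_p_b):
    # ^Gp_L + δ^Gp_L = R(^Gq_n)(I - [δθ]x) R(^nq_b) ^bp_L + ^Gp_b + δ^Gp_b
    return R_G_n @ (np.eye(3) - skew(dtheta)) @ R_n_b @ b_p_L + G_p_b + d_p_b

J_theta = R_G_n @ skew(R_n_b @ b_p_L)           # analytic block ∂δ^Gp_L/∂δθ
eps = 1e-6
J_num = np.column_stack([
    (perturbed_G_p_L(eps * e, np.zeros(3)) - perturbed_G_p_L(np.zeros(3), np.zeros(3))) / eps
    for e in np.eye(3)])
print(np.allclose(J_theta, J_num, atol=1e-6))   # True
```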

And for attitude:

$$R({}^Gq_L)=R({}^Gq_n)R({}^nq_b)R^b_L$$

Find partial derivatives by adding perturbation terms:

$$R({}^Gq_L)(I+[\delta^G\theta_L]_\times)=R({}^Gq_n)(I-[\delta^n\theta_b]_\times)R({}^nq_b)R^b_L$$

We can get the following formula:

$$R({}^Gq_L)[\delta^G\theta_L]_\times=-R({}^Gq_n)[\delta^n\theta_b]_\times R({}^nq_b)R^b_L$$

$$[\delta^G\theta_L]_\times=-R_n^L[\delta^n\theta_b]_\times(R_n^L)^T$$

$$[\delta^G\theta_L]_\times=-[R^L_n\,\delta^n\theta_b]_\times$$

Obviously,

$$\frac{\partial\,\delta^G\theta_L}{\partial\,\delta\theta}=-R_n^L$$

$$\frac{\partial\,\delta^G\theta_L}{\partial\,\delta^Gp_b}=0$$
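The attitude block can be checked the same way (a sketch; the perturbation is applied through the exact exponential map, and scipy's `Rotation` is used only to convert between matrices and rotation vectors):

```python
import numpy as np
from scipy.spatial.transform import Rotation

R_G_n = Rotation.from_euler("z", 30, degrees=True).as_matrix()
R_n_b = Rotation.from_euler("xyz", [5, -3, 10], degrees=True).as_matrix()
R_b_L = Rotation.from_euler("y", 90, degrees=True).as_matrix()
R_G_L = R_G_n @ R_n_b @ R_b_L
R_n_L = (R_n_b @ R_b_L).T                          # R_n^L = (R(^n q_b) R^b_L)^T

# Small IMU attitude perturbation δ^nθ_b, applied as exp(-[δθ]x) ≈ I - [δθ]x.
dtheta = np.array([1e-4, -2e-4, 3e-4])
R_G_L_pert = R_G_n @ Rotation.from_rotvec(-dtheta).as_matrix() @ R_n_b @ R_b_L

# Read off δ^Gθ_L from R(^G q_L)^T · R_perturbed ≈ I + [δ^Gθ_L]x.
dtheta_L = Rotation.from_matrix(R_G_L.T @ R_G_L_pert).as_rotvec()
print(np.allclose(dtheta_L, -R_n_L @ dtheta))      # True: δ^Gθ_L = -R_n^L δ^nθ_b
```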

To sum up, we can obtain the Jacobian matrix:

$$J=\begin{bmatrix} -R^L_n & 0 & 0 & 0 & 0 & \cdots & 0 \\ R({}^Gq_n)[R({}^nq_b)\,{}^bp_L]_\times & 0 & 0 & 0 & I & \cdots & 0\end{bmatrix}$$
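Putting the pieces together, $J$ is a $6\times(15+6N+6M)$ matrix that is zero everywhere except in the IMU attitude and position columns. A sketch of assembling it (the column layout follows the IMU state order listed at the top, which is an assumption about the error-state ordering):

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def augmentation_jacobian(R_G_n, R_n_b, R_b_L, b_p_L, n_cam, n_lidar):
    """Jacobian of the new lidar pose error w.r.t. X_old.

    Assumed column layout: IMU error [δθ, δb_g, δv, δb_a, δp] (15 columns),
    then 6 per camera pose and 6 per existing lidar pose.
    Row layout: [δ^Gθ_L (3 rows), δ^Gp_L (3 rows)].
    """
    n_old = 15 + 6 * n_cam + 6 * n_lidar
    J = np.zeros((6, n_old))
    R_n_L = (R_n_b @ R_b_L).T                       # R_n^L
    J[0:3, 0:3]   = -R_n_L                          # ∂δ^Gθ_L / ∂δθ
    J[3:6, 0:3]   = R_G_n @ skew(R_n_b @ b_p_L)     # ∂δ^Gp_L / ∂δθ
    J[3:6, 12:15] = np.eye(3)                       # ∂δ^Gp_L / ∂δp (IMU position block)
    return J
```

The covariance is then augmented exactly as above, with `np.vstack([np.eye(n_old), J]) @ P_old @ np.vstack([np.eye(n_old), J]).T`.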
