IMU Preintegration Notes

Some notes on IMU preintegration, in particular detailed derivations of the formulas. The notation of the model definitions differs slightly from the paper; all other definitions follow the paper.

These notes were first written as a LaTeX document and then converted to Markdown for the blog, so if you find any errors in the formulas, corrections from readers are welcome.

IMU Model

\left\{\begin{aligned} {}_B\tilde{\boldsymbol{\omega}}_{WB}(t) =& {}_B\boldsymbol{\omega}_{WB}(t) + \mathbf{b}_g(t) + \mathbf{w}_g(t) \\ {}_B\tilde{\mathbf{a}}(t) =& \mathbf{R}_{WB}^\intercal(t) \big({}_W\mathbf{a}(t) - {}_W\mathbf{g}\big) + \mathbf{b}_a(t) + \mathbf{w}_a(t) \end{aligned}\right.

Kinematic Model

\left\{\begin{aligned} \dot{\mathbf{R}}_{WB} =& \mathbf{R}_{WB}(t)\, {}_B\boldsymbol{\omega}_{WB}(t)^\wedge \\ {}_W\dot{\mathbf{v}}(t) =& {}_W\mathbf{a}(t) = {}_W\mathbf{g} + \mathbf{R}_{WB}(t) \big({}_B\tilde{\mathbf{a}}(t) - \mathbf{b}_a(t) - \mathbf{w}_a(t)\big) \\ {}_W\dot{\mathbf{p}}(t) =& {}_W\mathbf{v}(t) \\ \dot{\mathbf{b}}_g(t) =& \mathbf{w}_{bg}(t) \\ \dot{\mathbf{b}}_a(t) =& \mathbf{w}_{ba}(t) \end{aligned}\right.\tag{1}\label{1}

The state at time t + \Delta t is obtained by integrating \eqref{1}:
\left\{\begin{aligned} \mathbf{R}(t+\Delta t) =& \mathbf{R}(t)\, \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}(t) - \mathbf{b}_g(t) - \mathbf{w}_{gd}(t)) \Delta t\big) \\ \mathbf{v}(t+\Delta t) =& \mathbf{v}(t) + \mathbf{g}\Delta t + \mathbf{R}(t) \big(\tilde{\mathbf{a}}(t) - \mathbf{b}_a(t) - \mathbf{w}_{ad}(t)\big)\Delta t \\ \mathbf{p}(t+\Delta t) =& \mathbf{p}(t) + \mathbf{v}(t)\Delta t + \frac12 \mathbf{g}\Delta t^2 + \frac12 \mathbf{R}(t) \big(\tilde{\mathbf{a}}(t) - \mathbf{b}_a(t) - \mathbf{w}_{ad}(t)\big)\Delta t^2 \end{aligned}\right.

Here we dropped the coordinate frame subscripts for readability. The covariance of the discrete-time noise \mathbf{w}_{gd} is a function of the sampling rate and relates to the continuous-time spectral noise \mathbf{w}_g via
\mathrm{Cov}(\mathbf{w}_{gd}(t)) = \frac{1}{\Delta t}\mathrm{Cov}(\mathbf{w}_{g}(t)).
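As a concrete illustration, here is a minimal Python sketch of one Euler integration step of the state and of converting a continuous-time noise density into its discrete-time counterpart (numpy/scipy assumed; all variable names are hypothetical):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def integrate_one_step(R, v, p, w_meas, a_meas, b_g, b_a, g, dt):
    """One Euler step of the kinematic model, mirroring the equations above."""
    w_corr = w_meas - b_g                       # bias-corrected gyroscope sample
    a_corr = a_meas - b_a                       # bias-corrected accelerometer sample
    p_new = p + v * dt + 0.5 * g * dt**2 + 0.5 * R @ a_corr * dt**2
    v_new = v + g * dt + R @ a_corr * dt
    R_new = R @ Rotation.from_rotvec(w_corr * dt).as_matrix()   # R * Exp((w - b_g) dt)
    return R_new, v_new, p_new

# Continuous-time noise density -> discrete-time standard deviation at rate 1/dt,
# following Cov(w_gd) = Cov(w_g) / dt (numbers below are only illustrative):
sigma_g_cont = 1.7e-4       # e.g. rad / s / sqrt(Hz)
dt = 0.005                  # 200 Hz IMU
sigma_gd = sigma_g_cont / np.sqrt(dt)
```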

All the IMU measurements between two keyframes at times k=i and k=j can be summarized in a single compound measurement, named the preintegrated IMU measurement, which constrains the motion between consecutive keyframes.

Iterating the IMU integration for all Δt intervals between two consecutive keyframes at times k=i and k=j, we find
\left\{\begin{aligned} \mathbf{R}_j =& \mathbf{R}_i \prod_{k=i}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,k} - \mathbf{w}_{gd,k}) \Delta t \big) \\ \mathbf{v}_j =& \mathbf{v}_i + \mathbf{g}\Delta t_{ij} + \sum_{k=i}^{j-1} \mathbf{R}_k \left(\tilde{\mathbf{a}}_k - \mathbf{b}_{a,k} - \mathbf{w}_{ad,k} \right) \Delta t \\ \mathbf{p}_j =& \mathbf{p}_i + \sum_{k=i}^{j-1} \left[\mathbf{v}_k \Delta t + \frac12 \mathbf{g}\Delta t^2 + \frac12 \mathbf{R}_k \big(\tilde{\mathbf{a}}_k - \mathbf{b}_{a,k} - \mathbf{w}_{ad,k} \big) \Delta t^2 \right] \end{aligned}\right. where we introduced the shorthands \Delta t_{ij} \overset{\mathrm{def}}{=} \sum_{k=i}^{j-1}\Delta t and (\cdot)_i \overset{\mathrm{def}}{=} (\cdot)(t_i) for readability.

To avoid the recomputation of the above integration whenever the state at time t_i changes, we define the following relative motion increments that are independent of the pose and velocity at t_i:

\begin{aligned} \mathbf{R}_j =& \mathbf{R}_i \prod_{k=i}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,k} - \mathbf{w}_{gd,k}) \Delta t \big) \\ =& \mathbf{R}_i \Delta\mathbf{R}_{ij} \\ \Rightarrow\quad \Delta\mathbf{R}_{ij} \overset{\mathrm{def}}{=}& \mathbf{R}_i^\intercal \mathbf{R_j} = \prod_{k=i}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,k} - \mathbf{w}_{gd,k}) \Delta t \big) \end{aligned}

\begin{aligned} \mathbf{v}_j =& \mathbf{v}_i + \mathbf{g}\Delta t_{ij} + \sum_{k=i}^{j-1} \mathbf{R}_k \left(\tilde{\mathbf{a}}_k - \mathbf{b}_{a,k} - \mathbf{w}_{ad,k} \right) \Delta t \\ =& \mathbf{v}_i + \mathbf{g}\Delta t_{ij} + \sum_{k=i}^{j-1} \mathbf{R}_i \Delta\mathbf{R}_{ik} \left(\tilde{\mathbf{a}}_k - \mathbf{b}_{a,k} - \mathbf{w}_{ad,k} \right) \Delta t\\ =& \mathbf{v}_i + \mathbf{g}\Delta t_{ij} + \mathbf{R}_i \Delta\mathbf{v}_{ij} \\ \Rightarrow\quad \Delta\mathbf{v}_{ij} \overset{\mathrm{def}}{=}& \mathbf{R}_i^\intercal (\mathbf{v}_j - \mathbf{v}_i - \mathbf{g}\Delta t_{ij}) = \sum_{k=i}^{j-1} \Delta\mathbf{R}_{ik} \left(\tilde{\mathbf{a}}_k - \mathbf{b}_{a,k} - \mathbf{w}_{ad,k} \right) \Delta t \end{aligned}

\begin{aligned} \mathbf{p}_j =& \mathbf{p}_i + \sum_{k=i}^{j-1} \left[\mathbf{v}_k \Delta t + \frac12 \mathbf{g}\Delta t^2 + \frac12 \mathbf{R}_k \big(\tilde{\mathbf{a}}_k - \mathbf{b}_{a,k} - \mathbf{w}_{ad,k} \big) \Delta t^2 \right] \\ =& \mathbf{p}_i + \sum_{k=i}^{j-1} \left[ (\mathbf{v}_i + \mathbf{g}\Delta t_{ik} + \mathbf{R}_i \Delta\mathbf{v}_{ik}) \Delta t + \frac12 \mathbf{g}\Delta t^2 + \frac12 \mathbf{R}_k \big(\tilde{\mathbf{a}}_k - \mathbf{b}_{a,k} - \mathbf{w}_{ad,k} \big) \Delta t^2 \right] \\ =& \mathbf{p}_i + \mathbf{v}_i \Delta t_{ij} + \sum_{k=i}^{j-1}\left[ \frac12 \mathbf{g}(2\Delta t_{ik}\Delta t + \Delta t^2) + \mathbf{R}_i \Delta\mathbf{v}_{ik} \Delta t + \frac12 \mathbf{R}_i\Delta\mathbf{R}_{ik} \big(\tilde{\mathbf{a}}_k - \mathbf{b}_{a,k} - \mathbf{w}_{ad,k} \big) \Delta t^2 \right] \\ =& \mathbf{p}_i + \mathbf{v}_i \Delta t_{ij} + \frac12 \mathbf{g}\Delta t_{ij}^2 + \mathbf{R}_i \sum_{k=i}^{j-1}\left[ \Delta\mathbf{v}_{ik} \Delta t + \frac12 \Delta\mathbf{R}_{ik} \big(\tilde{\mathbf{a}}_k - \mathbf{b}_{a,k} - \mathbf{w}_{ad,k} \big) \Delta t^2 \right] \\ =& \mathbf{p}_i + \mathbf{v}_i \Delta t_{ij} + \frac12 \mathbf{g}\Delta t_{ij}^2 + \mathbf{R}_i \Delta\mathbf{p}_{ij} \\ \Rightarrow\quad \Delta\mathbf{p}_{ij} \overset{\mathrm{def}}{=}& \mathbf{R}_i^\intercal \left(\mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i \Delta t_{ij} - \frac12 \mathbf{g}\Delta t_{ij}^2 \right) = \sum_{k=i}^{j-1}\left[ \Delta\mathbf{v}_{ik} \Delta t + \frac12 \Delta\mathbf{R}_{ik} \big(\tilde{\mathbf{a}}_k - \mathbf{b}_{a,k} - \mathbf{w}_{ad,k} \big) \Delta t^2 \right] \end{aligned}

where \sum_{k=i}^{j-1} \frac12 \mathbf{g}(2\Delta t_{ik}\Delta t + \Delta t^2) = \frac12 \mathbf{g}\Delta t_{ij}^2 can be proved by induction (a short verification is given after the summary below). \left\{\begin{aligned} \Delta\mathbf{R}_{ij} \overset{\mathrm{def}}{=}& \mathbf{R}_i^\intercal \mathbf{R_j} = \prod_{k=i}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,k} - \mathbf{w}_{gd,k}) \Delta t \big) \\ \Delta\mathbf{v}_{ij} \overset{\mathrm{def}}{=}& \mathbf{R}_i^\intercal (\mathbf{v}_j - \mathbf{v}_i - \mathbf{g}\Delta t_{ij}) = \sum_{k=i}^{j-1} \Delta\mathbf{R}_{ik} \left(\tilde{\mathbf{a}}_k - \mathbf{b}_{a,k} - \mathbf{w}_{ad,k} \right) \Delta t \\ \Delta\mathbf{p}_{ij} \overset{\mathrm{def}}{=}& \mathbf{R}_i^\intercal \left(\mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i \Delta t_{ij} - \frac12 \mathbf{g}\Delta t_{ij}^2 \right) = \sum_{k=i}^{j-1}\left[ \Delta\mathbf{v}_{ik} \Delta t + \frac12 \Delta\mathbf{R}_{ik} \big(\tilde{\mathbf{a}}_k - \mathbf{b}_{a,k} - \mathbf{w}_{ad,k} \big) \Delta t^2 \right] \end{aligned}\right.
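For completeness, the induction step for the gravity terms is short: assuming \sum_{k=i}^{j-2} \frac12 \mathbf{g}(2\Delta t_{ik}\Delta t + \Delta t^2) = \frac12 \mathbf{g}\Delta t_{i,j-1}^2, adding the last summand gives
\begin{aligned} \frac12 \mathbf{g}\Delta t_{i,j-1}^2 + \frac12 \mathbf{g}\big(2\Delta t_{i,j-1}\Delta t + \Delta t^2\big) = \frac12 \mathbf{g}\big(\Delta t_{i,j-1} + \Delta t\big)^2 = \frac12 \mathbf{g}\Delta t_{ij}^2 \end{aligned}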

The relative motion increments are independent of the state at time i as well as of gravitational effects, so we will be able to compute them directly from the inertial measurements between two keyframes. Unfortunately, the increments are still a function of the bias. We tackle this problem in two steps: first, we assume the biases are known; then, we show how to avoid repeating the integration when the bias estimate changes.

Preintegrated IMU Measurements

We assume that the bias remains constant between two keyframes:
\mathbf{b}_{g,i} = \mathbf{b}_{g,i+1} = \ldots = \mathbf{b}_{g,j-1}, \mathbf{b}_{a,i} = \mathbf{b}_{a,i+1} = \ldots = \mathbf{b}_{a,j-1}

Starting with the rotation increment: the noise is small, so we can use a first-order approximation, \begin{aligned} \Delta\mathbf{R}_{ij} =& \prod_{k=i}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,i} - \mathbf{w}_{gd,k}) \Delta t \big) = \prod_{k=i}^{j-1} \left[\mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,i}) \Delta t \big) \mathrm{Exp}\big( -\mathbf{J}_{r,k} \mathbf{w}_{gd,k}\Delta t \big)\right] \\ =& \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_i - \mathbf{b}_{g,i}) \Delta t\big) \mathrm{Exp}(-\mathbf{J}_{r,i} \mathbf{w}_{gd,i}\Delta t) \\ & \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+1} - \mathbf{b}_{g,i}) \Delta t\big) \mathrm{Exp}(-\mathbf{J}_{r,i+1} \mathbf{w}_{gd,i+1}\Delta t) \\ & \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+2} - \mathbf{b}_{g,i}) \Delta t\big) \mathrm{Exp}(-\mathbf{J}_{r,i+2} \mathbf{w}_{gd,i+2}\Delta t) \cdots \\ & \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{j-1} - \mathbf{b}_{g,i}) \Delta t\big) \mathrm{Exp}(-\mathbf{J}_{r,j-1} \mathbf{w}_{gd,j-1}\Delta t) \\ =& \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_i - \mathbf{b}_{g,i}) \Delta t\big) \\ & \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+1} - \mathbf{b}_{g,i}) \Delta t\big) \mathrm{Exp}\left[- \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+1} - \mathbf{b}_{g,i}) \Delta t\big)^\intercal \mathbf{J}_{r,i} \mathbf{w}_{gd,i}\Delta t\right] \\ & \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+2} - \mathbf{b}_{g,i}) \Delta t\big) \mathrm{Exp}\left[- \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+2} - \mathbf{b}_{g,i}) \Delta t\big)^\intercal \mathbf{J}_{r,i+1} \mathbf{w}_{gd,i+1}\Delta t\right] \\ & \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+3} - \mathbf{b}_{g,i}) \Delta t\big) \mathrm{Exp}\left[- \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+3} - \mathbf{b}_{g,i}) \Delta t\big)^\intercal \mathbf{J}_{r,i+2} \mathbf{w}_{gd,i+2}\Delta t\right] \cdots \\ & \mathrm{Exp}(-\mathbf{J}_{r,j-1} \mathbf{w}_{gd,j-1}\Delta t) \\ =& \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_i - \mathbf{b}_{g,i}) \Delta t\big) \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+1} - \mathbf{b}_{g,i}) \Delta t\big) \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+2} - \mathbf{b}_{g,i}) \Delta t\big) \\ & \mathrm{Exp}\left[- \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+2} - \mathbf{b}_{g,i}) \Delta t\big)^\intercal \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+1} - \mathbf{b}_{g,i}) \Delta t\big)^\intercal \mathbf{J}_{r,i} \mathbf{w}_{gd,i}\Delta t\right] \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+3} - \mathbf{b}_{g,i}) \Delta t\big) \\ & \mathrm{Exp}\left[- \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+3} - \mathbf{b}_{g,i}) \Delta t\big)^\intercal \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+2} - \mathbf{b}_{g,i}) \Delta t\big)^\intercal \mathbf{J}_{r,i+1} \mathbf{w}_{gd,i+1}\Delta t\right] \cdots \\ & \mathrm{Exp}(-\mathbf{J}_{r,j-1} \mathbf{w}_{gd,j-1}\Delta t) \\ =& \prod_{k=i}^{i+2} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,i})\Delta t\big) \cdot \mathrm{Exp}\left\{- \left[\prod_{k=i+1}^{i+2} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,i})\Delta t\big)\right]^\intercal \mathbf{J}_{r,i} \mathbf{w}_{gd,i}\Delta t\right\} \\ & \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{i+3} - \mathbf{b}_{g,i}) \Delta t\big) \mathrm{Exp}\left\{- \left[\prod_{k=i+2}^{i+3} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,i})\Delta t\big)\right]^\intercal \mathbf{J}_{r,i+1} \mathbf{w}_{gd,i+1}\Delta t\right\}
\cdots \\ & \mathrm{Exp}(-\mathbf{J}_{r,j-1} \mathbf{w}_{gd,j-1}\Delta t) \\ =& \prod_{k=i}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,i})\Delta t\big) \cdot \mathrm{Exp}\left\{- \left[\prod_{k=i+1}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,i})\Delta t\big)\right]^\intercal \mathbf{J}_{r,i} \mathbf{w}_{gd,i}\Delta t\right\} \\ & \mathrm{Exp}\left\{- \left[\prod_{k=i+2}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,i})\Delta t\big)\right]^\intercal \mathbf{J}_{r,i+1} \mathbf{w}_{gd,i+1}\Delta t\right\} \cdots \mathrm{Exp}(-\mathbf{J}_{r,j-1} \mathbf{w}_{gd,j-1}\Delta t) \\ =& \prod_{k=i}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,i})\Delta t\big) \prod_{k=i}^{j-1} \mathrm{Exp}\left\{- \left[\prod_{k_1=k+1}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_{k_1} - \mathbf{b}_{g,i})\Delta t\big)\right]^\intercal \mathbf{J}_{r,k} \mathbf{w}_{gd,k}\Delta t\right\} \\ \overset{\mathrm{def}}{=}& \Delta \tilde{\mathbf{R}}_{ij} \prod_{k=i}^{j-1} \mathrm{Exp}\left(- \Delta\tilde{\mathbf{R}}_{k+1,j}^\intercal \mathbf{J}_{r,k} \mathbf{w}_{gd,k}\Delta t\right) \overset{\mathrm{def}}{=} \Delta \tilde{\mathbf{R}}_{ij} \mathrm{Exp}\left(-\delta \boldsymbol{\phi}_{ij} \right) \end{aligned}
where \mathbf{J}_{r,k} = \mathbf{J}_r \big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,i}) \Delta t\big) is the right Jacobian of SO(3), \Delta \mathbf{\tilde{R}}_{ij} \overset{\mathrm{def}}{=} \prod_{k=i}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{g,i})\Delta t\big) is the preintegrated rotation measurement, and \delta\boldsymbol{\phi}_{ij} is the noise of \Delta \mathbf{\tilde{R}}_{ij}.
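For reference, the two standard SO(3) relations used repeatedly in the manipulation above are the first-order approximation involving the right Jacobian and the adjoint property:
\begin{aligned} \mathrm{Exp}(\boldsymbol{\phi} + \delta\boldsymbol{\phi}) \approx& \mathrm{Exp}(\boldsymbol{\phi})\, \mathrm{Exp}\big(\mathbf{J}_r(\boldsymbol{\phi})\, \delta\boldsymbol{\phi}\big) \\ \mathrm{Exp}(\boldsymbol{\phi})\, \mathbf{R} =& \mathbf{R}\, \mathrm{Exp}\big(\mathbf{R}^\intercal \boldsymbol{\phi}\big) \end{aligned}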

Using the first-order approximation \mathrm{Exp}(-\delta\boldsymbol{\phi}_{ij}) \approx \mathbf{I} - \delta\boldsymbol{\phi}_{ij}^\wedge, the identity \mathbf{a}^\wedge \mathbf{b} = -\mathbf{b}^\wedge \mathbf{a}, and dropping higher-order noise terms, we obtain
\begin{aligned} \Delta\mathbf{v}_{ij} =& \sum_{k=i}^{j-1} \Delta\mathbf{R}_{ik} \left(\tilde{\mathbf{a}}_k - \mathbf{b}_{ai} - \mathbf{w}_{ad,k} \right) \Delta t \\ =& \sum_{k=i}^{j-1} \Delta\mathbf{\tilde{R}}_{ik} \mathrm{Exp}(-\delta\boldsymbol{\phi}_{ik}) \left(\tilde{\mathbf{a}}_k - \mathbf{b}_{ai} - \mathbf{w}_{ad,k} \right) \Delta t \\ =& \sum_{k=i}^{j-1} \left[ \Delta\mathbf{\tilde{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai}) \Delta t + \Delta\mathbf{\tilde{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai})^\wedge \delta\boldsymbol{\phi}_{ik} \Delta t - \Delta\mathbf{\tilde{R}}_{ik} \mathbf{w}_{ad,k}\Delta t \right] \\ \overset{\mathrm{def}}{=}& \Delta\mathbf{\tilde{v}}_{ij} + \sum_{k=i}^{j-1} \left[\Delta\mathbf{\tilde{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai})^\wedge \delta\boldsymbol{\phi}_{ik} \Delta t - \Delta\mathbf{\tilde{R}}_{ik} \mathbf{w}_{ad,k}\Delta t \right] \\ \overset{\mathrm{def}}{=}& \Delta\mathbf{\tilde{v}}_{ij} - \delta \mathbf{v}_{ij} \end{aligned}
where we define the preintegrated velocity measurement \Delta\mathbf{\tilde{v}}_{ij} \overset{\mathrm{def}}{=} \sum_{k=i}^{j-1} \Delta\mathbf{\tilde{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai}) \Delta t and its noise \delta\mathbf{v}_{ij}.

Similarly, we have \begin{aligned} \Delta\mathbf{p}_{ij} =& \sum_{k=i}^{j-1}\left[ \Delta\mathbf{v}_{ik} \Delta t + \frac12 \Delta\mathbf{R}_{ik} \big(\tilde{\mathbf{a}}_k - \mathbf{b}_{ai} - \mathbf{w}_{ad,k} \big) \Delta t^2 \right] \\ =& \sum_{k=i}^{j-1}\left[\Delta\mathbf{\tilde{v}}_{ik} \Delta t - \delta \mathbf{v}_{ik} \Delta t + \frac12 \Delta \tilde{\mathbf{R}}_{ik} \left(\mathbf{I} - \delta\boldsymbol{\phi}_{ik}^\wedge\right) \big(\tilde{\mathbf{a}}_k - \mathbf{b}_{ai} - \mathbf{w}_{ad,k} \big) \Delta t^2 \right]\\ =& \sum_{k=i}^{j-1}\left[\Delta\mathbf{\tilde{v}}_{ik} \Delta t - \delta \mathbf{v}_{ik} \Delta t + \frac12 \Delta \tilde{\mathbf{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai}) \Delta t^2 + \frac12 \Delta \tilde{\mathbf{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai})^\wedge \delta\boldsymbol{\phi}_{ik} \Delta t^2 - \frac12 \Delta \tilde{\mathbf{R}}_{ik} \mathbf{w}_{ad,k} \Delta t^2 \right] \\ \overset{\mathrm{def}}{=}& \Delta\mathbf{\tilde{p}}_{ij} + \sum_{k=i}^{j-1}\left[-\delta \mathbf{v}_{ik} \Delta t + \frac12 \Delta \tilde{\mathbf{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai})^\wedge \delta\boldsymbol{\phi}_{ik} \Delta t^2 - \frac12 \Delta \tilde{\mathbf{R}}_{ik} \mathbf{w}_{ad,k} \Delta t^2 \right] \\ \overset{\mathrm{def}}{=}& \Delta\mathbf{\tilde{p}}_{ij} - \delta\mathbf{p}_{ij} \end{aligned} where we defined the preintegrated position measurement
\Delta\mathbf{\tilde{p}}_{ij} \overset{\mathrm{def}}{=} \sum_{k=i}^{j-1}\left[\Delta\mathbf{\tilde{v}}_{ik} \Delta t + \frac12 \Delta \tilde{\mathbf{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai}) \Delta t^2 \right]
and its noise \delta\mathbf{p}_{ij}.

\left\{\begin{aligned} \Delta \mathbf{\tilde{R}}_{ij} \overset{\mathrm{def}}{=}& \prod_{k=i}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{gi})\Delta t\big) \\ \Delta\mathbf{\tilde{v}}_{ij} \overset{\mathrm{def}}{=}& \sum_{k=i}^{j-1} \Delta\mathbf{\tilde{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai}) \Delta t \\ \Delta\mathbf{\tilde{p}}_{ij} \overset{\mathrm{def}}{=}& \sum_{k=i}^{j-1}\left[\Delta\mathbf{\tilde{v}}_{ik} \Delta t + \frac12 \Delta \tilde{\mathbf{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai}) \Delta t^2 \right] \end{aligned}\right.

\left\{\begin{aligned} \Delta\mathbf{R}_{ij} \overset{\mathrm{def}}{=}& \Delta \tilde{\mathbf{R}}_{ij} \mathrm{Exp}\left(-\delta \boldsymbol{\phi}_{ij} \right) \\ \Delta\mathbf{v}_{ij} \overset{\mathrm{def}}{=}& \Delta\mathbf{\tilde{v}}_{ij} - \delta \mathbf{v}_{ij} \\ \Delta\mathbf{p}_{ij} \overset{\mathrm{def}}{=}& \Delta\mathbf{\tilde{p}}_{ij} - \delta\mathbf{p}_{ij} \\ \end{aligned}\right.
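As a minimal sketch (not a reference implementation), the noise-free preintegrated measurements \Delta\mathbf{\tilde{R}}_{ij}, \Delta\mathbf{\tilde{v}}_{ij}, \Delta\mathbf{\tilde{p}}_{ij} defined above can be accumulated directly from the IMU samples between the two keyframes; the variable names gyro, acc, b_g, b_a, dt below are hypothetical:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate(gyro, acc, b_g, b_a, dt):
    """Accumulate Delta R_ij, Delta v_ij, Delta p_ij between two keyframes."""
    dR = np.eye(3)      # \Delta\tilde{R}_{ik}, identity at k = i
    dv = np.zeros(3)    # \Delta\tilde{v}_{ik}
    dp = np.zeros(3)    # \Delta\tilde{p}_{ik}
    for w_k, a_k in zip(gyro, acc):
        a_corr = a_k - b_a
        # Use the step-k values of dv and dR before advancing them, as in the sums above.
        dp = dp + dv * dt + 0.5 * dR @ a_corr * dt**2
        dv = dv + dR @ a_corr * dt
        dR = dR @ Rotation.from_rotvec((w_k - b_g) * dt).as_matrix()
    return dR, dv, dp
```

Note the update order: \Delta\mathbf{\tilde{p}} uses \Delta\mathbf{\tilde{v}}_{ik} and \Delta\mathbf{\tilde{R}}_{ik} before they are advanced to k+1, matching the definitions of the sums.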

Preintegrated Measurement Model
\left\{\begin{aligned} \Delta\mathbf{\tilde{R}}_{ij} =& \Delta\mathbf{R}_{ij} \mathrm{Exp}(\delta\boldsymbol{\phi}_{ij}) = \mathbf{R}_i^\intercal \mathbf{R}_j \mathrm{Exp}(\delta\boldsymbol{\phi}_{ij}) \\ \Delta\mathbf{\tilde{v}}_{ij} =& \Delta\mathbf{v}_{ij} + \delta\mathbf{v}_{ij} = \mathbf{R}_i^\intercal (\mathbf{v}_j - \mathbf{v}_i - \mathbf{g}\Delta t_{ij}) + \delta\mathbf{v}_{ij} \\ \Delta\mathbf{\tilde{p}}_{ij} =& \Delta\mathbf{p}_{ij} + \delta\mathbf{p}_{ij} = \mathbf{R}_i^\intercal \left(\mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i \Delta t_{ij} - \frac12 \mathbf{g}\Delta t_{ij}^2 \right) + \delta\mathbf{p}_{ij} \end{aligned}\right.\tag{2}\label{2}

where the measurements are written as a function of the state “plus” a random noise, described by the random vector [\delta\boldsymbol{\phi}_{ij}^\intercal, \delta\mathbf{v}_{ij}^\intercal, \delta\mathbf{p}_{ij}^\intercal]^\intercal; this makes the definition of the log-likelihood straightforward.

Noise Propagation

We calculate the covariance \Sigma_{ij} of the preintegrated measurement noise
\mathbf{w}_{ij}^{\Delta} \overset{\mathrm{def}}{=} [\delta\boldsymbol{\phi}_{ij}^\intercal, \delta\mathbf{v}_{ij}^\intercal, \delta\mathbf{p}_{ij}^\intercal]^\intercal \sim \mathcal{N}(\mathbf{0}_{9\times 1}, \Sigma_{ij})

\begin{aligned} \mathrm{Exp}\left(-\delta \boldsymbol{\phi}_{ij} \right) =& \prod_{k=i}^{j-1} \mathrm{Exp}\left(- \Delta\tilde{\mathbf{R}}_{k+1,j}^\intercal \mathbf{J}_{r,k} \mathbf{w}_{gd,k}\Delta t\right) \\ \Rightarrow\quad \delta\boldsymbol{\phi}_{ij} =& -\mathrm{Log} \prod_{k=i}^{j-1} \mathrm{Exp}\left(- \Delta\tilde{\mathbf{R}}_{k+1,j}^\intercal \mathbf{J}_{r,k} \mathbf{w}_{gd,k}\Delta t\right) \\ =& \sum_{k=i}^{j-1} \Delta\tilde{\mathbf{R}}_{k+1,j}^\intercal \mathbf{J}_{r,k} \mathbf{w}_{gd,k}\Delta t \end{aligned}

\delta \mathbf{v}_{ij} = \sum_{k=i}^{j-1} \left[-\Delta\mathbf{\tilde{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai})^\wedge \delta\boldsymbol{\phi}_{ik} \Delta t + \Delta\mathbf{\tilde{R}}_{ik} \mathbf{w}_{ad,k}\Delta t \right]

\delta\mathbf{p}_{ij} = \sum_{k=i}^{j-1}\left[\delta \mathbf{v}_{ik} \Delta t - \frac12 \Delta \tilde{\mathbf{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai})^\wedge \delta\boldsymbol{\phi}_{ik} \Delta t^2 + \frac12 \Delta \tilde{\mathbf{R}}_{ik} \mathbf{w}_{ad,k} \Delta t^2 \right]

\left\{\begin{aligned} \delta\boldsymbol{\phi}_{ij} =& \sum_{k=i}^{j-1} \Delta\tilde{\mathbf{R}}_{k+1,j}^\intercal \mathbf{J}_{r,k} \mathbf{w}_{gd,k}\Delta t \\ \delta \mathbf{v}_{ij} =& \sum_{k=i}^{j-1} \left[-\Delta\mathbf{\tilde{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai})^\wedge \delta\boldsymbol{\phi}_{ik} \Delta t + \Delta\mathbf{\tilde{R}}_{ik} \mathbf{w}_{ad,k}\Delta t \right] \\ \delta\mathbf{p}_{ij} =& \sum_{k=i}^{j-1}\left[\delta \mathbf{v}_{ik} \Delta t - \frac12 \Delta \tilde{\mathbf{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai})^\wedge \delta\boldsymbol{\phi}_{ik} \Delta t^2 + \frac12 \Delta \tilde{\mathbf{R}}_{ik} \mathbf{w}_{ad,k} \Delta t^2 \right] \end{aligned}\right.

The noise terms \delta\boldsymbol{\phi}_{ij}, \delta \mathbf{v}_{ij} and \delta \mathbf{p}_{ij} are all linear combinations of the zero-mean noise terms \mathbf{w}_{gd} and \mathbf{w}_{ad}, so the covariance of \mathbf{w}_{ij}^{\Delta} can be obtained by simple linear propagation.

The covariance can also be conveniently computed in iterative form.
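For example, one way to write one propagation step, derived directly from the sums above (processing one IMU sample at a time, with \Delta\tilde{\mathbf{R}}_{k,k+1} = \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_{gi})\Delta t\big)), is
\left\{\begin{aligned} \delta\boldsymbol{\phi}_{i,k+1} =& \Delta\tilde{\mathbf{R}}_{k,k+1}^\intercal \delta\boldsymbol{\phi}_{ik} + \mathbf{J}_{r,k} \mathbf{w}_{gd,k}\Delta t \\ \delta\mathbf{v}_{i,k+1} =& \delta\mathbf{v}_{ik} - \Delta\mathbf{\tilde{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai})^\wedge \delta\boldsymbol{\phi}_{ik} \Delta t + \Delta\mathbf{\tilde{R}}_{ik} \mathbf{w}_{ad,k}\Delta t \\ \delta\mathbf{p}_{i,k+1} =& \delta\mathbf{p}_{ik} + \delta\mathbf{v}_{ik} \Delta t - \frac12 \Delta \tilde{\mathbf{R}}_{ik} (\tilde{\mathbf{a}}_k - \mathbf{b}_{ai})^\wedge \delta\boldsymbol{\phi}_{ik} \Delta t^2 + \frac12 \Delta \tilde{\mathbf{R}}_{ik} \mathbf{w}_{ad,k} \Delta t^2 \end{aligned}\right.
which is linear in [\delta\boldsymbol{\phi}_{ik}^\intercal, \delta\mathbf{v}_{ik}^\intercal, \delta\mathbf{p}_{ik}^\intercal]^\intercal and in [\mathbf{w}_{gd,k}^\intercal, \mathbf{w}_{ad,k}^\intercal]^\intercal, so \Sigma_{i,k+1} = \mathbf{A}_k \Sigma_{ik} \mathbf{A}_k^\intercal + \mathbf{B}_k \Sigma_{\boldsymbol{\eta}} \mathbf{B}_k^\intercal with \mathbf{A}_k, \mathbf{B}_k read off from the three equations, starting from \Sigma_{ii} = \mathbf{0}_{9\times 9}.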

Incorporating Bias Updates

In the previous discussion, we assumed the bias [\mathbf{b}_{gi}, \mathbf{b}_{ai}] used during preintegration between k=i and k=j to be correct and constant. In practice, however, the bias estimate changes by a small amount \delta\mathbf{b} during optimization. Given a bias update \bar{\mathbf{b}} + \delta\mathbf{b} \rightarrow \mathbf{b}, we can update the delta measurements using a first-order expansion.

Consider the case in which we get a new estimate \bar{\mathbf{b}}_i + \delta\mathbf{b}_i \rightarrow \hat{\mathbf{b}}_i, where \delta\mathbf{b}_i is a small correction w.r.t. the previous estimate \bar{\mathbf{b}}_i. Define
\left\{\begin{aligned} \Delta\bar{\mathbf{R}}_{ij} \overset{\mathrm{def}}{=}& \Delta\tilde{\mathbf{R}}_{ij}(\bar{\mathbf{b}}_i) \\ \Delta\bar{\mathbf{v}}_{ij} \overset{\mathrm{def}}{=}& \Delta\tilde{\mathbf{v}}_{ij}(\bar{\mathbf{b}}_i) \\ \Delta\bar{\mathbf{p}}_{ij} \overset{\mathrm{def}}{=}& \Delta\tilde{\mathbf{p}}_{ij}(\bar{\mathbf{b}}_i) \end{aligned}\right.

\begin{aligned} \Delta \mathbf{\tilde{R}}_{ij} (\hat{\mathbf{b}}_i) =& \prod_{k=i}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \hat{\mathbf{b}}_{gi})\Delta t\big) = \prod_{k=i}^{j-1} \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \bar{\mathbf{b}}_{gi} - \delta\mathbf{b}_{gi})\Delta t\big) \\ =& \prod_{k=i}^{j-1} \left[\mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \bar{\mathbf{b}}_{gi})\Delta t\big) \mathrm{Exp}\left(-\mathbf{J}_{rk}\delta\mathbf{b}_{gi} \Delta t \right) \right] = \Delta\bar{\mathbf{R}}_{ij} \prod_{k=i}^{j-1} \mathrm{Exp}\left(-\Delta\bar{\mathbf{R}}_{k+1, j}^\intercal \mathbf{J}_{rk}\delta\mathbf{b}_{gi} \Delta t \right) \\ =& \Delta\bar{\mathbf{R}}_{ij} \mathrm{Exp} \left(\sum_{k=i}^{j-1} -\Delta\bar{\mathbf{R}}_{k+1, j}^\intercal \mathbf{J}_{rk}\delta\mathbf{b}_{gi} \Delta t\right) \overset{\mathrm{def}}{=} \Delta\bar{\mathbf{R}}_{ij} \mathrm{Exp} \left( \frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \right) \end{aligned}

\begin{aligned} \Delta\mathbf{\tilde{v}}_{ij}(\hat{\mathbf{b}}_i) =& \sum_{k=i}^{j-1} \Delta\mathbf{\tilde{R}}_{ik}(\hat{\mathbf{b}}_i) \left(\tilde{\mathbf{a}}_k - \hat{\mathbf{b}}_{ai} \right) \Delta t \\ =& \sum_{k=i}^{j-1} \Delta\mathbf{\bar{R}}_{ik} \mathrm{Exp} \left( \frac{\partial\Delta\bar{\mathbf{R}}_{ik}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \right) \left(\tilde{\mathbf{a}}_k - \bar{\mathbf{b}}_{ai} - \delta\mathbf{b}_{ai}\right) \Delta t \\ =& \sum_{k=i}^{j-1} \Delta\mathbf{\bar{R}}_{ik} \left[\mathbf{I} + \left( \frac{\partial\Delta\bar{\mathbf{R}}_{ik}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \right)^\wedge \right] \left(\tilde{\mathbf{a}}_k - \bar{\mathbf{b}}_{ai} - \delta\mathbf{b}_{ai}\right) \Delta t \\ =& \sum_{k=i}^{j-1} \Delta\mathbf{\bar{R}}_{ik} \left(\tilde{\mathbf{a}}_k - \bar{\mathbf{b}}_{ai} \right) \Delta t - \sum_{k=i}^{j-1} \Delta\mathbf{\bar{R}}_{ik} \delta\mathbf{b}_{ai} \Delta t - \sum_{k=i}^{j-1} \Delta\mathbf{\bar{R}}_{ik} \left( \tilde{\mathbf{a}}_k - \bar{\mathbf{b}}_{ai} \right)^\wedge \frac{\partial\Delta\bar{\mathbf{R}}_{ik}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \Delta t \\ =& \Delta\mathbf{\bar{v}}_{ij} - \sum_{k=i}^{j-1} \Delta\mathbf{\bar{R}}_{ik} \Delta t \delta\mathbf{b}_{ai} - \sum_{k=i}^{j-1} \Delta\mathbf{\bar{R}}_{ik} \left( \tilde{\mathbf{a}}_k - \bar{\mathbf{b}}_{ai} \right)^\wedge \frac{\partial\Delta\bar{\mathbf{R}}_{ik}}{\partial\mathbf{b}_{g}} \Delta t \delta\mathbf{b}_{gi} \\ \overset{\mathrm{def}}{=}& \Delta\mathbf{\bar{v}}_{ij} + \frac{\partial\Delta\mathbf{\bar{v}}_{ij}}{\partial\mathbf{b}_a} \delta\mathbf{b}_{ai} + \frac{\partial\Delta\mathbf{\bar{v}}_{ij}}{\partial\mathbf{b}_g} \delta\mathbf{b}_{gi} \end{aligned}

\begin{aligned} \Delta\mathbf{\tilde{p}}_{ij}(\hat{\mathbf{b}}_i) =& \sum_{k=i}^{j-1}\left[\Delta\mathbf{\tilde{v}}_{ik}(\hat{\mathbf{b}}_i) \Delta t + \frac12 \Delta \tilde{\mathbf{R}}_{ik}(\hat{\mathbf{b}}_i) \left(\tilde{\mathbf{a}}_k - \hat{\mathbf{b}}_{ai} \right) \Delta t^2 \right] \\ =& \sum_{k=i}^{j-1}\left[\Delta\mathbf{\bar{v}}_{ik} \Delta t + \frac{\partial\Delta\mathbf{\bar{v}}_{ik}}{\partial\mathbf{b}_a} \delta\mathbf{b}_{ai} \Delta t + \frac{\partial\Delta\mathbf{\bar{v}}_{ik}}{\partial\mathbf{b}_g} \delta\mathbf{b}_{gi} \Delta t \right] + \\ & \sum_{k=i}^{j-1}\left[ \frac12 \Delta \bar{\mathbf{R}}_{ik} \mathrm{Exp} \left( \frac{\partial\Delta\bar{\mathbf{R}}_{ik}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \right) \left(\tilde{\mathbf{a}}_k - \bar{\mathbf{b}}_{ai} - \delta\mathbf{b}_{ai} \right) \Delta t^2 \right] \\ =& \sum_{k=i}^{j-1}\left[\Delta\mathbf{\bar{v}}_{ik} \Delta t + \frac12 \Delta \bar{\mathbf{R}}_{ik} \left(\tilde{\mathbf{a}}_k - \bar{\mathbf{b}}_{ai} \right) \Delta t^2 \right] + \\ & \sum_{k=i}^{j-1}\left[ \frac{\partial\Delta\mathbf{\bar{v}}_{ik}}{\partial\mathbf{b}_a} \delta\mathbf{b}_{ai} \Delta t + \frac{\partial\Delta\mathbf{\bar{v}}_{ik}}{\partial\mathbf{b}_g} \delta\mathbf{b}_{gi} \Delta t - \frac12 \Delta \bar{\mathbf{R}}_{ik} \delta\mathbf{b}_{ai} \Delta t^2 - \frac12 \Delta\bar{\mathbf{R}}_{ik} \left(\tilde{\mathbf{a}}_k - \bar{\mathbf{b}}_{ai} \right)^\wedge \frac{\partial\Delta\bar{\mathbf{R}}_{ik}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \Delta t^2 \right] \\ =& \Delta\mathbf{\bar{p}}_{ij} + \sum_{k=i}^{j-1} \left[\frac{\partial\Delta\mathbf{\bar{v}}_{ik}}{\partial\mathbf{b}_a} \Delta t - \frac12 \Delta \bar{\mathbf{R}}_{ik} \Delta t^2\right] \delta\mathbf{b}_{ai} + \sum_{k=i}^{j-1} \left[\frac{\partial\Delta\mathbf{\bar{v}}_{ik}}{\partial\mathbf{b}_g} \Delta t - \frac12 \Delta\bar{\mathbf{R}}_{ik} \left(\tilde{\mathbf{a}}_k - \bar{\mathbf{b}}_{ai} \right)^\wedge \frac{\partial\Delta\bar{\mathbf{R}}_{ik}}{\partial\mathbf{b}_{g}} \Delta t^2 \right] \delta\mathbf{b}_{gi} \\ \overset{\mathrm{def}}{=}& \Delta\mathbf{\bar{p}}_{ij} + \frac{\partial\Delta\mathbf{\bar{p}}_{ij}}{\partial\mathbf{b}_a} \delta\mathbf{b}_{ai} + \frac{\partial\Delta\mathbf{\bar{p}}_{ij}}{\partial\mathbf{b}_g} \delta\mathbf{b}_{gi} \end{aligned}

\left\{\begin{aligned} \Delta \mathbf{\tilde{R}}_{ij} (\hat{\mathbf{b}}_i) =& \Delta\bar{\mathbf{R}}_{ij} \mathrm{Exp} \left( \frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \right) \\ \Delta\mathbf{\tilde{v}}_{ij}(\hat{\mathbf{b}}_i) =& \Delta\mathbf{\bar{v}}_{ij} + \frac{\partial\Delta\mathbf{\bar{v}}_{ij}}{\partial\mathbf{b}_a} \delta\mathbf{b}_{ai} + \frac{\partial\Delta\mathbf{\bar{v}}_{ij}}{\partial\mathbf{b}_g} \delta\mathbf{b}_{gi} \\ \Delta\mathbf{\tilde{p}}_{ij}(\hat{\mathbf{b}_i}) =& \Delta\mathbf{\bar{p}}_{ij} + \frac{\partial\Delta\mathbf{\bar{p}}_{ij}}{\partial\mathbf{b}_a} \delta\mathbf{b}_{ai} + \frac{\partial\Delta\mathbf{\bar{p}}_{ij}}{\partial\mathbf{b}_g} \delta\mathbf{b}_{gi} \end{aligned}\right.\tag{3}\label{3}

where \left\{\begin{aligned} \frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \overset{\mathrm{def}}{=}& -\sum_{k=i}^{j-1} \Delta\bar{\mathbf{R}}_{k+1, j}^\intercal \mathbf{J}_{rk} \Delta t \\ \frac{\partial\Delta\mathbf{\bar{v}}_{ij}}{\partial\mathbf{b}_a} \overset{\mathrm{def}}{=}& -\sum_{k=i}^{j-1} \Delta\mathbf{\bar{R}}_{ik} \Delta t \\ \frac{\partial\Delta\mathbf{\bar{v}}_{ij}}{\partial\mathbf{b}_g} \overset{\mathrm{def}}{=}& - \sum_{k=i}^{j-1} \Delta\mathbf{\bar{R}}_{ik} \left( \tilde{\mathbf{a}}_k - \bar{\mathbf{b}}_{ai} \right)^\wedge \frac{\partial\Delta\bar{\mathbf{R}}_{ik}}{\partial\mathbf{b}_{g}} \Delta t \\ \frac{\partial\Delta\mathbf{\bar{p}}_{ij}}{\partial\mathbf{b}_a} \overset{\mathrm{def}}{=}& \sum_{k=i}^{j-1} \left[\frac{\partial\Delta\mathbf{\bar{v}}_{ik}}{\partial\mathbf{b}_a} \Delta t - \frac12 \Delta \bar{\mathbf{R}}_{ik} \Delta t^2\right] \\ \frac{\partial\Delta\mathbf{\bar{p}}_{ij}}{\partial\mathbf{b}_g} \overset{\mathrm{def}}{=}& \sum_{k=i}^{j-1} \left[\frac{\partial\Delta\mathbf{\bar{v}}_{ik}}{\partial\mathbf{b}_g} \Delta t - \frac12 \Delta\bar{\mathbf{R}}_{ik} \left(\tilde{\mathbf{a}}_k - \bar{\mathbf{b}}_{ai} \right)^\wedge \frac{\partial\Delta\bar{\mathbf{R}}_{ik}}{\partial\mathbf{b}_{g}} \Delta t^2 \right] \end{aligned}\right.

Note that the Jacobians can be computed incrementally.
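Concretely, one way to write the incremental updates implied by the sums above (all right-hand-side quantities evaluated at step k, all Jacobians initialized to \mathbf{0} at k=i, and \Delta\bar{\mathbf{R}}_{k,k+1} = \mathrm{Exp}\big((\tilde{\boldsymbol{\omega}}_k - \bar{\mathbf{b}}_{gi})\Delta t\big)) is
\left\{\begin{aligned} \frac{\partial\Delta\bar{\mathbf{R}}_{i,k+1}}{\partial\mathbf{b}_{g}} =& \Delta\bar{\mathbf{R}}_{k,k+1}^\intercal \frac{\partial\Delta\bar{\mathbf{R}}_{ik}}{\partial\mathbf{b}_{g}} - \mathbf{J}_{rk}\Delta t \\ \frac{\partial\Delta\mathbf{\bar{v}}_{i,k+1}}{\partial\mathbf{b}_a} =& \frac{\partial\Delta\mathbf{\bar{v}}_{ik}}{\partial\mathbf{b}_a} - \Delta\bar{\mathbf{R}}_{ik}\Delta t \\ \frac{\partial\Delta\mathbf{\bar{v}}_{i,k+1}}{\partial\mathbf{b}_g} =& \frac{\partial\Delta\mathbf{\bar{v}}_{ik}}{\partial\mathbf{b}_g} - \Delta\bar{\mathbf{R}}_{ik} \big(\tilde{\mathbf{a}}_k - \bar{\mathbf{b}}_{ai}\big)^\wedge \frac{\partial\Delta\bar{\mathbf{R}}_{ik}}{\partial\mathbf{b}_{g}}\Delta t \\ \frac{\partial\Delta\mathbf{\bar{p}}_{i,k+1}}{\partial\mathbf{b}_a} =& \frac{\partial\Delta\mathbf{\bar{p}}_{ik}}{\partial\mathbf{b}_a} + \frac{\partial\Delta\mathbf{\bar{v}}_{ik}}{\partial\mathbf{b}_a}\Delta t - \frac12 \Delta\bar{\mathbf{R}}_{ik}\Delta t^2 \\ \frac{\partial\Delta\mathbf{\bar{p}}_{i,k+1}}{\partial\mathbf{b}_g} =& \frac{\partial\Delta\mathbf{\bar{p}}_{ik}}{\partial\mathbf{b}_g} + \frac{\partial\Delta\mathbf{\bar{v}}_{ik}}{\partial\mathbf{b}_g}\Delta t - \frac12 \Delta\bar{\mathbf{R}}_{ik} \big(\tilde{\mathbf{a}}_k - \bar{\mathbf{b}}_{ai}\big)^\wedge \frac{\partial\Delta\bar{\mathbf{R}}_{ik}}{\partial\mathbf{b}_{g}}\Delta t^2 \end{aligned}\right.
so the bias Jacobians can be accumulated in the same loop that accumulates the preintegrated measurements, provided the position Jacobians are updated with the step-k values of the velocity and rotation Jacobians.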

IMU Measurement Residual

We write the residual errors as \mathbf{r}_{\mathcal{I}_{ij}} = \begin{bmatrix}\mathbf{r}_{\Delta\mathbf{R}_{ij}}^\intercal, \mathbf{r}_{\Delta\mathbf{p}_{ij}}^\intercal, \mathbf{r}_{\Delta\mathbf{v}_{ij}}^\intercal, \mathbf{r}_{\mathbf{b}_{g,ij}}^\intercal, \mathbf{r}_{\mathbf{b}_{a,ij}}^\intercal\end{bmatrix}^\intercal \in \mathbb{R}^{15}. Based on \eqref{2} and \eqref{3}, the residuals are

\left\{\begin{aligned} \mathbf{r}_{\Delta\mathbf{R}_{ij}} \overset{\mathrm{def}}{=}& \mathrm{Log}\left[ \left(\Delta\bar{\mathbf{R}}_{ij} \mathrm{Exp} \left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \right)\right)^\intercal \mathbf{R}_i^\intercal \mathbf{R}_j\right] \\ \mathbf{r}_{\Delta\mathbf{p}_{ij}} \overset{\mathrm{def}}{=}& \mathbf{R}_i^\intercal \left(\mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i \Delta t_{ij} - \frac12 \mathbf{g}\Delta t_{ij}^2 \right) - \Delta\mathbf{\bar{p}}_{ij} - \frac{\partial\Delta\mathbf{\bar{p}}_{ij}}{\partial\mathbf{b}_a} \delta\mathbf{b}_{ai} - \frac{\partial\Delta\mathbf{\bar{p}}_{ij}}{\partial\mathbf{b}_g} \delta\mathbf{b}_{gi} \\ \mathbf{r}_{\Delta\mathbf{v}_{ij}} \overset{\mathrm{def}}{=}& \mathbf{R}_i^\intercal (\mathbf{v}_j - \mathbf{v}_i - \mathbf{g}\Delta t_{ij}) - \Delta\mathbf{\bar{v}}_{ij} - \frac{\partial\Delta\mathbf{\bar{v}}_{ij}}{\partial\mathbf{b}_a} \delta\mathbf{b}_{ai} - \frac{\partial\Delta\mathbf{\bar{v}}_{ij}}{\partial\mathbf{b}_g} \delta\mathbf{b}_{gi} \\ \mathbf{r}_{\mathbf{b}_{g,ij}}\overset{\mathrm{def}}{=}& \mathbf{b}_{gj} - \mathbf{b}_{gi} \\ \mathbf{r}_{\mathbf{b}_{a,ij}}\overset{\mathrm{def}}{=}& \mathbf{b}_{aj} - \mathbf{b}_{ai} \end{aligned}\right.\tag{4}\label{4}
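A minimal sketch of evaluating these residuals is given below; all names are hypothetical, the J_* arguments denote the bias Jacobians from the previous section, and d_bg, d_ba are the accumulated bias corrections \delta\mathbf{b}_{gi}, \delta\mathbf{b}_{ai} w.r.t. the biases used during preintegration:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def log_so3(R):
    """Log map of SO(3), returned as a rotation vector."""
    return Rotation.from_matrix(R).as_rotvec()

def imu_residuals(R_i, p_i, v_i, b_gi, b_ai, R_j, p_j, v_j, b_gj, b_aj,
                  dR_bar, dv_bar, dp_bar,
                  J_dR_dbg, J_dv_dbg, J_dv_dba, J_dp_dbg, J_dp_dba,
                  b_g_bar_i, b_a_bar_i, g, dt_ij):
    d_bg = b_gi - b_g_bar_i          # bias change since preintegration
    d_ba = b_ai - b_a_bar_i
    # Bias-corrected preintegrated measurements, following (3)
    dR_corr = dR_bar @ Rotation.from_rotvec(J_dR_dbg @ d_bg).as_matrix()
    dv_corr = dv_bar + J_dv_dbg @ d_bg + J_dv_dba @ d_ba
    dp_corr = dp_bar + J_dp_dbg @ d_bg + J_dp_dba @ d_ba
    # Residuals, following (4)
    r_dR = log_so3(dR_corr.T @ R_i.T @ R_j)
    r_dp = R_i.T @ (p_j - p_i - v_i * dt_ij - 0.5 * g * dt_ij**2) - dp_corr
    r_dv = R_i.T @ (v_j - v_i - g * dt_ij) - dv_corr
    r_bg = b_gj - b_gi
    r_ba = b_aj - b_ai
    return np.concatenate([r_dR, r_dp, r_dv, r_bg, r_ba])
```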

Using the “lift-solve-retract” scheme, we define the following retractions for \eqref{4}:
\begin{aligned} \mathbf{R}_i\mathrm{Exp}(\delta\boldsymbol{\phi}_i) \quad\rightarrow&\quad \mathbf{R}_i, \qquad& \mathbf{R}_j\mathrm{Exp}(\delta\boldsymbol{\phi}_j) \quad\rightarrow&\quad \mathbf{R}_j \\ \mathbf{p}_i + \mathbf{R}_i \delta\mathbf{p}_i \quad\rightarrow&\quad \mathbf{p}_i, \qquad& \mathbf{p}_j + \mathbf{R}_j \delta\mathbf{p}_j \quad\rightarrow&\quad \mathbf{p}_j \\ \mathbf{v}_i + \delta\mathbf{v}_i \quad\rightarrow&\quad \mathbf{v}_i, \qquad& \mathbf{v}_j + \delta\mathbf{v}_j \quad\rightarrow&\quad \mathbf{v}_j \\ \delta\mathbf{b}_{ai} + \tilde{\delta}\mathbf{b}_{ai} \quad\rightarrow&\quad \delta\mathbf{b}_{ai}, \qquad& \delta\mathbf{b}_{gi} + \tilde{\delta}\mathbf{b}_{gi} \quad\rightarrow&\quad \delta\mathbf{b}_{gi} \\ \end{aligned}

Therefore, we derive the Jacobians w.r.t. the vectors \delta\boldsymbol{\phi}_i, \delta\mathbf{p}_i, \delta\mathbf{v}_i, \tilde{\delta}\mathbf{b}_{ai}, \tilde{\delta}\mathbf{b}_{gi}, \delta\boldsymbol{\phi}_j, \delta\mathbf{p}_j, \delta\mathbf{v}_j, \tilde{\delta}\mathbf{b}_{aj}, \tilde{\delta}\mathbf{b}_{gj}.

Jacobians of \mathbf{r}_{\Delta\mathbf{R}_{ij}}

\begin{aligned} \mathbf{r}_{\Delta\mathbf{R}_{ij}}\big(\mathbf{R}_i\mathrm{Exp}(\delta\boldsymbol{\phi}_i)\big) =& \mathrm{Log}\left[ \left(\Delta\bar{\mathbf{R}}_{ij} \mathrm{Exp} \left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \right)\right)^\intercal \big(\mathbf{R}_i\mathrm{Exp}(\delta\boldsymbol{\phi}_i)\big)^\intercal \mathbf{R}_j\right] \\ =& \mathrm{Log}\left[ \left(\Delta\bar{\mathbf{R}}_{ij} \mathrm{Exp} \left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \right)\right)^\intercal \mathrm{Exp}(-\delta\boldsymbol{\phi}_i) \mathbf{R}_i^\intercal \mathbf{R}_j\right] \\ =& \mathrm{Log}\left[ \left(\Delta\bar{\mathbf{R}}_{ij} \mathrm{Exp} \left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \right)\right)^\intercal \mathbf{R}_i^\intercal \mathbf{R}_j \mathrm{Exp}\big(-\mathbf{R}_j^\intercal \mathbf{R}_i \delta\boldsymbol{\phi}_i\big)\right] \\ =& \mathbf{r}_{\Delta\mathbf{R}_{ij}}(\mathbf{R}_i) - \mathbf{J}_r^{-1}\big(\mathbf{r}_{\Delta\mathbf{R}_{ij}}(\mathbf{R}_i)\big) \mathbf{R}_j^\intercal \mathbf{R}_i \delta\boldsymbol{\phi}_i \end{aligned}

\begin{aligned} \mathbf{r}_{\Delta\mathbf{R}_{ij}}\big(\mathbf{R}_j\mathrm{Exp}(\delta\boldsymbol{\phi}_j)\big) =& \mathrm{Log}\left[ \left(\Delta\bar{\mathbf{R}}_{ij} \mathrm{Exp} \left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \right)\right)^\intercal \mathbf{R}_i^\intercal \mathbf{R}_j \mathrm{Exp}(\delta\boldsymbol{\phi}_j)\right] \\ =& \mathbf{r}_{\Delta\mathbf{R}_{ij}}(\mathbf{R}_j) + \mathbf{J}_r^{-1}\big(\mathbf{r}_{\Delta\mathbf{R}_{ij}}(\mathbf{R}_j)\big) \delta\boldsymbol{\phi}_j \end{aligned}

\begin{aligned} \mathbf{r}_{\Delta\mathbf{R}_{ij}}\big(\delta\mathbf{b}_{gi} + \tilde{\delta}\mathbf{b}_{gi}\big) =& \mathrm{Log}\left[ \left(\Delta\bar{\mathbf{R}}_{ij} \mathrm{Exp} \left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} (\delta\mathbf{b}_{gi} + \tilde{\delta}\mathbf{b}_{gi}) \right)\right)^\intercal \mathbf{R}_i^\intercal \mathbf{R}_j\right] \\ =& \mathrm{Log}\left\{ \left[\Delta\bar{\mathbf{R}}_{ij} \mathrm{Exp} \left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \right) \mathrm{Exp} \left(\mathbf{J}_r\left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi}\right) \frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \tilde{\delta}\mathbf{b}_{gi} \right)\right]^\intercal \mathbf{R}_i^\intercal \mathbf{R}_j\right\} \\ =& \mathrm{Log}\left\{ \mathrm{Exp}\left[-\mathbf{J}_r\left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi}\right) \frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \tilde{\delta}\mathbf{b}_{gi} \right] \left[\Delta\bar{\mathbf{R}}_{ij} \mathrm{Exp} \left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi} \right)\right]^\intercal \mathbf{R}_i^\intercal \mathbf{R}_j\right\} \\ =& \mathrm{Log}\left\{ \mathrm{Exp}\left[-\mathbf{J}_r\left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi}\right) \frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \tilde{\delta}\mathbf{b}_{gi} \right] \mathrm{Exp}\big(\mathbf{r}_{\Delta\mathbf{R}_{ij}}(\delta\mathbf{b}_{gi})\big)\right\} \\ =& \mathrm{Log}\left\{ \mathrm{Exp}\big(\mathbf{r}_{\Delta\mathbf{R}_{ij}}(\delta\mathbf{b}_{gi})\big) \mathrm{Exp}\left[-\mathrm{Exp}\big(\mathbf{r}_{\Delta\mathbf{R}_{ij}}(\delta\mathbf{b}_{gi})\big)^\intercal \mathbf{J}_r\left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi}\right) \frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \tilde{\delta}\mathbf{b}_{gi} \right]\right\} \\ =& \mathbf{r}_{\Delta\mathbf{R}_{ij}}(\delta\mathbf{b}_{gi}) - \mathbf{J}_r^{-1}\big(\mathbf{r}_{\Delta\mathbf{R}_{ij}}(\delta\mathbf{b}_{gi})\big) \mathrm{Exp}\big(\mathbf{r}_{\Delta\mathbf{R}_{ij}}(\delta\mathbf{b}_{gi})\big)^\intercal \mathbf{J}_r\left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi}\right) \frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \tilde{\delta}\mathbf{b}_{gi} \end{aligned}

Therefore,
\left\{\begin{aligned} \frac{\partial\mathbf{r}_{\Delta\mathbf{R}_{ij}}}{\partial\delta\boldsymbol{\phi}_i} =& -\mathbf{J}_r^{-1}\big(\mathbf{r}_{\Delta\mathbf{R}_{ij}}(\mathbf{R}_i)\big) \mathbf{R}_j^\intercal \mathbf{R}_i \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{R}_{ij}}}{\partial\delta\boldsymbol{\phi}_j} =& \mathbf{J}_r^{-1}\big(\mathbf{r}_{\Delta\mathbf{R}_{ij}}(\mathbf{R}_j)\big) \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{R}_{ij}}}{\partial\delta\mathbf{p}_i} =& \frac{\partial\mathbf{r}_{\Delta\mathbf{R}_{ij}}}{\partial\delta\mathbf{p}_j} = \mathbf{0} \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{R}_{ij}}}{\partial\delta\mathbf{v}_i} =& \frac{\partial\mathbf{r}_{\Delta\mathbf{R}_{ij}}}{\partial\delta\mathbf{v}_j} = \mathbf{0} \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{R}_{ij}}}{\partial\tilde{\delta}\mathbf{b}_{gi}} =& -\mathbf{J}_r^{-1}\big(\mathbf{r}_{\Delta\mathbf{R}_{ij}}(\delta\mathbf{b}_{gi})\big) \mathrm{Exp}\big(\mathbf{r}_{\Delta\mathbf{R}_{ij}}(\delta\mathbf{b}_{gi})\big)^\intercal \mathbf{J}_r\left(\frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \delta\mathbf{b}_{gi}\right) \frac{\partial\Delta\bar{\mathbf{R}}_{ij}}{\partial\mathbf{b}_{g}} \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{R}_{ij}}}{\partial\tilde{\delta}\mathbf{b}_{ai}} =& \mathbf{0} \end{aligned}\right.

Jacobians of \mathbf{r}_{\Delta\mathbf{p}_{ij}}

\begin{aligned} \mathbf{r}_{\Delta\mathbf{p}_{ij}}\big(\mathbf{R}_i\mathrm{Exp}(\delta\boldsymbol{\phi}_i)\big) =& \big(\mathbf{R}_i\mathrm{Exp}(\delta\boldsymbol{\phi}_i)\big)^\intercal \left(\mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i \Delta t_{ij} - \frac12 \mathbf{g}\Delta t_{ij}^2 \right) - C_1 \\ =& \mathrm{Exp}(-\delta\boldsymbol{\phi}_i) \mathbf{R}_i^\intercal \left(\mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i \Delta t_{ij} - \frac12 \mathbf{g}\Delta t_{ij}^2 \right) - C_1 \\ =& \big(\mathbf{I} - \delta\boldsymbol{\phi}_i^\wedge\big) \mathbf{R}_i^\intercal \left(\mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i \Delta t_{ij} - \frac12 \mathbf{g}\Delta t_{ij}^2 \right) - C_1 \\ =& \mathbf{r}_{\Delta\mathbf{p}_{ij}}(\mathbf{R}_i) + \left[\mathbf{R}_i^\intercal \left(\mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i \Delta t_{ij} - \frac12 \mathbf{g}\Delta t_{ij}^2 \right)\right]^\wedge \delta\boldsymbol{\phi}_i \end{aligned}

\begin{aligned} \mathbf{r}_{\Delta\mathbf{p}_{ij}}\big(\mathbf{p}_i + \mathbf{R}_i\delta\mathbf{p}_i\big) =& \mathbf{R}_i^\intercal \left(\mathbf{p}_j - \mathbf{p}_i - \mathbf{R}_i\delta\mathbf{p}_i - \mathbf{v}_i \Delta t_{ij} - \frac12 \mathbf{g}\Delta t_{ij}^2 \right) - C_1 \\ =& \mathbf{r}_{\Delta\mathbf{p}_{ij}}(\mathbf{p}_i) - \delta\mathbf{p}_i \end{aligned}

\begin{aligned} \mathbf{r}_{\Delta\mathbf{p}_{ij}}\big(\mathbf{p}_j + \mathbf{R}_j\delta\mathbf{p}_j\big) =& \mathbf{R}_i^\intercal \left(\mathbf{p}_j + \mathbf{R}_j\delta\mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i \Delta t_{ij} - \frac12 \mathbf{g}\Delta t_{ij}^2 \right) - C_1 \\ =& \mathbf{r}_{\Delta\mathbf{p}_{ij}}(\mathbf{p}_j) + \mathbf{R}_i^\intercal \mathbf{R}_j \delta\mathbf{p}_j \end{aligned}

\begin{aligned} \mathbf{r}_{\Delta\mathbf{p}_{ij}}\big(\mathbf{v}_i + \delta\mathbf{v}_i\big) =& \mathbf{R}_i^\intercal \left(\mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i \Delta t_{ij} - \delta\mathbf{v}_i \Delta t_{ij} - \frac12 \mathbf{g}\Delta t_{ij}^2 \right) - C_1 \\ =& \mathbf{r}_{\Delta\mathbf{p}_{ij}}(\mathbf{v}_i) - \mathbf{R}_i^\intercal \Delta t_{ij} \delta\mathbf{v}_i \end{aligned}

where C_1 \overset{\mathrm{def}}{=} \Delta\mathbf{\bar{p}}_{ij} + \frac{\partial\Delta\mathbf{\bar{p}}_{ij}}{\partial\mathbf{b}_a} \delta\mathbf{b}_{ai} + \frac{\partial\Delta\mathbf{\bar{p}}_{ij}}{\partial\mathbf{b}_g} \delta\mathbf{b}_{gi} collects the terms of \eqref{4} that do not depend on the perturbed variable. Therefore,

\left\{\begin{aligned} \frac{\partial\mathbf{r}_{\Delta\mathbf{p}_{ij}}}{\partial\delta\boldsymbol{\phi}_i} =& \left[\mathbf{R}_i^\intercal \left(\mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i \Delta t_{ij} - \frac12 \mathbf{g}\Delta t_{ij}^2 \right)\right]^\wedge \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{p}_{ij}}}{\partial\delta\boldsymbol{\phi}_j} =& \mathbf{0} \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{p}_{ij}}}{\partial\delta\mathbf{p}_i} =& -\mathbf{I}_{3\times 3} \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{p}_{ij}}}{\partial\delta\mathbf{p}_j} =& \mathbf{R}_i^\intercal \mathbf{R}_j \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{p}_{ij}}}{\partial\delta\mathbf{v}_i} =& -\mathbf{R}_i^\intercal \Delta t_{ij} \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{p}_{ij}}}{\partial\delta\mathbf{v}_j} =& \mathbf{0} \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{p}_{ij}}}{\partial\tilde{\delta}\mathbf{b}_{gi}} =& -\frac{\partial\Delta\mathbf{\bar{p}}_{ij}}{\partial\mathbf{b}_g} \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{p}_{ij}}}{\partial\tilde{\delta}\mathbf{b}_{ai}} =& -\frac{\partial\Delta\mathbf{\bar{p}}_{ij}}{\partial\mathbf{b}_a} \end{aligned}\right.

Jacobians of \mathbf{r}_{\Delta\mathbf{v}_{ij}}

\begin{aligned} \mathbf{r}_{\Delta\mathbf{v}_{ij}}\big(\mathbf{R}_i\mathrm{Exp}(\delta\boldsymbol{\phi}_i)\big) =& \big(\mathbf{R}_i\mathrm{Exp}(\delta\boldsymbol{\phi}_i)\big)^\intercal (\mathbf{v}_j - \mathbf{v}_i - \mathbf{g}\Delta t_{ij}) - C_2 \\ =& \mathrm{Exp}(-\delta\boldsymbol{\phi}_i) \mathbf{R}_i^\intercal (\mathbf{v}_j - \mathbf{v}_i - \mathbf{g}\Delta t_{ij}) - C_2 \\ =& \big(\mathbf{I} - \delta\boldsymbol{\phi}_i^\wedge\big) \mathbf{R}_i^\intercal (\mathbf{v}_j - \mathbf{v}_i - \mathbf{g}\Delta t_{ij}) - C_2 \\ =& \mathbf{r}_{\Delta\mathbf{v}_{ij}}(\mathbf{R}_i) + \big(\mathbf{R}_i^\intercal (\mathbf{v}_j - \mathbf{v}_i - \mathbf{g}\Delta t_{ij})\big)^\wedge \delta\boldsymbol{\phi}_i \end{aligned}

\begin{aligned} \mathbf{r}_{\Delta\mathbf{v}_{ij}}\big(\mathbf{v}_i + \delta\mathbf{v}_i\big) =& \mathbf{R}_i^\intercal (\mathbf{v}_j - \mathbf{v}_i - \delta\mathbf{v}_i - \mathbf{g}\Delta t_{ij}) - C_2 \\ =& \mathbf{r}_{\Delta\mathbf{v}_{ij}}(\mathbf{v}_i) - \mathbf{R}_i^\intercal \delta\mathbf{v}_i \end{aligned}

\begin{aligned} \mathbf{r}_{\Delta\mathbf{v}_{ij}}\big(\mathbf{v}_j + \delta\mathbf{v}_j\big) =& \mathbf{R}_i^\intercal (\mathbf{v}_j + \delta\mathbf{v}_j - \mathbf{v}_i - \mathbf{g}\Delta t_{ij}) - C_2 \\ =& \mathbf{r}_{\Delta\mathbf{v}_{ij}}(\mathbf{v}_j) + \mathbf{R}_i^\intercal \delta\mathbf{v}_j \end{aligned}

where C_2 \overset{\mathrm{def}}{=} \Delta\mathbf{\bar{v}}_{ij} + \frac{\partial\Delta\mathbf{\bar{v}}_{ij}}{\partial\mathbf{b}_a} \delta\mathbf{b}_{ai} + \frac{\partial\Delta\mathbf{\bar{v}}_{ij}}{\partial\mathbf{b}_g} \delta\mathbf{b}_{gi}.
Therefore,

\left\{\begin{aligned} \frac{\partial\mathbf{r}_{\Delta\mathbf{v}_{ij}}}{\partial\delta\boldsymbol{\phi}_i} =& \big(\mathbf{R}_i^\intercal (\mathbf{v}_j - \mathbf{v}_i - \mathbf{g}\Delta t_{ij})\big)^\wedge \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{v}_{ij}}}{\partial\delta\boldsymbol{\phi}_j} =& \mathbf{0} \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{v}_{ij}}}{\partial\delta\mathbf{p}_i} =& \frac{\partial\mathbf{r}_{\Delta\mathbf{v}_{ij}}}{\partial\delta\mathbf{p}_j} = \mathbf{0} \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{v}_{ij}}}{\partial\delta\mathbf{v}_i} =& -\mathbf{R}_i^\intercal \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{v}_{ij}}}{\partial\delta\mathbf{v}_j} =& \mathbf{R}_i^\intercal \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{v}_{ij}}}{\partial\tilde{\delta}\mathbf{b}_{gi}} =& -\frac{\partial\Delta\mathbf{\bar{v}}_{ij}}{\partial\mathbf{b}_g} \\ \frac{\partial\mathbf{r}_{\Delta\mathbf{v}_{ij}}}{\partial\tilde{\delta}\mathbf{b}_{ai}} =& -\frac{\partial\Delta\mathbf{\bar{v}}_{ij}}{\partial\mathbf{b}_a} \end{aligned}\right.

Reference

[1] Forster C, Carlone L, Dellaert F, et al. On-Manifold Preintegration for Real-Time Visual–Inertial Odometry[J]. IEEE Transactions on Robotics, 2017, 33(1): 1-21.
