Direct Sparse Mapping reading notes -- trackFrame

This function is where the relative pose between the two frames is computed. The objective function is evaluated by the following routine:

// Error computations of transformed points
float computeResiduals(const std::shared_ptr<FrameTrackerReference>& reference,
                        const std::shared_ptr<Frame>& newFrame, const int lvl, 
                        const Sophus::SE3f &refToFrame, const AffineLight& light, 
                        float* res, unsigned int* validMask) const;

The residuals are written into res; during this computation the pose and the affine light variables are held fixed.

const auto& calib = GlobalCalibration::getInstance();
const auto& settings = Settings::getInstance();

// constant values
const Eigen::Mat33f& K = calib.matrix3f(lvl);
const int width = calib.width(lvl);
const int height = calib.height(lvl);

const float* const newImage = newFrame->image(lvl);

const float* const pointU = reference->u(lvl);
const float* const pointV = reference->v(lvl);
const float* const pointX = reference->x(lvl);
const float* const pointY = reference->y(lvl);
const float* const pointIDepth = reference->iDepth(lvl);
const float* const pointColor = reference->color(lvl);
const float* const pointWeight = reference->weight(lvl);
const int numPoints = reference->numPoints(lvl);

const Eigen::Mat34f Rt = refToFrame.matrix3x4();
const Eigen::Vec3f& t = refToFrame.translation();

const float light_a = light.a();
const float light_b = light.b();

float usageCount = 0;

Every point in the reference is taken, projected into the current frame, and compared against a bilinearly interpolated intensity:

for (int i = 0; i < numPoints; ++i)
{
    const Eigen::Vec3f ptRT = Rt * Eigen::Vec4f(pointX[i], pointY[i], 1.f, pointIDepth[i]);
    const float rescale = 1.f / ptRT[2];
    const Eigen::Vec2f ptRTn = ptRT.head<2>() * rescale;
    const float uRT = ptRTn[0] * K(0, 0) + K(0, 2);
    const float vRT = ptRTn[1] * K(1, 1) + K(1, 2);

    // check that the pixel is valid and the point is in front of the camera
    if (!(uRT > 2.1f && vRT > 2.1f && uRT < width - 2.1f && vRT < height - 2.1f && rescale > 0.f))
    {
        res[i] = 0.f;
        validMask[i] = 0x0;		//false				
        continue;
    }

    const float newImgColor = bilinearInterpolation(newImage, uRT, vRT, width);

    const float photoResidual = pointColor[i] - light_a*newImgColor - light_b;

    // if depth becomes larger: pixel becomes "smaller", hence count it less.
    usageCount += rescale < 1.f ? rescale : 1.f;

    // transform from photometric residual to geometric residual
    const float residual = pointWeight[i] * photoResidual;

    // store
    res[i] = residual;
    validMask[i] = 0xffffffff;		//true
}
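The loop samples the new image at a sub-pixel location through bilinearInterpolation. A minimal sketch of such a helper (the actual DSM implementation may differ) for a row-major grayscale image could look like this:

```cpp
// A hypothetical sketch of bilinearInterpolation as used above:
// sample a row-major float image at the sub-pixel location (u, v).
float bilinearInterpolation(const float* img, float u, float v, int width)
{
    const int x = static_cast<int>(u);   // integer pixel coordinates
    const int y = static_cast<int>(v);
    const float dx = u - x;              // fractional offsets in [0, 1)
    const float dy = v - y;

    const float* p = img + y * width + x;
    // Blend the four neighbouring pixels by their area weights.
    return (1.f - dx) * (1.f - dy) * p[0]
         +        dx  * (1.f - dy) * p[1]
         + (1.f - dx) *        dy  * p[width]
         +        dx  *        dy  * p[width + 1];
}
```

Because p[width + 1] is read, the 2.1-pixel border check before the call keeps the four taps inside the image.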

The key step is the residual computation:

const float photoResidual = pointColor[i] - light_a*newImgColor - light_b;

which can equivalently be written as:

const float photoResidual = pointColor[i] - (light_a*newImgColor + light_b);

The systematic intensity difference between frames caused by exposure changes is modeled as linear, i.e. of the form y = ax + b.


SSE optimization used in FrameTrackerProblem::computeJacobAtIdentity
const __m128 zeros = _mm_setzero_ps();
const __m128 ones = _mm_set_ps1(1.f);
const __m128 minusOnes = _mm_set_ps1(-1.f);

const __m128 mfx = _mm_set_ps1(K(0, 0));
const __m128 mfy = _mm_set_ps1(K(1, 1));

const __m128 varScaleTrans = _mm_set_ps1(settings.varScaleTrans);
const __m128 varScaleRot = _mm_set_ps1(settings.varScaleRot);
const __m128 varScaleAlpha = _mm_set_ps1(settings.varScaleAlpha);
const __m128 varScaleBeta = _mm_set_ps1(settings.varScaleBeta);

References: C/C++指令集介绍以及优化(主要针对SSE优化) (Zhihu); SIMD简介 (Zhihu)

__m128 is a 128-bit vector type holding four single-precision floats (not a single float).

_mm_setzero_ps sets all four lanes to zero;
_mm_set_ps1 broadcasts one scalar float into all four lanes.

Main steps:

The loop processes four points per iteration; it computes eight values J0-J7, fetches the addresses where the Jacobians will be stored, and the Jacobians of the four points end up in Jpt0-Jpt3.

zeros is a 4-lane vector (which is where the 4x speedup comes from); Jpt0 is an 8-element array holding the Jacobian of point 0.


FrameTracker::trackFrame 流程

This function only optimizes the photometric model, and it is hard-coded to iterate 50 times here.
