This function is where the relative pose between two frames is computed. Its entry point has the following signature:
// Error computations of transformed points
float computeResiduals(const std::shared_ptr<FrameTrackerReference>& reference,
const std::shared_ptr<Frame>& newFrame, const int lvl,
const Sophus::SE3f &refToFrame, const AffineLight& light,
float* res, unsigned int* validMask) const;
The residuals are written to res. During residual computation, the pose and the affine light parameters are held fixed.
const auto& calib = GlobalCalibration::getInstance();
const auto& settings = Settings::getInstance();
// constant values
const Eigen::Mat33f& K = calib.matrix3f(lvl);
const int width = calib.width(lvl);
const int height = calib.height(lvl);
const float* const newImage = newFrame->image(lvl);
const float* const pointU = reference->u(lvl);
const float* const pointV = reference->v(lvl);
const float* const pointX = reference->x(lvl);
const float* const pointY = reference->y(lvl);
const float* const pointIDepth = reference->iDepth(lvl);
const float* const pointColor = reference->color(lvl);
const float* const pointWeight = reference->weight(lvl);
const int numPoints = reference->numPoints(lvl);
const Eigen::Mat34f Rt = refToFrame.matrix3x4();
const Eigen::Vec3f& t = refToFrame.translation();
const float light_a = light.a();
const float light_b = light.b();
float usageCount = 0;
Each point in the reference is projected into the current frame and compared against a bilinearly interpolated intensity:
for (int i = 0; i < numPoints; ++i)
{
const Eigen::Vec3f ptRT = Rt * Eigen::Vec4f(pointX[i], pointY[i], 1.f, pointIDepth[i]);
const float rescale = 1.f / ptRT[2];
const Eigen::Vec2f ptRTn = ptRT.head<2>() * rescale;
const float uRT = ptRTn[0] * K(0, 0) + K(0, 2);
const float vRT = ptRTn[1] * K(1, 1) + K(1, 2);
// check that the pixel is valid and the point is in front of the camera
if (!(uRT > 2.1f && vRT > 2.1f && uRT < width - 2.1f && vRT < height - 2.1f && rescale > 0.f))
{
res[i] = 0.f;
validMask[i] = 0x0; //false
continue;
}
const float newImgColor = bilinearInterpolation(newImage, uRT, vRT, width);
const float photoResidual = pointColor[i] - light_a*newImgColor - light_b;
// if depth becomes larger: pixel becomes "smaller", hence count it less.
usageCount += rescale < 1.f ? rescale : 1.f;
// transform from photometric residual to geometric residual
const float residual = pointWeight[i] * photoResidual;
// store
res[i] = residual;
validMask[i] = 0xffffffff; //true
}
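The bilinearInterpolation call above samples the new image at a sub-pixel location. A minimal sketch of such a helper, with the signature assumed from the call site bilinearInterpolation(newImage, uRT, vRT, width), could look like this:

```cpp
#include <cmath>

// Hypothetical sketch of the interpolation helper used above.
// img is a row-major grayscale buffer; (u, v) is a sub-pixel location.
float bilinearInterpolation(const float* img, float u, float v, int width)
{
    const int x = static_cast<int>(std::floor(u));
    const int y = static_cast<int>(std::floor(v));
    const float dx = u - x;   // horizontal fraction in [0, 1)
    const float dy = v - y;   // vertical fraction in [0, 1)

    const float* row = img + y * width + x;
    // Weighted average of the four neighbouring pixels
    return (1.f - dy) * ((1.f - dx) * row[0] + dx * row[1]) +
           dy * ((1.f - dx) * row[width] + dx * row[width + 1]);
}
```

This also explains the 2.1-pixel border check in the loop: the interpolation (and the image gradients used later for the Jacobian) need valid neighbouring pixels around the projected location.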
The key part is the residual computation:
const float photoResidual = pointColor[i] - light_a*newImgColor - light_b;
which can be rewritten more clearly as:
const float photoResidual = pointColor[i] - (light_a*newImgColor + light_b);
The systematic intensity error between frames caused by exposure differences is modeled as linear, analogous to y = ax + b.
SSE optimization used in FrameTrackerProblem::computeJacobAtIdentity
const __m128 zeros = _mm_setzero_ps();
const __m128 ones = _mm_set_ps1(1.f);
const __m128 minusOnes = _mm_set_ps1(-1.f);
const __m128 mfx = _mm_set_ps1(K(0, 0));
const __m128 mfy = _mm_set_ps1(K(1, 1));
const __m128 varScaleTrans = _mm_set_ps1(settings.varScaleTrans);
const __m128 varScaleRot = _mm_set_ps1(settings.varScaleRot);
const __m128 varScaleAlpha = _mm_set_ps1(settings.varScaleAlpha);
const __m128 varScaleBeta = _mm_set_ps1(settings.varScaleBeta);
Reference: "C/C++ instruction set introduction and optimization (mainly SSE)" - Zhihu
__m128 holds four packed single-precision floats (conceptually a float[4])
_mm_setzero_ps: sets all four lanes to zero
_mm_set_ps1: broadcasts a single float into all four lanes
Main steps:
The loop processes four points per iteration. Each iteration computes eight Jacobian entries J0-J7 and fetches the addresses of the Jacobians for the four points, Jpt0-Jpt3.
zeros is a 4-lane vector (this is where the 4x speedup comes from), and each Jpt is an 8-element array holding the Jacobian of the corresponding point.
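The four-points-per-iteration pattern can be shown with a minimal, self-contained SSE loop. This is not the actual Jacobian code, just the broadcast/load/multiply/store pattern it relies on:

```cpp
#include <xmmintrin.h>  // SSE intrinsics: __m128, _mm_*_ps

// Scale an array of floats by a constant, four lanes per iteration.
// Assumes n is a multiple of 4, mirroring how computeJacobAtIdentity
// broadcasts constants with _mm_set_ps1 and walks points in groups of 4.
void scale4(const float* in, float* out, int n, float s)
{
    const __m128 vs = _mm_set_ps1(s);              // broadcast s to all 4 lanes
    for (int i = 0; i < n; i += 4)
    {
        const __m128 v = _mm_loadu_ps(in + i);     // load 4 floats
        _mm_storeu_ps(out + i, _mm_mul_ps(v, vs)); // multiply and store 4
    }
}
```

Because one instruction operates on four lanes at once, the arithmetic throughput is up to 4x that of the scalar loop, which is exactly the speedup claimed above.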
FrameTracker::trackFrame workflow
This function only optimizes the photometric model, and the iteration count is hard-capped at 50 here.
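A capped iterative refinement of this kind can be sketched as a generic Gauss-Newton loop. The structure below is an assumption for illustration; the function name and the 1-D cost are placeholders, not the real trackFrame API:

```cpp
#include <cmath>

// Hypothetical 1-D illustration of a capped Gauss-Newton iteration:
// minimize f(x) = 0.5 * (x - target)^2 with at most maxIterations steps,
// mirroring the hard 50-iteration cap mentioned above.
float optimize(float x, float target, int maxIterations = 50)
{
    for (int it = 0; it < maxIterations; ++it)
    {
        const float residual = x - target;        // r(x)
        const float jacobian = 1.f;               // dr/dx
        const float step = residual / jacobian;   // Gauss-Newton update
        x -= step;
        if (std::fabs(step) < 1e-8f) break;       // early convergence exit
    }
    return x;
}
```

In the real tracker the state is the SE3 pose plus the two affine light parameters, and each iteration recomputes residuals and Jacobians over all reference points, but the cap-and-early-exit control flow is the same idea.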