Exploring karto: open_karto Chapter 3 --- Scan Matching

Contents

1 kt_bool Mapper::Process(LocalizedRangeScan* pScan)

3 kt_double ScanMatcher::MatchScan()

3.1 AddScans()

3.2 SmearPoint()

4 kt_double ScanMatcher::CorrelateScan()

4.1 Generating the angle lookup table

4.2 kt_double ScanMatcher::GetResponse()

4.3 void ScanMatcher::ComputePositionalCovariance() 

4.4 void ScanMatcher::ComputeAngularCovariance()

5 void SetSensorPose()

6 MapperSensorManager::AddScan()

7 void MapperSensorManager::AddRunningScan()

Summary:

The Process() flow

The scan matching flow

REFERENCES


I have only spent a short time reading this code, so there are still parts I do not fully understand. Most of the comments in the code (originally in Chinese) come from:

open_karto with detailed annotations: https://github.com/kadn/open_karto

 

slam_karto's addScan() calls Process(). For the slam_karto code, see the previous post.

For an introduction to open_karto's data types, see "Exploring karto: open_karto Chapter 1 --- Data Structures".

 

1 kt_bool Mapper::Process(LocalizedRangeScan* pScan)

  kt_bool Mapper::Process(LocalizedRangeScan* pScan)
  {
      // ... omitted (validity checks on pScan; the closing brace before "return false" below matches an if block opened in the omitted code)

      Matrix3 covariance;
      covariance.SetToIdentity();

      // If the current frame is the first frame, MatchScan is skipped; otherwise it runs.
      // MatchScan compares the current frame against a set of earlier frames: starting from the initial pose
      // given by odometry, it searches near that pose for a better robot pose, and returns the response
      // (match quality) and covariance (confidence) for that pose. Details follow below.
      // correct scan (if not first scan)
      if (m_pUseScanMatching->GetValue() && pLastScan != NULL)
      {
        Pose2 bestPose;
        m_pSequentialScanMatcher->MatchScan(pScan,
                                            m_pMapperSensorManager->
                                                GetRunningScans(pScan->GetSensorName()),
                                            bestPose,     // the best pose found
                                            covariance);
        pScan->SetSensorPose(bestPose);
      }

      // add scan to buffer and assign id -- add the current frame to the scan database
      m_pMapperSensorManager->AddScan(pScan);

      if (m_pUseScanMatching->GetValue())
      {
        // add to graph
        m_pGraph->AddVertex(pScan);               // add the current frame as a vertex of the pose graph
        m_pGraph->AddEdges(pScan, covariance);    // add edges for the current frame

        // Add the current frame to the running scans and maintain that buffer.
        // The running scans are the "earlier frames" that the scan matcher compares against.
        m_pMapperSensorManager->AddRunningScan(pScan);

        if (m_pDoLoopClosing->GetValue())
        {
          std::vector<Name> deviceNames = m_pMapperSensorManager->GetSensorNames();
          const_forEach(std::vector<Name>, &deviceNames)
          {
            m_pGraph->TryCloseLoop(pScan, *iter);    // loop-closure detection
          }
        }
      }

      m_pMapperSensorManager->SetLastScan(pScan);

      return true;
    }

    return false;
  }

 

3 kt_double ScanMatcher::MatchScan()

One coarse match is done first (search step = twice the grid resolution); if nothing is matched, the angular search range is widened and the coarse match is repeated; finally one fine match is done (search step = one grid resolution). A small sketch of the resulting search-space sizes follows.
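As a rough illustration of how many candidate poses the coarse and fine searches evaluate, here is a minimal standalone sketch. The parameter values (0.01 m correlation-grid resolution, a 31 x 31 search-space grid, 20° coarse search angle offset, 2° coarse angle resolution, 0.2° fine angle step) are assumptions taken from the comments in this post, not values read from the library at runtime:

  // search_space_sizes.cpp -- standalone sketch with assumed default karto parameters
  #include <cmath>
  #include <cstdio>

  int main()
  {
    const double kPi             = 3.14159265358979323846;
    const double resolution      = 0.01;                                      // correlation grid resolution [m] (assumed)
    const double searchGridCells = 31.0;                                      // m_pSearchSpaceProbs width/height (assumed)
    const double coarseOffset    = 0.5 * (searchGridCells - 1.0) * resolution; // 0.15 m
    const double coarseStep      = 2.0 * resolution;                          // coarse x/y step = two cells
    const double angleOffset     = 20.0 * kPi / 180.0;                        // coarse search angle offset (assumed)
    const double angleStep       = 2.0  * kPi / 180.0;                        // coarse angle resolution (assumed)

    // same formula as CorrelateScan(): n = round(offset * 2 / step) + 1
    const int nX      = static_cast<int>(std::lround(coarseOffset * 2.0 / coarseStep) + 1);  // 16
    const int nAngles = static_cast<int>(std::lround(angleOffset  * 2.0 / angleStep)  + 1);  // 21
    std::printf("coarse: %d x %d x %d = %d candidate poses\n", nX, nX, nAngles, nX * nX * nAngles);

    // fine match: offset = coarseStep * 0.5 (one cell), step = one cell, half the coarse angle resolution as range
    const double fineOffset    = coarseStep * 0.5;
    const double fineAngleOff  = 0.5 * angleStep;
    const double fineAngleStep = 0.2 * kPi / 180.0;                           // FineSearchAngleOffset (assumed default)
    const int nXfine      = static_cast<int>(std::lround(fineOffset   * 2.0 / resolution)    + 1);  // 3
    const int nAnglesFine = static_cast<int>(std::lround(fineAngleOff * 2.0 / fineAngleStep) + 1);  // 11
    std::printf("fine:   %d x %d x %d candidate poses\n", nXfine, nXfine, nAnglesFine);
    return 0;
  }

With these assumed values, the coarse match evaluates 16 x 16 x 21 = 5376 candidate poses and the fine match only 3 x 3 x 11 = 99, which matches the counts mentioned in the comments of CorrelateScan() below.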

  /**
   * Match given scan against set of scans -- find the best scan pose within the correlation grid
   * @param pScan scan being scan-matched (the current laser scan)
   * @param rBaseScans set of scans whose points will mark cells in grid as being occupied, i.e. m_RunningScans
   * @param rMean output parameter of mean (best pose) of match
   * @param rCovariance output parameter of covariance of match
   * @param doPenalize default = true  
        whether to penalize matches further from the search center
   * @param doRefineMatch default= true 
        whether to do finer-grained matching if coarse match is good (default is true)
   * @return strength of response
   */
  kt_double ScanMatcher::MatchScan(LocalizedRangeScan* pScan, 
                                   const LocalizedRangeScanVector& rBaseScans, 
                                   Pose2& rMean,
                                   Matrix3& rCovariance, 
                                   kt_bool doPenalize, 
                                   kt_bool doRefineMatch)
  {
    ///
    // set scan pose to be center of grid
    // pScan has not modified the raw data of its parent class LaserRangeScan; the range readings live in the parent and are accessed as pScan->GetRangeReadings()[iterNum].
    // 1. get scan position
    Pose2 scanPose = pScan->GetSensorPose();   

    // scan has no readings; cannot do scan matching
    // best guess of pose is based off of adjusted odometer reading
    if (pScan->GetNumberOfRangeReadings() == 0)
    {
      rMean = scanPose;

      // maximum covariance
      rCovariance(0, 0) = MAX_VARIANCE;  // XX
      rCovariance(1, 1) = MAX_VARIANCE;  // YY
      rCovariance(2, 2) = 4 * math::Square(m_pMapper->m_pCoarseAngleResolution->GetValue());  // TH*TH

      return 0.0;
    }

    // 2. get size of grid
    // This is the rectangular correlation grid of 2431 x 2431 cells:
    // ((laser range 12 m, set earlier via SetRangeThreshold) * 2 + 0.3 m search-window margin) divided by the resolution, + 1
    Rectangle2<kt_int32s> roi = m_pCorrelationGrid->GetROI();

    // 3. compute offset (in meters - lower left corner)
    Vector2<kt_double> offset;  // offset is the world-frame position of the grid's lower-left corner

    // m_pCorrelationGrid->GetResolution() is the real-world side length of one grid cell (default unit: m)
    offset.SetX(scanPose.GetX() - (0.5 * (roi.GetWidth() - 1) * m_pCorrelationGrid->GetResolution()));
    offset.SetY(scanPose.GetY() - (0.5 * (roi.GetHeight() - 1) * m_pCorrelationGrid->GetResolution()));

    // 4. set offset
    m_pCorrelationGrid->GetCoordinateConverter()->SetOffset(offset);

    ///

    // set up correlation grid
    // scanPose.GetPosition() here is the lidar's position, not the position of a lidar point

    // Mark the cells hit by the points of rBaseScans as "occupied". This effectively stores the reference data;
    // afterwards the current scan only has to be compared against these occupied cells.
    // This fills in the cell states inside the ROI (the scan points are filtered on the way in).
    AddScans(rBaseScans, scanPose.GetPosition());

    // compute how far to search in each direction
    // m_pSearchSpaceProbs is a 31 x 31 grid
    Vector2<kt_double> searchDimensions(m_pSearchSpaceProbs->GetWidth(), m_pSearchSpaceProbs->GetHeight());

    // coarseSearchOffset is the distance (in meters) from the center of the 31 x 31 grid to its lower-left corner
    Vector2<kt_double> coarseSearchOffset(0.5 * (searchDimensions.GetX() - 1) * m_pCorrelationGrid->GetResolution(),
                                          0.5 * (searchDimensions.GetY() - 1) * m_pCorrelationGrid->GetResolution());

    // a coarse search only checks half the cells in each dimension
    // the step is two cells, hence "coarse"
    Vector2<kt_double> coarseSearchResolution(2 * m_pCorrelationGrid->GetResolution(),
                                              2 * m_pCorrelationGrid->GetResolution());

    // actual scan-matching -- find the pose that maximizes the response

    // The coarse match searches around scanPose.
    // The search window is coarseSearchOffset * 2 / resolution + 1 cells in x and y, with a step of two cells.
    // The angular range is +/- 20 degrees (40 degrees total), with an angular step of 2 degrees.
    kt_double bestResponse = CorrelateScan(pScan, scanPose, coarseSearchOffset, coarseSearchResolution,
                                           m_pMapper->m_pCoarseSearchAngleOffset->GetValue(),
                                           m_pMapper->m_pCoarseAngleResolution->GetValue(),
                                           doPenalize, rMean, rCovariance, false);

    // whether to widen the angular search space if no suitable match was found initially; default is false
    if (m_pMapper->m_pUseResponseExpansion->GetValue() == true)
    {
      if (math::DoubleEqual(bestResponse, 0.0))
      {

        // try and increase search angle offset with 20 degrees and do another match
        kt_double newSearchAngleOffset = m_pMapper->m_pCoarseSearchAngleOffset->GetValue();
        for (kt_int32u i = 0; i < 3; i++)
        {
          newSearchAngleOffset += math::DegreesToRadians(20);

          bestResponse = CorrelateScan(pScan, scanPose, coarseSearchOffset, coarseSearchResolution,
                                       newSearchAngleOffset, m_pMapper->m_pCoarseAngleResolution->GetValue(),
                                       doPenalize, rMean, rCovariance, false);

          if (math::DoubleEqual(bestResponse, 0.0) == false)
          {
            break;
          }
        }
      }
    }

    if (doRefineMatch)
    {
      // fineSearchOffset: why coarseSearchResolution * 0.5? That equals one grid cell.
      Vector2<kt_double> fineSearchOffset(coarseSearchResolution * 0.5);
      Vector2<kt_double> fineSearchResolution(m_pCorrelationGrid->GetResolution(), m_pCorrelationGrid->GetResolution());

      // The fine match searches around rMean, the pose output by the coarse match.
      // The search window is fineSearchOffset * 2 / resolution + 1 = 3 cells in x and y, with a step of one cell.
      // The angular range is half the coarse angle resolution, and the angular step is
      // FineSearchAngleOffset (0.2 degrees by default).
      bestResponse = CorrelateScan(pScan, rMean, fineSearchOffset, fineSearchResolution,
                                   0.5 * m_pMapper->m_pCoarseAngleResolution->GetValue(),
                                   m_pMapper->m_pFineSearchAngleOffset->GetValue(),
                                   doPenalize, rMean, rCovariance, true);  // the trailing 'true' makes it compute the angular covariance
    }
    return bestResponse;
  }

3.1 AddScans()

First, every cell of m_pCorrelationGrid is reset to 0.

Then each frame of the running scans is passed to AddScan().

AddScan() keeps only the valid points of a frame: a point must be at least 0.1 m from the previously retained point (the code compares squared distance against 0.01 m²), and it must lie on the correct side of the viewpoint. That side test is the sign of a 2D determinant (a cross product), i.e. it checks the direction of rotation. (A small worked example of this sign test follows the FindValidPoints() listing below.)

The grid cells hit by these valid points are then set to the occupied state (if not already occupied), and the cells around each point are updated with SmearPoint(gridPoint).

  /**
   * Marks cells where scans' points hit as being occupied
   * @param rScans scans whose points will mark cells in grid as being occupied
   * @param viewPoint do not add points that belong to scans "opposite" the view point
   */
  void ScanMatcher::AddScans(const LocalizedRangeScanVector& rScans, Vector2<kt_double> viewPoint)
  {
    m_pCorrelationGrid->Clear();   // reset all grid values (0, 100, 255) to 0

    // add all scans to grid
    // rScans may be the running scans, nearby scans, or scans produced by loop closure
    const_forEach(LocalizedRangeScanVector, &rScans)
    {
      // viewPoint is the current frame's position in the world frame (it carries no laser heading),
      // while *iter is an earlier frame whose scan points are stored in world coordinates
      AddScan(*iter, viewPoint);
    }
  }

  /**
   * Marks cells where scans' points hit as being occupied.  Can smear points as they are added.
   * @param pScan scan whose points will mark cells in grid as being occupied
   * @param viewPoint do not add points that belong to scans "opposite" the view point
   * @param doSmear whether the points will be smeared
   */
  void ScanMatcher::AddScan(LocalizedRangeScan* pScan, const Vector2<kt_double>& rViewPoint, kt_bool doSmear)
  {
    PointVectorDouble validPoints = FindValidPoints(pScan, rViewPoint);   // find the points of the earlier scan that are valid as seen from rViewPoint (see the function for the exact conditions)

    // put in all valid points
    const_forEach(PointVectorDouble, &validPoints)
    {
      Vector2<kt_int32s> gridPoint = m_pCorrelationGrid->WorldToGrid(*iter);  // convert the point's world position into grid coordinates; the grid origin is the lower-left corner

      // IsUpTo  means  <
      if (!math::IsUpTo(gridPoint.GetX(), m_pCorrelationGrid->GetROI().GetWidth()) ||
          !math::IsUpTo(gridPoint.GetY(), m_pCorrelationGrid->GetROI().GetHeight()))
      {
        // point not in grid
        continue;
      }

      int gridIndex = m_pCorrelationGrid->GridIndex(gridPoint);   // 2D index -> 1D index

      // set grid cell as occupied
      if (m_pCorrelationGrid->GetDataPointer()[gridIndex] == GridStates_Occupied)  // query the grid cell state
      {
        // value already set
        continue;
      }

      m_pCorrelationGrid->GetDataPointer()[gridIndex] = GridStates_Occupied;  // mark the cells hit by the points of the earlier frames as occupied, for the matching later

      // smear grid
      if (doSmear == true)
      {
        m_pCorrelationGrid->SmearPoint(gridPoint);
      }
    }
  }

 3.1.1 FindValidPoints()

  /**
   * Compute which points in a scan are on the same side as the given viewpoint
   * @param pScan
   * @param rViewPoint
   * @return points on the same side
   */
  PointVectorDouble ScanMatcher::FindValidPoints(LocalizedRangeScan* pScan, const Vector2<kt_double>& rViewPoint) const
  {
    // these points have already been transformed (to world coordinates), but the rotation direction (clockwise vs counterclockwise) still has to be checked
    const PointVectorDouble& rPointReadings = pScan->GetPointReadings();

    // points must be at least 10 cm away when making comparisons of inside/outside of viewpoint
    const kt_double minSquareDistance = math::Square(0.1);  // in m^2

    // this iterator lags from the main iterator adding points only when the points are on
    // the same side as the viewpoint
    PointVectorDouble::const_iterator trailingPointIter = rPointReadings.begin();
    PointVectorDouble validPoints;

    Vector2<kt_double> firstPoint;
    kt_bool firstTime = true;

    // iterate over the frame's scan points; each entry is a scan point's position in the world frame
    const_forEach(PointVectorDouble, &rPointReadings)
    {
      Vector2<kt_double> currentPoint = *iter;

      if (firstTime && !std::isnan(currentPoint.GetX()) && !std::isnan(currentPoint.GetY()))
      {
        firstPoint = currentPoint;
        firstTime = false;
      }

      Vector2<kt_double> delta = firstPoint - currentPoint;
      if (delta.SquaredLength() > minSquareDistance)
      {
        // This compute the Determinant (viewPoint FirstPoint, viewPoint currentPoint)
        // Which computes the direction of rotation, if the rotation is counterclock
        // wise then we are looking at data we should keep. If it's negative rotation
        // we should not included in in the matching
        // have enough distance, check viewpoint
        double a = rViewPoint.GetY() - firstPoint.GetY();
        double b = firstPoint.GetX() - rViewPoint.GetX();
        double c = firstPoint.GetY() * rViewPoint.GetX() - firstPoint.GetX() * rViewPoint.GetY();
        double ss = currentPoint.GetX() * a + currentPoint.GetY() * b + c;

        // reset beginning point
        firstPoint = currentPoint;

        if (ss < 0.0)  // wrong side, skip and keep going
        {
          trailingPointIter = iter;
        }
        else
        {
          for (; trailingPointIter != iter; ++trailingPointIter)
          {
            // points that satisfy both conditions (far enough apart, correct rotation direction) are added to validPoints
            validPoints.push_back(*trailingPointIter);
          }
        }
      }
    }

    return validPoints;
  }
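To make the sign test on ss concrete, here is a tiny standalone sketch that evaluates the same line coefficients as the code above. The viewpoint and the two points are made-up values purely for illustration, not data taken from karto:

  // determinant_sign.cpp -- illustrates the side test used in FindValidPoints()
  #include <cstdio>

  int main()
  {
    // made-up example values
    const double vx = 0.0, vy = 0.0;   // viewpoint (sensor position)
    const double fx = 1.0, fy = 0.0;   // firstPoint
    const double cx = 1.0, cy = 0.5;   // currentPoint

    // same line coefficients as FindValidPoints(): line through viewpoint and firstPoint
    const double a = vy - fy;              //  0
    const double b = fx - vx;              //  1
    const double c = fy * vx - fx * vy;    //  0
    const double ss = cx * a + cy * b + c; //  0.5 > 0 -> counterclockwise, keep the point

    std::printf("ss = %.2f -> %s\n", ss, ss < 0.0 ? "wrong side, skip" : "same side, keep");
    return 0;
  }

ss is exactly the determinant of (firstPoint - viewPoint, currentPoint - viewPoint), so a positive value means a counterclockwise turn and the trailing points are accepted into validPoints.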

3.2 SmearPoint()

Roughly: a kernel covering halfKernel cells in every direction around the given grid point (about 6 in each direction with the default smear deviation) is applied; for each neighboring cell, if its current occupancy value is smaller than the corresponding kernel value, the kernel value is written into that cell, giving occupied cells a smoothed halo.

    /**
     * Smear cell if the cell at the given point is marked as "occupied"
     * @param rGridPoint
     */
    inline void SmearPoint(const Vector2<kt_int32s>& rGridPoint)
    {
      assert(m_pKernel != NULL);

      int gridIndex = GridIndex(rGridPoint);
      if (GetDataPointer()[gridIndex] != GridStates_Occupied)
      {
        return;
      }

      kt_int32s halfKernel = m_KernelSize / 2;

      // apply kernel
      for (kt_int32s j = -halfKernel; j <= halfKernel; j++)
      {
        kt_int8u* pGridAdr = GetDataPointer(Vector2<kt_int32s>(rGridPoint.GetX(), rGridPoint.GetY() + j));

        kt_int32s kernelConstant = (halfKernel) + m_KernelSize * (j + halfKernel);

        // if a point is on the edge of the grid, there is no problem
        // with running over the edge of allowable memory, because
        // the grid has margins to compensate for the kernel size
        for (kt_int32s i = -halfKernel; i <= halfKernel; i++)
        {
          kt_int32s kernelArrayIndex = i + kernelConstant;

          kt_int8u kernelValue = m_pKernel[kernelArrayIndex];
          if (kernelValue > pGridAdr[i])
          {
            // kernel value is greater, so set it to kernel value
            pGridAdr[i] = kernelValue;
          }
        }
      }
    }


    /**
     * Sets up the kernel for grid smearing.
     */
    virtual void CalculateKernel()

The kernel's occupancy values are computed as follows (excerpt from CalculateKernel()):

          kt_double distanceFromMean = hypot(i * resolution, j * resolution);
          kt_double z = exp(-0.5 * pow(distanceFromMean / m_SmearDeviation, 2));

          kt_int32u kernelValue = static_cast<kt_int32u>(math::Round(z * GridStates_Occupied));

          int kernelArrayIndex = (i + halfKernel) + m_KernelSize * (j + halfKernel);
          m_pKernel[kernelArrayIndex] = static_cast<kt_int8u>(kernelValue);
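As a sanity check on what these values look like, here is a minimal standalone sketch that builds such a kernel. The resolution (0.01 m), the smear deviation (0.03 m) and the half-kernel-size formula are assumptions based on karto's defaults, not part of the excerpt above:

  // smear_kernel_sketch.cpp -- builds a smear kernel with assumed default parameters
  #include <cmath>
  #include <cstdio>
  #include <vector>

  int main()
  {
    const double resolution     = 0.01;  // assumed correlation grid resolution [m]
    const double smearDeviation = 0.03;  // assumed smear deviation [m]
    const int GridStates_Occupied = 100;

    // assumed: half kernel size = round(2 * deviation / resolution), as in karto's CalculateKernel()
    const int halfKernel = static_cast<int>(std::lround(2.0 * smearDeviation / resolution));  // 6
    const int kernelSize = 2 * halfKernel + 1;                                                // 13
    std::vector<unsigned char> kernel(kernelSize * kernelSize);

    for (int j = -halfKernel; j <= halfKernel; j++)
    {
      for (int i = -halfKernel; i <= halfKernel; i++)
      {
        // same formula as the excerpt above: Gaussian of the distance from the kernel center
        const double distanceFromMean = std::hypot(i * resolution, j * resolution);
        const double z = std::exp(-0.5 * std::pow(distanceFromMean / smearDeviation, 2));
        kernel[(i + halfKernel) + kernelSize * (j + halfKernel)] =
            static_cast<unsigned char>(std::lround(z * GridStates_Occupied));
      }
    }

    // print the center row: the value is 100 at the center and falls off towards the edges
    for (int i = -halfKernel; i <= halfKernel; i++)
      std::printf("%3d ", kernel[(i + halfKernel) + kernelSize * halfKernel]);
    std::printf("\n");
    return 0;
  }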

 

4 kt_double ScanMatcher::CorrelateScan()

This is the function that actually performs the scan matching.

  /**
   * Finds the best pose for the scan centering the search in the correlation grid
   * at the given pose and search in the space by the vector and angular offsets
   * in increments of the given resolutions
   * @param rScan scan to match against correlation grid
   * @param rSearchCenter the center of the search space
   * @param rSearchSpaceOffset searches poses in the area offset by this vector around search center
   * @param rSearchSpaceResolution how fine a granularity to search in the search space
   * @param searchAngleOffset searches poses in the angles offset by this angle around search center (20 degrees for the coarse match)
   * @param searchAngleResolution how fine a granularity to search in the angular search space (2 degrees for the coarse match)
   * @param doPenalize whether to penalize matches further from the search center
   * @param rMean output parameter of mean (best pose) of match
   * @param rCovariance output parameter of covariance of match
   * @param doingFineMatch whether to do a finer search after coarse search
   * @return strength of response
   */
  kt_double ScanMatcher::CorrelateScan(LocalizedRangeScan* pScan, const Pose2& rSearchCenter,
                                       const Vector2<kt_double>& rSearchSpaceOffset,
                                       const Vector2<kt_double>& rSearchSpaceResolution,
                                       kt_double searchAngleOffset, kt_double searchAngleResolution,
                                       kt_bool doPenalize, Pose2& rMean, Matrix3& rCovariance, kt_bool doingFineMatch)
  {
    //
    // 1. build the angle lookup tables
    assert(searchAngleResolution != 0.0);

    // setup lookup arrays
    // This call fills in the lookup arrays for every candidate angle: each lookup array stores, for every laser
    // reading (361 in total, or 180 after modification), the result of sliding the scan in angle; the x/y sliding
    // happens later.
    m_pGridLookup->ComputeOffsets(pScan, rSearchCenter.GetHeading(), searchAngleOffset, searchAngleResolution);

    // only initialize probability grid if computing positional covariance (during coarse match)
    if (!doingFineMatch)
    {
      m_pSearchSpaceProbs->Clear();

      // position search grid - finds lower left corner of search grid
      Vector2<kt_double> offset(rSearchCenter.GetPosition() - rSearchSpaceOffset);
      m_pSearchSpaceProbs->GetCoordinateConverter()->SetOffset(offset);
    }

    //
    // 2. compute the number of search steps in x, y and angle

    // calculate position arrays
    // The x, y and angle values computed below feed the triple loop; here there are 16 x values, 16 y values and 21 angles.
    std::vector<kt_double> xPoses;
    kt_int32u nX = static_cast<kt_int32u>(math::Round(rSearchSpaceOffset.GetX() *
                                          2.0 / rSearchSpaceResolution.GetX()) + 1);
    kt_double startX = -rSearchSpaceOffset.GetX();
    for (kt_int32u xIndex = 0; xIndex < nX; xIndex++)
    {
      xPoses.push_back(startX + xIndex * rSearchSpaceResolution.GetX());
    }
    assert(math::DoubleEqual(xPoses.back(), -startX));

    std::vector<kt_double> yPoses;
    kt_int32u nY = static_cast<kt_int32u>(math::Round(rSearchSpaceOffset.GetY() *
                                          2.0 / rSearchSpaceResolution.GetY()) + 1);
    kt_double startY = -rSearchSpaceOffset.GetY();
    for (kt_int32u yIndex = 0; yIndex < nY; yIndex++)
    {
      yPoses.push_back(startY + yIndex * rSearchSpaceResolution.GetY());
    }
    assert(math::DoubleEqual(yPoses.back(), -startY));

    // calculate pose response array size
    kt_int32u nAngles = static_cast<kt_int32u>(math::Round(searchAngleOffset * 2.0 / searchAngleResolution) + 1);

    kt_int32u poseResponseSize = static_cast<kt_int32u>(xPoses.size() * yPoses.size() * nAngles);  // every combination of position and angle
    // allocate array -- Pose2 holds x, y, theta; the sliding-window results are stored in pPoseResponse
    std::pair<kt_double, Pose2>* pPoseResponse = new std::pair<kt_double, Pose2>[poseResponseSize];

    // rSearchCenter is the center of the search space
    Vector2<kt_int32s> startGridPoint = m_pCorrelationGrid->WorldToGrid(Vector2<kt_double>(rSearchCenter.GetX() + startX, 
                                                                                           rSearchCenter.GetY() + startY));
    
    //
    // 3. iterate over the search space and compute the response of every candidate

    // The loop runs over the x/y range; because the angle lookup tables are used, this effectively iterates
    // over x, y and theta and finds the maximum response.
    // If the penalty is enabled, response becomes k * response, where k shrinks as dx, dy, dtheta grow.
    kt_int32u poseResponseCounter = 0;
    forEachAs(std::vector<kt_double>, &yPoses, yIter)
    {
      kt_double y = *yIter;
      kt_double newPositionY = rSearchCenter.GetY() + y;
      kt_double squareY = math::Square(y);

      forEachAs(std::vector<kt_double>, &xPoses, xIter)
      {
        kt_double x = *xIter;
        kt_double newPositionX = rSearchCenter.GetX() + x;
        kt_double squareX = math::Square(x);

        // x and y determine a gridIndex
        Vector2<kt_int32s> gridPoint = m_pCorrelationGrid->WorldToGrid(Vector2<kt_double>(newPositionX, newPositionY));  
        kt_int32s gridIndex = m_pCorrelationGrid->GridIndex(gridPoint);
        assert(gridIndex >= 0);

        kt_double angle = 0.0;
        kt_double startAngle = rSearchCenter.GetHeading() - searchAngleOffset;
        for (kt_int32u angleIndex = 0; angleIndex < nAngles; angleIndex++)
        {
          angle = startAngle + angleIndex * searchAngleResolution;
          // angleIndex and gridIndex together determine the response:
          // the matching score of this frame, after rotation and translation, against the grid built from the earlier frames
          kt_double response = GetResponse(angleIndex, gridIndex);


          if (doPenalize && (math::DoubleEqual(response, 0.0) == false))
          {
            // simple model (approximate Gaussian) to take odometry into account

            kt_double squaredDistance = squareX + squareY;
            // DISTANCE_PENALTY_GAIN and ANGLE_PENALTY_GAIN are gain constants defined at the top of Mapper.cpp
            kt_double distancePenalty = 1.0 - (DISTANCE_PENALTY_GAIN *
                                               squaredDistance / m_pMapper->m_pDistanceVariancePenalty->GetValue());
            distancePenalty = math::Maximum(distancePenalty, m_pMapper->m_pMinimumDistancePenalty->GetValue());

            kt_double squaredAngleDistance = math::Square(angle - rSearchCenter.GetHeading());
            // m_pAngleVariancePenalty is (20 degrees converted to radians) squared
            kt_double anglePenalty = 1.0 - (ANGLE_PENALTY_GAIN *
                                            squaredAngleDistance / m_pMapper->m_pAngleVariancePenalty->GetValue());
            anglePenalty = math::Maximum(anglePenalty, m_pMapper->m_pMinimumAnglePenalty->GetValue());

            response *= (distancePenalty * anglePenalty);
          }

          // store response and pose
          pPoseResponse[poseResponseCounter] = std::pair<kt_double, Pose2>(response, Pose2(newPositionX, newPositionY,
                                                                           math::NormalizeAngle(angle)));
          poseResponseCounter++;
        }

        assert(math::DoubleEqual(angle, rSearchCenter.GetHeading() + searchAngleOffset));
      }
    }

    assert(poseResponseSize == poseResponseCounter);

    //
    // 4. find the maximum response among all candidates

    // find value of best response (in [0; 1])
    kt_double bestResponse = -1;
    for (kt_int32u i = 0; i < poseResponseSize; i++)
    {
      bestResponse = math::Maximum(bestResponse, pPoseResponse[i].first);

      // will compute positional covariance, save best relative probability for each cell
      if (!doingFineMatch)
      {
        const Pose2& rPose = pPoseResponse[i].second;
        Vector2<kt_int32s> grid = m_pSearchSpaceProbs->WorldToGrid(rPose.GetPosition());

        // Changed (kt_double*) to the reinterpret_cast - Luc
        kt_double* ptr = reinterpret_cast<kt_double*>(m_pSearchSpaceProbs->GetDataPointer(grid));
        if (ptr == NULL)
        {
          throw std::runtime_error("Mapper FATAL ERROR - Index out of range in probability search!");
        }

        *ptr = math::Maximum(pPoseResponse[i].first, *ptr);
      }
    }

    //
    // 5. several poses may share that maximum response; average their x, y and heading

    // average all poses with same highest response   
    Vector2<kt_double> averagePosition;
    kt_double thetaX = 0.0;
    kt_double thetaY = 0.0;
    kt_int32s averagePoseCount = 0;
    for (kt_int32u i = 0; i < poseResponseSize; i++)
    {
      // DoubleEqual: values within +/- KT_TOLERANCE = 1e-06 of each other are considered equal
      if (math::DoubleEqual(pPoseResponse[i].first, bestResponse))
      {
        averagePosition += pPoseResponse[i].second.GetPosition();

        kt_double heading = pPoseResponse[i].second.GetHeading();
        thetaX += cos(heading);
        thetaY += sin(heading);

        averagePoseCount++;                        // used later to average x and y
      }
    }

    Pose2 averagePose;
    if (averagePoseCount > 0)
    {
      averagePosition /= averagePoseCount;

      thetaX /= averagePoseCount;
      thetaY /= averagePoseCount;

      averagePose = Pose2(averagePosition, atan2(thetaY, thetaX));
    }
    else
    {
      throw std::runtime_error("Mapper FATAL ERROR - Unable to find best position");
    }

    // delete pose response array
    delete [] pPoseResponse;

    #ifdef KARTO_DEBUG
    std::cout << "bestPose: " << averagePose << std::endl;
    std::cout << "bestResponse: " << bestResponse << std::endl;
    #endif

    //
    // 6. the coarse match computes the positional covariance; the fine match computes the angular covariance

    if (!doingFineMatch)
    {  // positional covariance
      ComputePositionalCovariance(averagePose, bestResponse, rSearchCenter, rSearchSpaceOffset,
                                  rSearchSpaceResolution, searchAngleResolution, rCovariance);
    }
    else
    {
      // angular covariance
      ComputeAngularCovariance(averagePose, bestResponse, rSearchCenter,
                              searchAngleOffset, searchAngleResolution, rCovariance);
    }

    rMean = averagePose; // the best pose

    #ifdef KARTO_DEBUG
    std::cout << "bestPose: " << averagePose << std::endl;
    #endif

    if (bestResponse > 1.0)
    {
      bestResponse = 1.0;
    }

    assert(math::InRange(bestResponse, 0.0, 1.0));
    assert(math::InRange(rMean.GetHeading(), -KT_PI, KT_PI));

    return bestResponse;
  }
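For reference, the penalty applied in step 3 can be written out as follows. This is a direct transcription of the code above, where \(k_d\) = DISTANCE_PENALTY_GAIN, \(k_\theta\) = ANGLE_PENALTY_GAIN, \(\sigma_d^2\) = DistanceVariancePenalty, \(\sigma_\theta^2\) = AngleVariancePenalty, \(\theta_0\) is the search-center heading, and \(p_{d,\min}, p_{\theta,\min}\) are the minimum distance/angle penalties:

\[
p_d = \max\!\left(1 - k_d\,\frac{\Delta x^2 + \Delta y^2}{\sigma_d^2},\ p_{d,\min}\right),\qquad
p_\theta = \max\!\left(1 - k_\theta\,\frac{(\theta - \theta_0)^2}{\sigma_\theta^2},\ p_{\theta,\min}\right),\qquad
\text{response} \leftarrow \text{response}\cdot p_d\, p_\theta
\]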

4.1 Generating the angle lookup table

    /**
     * Compute lookup table of the points of the given scan for the given angular space
     * @param pScan the scan
     * @param angleCenter
     * @param angleOffset computes lookup arrays for the angles within this offset around angleStart
     * @param angleResolution how fine a granularity to compute lookup arrays in the angular space
     */
    void ComputeOffsets(LocalizedRangeScan* pScan,
                        kt_double angleCenter,
                        kt_double angleOffset,
                        kt_double angleResolution)
    {
      assert(angleOffset != 0.0);
      assert(angleResolution != 0.0);

      // angular search range: 40 degrees (2 * 20)
      kt_int32u nAngles = static_cast<kt_int32u>(math::Round(angleOffset * 2.0 / angleResolution) + 1);
      SetSize(nAngles);

      // convert points into local coordinates of scan pose
      // rPointReadings holds the laser range readings already converted to 2D world coordinates
      const PointVectorDouble& rPointReadings = pScan->GetPointReadings();

      // compute transform to scan pose
      Transform transform(pScan->GetSensorPose());

      Pose2Vector localPoints;  // every entry of localPoints has heading 0.0
      const_forEach(PointVectorDouble, &rPointReadings)
      {
        // do inverse transform to get points in local coordinates
        Pose2 vec = transform.InverseTransformPose(Pose2(*iter, 0.0));
        localPoints.push_back(vec);
      }

      //
      // create lookup array for different angles
      kt_double angle = 0.0;
      kt_double startAngle = angleCenter - angleOffset;
      for (kt_int32u angleIndex = 0; angleIndex < nAngles; angleIndex++)
      {
        angle = startAngle + angleIndex * angleResolution;
        ComputeOffsets(angleIndex, angle, localPoints, pScan);
      }
    }


    /**
     * Compute lookup value of points for given angle
     * @param angleIndex
     * @param angle
     * @param rLocalPoints
     */
    void ComputeOffsets(kt_int32u angleIndex, kt_double angle, const Pose2Vector& rLocalPoints, LocalizedRangeScan* pScan)
    {
      // m_ppLookupArray is a 2D pointer storing the scan data for each candidate angle
      m_ppLookupArray[angleIndex]->SetSize(static_cast<kt_int32u>(rLocalPoints.size()));
      m_Angles.at(angleIndex) = angle; // m_Angles is a 1D array that just stores the angle for each index

      // set up point array by computing relative offsets to points readings
      // when rotated by given angle

      const Vector2<kt_double>& rGridOffset = m_pGrid->GetCoordinateConverter()->GetOffset();

      kt_double cosine = cos(angle);
      kt_double sine = sin(angle);

      kt_int32u readingIndex = 0;

      // pointer to the lookup array for this angle; the array's size is the LookupArray's m_Capacity,
      // and m_Capacity equals rLocalPoints.size()
      kt_int32s* pAngleIndexPointer = m_ppLookupArray[angleIndex]->GetArrayPointer();

      kt_double maxRange = pScan->GetLaserRangeFinder()->GetMaximumRange();

      const_forEach(Pose2Vector, &rLocalPoints)
      {
        const Vector2<kt_double>& rPosition = iter->GetPosition();

        if (std::isnan(pScan->GetRangeReadings()[readingIndex]) || 
            std::isinf(pScan->GetRangeReadings()[readingIndex]))
        {
          // const kt_int32s INVALID_SCAN = std::numeric_limits<kt_int32s>::max();
          pAngleIndexPointer[readingIndex] = INVALID_SCAN;
          readingIndex++;
          continue;
        }

        // counterclockwise rotation, and that rotation is about the origin (0, 0):
        // rPosition (expressed relative to the current frame's sensor, e.g. (1,1) the first time through in tutorial1)
        // is rotated counterclockwise by 'angle' to obtain offset
        Vector2<kt_double> offset;   
        offset.SetX(cosine * rPosition.GetX() - sine * rPosition.GetY());
        offset.SetY(sine * rPosition.GetX() + cosine * rPosition.GetY());

        // have to compensate for the grid offset when getting the grid index
        // the rotated rPosition plus the offset between the map center and the world origin(?); the map here is m_pCorrelationGrid
        Vector2<kt_int32s> gridPoint = m_pGrid->WorldToGrid(offset + rGridOffset);

        // use base GridIndex to ignore ROI
        // lookupIndex is the grid index this laser point lands on after the rotation
        kt_int32s lookupIndex = m_pGrid->Grid<T>::GridIndex(gridPoint, false);

        // pAngleIndexPointer == m_ppLookupArray[angleIndex]
        // m_ppLookupArray[angleIndex][readingIndex] == lookupIndex
        // So the lookup table for a given angle index stores, for each reading of the scan, the grid index
        // it maps to after the rotation; only the rotated coordinates are computed, the scan data itself
        // is never actually rotated.
        pAngleIndexPointer[readingIndex] = lookupIndex;

        readingIndex++;
      }
      assert(readingIndex == rLocalPoints.size());
    }

4.2 kt_double ScanMatcher::GetResponse()

response: the response is the mean of the occupancy values of the grid cells that the scan points land on.

Suppose a scan has only 2 points: the first lands on an occupied cell (value 100) and the second on an unknown cell (value 0); the response is then (100 + 0) / (2 * 100) = 0.5.

From https://blog.csdn.net/qq_24893115/article/details/52965410

  /**
   * Get response at given position for given rotation (only look up valid points)
   * @param angleIndex
   * @param gridPositionIndex
   * @return response
   */
  kt_double ScanMatcher::GetResponse(kt_int32u angleIndex, kt_int32s gridPositionIndex) const
  {
    kt_double response = 0.0;

    // add up value for each point
    // pByte points into Grid<T>::m_pData (T* m_pData, here kt_int8u*), whose size is m_WidthStep * m_Height,
    // i.e. one entry per grid cell, offset by gridPositionIndex
    kt_int8u* pByte = m_pCorrelationGrid->GetDataPointer() + gridPositionIndex;

    // m_pGridLookup stores nAngles LookupArray pointers; each angle index maps to a pOffsets array that holds,
    // for every point of the scan, the grid index its rotated coordinates fall on
    const LookupArray* pOffsets = m_pGridLookup->GetLookupArray(angleIndex);
    assert(pOffsets != NULL);

    // get number of points in offset list (how many points this scan has)
    kt_int32u nPoints = pOffsets->GetSize();
    if (nPoints == 0)
    {
      return response;
    }

    // calculate response
    kt_int32s* pAngleIndexPointer = pOffsets->GetArrayPointer();
    for (kt_int32u i = 0; i < nPoints; i++)
    {
      // ignore points that fall off the grid
      kt_int32s pointGridIndex = gridPositionIndex + pAngleIndexPointer[i];
      if (!math::IsUpTo(pointGridIndex, m_pCorrelationGrid->GetDataSize()) || pAngleIndexPointer[i] == INVALID_SCAN)
      {
        continue;
      }

      // uses index offsets to efficiently find location of point in the grid
      // pAngleIndexPointer[i] is the grid index of the i-th point of this scan after the rotation, and
      // pByte[pAngleIndexPointer[i]] is the occupancy value stored at that index:
      // GridStates_Unknown = 0, GridStates_Occupied = 100, GridStates_Free = 255
      response += pByte[pAngleIndexPointer[i]];
    }

    // normalize response
    // Why divide by GridStates_Occupied when GridStates_Free (255) is the larger value?
    // Because Mapper.cpp only ever uses the unknown and occupied states in this grid and never marks cells free;
    // the free state is only set in OccupancyGrid, which is used from slam_karto.
    response /= (nPoints * GridStates_Occupied);
    assert(fabs(response) <= 1.0);
    return response;
  }

4.3 void ScanMatcher::ComputePositionalCovariance() 

The figure referenced here is from https://blog.csdn.net/qq_24893115/article/details/52965410

  /**
   * Computes the positional covariance of the best pose
   * @param rBestPose
   * @param bestResponse
   * @param rSearchCenter
   * @param rSearchSpaceOffset
   * @param rSearchSpaceResolution
   * @param searchAngleResolution
   * @param rCovariance
   */
  void ScanMatcher::ComputePositionalCovariance(const Pose2& rBestPose, kt_double bestResponse,
                                                const Pose2& rSearchCenter,
                                                const Vector2<kt_double>& rSearchSpaceOffset,
                                                const Vector2<kt_double>& rSearchSpaceResolution,
                                                kt_double searchAngleResolution, Matrix3& rCovariance)
  {
    // reset covariance to identity matrix
    rCovariance.SetToIdentity();

    // if best response is vary small return max variance
    if (bestResponse < KT_TOLERANCE)
    {
      rCovariance(0, 0) = MAX_VARIANCE;  // XX
      rCovariance(1, 1) = MAX_VARIANCE;  // YY
      rCovariance(2, 2) = 4 * math::Square(searchAngleResolution);  // TH*TH

      return;
    }

    kt_double accumulatedVarianceXX = 0;
    kt_double accumulatedVarianceXY = 0;
    kt_double accumulatedVarianceYY = 0;
    kt_double norm = 0;

    kt_double dx = rBestPose.GetX() - rSearchCenter.GetX();
    kt_double dy = rBestPose.GetY() - rSearchCenter.GetY();

    kt_double offsetX = rSearchSpaceOffset.GetX();
    kt_double offsetY = rSearchSpaceOffset.GetY();

    kt_int32u nX = static_cast<kt_int32u>(math::Round(offsetX * 2.0 / rSearchSpaceResolution.GetX()) + 1);
    kt_double startX = -offsetX;
    assert(math::DoubleEqual(startX + (nX - 1) * rSearchSpaceResolution.GetX(), -startX));

    kt_int32u nY = static_cast<kt_int32u>(math::Round(offsetY * 2.0 / rSearchSpaceResolution.GetY()) + 1);
    kt_double startY = -offsetY;
    assert(math::DoubleEqual(startY + (nY - 1) * rSearchSpaceResolution.GetY(), -startY));

    for (kt_int32u yIndex = 0; yIndex < nY; yIndex++)
    {
      kt_double y = startY + yIndex * rSearchSpaceResolution.GetY();

      for (kt_int32u xIndex = 0; xIndex < nX; xIndex++)
      {
        kt_double x = startX + xIndex * rSearchSpaceResolution.GetX();

        Vector2<kt_int32s> gridPoint = m_pSearchSpaceProbs->WorldToGrid(Vector2<kt_double>(rSearchCenter.GetX() + x,
                                                                                           rSearchCenter.GetY() + y));
        kt_double response = *(m_pSearchSpaceProbs->GetDataPointer(gridPoint));

        // response is not a low response
        if (response >= (bestResponse - 0.1))
        {
          norm += response;
          accumulatedVarianceXX += (math::Square(x - dx) * response);
          accumulatedVarianceXY += ((x - dx) * (y - dy) * response);
          accumulatedVarianceYY += (math::Square(y - dy) * response);
        }
      }
    }

    if (norm > KT_TOLERANCE)
    {
      kt_double varianceXX = accumulatedVarianceXX / norm;
      kt_double varianceXY = accumulatedVarianceXY / norm;
      kt_double varianceYY = accumulatedVarianceYY / norm;
      kt_double varianceTHTH = 4 * math::Square(searchAngleResolution);

      // lower-bound variances so that they are not too small;
      // ensures that links are not too tight
      kt_double minVarianceXX = 0.1 * math::Square(rSearchSpaceResolution.GetX());
      kt_double minVarianceYY = 0.1 * math::Square(rSearchSpaceResolution.GetY());
      varianceXX = math::Maximum(varianceXX, minVarianceXX);
      varianceYY = math::Maximum(varianceYY, minVarianceYY);

      // increase variance for poorer responses
      kt_double multiplier = 1.0 / bestResponse;
      rCovariance(0, 0) = varianceXX * multiplier;
      rCovariance(0, 1) = varianceXY * multiplier;
      rCovariance(1, 0) = varianceXY * multiplier;
      rCovariance(1, 1) = varianceYY * multiplier;
      rCovariance(2, 2) = varianceTHTH;  // this value will be set in ComputeAngularCovariance
    }

    // if values are 0, set to MAX_VARIANCE
    // values might be 0 if points are too sparse and thus don't hit other points
    if (math::DoubleEqual(rCovariance(0, 0), 0.0))
    {
      rCovariance(0, 0) = MAX_VARIANCE;
    }

    if (math::DoubleEqual(rCovariance(1, 1), 0.0))
    {
      rCovariance(1, 1) = MAX_VARIANCE;
    }
  }
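In formula form, what the loops above accumulate is (my reading of the code): with \(r_i\) the response stored in m_pSearchSpaceProbs at search cell \(i\), \((x_i, y_i)\) that cell's offset from the search center, \((d_x, d_y)\) the best pose's offset from the search center, and only cells with \(r_i \ge r_{\text{best}} - 0.1\) contributing,

\[
\Sigma_{xx} = \frac{1}{r_{\text{best}}}\,\frac{\sum_i (x_i - d_x)^2\, r_i}{\sum_i r_i},\qquad
\Sigma_{xy} = \frac{1}{r_{\text{best}}}\,\frac{\sum_i (x_i - d_x)(y_i - d_y)\, r_i}{\sum_i r_i},\qquad
\Sigma_{yy} = \frac{1}{r_{\text{best}}}\,\frac{\sum_i (y_i - d_y)^2\, r_i}{\sum_i r_i}
\]

The variances are floored at \(0.1\,(\text{search resolution})^2\) before the \(1/r_{\text{best}}\) scaling, and \(\Sigma_{\theta\theta}\) is provisionally set to \(4\,(\text{searchAngleResolution})^2\); ComputeAngularCovariance() overwrites it during the fine match.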

4.4 void ScanMatcher::ComputeAngularCovariance()

  /**
   * Computes the angular covariance of the best pose
   * @param rBestPose
   * @param bestResponse
   * @param rSearchCenter
   * @param rSearchAngleOffset
   * @param searchAngleResolution
   * @param rCovariance
   */
  void ScanMatcher::ComputeAngularCovariance(const Pose2& rBestPose,
                                             kt_double bestResponse,
                                             const Pose2& rSearchCenter,
                                             kt_double searchAngleOffset,
                                             kt_double searchAngleResolution,
                                             Matrix3& rCovariance)
  {
    // NOTE: do not reset covariance matrix

    // normalize angle difference
    kt_double bestAngle = math::NormalizeAngleDifference(rBestPose.GetHeading(), rSearchCenter.GetHeading());

    Vector2<kt_int32s> gridPoint = m_pCorrelationGrid->WorldToGrid(rBestPose.GetPosition());
    kt_int32s gridIndex = m_pCorrelationGrid->GridIndex(gridPoint);

    kt_int32u nAngles = static_cast<kt_int32u>(math::Round(searchAngleOffset * 2 / searchAngleResolution) + 1);

    kt_double angle = 0.0;
    kt_double startAngle = rSearchCenter.GetHeading() - searchAngleOffset;

    kt_double norm = 0.0;
    kt_double accumulatedVarianceThTh = 0.0;
    for (kt_int32u angleIndex = 0; angleIndex < nAngles; angleIndex++)
    {
      angle = startAngle + angleIndex * searchAngleResolution;
      kt_double response = GetResponse(angleIndex, gridIndex);

      // response is not a low response
      if (response >= (bestResponse - 0.1))
      {
        norm += response;
        accumulatedVarianceThTh += (math::Square(angle - bestAngle) * response);
      }
    }
    assert(math::DoubleEqual(angle, rSearchCenter.GetHeading() + searchAngleOffset));

    if (norm > KT_TOLERANCE)
    {
      if (accumulatedVarianceThTh < KT_TOLERANCE)
      {
        accumulatedVarianceThTh = math::Square(searchAngleResolution);
      }

      accumulatedVarianceThTh /= norm;
    }
    else
    {
      accumulatedVarianceThTh = 1000 * math::Square(searchAngleResolution);
    }

    rCovariance(2, 2) = accumulatedVarianceThTh;
  }

5 void SetSensorPose()

Updates m_CorrectedPose, the robot's position and heading relative to the world frame, and then (via Update()) the world-frame coordinates of the current scan's points.

    /**
     * Computes the robot pose given the corrected scan pose
     * @param rScanPose pose of the sensor
     */
    void SetSensorPose(const Pose2& rScanPose)
    {
      Pose2 deviceOffsetPose2 = GetLaserRangeFinder()->GetOffsetPose();   // the sensor's pose relative to the robot center
      kt_double offsetLength = deviceOffsetPose2.GetPosition().Length();
      kt_double offsetHeading = deviceOffsetPose2.GetHeading();  // the sensor's heading relative to the robot frame
      kt_double angleoffset = atan2(deviceOffsetPose2.GetY(), deviceOffsetPose2.GetX());
      kt_double correctedHeading = math::NormalizeAngle(rScanPose.GetHeading());
      // worldSensorOffset is the sensor-to-robot offset expressed in the world frame.

      // If angleoffset were 0, then correctedHeading - offsetHeading would be the offset's angle in the world frame.
      // Example: offsetLength = 1, correctedHeading = pi, offsetHeading = pi/2 gives
      //   offsetLength * cos(correctedHeading + angleoffset - offsetHeading) = 0
      //   offsetLength * sin(correctedHeading + angleoffset - offsetHeading) = 1
      // The meaning of angleoffset depends on how the offset was originally defined; arguably the sign here is inverted.
      // If the offset heading is defined relative to the robot frame, angleoffset would not be needed at all;
      // if it is relative to a frame whose x-axis points from the robot center to the laser,
      // then the expression should rather be correctedHeading - angleoffset - offsetHeading.
      Pose2 worldSensorOffset = Pose2(offsetLength * cos(correctedHeading + angleoffset - offsetHeading),
                                      offsetLength * sin(correctedHeading + angleoffset - offsetHeading),
                                      offsetHeading);
      // rScanPose is the scan pose in the world frame and worldSensorOffset is the sensor offset expressed in the world frame;
      // subtracting gives the robot pose in the world frame. The subtraction also subtracts the headings
      // (correctedHeading - offsetHeading), which yields the robot's heading relative to the world frame.
      m_CorrectedPose = rScanPose - worldSensorOffset;

      Update();
    }
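Written out, with \(L\) = offsetLength, \(\theta_s\) = correctedHeading (the matched sensor heading), \(\theta_o\) = offsetHeading and \(\alpha\) = angleoffset, the robot pose computed above is:

\[
x_{\text{robot}} = x_{\text{scan}} - L\cos(\theta_s + \alpha - \theta_o),\qquad
y_{\text{robot}} = y_{\text{scan}} - L\sin(\theta_s + \alpha - \theta_o),\qquad
\theta_{\text{robot}} = \theta_s - \theta_o
\]

The heading subtraction follows from the component-wise Pose2 subtraction noted in the comment above.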


    /**
     * Compute point readings based on range readings
     * Only range readings within [minimum range; range threshold] are returned
     */
    virtual void Update()
    {
      LaserRangeFinder* pLaserRangeFinder = GetLaserRangeFinder();

      if (pLaserRangeFinder != NULL)
      {
        m_PointReadings.clear();
        m_UnfilteredPointReadings.clear();

        kt_double rangeThreshold = pLaserRangeFinder->GetRangeThreshold();
        kt_double minimumAngle = pLaserRangeFinder->GetMinimumAngle();
        kt_double angularResolution = pLaserRangeFinder->GetAngularResolution();
        Pose2 scanPose = GetSensorPose();

        // compute point readings
        Vector2<kt_double> rangePointsSum;
        kt_int32u beamNum = 0;
        for (kt_int32u i = 0; i < pLaserRangeFinder->GetNumberOfRangeReadings(); i++, beamNum++)
        {
          kt_double rangeReading = GetRangeReadings()[i];
          if (!math::InRange(rangeReading, pLaserRangeFinder->GetMinimumRange(), rangeThreshold))
          {
            kt_double angle = scanPose.GetHeading() + minimumAngle + beamNum * angularResolution;
            // convert the laser range reading into world (map) coordinates
            Vector2<kt_double> point;
            point.SetX(scanPose.GetX() + (rangeReading * cos(angle)));
            point.SetY(scanPose.GetY() + (rangeReading * sin(angle)));

            m_UnfilteredPointReadings.push_back(point);
            continue;
          }

          kt_double angle = scanPose.GetHeading() + minimumAngle + beamNum * angularResolution;
          // convert the laser range reading into world (map) coordinates
          Vector2<kt_double> point;
          point.SetX(scanPose.GetX() + (rangeReading * cos(angle)));
          point.SetY(scanPose.GetY() + (rangeReading * sin(angle)));

          m_PointReadings.push_back(point);
          m_UnfilteredPointReadings.push_back(point);
          // sum the world coordinates of the scan points; their mean becomes m_BarycenterPose
          rangePointsSum += point;
        }

        // compute barycenter
        kt_double nPoints = static_cast<kt_double>(m_PointReadings.size());
        if (nPoints != 0.0)
        {
          Vector2<kt_double> averagePosition = Vector2<kt_double>(rangePointsSum / nPoints);
          m_BarycenterPose = Pose2(averagePosition, 0.0);
        }
        else
        {
          m_BarycenterPose = scanPose;
        }

        // calculate bounding box of scan
        m_BoundingBox = BoundingBox2();
        // m_BoundingBox starts empty; each Add() grows it just enough to contain the given point, so after the loop it tightly bounds scanPose and all of m_PointReadings
        m_BoundingBox.Add(scanPose.GetPosition());
        forEach(PointVectorDouble, &m_PointReadings)
        {
          m_BoundingBox.Add(*iter);   // Add() keeps expanding the extent of m_BoundingBox
        }
      }

      m_IsDirty = false;
    }

6 MapperSensorManager::AddScan()

  /**
   * Adds scan to scan vector of device that recorded scan
   * @param pScan
   */
  void MapperSensorManager::AddScan(LocalizedRangeScan* pScan)
  {
    GetScanManager(pScan)->AddScan(pScan, m_NextScanId);   // add pScan to the ScanManager of the device that recorded it
    m_Scans.push_back(pScan);                              // also add pScan to this MapperSensorManager's own list; the two are related:
    m_NextScanId++;                                        // MapperSensorManager has a "typedef std::map<Name, ScanManager*> ScanManagerMap"
  }


  class ScanManager
  {
    /**
     * Adds scan to vector of processed scans tagging scan with given unique id
     * @param pScan
     */
    inline void AddScan(LocalizedRangeScan* pScan, kt_int32s uniqueId)
    {
      // assign state id to scan
      pScan->SetStateId(static_cast<kt_int32u>(m_Scans.size()));

      // assign unique id to scan
      pScan->SetUniqueId(uniqueId);

      // add it to scan buffer
      m_Scans.push_back(pScan);
    }
  }

7 void MapperSensorManager::AddRunningScan()

  /**
   * Adds scan to running scans of device that recorded scan
   * @param pScan
   */
  inline void MapperSensorManager::AddRunningScan(LocalizedRangeScan* pScan)
  {
    GetScanManager(pScan)->AddRunningScan(pScan);
  }


    /**
     * Adds scan to vector of running scans
     * @param pScan
     */
    void AddRunningScan(LocalizedRangeScan* pScan)
    {
      m_RunningScans.push_back(pScan);

      // vector has at least one element (first line of this function), so this is valid
      Pose2 frontScanPose = m_RunningScans.front()->GetSensorPose();
      Pose2 backScanPose = m_RunningScans.back()->GetSensorPose();

      // cap vector size and remove all scans from front of vector that are too far from end of vector
      // maintenance of the running scans: bound both their count and the distance between the first and last scan
      kt_double squaredDistance = frontScanPose.GetPosition().SquaredDistance(backScanPose.GetPosition());
      while (m_RunningScans.size() > m_RunningBufferMaximumSize ||
             squaredDistance > math::Square(m_RunningBufferMaximumDistance) - KT_TOLERANCE)
      {
        // remove front of running scans
        m_RunningScans.erase(m_RunningScans.begin());

        // recompute stats of running scans
        frontScanPose = m_RunningScans.front()->GetSensorPose();
        backScanPose = m_RunningScans.back()->GetSensorPose();
        squaredDistance = frontScanPose.GetPosition().SquaredDistance(backScanPose.GetPosition());
      }
    }

 

Summary:

The Process() flow

This chapter walked through the overall flow of scan matching. The initialization and parameters of the functions involved are covered in "Exploring karto: open_karto Chapter 1" and "Chapter 2 --- Parameter Analysis".

1. The initial pose of the current frame is the previous frame's pose plus the odometry change between the previous frame and the current frame.

2. If the scan satisfies the time, distance and angle conditions, it is a valid frame; scan matching is run on it to find the best pose.

3. The frame is added to the running scans.

4. The frame is added to the graph: a vertex and edges are created, and loop-closure detection and global graph optimization are run.

The scan matching flow

1. Get the current frame's initial pose, i.e. the guess obtained from odometry.

2. The ROI is the area of m_pCorrelationGrid: 2431 x 2431 cells, which is 12 m of laser range in every direction plus a 0.3 m margin for the smear kernel (with the 0.01 m resolution: (2 x 12 + 0.3) / 0.01 + 1 = 2431). The grid moves with the laser and is continually rebuilt from the running scans, which do not include the current scan. Karto's scan matcher only ever maintains a grid of this size, so its memory and CPU cost stay small.

So scan matching matches the current scan against a grid built from the scans of the recent past.

3. The map offset is the coordinate difference between the current scan and the center of m_pCorrelationGrid.

4. AddScans() marks the cells of m_pCorrelationGrid hit by the reference scan points as occupied: first every cell is reset to 0, then the points of each reference scan are filtered by distance and direction, the surviving points' cells are set to occupied, and SmearPoint() updates the cells around each point with the kernel.

5. Set the search range and resolution; the coarse match computes the positional covariance. The mean pose output by the coarse match is then used as the center of the fine match, which has its own smaller search range and resolution and computes the angular covariance.

6. Response computation: the response is the mean occupancy value of the grid cells the current scan lands on; if every point lands on an occupied cell the response is 1. The cells here only take the two states GridStates_Unknown = 0 and GridStates_Occupied = 100, which is why the normalization divides by GridStates_Occupied. The free state is only set in OccupancyGrid, which slam_karto uses.

7. Angle lookup tables: the current scan is rotated by each candidate angle, and each rotation angle gets one lookup table. A lookup table stores, for every data point of the current scan, the index of the grid cell it lands on after that rotation. The scan is never actually rotated; only the indices of the rotated points are recorded and later used to compute the response.

 

 

REFERENCES

Karto SLAM之open_karto代码学习笔记(一) https://blog.csdn.net/wphkadn/article/details/85144186

Karto SLAM之open_karto代码学习笔记(二) https://blog.csdn.net/wphkadn/article/details/90247146

Karto_slam框架与代码解析  https://blog.csdn.net/qq_24893115/article/details/52965410

open_karto with detailed Chinese annotations  https://github.com/kadn/open_karto

SLAM_Karto 学习(四) 深入理解 ScanMatch 过程  https://blog.csdn.net/Fourier_Legend/article/details/81558886?utm_source=blogxgwz0#process%E5%87%BD%E6%95%B0uml
