ORB-SLAM2 in Dynamic Environments: Achieving Robust Localization

Reference paper: DS-SLAM: A Semantic Visual SLAM towards Dynamic Environments (GitHub link)

Note: please cite this work if you use it directly.
GitHub: ORBSLAM2_Dynamic

Implementation Principle

The ORB feature extraction of ORB-SLAM2 is modified: the image is divided into dynamic and static regions, and ORB feature points are extracted only from the static regions.

getDynamicObjectBbox implementation

  • Object detection: YOLOv4 implemented with the darknet framework.
  • Dynamic feature point detection: feature points are tracked with LK optical flow, and a point is labeled dynamic when its distance to the corresponding epipolar line is too large (note: these feature points are only used to decide whether a detection box is dynamic; they are not the ORB feature points extracted in ORB-SLAM2). A sketch of this test is given after this list.
  • Dynamic region determination: the dynamic regions are the rectangular detection boxes, filtered by the number and proportion of dynamic feature points inside each box.
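
A minimal sketch of this epipolar test is shown below. It is not the repository's code: only OpenCV is assumed, and the function name FindDynamicPoints, the corner-detector settings, and the 1-pixel distance threshold are illustrative choices.

// Sketch: label LK-tracked points as dynamic when they lie far from their epipolar line.
#include <cmath>
#include <vector>
#include <opencv2/opencv.hpp>

std::vector<bool> FindDynamicPoints(const cv::Mat& imGrayPre, const cv::Mat& imGray,
                                    std::vector<cv::Point2f>& prevPts,
                                    std::vector<cv::Point2f>& currPts,
                                    double distTh = 1.0 /* pixels, illustrative */)
{
    // Track Shi-Tomasi corners from the previous frame with LK optical flow.
    cv::goodFeaturesToTrack(imGrayPre, prevPts, 500, 0.01, 8);
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(imGrayPre, imGray, prevPts, currPts, status, err);

    // Keep only the successfully tracked pairs.
    std::vector<cv::Point2f> p0, p1;
    for (size_t i = 0; i < status.size(); ++i)
        if (status[i]) { p0.push_back(prevPts[i]); p1.push_back(currPts[i]); }
    prevPts = p0;
    currPts = p1;

    std::vector<bool> isDynamic(currPts.size(), false);
    if (p0.size() < 8)
        return isDynamic;                      // not enough pairs to estimate F

    // Estimate the fundamental matrix with RANSAC from the tracked pairs.
    cv::Mat F = cv::findFundamentalMat(p0, p1, cv::FM_RANSAC, 1.0, 0.99);
    if (F.empty())
        return isDynamic;

    // A point is dynamic if its distance to the epipolar line l = F * x_prev is too large.
    for (size_t i = 0; i < p1.size(); ++i) {
        cv::Mat x = (cv::Mat_<double>(3, 1) << p0[i].x, p0[i].y, 1.0);
        cv::Mat l = F * x;
        double a = l.at<double>(0), b = l.at<double>(1), c = l.at<double>(2);
        double d = std::fabs(a * p1[i].x + b * p1[i].y + c) / std::sqrt(a * a + b * b);
        isDynamic[i] = d > distTh;
    }
    return isDynamic;
}

A detection box can then be flagged as dynamic when the count and proportion of such points inside it exceed chosen thresholds.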

getSpecifiedObjectBbox implementation

  • Object detection: YOLOv4 implemented with the darknet framework.
  • Dynamic region determination: the dynamic regions are the rectangular detection boxes; a box is selected directly when its object class belongs to a prior list of dynamic classes (see the sketch after this list).
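
As a rough illustration of this class-prior selection (not the repository's code), the sketch below uses darknet's C++ API from yolo_v2_class.hpp (Detector::detect is available when darknet is built with OPENCV); the function name SelectSpecifiedBoxes and its parameters are assumptions.

// Sketch: keep only YOLO detections whose class name is in the prior dynamic list.
#include <algorithm>
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include "yolo_v2_class.hpp"   // darknet C++ API: Detector, bbox_t

std::vector<bbox_t> SelectSpecifiedBoxes(Detector& detector,
                                         const std::vector<std::string>& objNames,
                                         const std::vector<std::string>& specifiedThings,
                                         const cv::Mat& imGray, float detectTh)
{
    // Run YOLOv4 on the current image (darknet expects a 3-channel image).
    cv::Mat imBGR;
    cv::cvtColor(imGray, imBGR, cv::COLOR_GRAY2BGR);
    std::vector<bbox_t> detections = detector.detect(imBGR, detectTh);

    // Select boxes whose label (e.g. "person") appears in the prior dynamic list.
    std::vector<bbox_t> selected;
    for (const bbox_t& box : detections) {
        const std::string& name = objNames[box.obj_id];
        if (std::find(specifiedThings.begin(), specifiedThings.end(), name) != specifiedThings.end())
            selected.push_back(box);
    }
    return selected;
}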

Usage Guide

Ubuntu 18.04 (GPU: RTX 2080 Ti)

Build

Install the ORB-SLAM2 dependencies.
Build darknet to generate the shared library libdarknet.so.
(CUDA and cuDNN are required when using the GPU.)

Run

  • First set the parameters in the .yaml file. The newly added parameters are shown below (a sketch of the entries follows this list):
    (image: yaml_set)
  • Download yolov4.weights or yolov4-tiny.weights into the darknet/weight directory. (download link)
  • The default libdarknet.so under darknet/ runs on the CPU; to run on the GPU, rename libdarknet_GPU.so to libdarknet.so.
  • The run commands are the same as for ORB-SLAM2.
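
A hedged sketch of the new .yaml entries appended to the existing camera settings file (e.g. config/TUM3.yaml) is shown below. The parameter names are exactly those read in Tracking::Tracking (see the Source Code Analysis section); the values and file paths are only illustrative.

Detect.SelectMethod: 2                 # 1: specified-object mode, 2: dynamic-object mode, otherwise plain ORB-SLAM2
Detect.BBoxZoomSize: 1.2               # each bbox is enlarged by (BBoxZoomSize-1)/2 on every side
Detect.SpecifiedThings: "person"       # prior dynamic classes, space- or comma-separated
Detect.Network.CfgFile: "darknet/cfg/yolov4.cfg"            # example path
Detect.Network.WeightFile: "darknet/weight/yolov4.weights"  # example path
Detect.LabelPath: "darknet/data/coco.names"                 # example path
Detect.Threshold: 0.5                  # detection confidence threshold
Detect.SaveDynaImD: 0                  # whether to save images of the detected dynamic regions (0/1)
Detect.SaveDynaImDPath: "./dyna/"      # example output path, used when SaveDynaImD is enabled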

Automation Tool

Run the script evaluate/run.sh: each dataset is run 5 times (the count can be changed), the mean absolute trajectory error is then computed, and the absolute trajectory error plots are drawn.

# Example usage
sh run.sh ~/Github/ORB_SLAM2_dynamic /media/sata3/TUM_Dynamic/dataset TS_SLAM
#!/bin/bash

workspace=$1  # workspace root directory
datasetPath=$2 # dataset root directory
resultFileName=$3 # text file to store the results

workspace=${workspace%*/}"/"
datasetPath=${datasetPath%*/}"/"
resultFile=$workspace"evaluate/"$resultFileName
tempFile=$workspace"evaluate/temp.txt" # stores the output of evaluate_ate.py
tempFile2=$workspace"evaluate/temp2.txt" # stores the output of Examples/RGB-D/rgbd_tum

folders="$(ls "$datasetPath")"
cd "$workspace" || return # enter the workspace directory
for foldername in $folders
do
	# >> appends to the file, > overwrites it
	echo "$foldername" >> "$resultFile"
	resultFloder=evaluate/$foldername
	mkdir "$resultFloder" 
	allFrames=$(awk 'END{print NR}' "$datasetPath$foldername""/associations.txt")
	
	# run each dataset 5 times and compute the mean
	for (( i=1; i<=5; i++ ))
	do
		command="Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt config/TUM3.yaml ""$datasetPath$foldername"" ""$datasetPath$foldername""/associations.txt"
		info=$(eval "$command") # execute the command string
		echo "$info" >> "$tempFile2"
		timeLine=$(sed -n '/^mean tracking time/p' "$tempFile2")
		time=${timeLine##* }
		mv -i CameraTrajectory.txt "$resultFloder""/CameraTrajectory""$i"".txt"
		mv -i KeyFrameTrajectory.txt "$resultFloder""/KeyFrameTrajectory""$i"".txt"
		command="python2 ./evaluate/evaluate_ate.py ""$datasetPath$foldername""/groundtruth.txt ./evaluate/""$foldername""/CameraTrajectory""$i"".txt --plot ""$resultFloder""/""$foldername""_""$i"".png"
		eval "$command"
		command="python2 ./evaluate/evaluate_ate.py ""$datasetPath$foldername""/groundtruth.txt ./evaluate/""$foldername""/""CameraTrajectory""$i"".txt --verbose"
		error=$(eval "$command")
		trackOkFrame=$(awk 'END{print NR}' ./evaluate/"$foldername"/CameraTrajectory"$i".txt)
		echo "$error"" ""$trackOkFrame"" ""$allFrames"" ""$time" >> "$resultFile"
		echo "$error"" ""$trackOkFrame"" ""$allFrames"" ""$time" >> "$tempFile"
		rm "$tempFile2"
	done
	command="python ./evaluate/tool.py"
	error=$(eval "$command")
	echo "$error" >> "$resultFile"
	rm "$tempFile"
done

Source Code Analysis

CMakeLists

Add the darknet-related shared library to the build.

ADD_DEFINITIONS(-DOPENCV)
ADD_DEFINITIONS(-DGPU)

include_directories(
${PROJECT_SOURCE_DIR}/darknet-master/include  # darknet
)

add_library(${PROJECT_NAME} SHARED
src/DynamicObjectDetecting.cc
)

find_library(darknet libdarknet_yolov4.so ${PROJECT_SOURCE_DIR}/darknet-master/)

target_link_libraries(${PROJECT_NAME}
${darknet}
)

Tracking::Tracking

Read the parameters from the .yaml file and initialize the object detector. Detect.SpecifiedThings is parsed as a space- or comma-separated list of class names (e.g. "person, car" yields two entries).

Tracking::Tracking(System *pSys, ORBVocabulary* pVoc, FrameDrawer *pFrameDrawer, MapDrawer *pMapDrawer, Map *pMap, KeyFrameDatabase* pKFDB, const string &strSettingPath, const int sensor, DynamicObjectDetecting& dynamicObjectDetector):
	......
    double BBoxZoomSize = 1;
    fSettings["Detect.BBoxZoomSize"] >> BBoxZoomSize;
    mBBoxSingleSideZoomSize = (BBoxZoomSize-1)/2;
    fSettings["Detect.SelectMethod"] >>  mSelectMethod;
    fSettings["Detect.SaveDynaImD"] >> mbSaveDynaImD; 
    fSettings["Detect.SaveDynaImDPath"] >>  mSaveDynaImDPath;

    std::cout << endl << "Parameters for Dynamic SLAM:" << std::endl;
    switch (mSelectMethod) {
        case 1:
            std::cout << "ORB-SLAM2 in dynamic environment and detect the specified object." << std::endl;
            break;
        case 2:
            std::cout << "ORB-SLAM2 in dynamic environment and detect the dynamic object." << std::endl;
            break;
        default :
            mbSaveDynaImD = false;
            std::cout << "Please Select 'Detect.SelectMethod' 1(Specified), 2(Dynamic)." << std::endl;
            std::cout << "Now, it runs just as ORB-SLAM2." << std::endl;
    }

    string strThings;
    fSettings["Detect.SpecifiedThings"] >> strThings;
    fSettings["Detect.Network.CfgFile"] >> mCfgFile; 
    fSettings["Detect.Network.WeightFile"] >> mWeightFile;
    fSettings["Detect.LabelPath"] >> mLabelPath;
    fSettings["Detect.Threshold"] >> mDetectTh;

    // Split strThings on spaces/commas into individual class names.
    string name = "";
    for (int i = 0; i < strThings.size(); i++) {
        if (strThings[i]==' ' || strThings[i]==','){
            if (name.size() > 0) {
                mSpecifiedThings.push_back(name);
                name = "";
            }
            continue;
        }
        name += strThings[i];
    }
    if (name.size() > 0) {
        mSpecifiedThings.push_back(name);
    }
    
    dynamicObjectDetector.init(mSpecifiedThings, mCfgFile, mWeightFile, mLabelPath, mDetectTh);
}

Tracking::GrabImageRGBD

Filter out the detection boxes that contain dynamic objects.

cv::Mat Tracking::GrabImageRGBD(const cv::Mat &imRGB,const cv::Mat &imD, const double &timestamp, const double &timestampD, DynamicObjectDetecting& dynamicObjectDetector)
{
	......
	// Different bbox selection methods; dynaObjs stores the selected boxes
    std::vector<bbox_t>  dynaObjs;
    switch (mSelectMethod) {
        case 1:
            dynaObjs = dynamicObjectDetector.getSpecifiedObjectBbox(mImGray);
            break;
        case 2:
            dynaObjs = dynamicObjectDetector.getDynamicObjectBbox(mImGray);
            break;
    }
    // ORB feature extraction happens when each Frame is constructed, so the feature points in dynamic regions are removed there
    mCurrentFrame = Frame(mImGray,imDepth,timestamp,mpORBextractorLeft,mpORBVocabulary,mK,mDistCoef,mbf,mThDepth, dynaObjs, mBBoxSingleSideZoomSize);
    // Drawing the bboxes on the image would interfere with keypoint extraction, so this is done after the keypoints have been extracted
    mpFrameDrawer->draw_boxes(mImGray, dynaObjs, dynamicObjectDetector.id2name);
    Track();
    // after tracking, store the current image as the previous frame (for the optical flow of the next frame)
    dynamicObjectDetector.setImGrayPre(mImGray);
    return mCurrentFrame.mTcw.clone();
}

ORBextractor::ComputeStaticKeyPointsOctTree

Remove the feature points that fall inside dynamic regions.

void ORBextractor::ComputeStaticKeyPointsOctTree(vector<vector<KeyPoint> >& allKeypoints, const std::vector<bbox_t>& dynaObjs, const double bboxSingleSideZoomSize)
{
        ......   
        // *************************************************************
        vector<vector<bool>> g(maxBorderY+1,vector<bool>(maxBorderX+1,true));
        // mark the dynamic regions so that keypoints inside them are discarded
        for (bbox_t bbox: dynaObjs) {
            // enlarge the bbox by bboxSingleSideZoomSize (e.g. 0.1) on each side
            int x1 = round((float)(bbox.x-bboxSingleSideZoomSize*bbox.w)*scale);
            int y1 = round((float)(bbox.y-bboxSingleSideZoomSize*bbox.h)*scale);
            int x2 = round((float)(bbox.x+(1+bboxSingleSideZoomSize)*bbox.w)*scale);
            int y2 = round((float)(bbox.y+(1+bboxSingleSideZoomSize)*bbox.h)*scale);
            y1 = y1 < minBorderY ? minBorderY : y1;
            y2 = y2 >= maxBorderY ? maxBorderY : y2;
            x1 = x1 < minBorderX ? minBorderX : x1;
            x2 = x2 >= maxBorderX ? maxBorderX : x2;
            for (size_t i = y1; i <= y2; i++) {
                for (size_t j = x1; j <= x2; j++) {
                    g[i][j] = false;
                }
            }
        }
        // *************************************************************

        for(int i=0; i<nRows; i++)
        {
			......
            for(int j=0; j<nCols; j++)
            {
            	......
                // *************************************************************
                vector<cv::KeyPoint> vKeysCellStatic;
                for (cv::KeyPoint kp: vKeysCell) {
                    if (g[kp.pt.y+i*hCell+minBorderY][kp.pt.x+j*wCell+minBorderX]) { // keep only keypoints outside the detection boxes
                        vKeysCellStatic.push_back(kp);
                    }
                }

                if(!vKeysCellStatic.empty())
                {
                    for(vector<cv::KeyPoint>::iterator vit=vKeysCellStatic.begin(); vit!=vKeysCellStatic.end();vit++)
                    {
                        (*vit).pt.x+=j*wCell;
                        (*vit).pt.y+=i*hCell;
                        vToDistributeKeys.push_back(*vit);
                    }
                }
                // *************************************************************
            }
        }
        ......
}

DynamicObjectDetecting.cc

A newly added class that implements the two functions getDynamicObjectBbox and getSpecifiedObjectBbox.
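
The class itself is not reproduced here, so the following is only a reconstructed interface sketch based on the call sites shown above (init, getSpecifiedObjectBbox, getDynamicObjectBbox, setImGrayPre, id2name); the private members, exact types, and signatures are assumptions.

// Hypothetical interface of DynamicObjectDetecting, reconstructed from its usage above.
#include <memory>
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include "yolo_v2_class.hpp"   // darknet C++ API: Detector, bbox_t

class DynamicObjectDetecting
{
public:
    // Load the YOLO network and the prior list of dynamic class names.
    void init(const std::vector<std::string>& specifiedThings,
              const std::string& cfgFile, const std::string& weightFile,
              const std::string& labelPath, float detectThreshold);

    // Mode 1: keep every detection whose class is in the prior dynamic list.
    std::vector<bbox_t> getSpecifiedObjectBbox(const cv::Mat& imGray);

    // Mode 2: keep detections that contain enough dynamic feature points,
    // judged by LK optical flow and the distance to the epipolar line.
    std::vector<bbox_t> getDynamicObjectBbox(const cv::Mat& imGray);

    // Store the current frame as the previous frame for the next optical-flow step.
    void setImGrayPre(const cv::Mat& imGray);

    std::vector<std::string> id2name;          // class id -> label name (type is an assumption)

private:
    std::shared_ptr<Detector> mpDetector;      // assumed darknet detector instance
    std::vector<std::string> mSpecifiedThings; // assumed prior dynamic classes
    cv::Mat mImGrayPre;                        // assumed previous gray frame
    float mDetectTh;                           // assumed detection threshold
};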
