Interpreting the formula
This post uses g2o to reproduce the UWB RO-SLAM approach proposed in the paper Multi-modal Mapping and Localization of Unmanned Aerial Robots based on Ultra-Wideband and RGB-D sensing.
The part to reproduce: in the paper's cost function (a reconstructed sketch is given at the end of this section), $X_i$ is the robot pose, $E(\cdot)$ is the odometry least-squares error, $c_{ij}$ is an occlusion coefficient that is 0 whenever the tag is outside the signal range of anchor $j$ at time $i$, $d_{ij}$ is the range measurement, $b_j$ is the position of UWB anchor $j$, and $B$ is the set of all $b_j$.
Because the optimization depends on a good initial guess, randomly initializing the UWB anchor positions will almost certainly converge to a wrong local minimum (in the paper's own words). The paper therefore proposes a way to provide initial estimates of the anchor positions. In plain terms: when the first range measurement arrives, draw a circle whose radius is that range and take several points on the circle as initial anchor hypotheses. For example, suppose the tag receives range $d_{ij}$ from anchor $j$ at time $i$; then draw a circle of radius $d_{ij}$ centered at the current tag position $P_i$ and take several points on it as initial position hypotheses for anchor $j$.
Although each anchor starts with several hypotheses, as more observations come in the hypotheses gradually collapse onto the most likely value (provided the robot's motion removes the uncertainty in the anchor positions; more on that later).
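The paper's cost function is not reproduced verbatim here; based on the symbol definitions above it is, up to notation, a sketch of the form

$$ \{X^{*}, B^{*}\} = \arg\min_{X,\,B}\; E(X) + \sum_{i}\sum_{j} c_{ij}\,\bigl(d_{ij} - \lVert X_i - b_j \rVert\bigr)^{2}, $$

i.e. the odometry least-squares term plus, for every visible tag-anchor pair, the squared difference between the measured range and the distance between pose $X_i$ and anchor $b_j$.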
Implementing it with g2o
Where this implementation differs from the paper
This post does not follow the paper exactly; the differences are listed up front:
- The robot poses are not optimized. The paper uses odometry to jointly optimize the robot poses; since I was too lazy to set up a robot, this post does not optimize the poses and simply assumes they are already known.
- Every anchor gets the same number of hypotheses, whereas the paper sets the number from the first range measurement: distant anchors get more hypotheses, nearby ones fewer (see the sketch below).
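To illustrate the paper's distance-dependent hypothesis count, a minimal sketch follows; the spacing constant and the helper name are my own assumptions, not taken from the paper.

#include <algorithm>
#include <cmath>

// Hypothetical helper: place hypotheses roughly every `spacing` units along the
// initial circle of radius `first_range`, never using fewer than four.
int NumHypotheses(double first_range, double spacing) {
    int n = static_cast<int>(std::ceil(2.0 * M_PI * first_range / spacing));
    return std::max(n, 4);
}

This post instead fixes the count to HYPOTHESIS_ANCHOR_NUM = 12 for every anchor.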
Vertex definition
The optimization variables are the UWB anchor positions, so each vertex is an anchor position.
class UWBAnchorVertex: public g2o::BaseVertex<3, Eigen::Vector3d>
{
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW
    virtual void setToOriginImpl() // reset
    {
        _estimate << 0, 0, 0;
    }
    virtual void oplusImpl( const double* update ) // update
    {
        _estimate += Eigen::Vector3d(update);
    }
    // reading from / writing to disk: not used
    virtual bool read( istream& in ) { return false; }
    virtual bool write( ostream& out ) const { return false; }
};
Edge definition
Each edge is the difference between the range measurement and the distance from the robot to the UWB anchor, $e_i = \lvert \mathrm{range}_{ij} - \mathrm{dis}(P_i, b_j) \rvert$. Because each anchor has several hypotheses (the code sets the hypothesis count to HYPOTHESIS_ANCHOR_NUM), the total residual of the $j$-th anchor is the sum of its hypotheses' residuals divided by the hypothesis count.
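Written out per hypothesis (the superscript notation is mine, chosen to match the code below), the residual attached to the $k$-th hypothesis $b_j^{(k)}$ of anchor $j$ at time $i$ is

$$ e_{ij}^{(k)} = \frac{1}{K}\,\bigl\lvert\, d_{ij} - \lVert P_i - b_j^{(k)} \rVert \,\bigr\rvert, \qquad K = \mathrm{HYPOTHESIS\_ANCHOR\_NUM}, $$

so summing over the $K$ hypotheses of one anchor gives exactly the averaged residual described above.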
class UWBRangeEdge: public g2o::BaseUnaryEdge<1,double,UWBAnchorVertex>
{
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW
    UWBRangeEdge( const Eigen::Vector3d& tag_pos): BaseUnaryEdge(), _tag_pos(tag_pos){}
    void computeError() // range residual
    {
        const UWBAnchorVertex* v = static_cast<const UWBAnchorVertex*> (_vertices[0]); // the connected vertex, i.e. the UWB anchor estimate
        const Eigen::Vector3d anchor_pos = v->estimate(); // current estimate of the anchor position
        _error(0,0) = abs(_measurement - (anchor_pos - _tag_pos).norm()) / HYPOTHESIS_ANCHOR_NUM;
    }
    virtual bool read( istream& in ) { return false; }
    virtual bool write( ostream& out ) const { return false; }
public:
    Eigen::Vector3d _tag_pos; // robot position at time i
};
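Because linearizeOplus() is not overridden, g2o falls back to numeric differentiation of this residual. If an analytic Jacobian is preferred, a member like the following could be added inside UWBRangeEdge (my own sketch, not part of the original code; it ignores the non-differentiable point where the range error is exactly zero):

// Jacobian of e = |d - ||b - p||| / K with respect to the anchor position b:
//   de/db = -sign(d - ||b - p||) * (b - p)^T / ||b - p|| / K
virtual void linearizeOplus()
{
    const UWBAnchorVertex* v = static_cast<const UWBAnchorVertex*>(_vertices[0]);
    const Eigen::Vector3d diff = v->estimate() - _tag_pos;    // b - p
    const double dist = diff.norm();
    if (dist < 1e-9) { _jacobianOplusXi.setZero(); return; }  // tag sitting exactly on the hypothesis
    const double sgn = (_measurement - dist) >= 0 ? 1.0 : -1.0;
    _jacobianOplusXi = -sgn * diff.transpose() / (dist * HYPOTHESIS_ANCHOR_NUM);
}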
Overall demo workflow
Set the ground-truth anchor positions → generate the robot trajectory → generate UWB range data for that trajectory → set up the g2o solver → add the vertices and edges → solve.
Setting the ground-truth anchor positions
vector<Eigen::Vector3d> GetActualAnchorPos() {
    vector<Eigen::Vector3d> ret;
    ret.emplace_back(Eigen::Vector3d(0, 0, 0));
    ret.emplace_back(Eigen::Vector3d(L, 0, 0));
    ret.emplace_back(Eigen::Vector3d(0, L, 0));
    ret.emplace_back(Eigen::Vector3d(L, L, L));
    return ret;
}
Generating the robot trajectory
vector<Eigen::Vector3d> GetTagPos(int N) {
    vector<Eigen::Vector3d> ret;
    Eigen::Vector3d cur_tag_pos(0, 0, 0);
    Eigen::Vector3d tag_pos_step(STEP_L, STEP_L, STEP_L);
    Eigen::Vector3d tag_pos_step_1(STEP_L, 0, 0);
    Eigen::Vector3d tag_pos_step_2(0, STEP_L, 0);
    Eigen::Vector3d tag_pos_step_3(0, 0, STEP_L);
    for (int i = 0 ; i < N / 3; i++) {
        ret.push_back(cur_tag_pos);
        cur_tag_pos += tag_pos_step_1;
    }
    for (int i = N / 3; i < N * 2 / 3; i++) {
        ret.push_back(cur_tag_pos);
        cur_tag_pos += tag_pos_step_2;
    }
    for (int i = N * 2 / 3; i < N; i++) {
        ret.push_back(cur_tag_pos);
        cur_tag_pos += tag_pos_step_3;
    }
    return ret;
}
One thing to watch out for: if the robot trajectory is a single straight line and that line passes through an anchor, that anchor's position cannot be fully determined (explained later). It is therefore better to let the robot follow a curve, or at least several different straight lines; a possible curved trajectory is sketched below.
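As a concrete curved option, a generator such as the following could be swapped in for GetTagPos (my own example, not from the original demo); it circles the anchor square once while climbing, so no straight-line symmetry is left:

vector<Eigen::Vector3d> GetTagPosHelix(int N) {
    vector<Eigen::Vector3d> ret;
    const double radius = L / 2.0;
    for (int i = 0; i < N; i++) {
        double t = 2.0 * M_PI * i / N;          // one full turn over the N steps
        double x = L / 2.0 + radius * cos(t);
        double y = L / 2.0 + radius * sin(t);
        double z = STEP_L * i;                  // climb steadily in z
        ret.emplace_back(Eigen::Vector3d(x, y, z));
    }
    return ret;
}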
Generating UWB range data for the trajectory
// tag_pos: the robot trajectory
vector<vector<double>> GetNoiseRanges(const vector<Eigen::Vector3d>& actual_anchor_pos, const vector<Eigen::Vector3d>& tag_pos, double sigma) {
    cv::RNG rng; // OpenCV random number generator
    int N = tag_pos.size();
    int anchors_num = actual_anchor_pos.size();
    vector<vector<double>> noise_data(N, vector<double>(anchors_num, 0));
    for (int i = 0 ; i < N; i++) {
        for (int anchor_index = 0; anchor_index < anchors_num; anchor_index++) {
            // true range plus Gaussian noise; note the noise is scaled by 1/HYPOTHESIS_ANCHOR_NUM,
            // so the effective standard deviation is sigma / HYPOTHESIS_ANCHOR_NUM
            noise_data[i][anchor_index] = abs((tag_pos[i] - actual_anchor_pos[anchor_index]).norm()) + rng.gaussian ( sigma ) / HYPOTHESIS_ANCHOR_NUM;
        }
    }
    return noise_data;
}
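OpenCV is linked only for rng.gaussian; if you would rather avoid that dependency, an equivalent generator using the standard library (my own substitution; the noise statistics are the same, only the random sequence differs) could look like this:

#include <random>

vector<vector<double>> GetNoiseRangesStd(const vector<Eigen::Vector3d>& actual_anchor_pos,
                                         const vector<Eigen::Vector3d>& tag_pos, double sigma) {
    std::mt19937 gen(42);                                 // fixed seed for reproducibility
    std::normal_distribution<double> noise(0.0, sigma);   // zero-mean Gaussian noise
    int N = tag_pos.size();
    int anchors_num = actual_anchor_pos.size();
    vector<vector<double>> noise_data(N, vector<double>(anchors_num, 0));
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < anchors_num; j++) {
            noise_data[i][j] = (tag_pos[i] - actual_anchor_pos[j]).norm()
                               + noise(gen) / HYPOTHESIS_ANCHOR_NUM;   // same scaling as the original
        }
    }
    return noise_data;
}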
Setting up the g2o solver
// Build the graph optimization problem; configure g2o first
typedef g2o::BlockSolver< g2o::BlockSolverTraits<3,1> > Block; // the optimized variable is 3-dimensional, the error is 1-dimensional
Block::LinearSolverType* linearSolver = new g2o::LinearSolverDense<Block::PoseMatrixType>(); // linear solver
Block* solver_ptr = new Block( linearSolver ); // block solver
// Descent strategy: choose among GN, LM and DogLeg
g2o::OptimizationAlgorithmLevenberg* solver = new g2o::OptimizationAlgorithmLevenberg( solver_ptr );
g2o::SparseOptimizer optimizer; // the graph
optimizer.setAlgorithm( solver ); // set the algorithm
optimizer.setVerbose( true ); // enable debug output
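A note on g2o versions: releases from roughly 2017 onward changed these constructors to take std::unique_ptr, so the raw-pointer setup above may not compile there. A hedged equivalent for those versions (adjust to whatever your installed headers expect) is:

#include <memory>

typedef g2o::BlockSolver<g2o::BlockSolverTraits<3, 1>> Block;
auto linearSolver = std::make_unique<g2o::LinearSolverDense<Block::PoseMatrixType>>();
auto* solver = new g2o::OptimizationAlgorithmLevenberg(std::make_unique<Block>(std::move(linearSolver)));
g2o::SparseOptimizer optimizer;
optimizer.setAlgorithm(solver);
optimizer.setVerbose(true);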
Adding the vertices and edges
// add vertices and edges to the graph
int edge_id = 0;
for (int tag_pos_index = 0; tag_pos_index < N; tag_pos_index++) {
    if (tag_pos_index == 0) { // first set of data: add the anchor hypotheses as vertices
        for (int anchor_index = 0; anchor_index < num_anchors; anchor_index++) { // each anchor is initialized with HYPOTHESIS_ANCHOR_NUM hypothesis vertices
            double x_tag = tag_pos[tag_pos_index].x(); // current robot position
            double y_tag = tag_pos[tag_pos_index].y(); // current robot position
            double r = noise_ranges[tag_pos_index][anchor_index]; // first range measurement
            // this loop computes the hypothesis positions of the anchor on a circle of radius r
            for ( int vertex_index = 0; vertex_index < HYPOTHESIS_ANCHOR_NUM; vertex_index++ ) {
                UWBAnchorVertex* v = new UWBAnchorVertex();
                double x = x_tag + r * cos(2 * M_PI / HYPOTHESIS_ANCHOR_NUM * vertex_index);
                double y = y_tag + r * sin(2 * M_PI / HYPOTHESIS_ANCHOR_NUM * vertex_index);
                double z = anchor_index == 3 ? L : 0; // anchor 3 is the only elevated one in this demo
                v->setEstimate( Eigen::Vector3d(x, y, z)); // initial estimate
                int id = anchor_index * HYPOTHESIS_ANCHOR_NUM + vertex_index;
                v->setId(id);
                optimizer.addVertex(v);
            }
        }
    }
    // add the range measurements to the optimization problem
    for (int anchor_index = 0; anchor_index < num_anchors; anchor_index++) {
        for ( int vertex_index = 0; vertex_index < HYPOTHESIS_ANCHOR_NUM; vertex_index++ ) {
            UWBRangeEdge* edge = new UWBRangeEdge(tag_pos[tag_pos_index]);
            edge->setId(edge_id++);
            edge->setVertex(0, optimizer.vertex(anchor_index * HYPOTHESIS_ANCHOR_NUM + vertex_index));
            edge->setMeasurement( noise_ranges[tag_pos_index][anchor_index] ); // measured range
            edge->setInformation( Eigen::Matrix<double,1,1>::Identity());
            optimizer.addEdge( edge );
        }
    }
}
Running the optimization
// run the optimization
cout<<"start optimization"<<endl;
chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
optimizer.initializeOptimization();
optimizer.optimize(100);
chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>>( t2-t1 );
cout<<"solve time cost = "<<time_used.count()<<" seconds. "<<endl;
// print the optimized estimates
for (int anchor_index = 0; anchor_index < num_anchors; anchor_index++) {
    for (int vertex_index = 0; vertex_index < HYPOTHESIS_ANCHOR_NUM; vertex_index++) {
        auto v = static_cast<const UWBAnchorVertex*> (optimizer.vertex(anchor_index * HYPOTHESIS_ANCHOR_NUM + vertex_index));
        cout << "Anchor " << anchor_index << ", hypothesis " << vertex_index << ": \n" << v->estimate() << endl;
    }
}
// collect all hypotheses into a matrix and print it
Eigen::Matrix<double, HYPOTHESIS_ANCHOR_NUM * 4, 3> result; // 4 anchors in this demo
for (int anchor_index = 0; anchor_index < num_anchors; anchor_index++) {
    for (int vertex_index = 0; vertex_index < HYPOTHESIS_ANCHOR_NUM; vertex_index++) {
        auto v = static_cast<const UWBAnchorVertex*> (optimizer.vertex(anchor_index * HYPOTHESIS_ANCHOR_NUM + vertex_index));
        result.block(anchor_index * HYPOTHESIS_ANCHOR_NUM + vertex_index,0,1,3) = v->estimate().transpose();
    }
}
cout << result;
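The demo only prints every hypothesis. If a single position per anchor is wanted, one option (my own addition, not from the post or the paper) is to keep, for each anchor, the hypothesis whose edges accumulate the smallest chi2 after optimization:

// Sketch: pick the best hypothesis of each anchor by accumulated chi2.
vector<double> chi2_per_vertex(num_anchors * HYPOTHESIS_ANCHOR_NUM, 0.0);
for (auto* e : optimizer.edges()) {
    auto* range_edge = dynamic_cast<UWBRangeEdge*>(e);
    if (!range_edge) continue;
    range_edge->computeError(); // refresh the residual at the final estimate
    chi2_per_vertex[range_edge->vertex(0)->id()] += range_edge->chi2();
}
for (int anchor_index = 0; anchor_index < num_anchors; anchor_index++) {
    int best = anchor_index * HYPOTHESIS_ANCHOR_NUM;
    for (int k = 1; k < HYPOTHESIS_ANCHOR_NUM; k++) {
        int id = anchor_index * HYPOTHESIS_ANCHOR_NUM + k;
        if (chi2_per_vertex[id] < chi2_per_vertex[best]) best = id;
    }
    auto* v = static_cast<const UWBAnchorVertex*>(optimizer.vertex(best));
    cout << "Anchor " << anchor_index << " best hypothesis: " << v->estimate().transpose() << endl;
}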
Complete code
#include <iostream>
#include <g2o/core/base_vertex.h>
#include <g2o/core/base_unary_edge.h>
#include <g2o/core/base_binary_edge.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
#include <g2o/core/optimization_algorithm_gauss_newton.h>
#include <g2o/core/optimization_algorithm_dogleg.h>
#include <g2o/solvers/dense/linear_solver_dense.h>
#include <Eigen/Core>
#include <opencv2/core/core.hpp>
#include <cmath>
#include <chrono>
using namespace std;

const int HYPOTHESIS_ANCHOR_NUM = 12;
const int L = 10000;        // anchor placement parameter
const int STEP_L = L / 100; // tag step size

// Anchor position vertex; template parameters: dimension of the optimized variable and its type
class UWBAnchorVertex: public g2o::BaseVertex<3, Eigen::Vector3d>
{
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW
    virtual void setToOriginImpl() // reset
    {
        _estimate << 0,0,0;
    }
    virtual void oplusImpl( const double* update ) // update
    {
        _estimate += Eigen::Vector3d(update);
    }
    // reading from / writing to disk: not used
    virtual bool read( istream& in ) { return false; }
    virtual bool write( ostream& out ) const { return false; }
};

// Error model; template parameters: measurement dimension, measurement type, connected vertex type
class UWBRangeEdge: public g2o::BaseUnaryEdge<1,double,UWBAnchorVertex>
{
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW
    UWBRangeEdge( const Eigen::Vector3d& tag_pos): BaseUnaryEdge(), _tag_pos(tag_pos){}
    // range residual
    void computeError()
    {
        const UWBAnchorVertex* v = static_cast<const UWBAnchorVertex*> (_vertices[0]); // the connected vertex, i.e. the anchor estimate
        const Eigen::Vector3d anchor_pos = v->estimate(); // current estimate of the anchor position
        _error(0,0) = abs(_measurement - (anchor_pos - _tag_pos).norm()) / HYPOTHESIS_ANCHOR_NUM;
        // _error(0,0) = abs(_measurement - (anchor_pos - _tag_pos).norm());
    }
    virtual bool read( istream& in ) { return false; }
    virtual bool write( ostream& out ) const { return false; }
public:
    Eigen::Vector3d _tag_pos; // robot position at time i
};

vector<Eigen::Vector3d> GetActualAnchorPos() {
    vector<Eigen::Vector3d> ret;
    ret.emplace_back(Eigen::Vector3d(0, 0, 0));
    ret.emplace_back(Eigen::Vector3d(L, 0, 0));
    ret.emplace_back(Eigen::Vector3d(0, L, 0));
    ret.emplace_back(Eigen::Vector3d(L, L, L));
    return ret;
}

vector<Eigen::Vector3d> GetTagPos(int N) {
    vector<Eigen::Vector3d> ret;
    Eigen::Vector3d cur_tag_pos(0, 0, 0);
    Eigen::Vector3d tag_pos_step(STEP_L, STEP_L, STEP_L); // not used here; used in the single-line experiment later
    Eigen::Vector3d tag_pos_step_1(STEP_L, 0, 0);
    Eigen::Vector3d tag_pos_step_2(0, STEP_L, 0);
    Eigen::Vector3d tag_pos_step_3(0, 0, STEP_L);
    for (int i = 0 ; i < N / 3; i++) {
        ret.push_back(cur_tag_pos);
        cur_tag_pos += tag_pos_step_1;
    }
    for (int i = N / 3; i < N * 2 / 3; i++) {
        ret.push_back(cur_tag_pos);
        cur_tag_pos += tag_pos_step_2;
    }
    for (int i = N * 2 / 3; i < N; i++) {
        ret.push_back(cur_tag_pos);
        cur_tag_pos += tag_pos_step_3;
    }
    return ret;
}

vector<vector<double>> GetNoiseRanges(const vector<Eigen::Vector3d>& actual_anchor_pos, const vector<Eigen::Vector3d>& tag_pos, double sigma) {
    cv::RNG rng; // OpenCV random number generator
    int N = tag_pos.size();
    int anchors_num = actual_anchor_pos.size();
    vector<vector<double>> noise_data(N, vector<double>(anchors_num, 0));
    for (int i = 0 ; i < N; i++) {
        for (int anchor_index = 0; anchor_index < anchors_num; anchor_index++) {
            // noise_data[i][anchor_index] = abs((cur_tag_pos - actual_anchor_pos[anchor_index]).norm()) + rng.gaussian ( sigma );
            noise_data[i][anchor_index] = abs((tag_pos[i] - actual_anchor_pos[anchor_index]).norm()) + rng.gaussian ( sigma ) / HYPOTHESIS_ANCHOR_NUM;
        }
    }
    return noise_data;
}

// Helper: prints the true anchor-to-anchor distances (not called in main)
void InitDistanceTable(const vector<Eigen::Vector3d>& actual_anchor_pos, vector<vector<double>>& table) {
    cout << "true anchor-to-anchor distances:\n";
    for (int i = 0; i < actual_anchor_pos.size(); i++) {
        for (int j = 0; j < actual_anchor_pos.size(); j++) {
            if (i == j) continue;
            table[i][j] = abs((actual_anchor_pos[i] - actual_anchor_pos[j]).norm());
            cout << "distance between anchor " << i << " and anchor " << j << ": \n" << table[i][j] << endl;
        }
    }
}

int main( int argc, char** argv )
{
    auto actual_anchor_pos = GetActualAnchorPos(); // ground-truth anchor positions
    double w_sigma = 5.0; // noise sigma
    int N = 100;          // number of observations
    auto tag_pos = GetTagPos(N);
    auto noise_ranges = GetNoiseRanges(actual_anchor_pos, tag_pos, w_sigma); // generate noisy ranges
    // Build the graph optimization problem; configure g2o first
    // @todo: how should these two solvers be chosen, and how do they affect the solution?
    typedef g2o::BlockSolver< g2o::BlockSolverTraits<3,1> > Block; // the optimized variable is 3-dimensional, the error is 1-dimensional
    Block::LinearSolverType* linearSolver = new g2o::LinearSolverDense<Block::PoseMatrixType>(); // linear solver
    Block* solver_ptr = new Block( linearSolver ); // block solver
    // Descent strategy: choose among GN, LM and DogLeg
    g2o::OptimizationAlgorithmLevenberg* solver = new g2o::OptimizationAlgorithmLevenberg( solver_ptr );
    g2o::SparseOptimizer optimizer; // the graph
    optimizer.setAlgorithm( solver ); // set the algorithm
    optimizer.setVerbose( true ); // enable debug output
    // add vertices and edges to the graph
    int num_anchors = actual_anchor_pos.size();
    int edge_id = 0;
    for (int tag_pos_index = 0; tag_pos_index < N; tag_pos_index++) {
        if (tag_pos_index == 0) { // first set of data: add the anchor hypotheses as vertices
            for (int anchor_index = 0; anchor_index < num_anchors; anchor_index++) { // each anchor is initialized with HYPOTHESIS_ANCHOR_NUM hypothesis vertices
                double x_tag = tag_pos[tag_pos_index].x();
                double y_tag = tag_pos[tag_pos_index].y();
                double r = noise_ranges[tag_pos_index][anchor_index];
                for ( int vertex_index = 0; vertex_index < HYPOTHESIS_ANCHOR_NUM; vertex_index++ ) {
                    UWBAnchorVertex* v = new UWBAnchorVertex();
                    double x = x_tag + r * cos(2 * M_PI / HYPOTHESIS_ANCHOR_NUM * vertex_index);
                    double y = y_tag + r * sin(2 * M_PI / HYPOTHESIS_ANCHOR_NUM * vertex_index);
                    double z = anchor_index == 3 ? L : 0;
                    v->setEstimate( Eigen::Vector3d(x, y, z)); // initial estimate
                    int id = anchor_index * HYPOTHESIS_ANCHOR_NUM + vertex_index;
                    v->setId(id);
                    optimizer.addVertex(v);
                }
            }
        }
        // add the range measurements to the optimization problem
        for (int anchor_index = 0; anchor_index < num_anchors; anchor_index++) {
            for ( int vertex_index = 0; vertex_index < HYPOTHESIS_ANCHOR_NUM; vertex_index++ ) {
                UWBRangeEdge* edge = new UWBRangeEdge(tag_pos[tag_pos_index]);
                edge->setId(edge_id++);
                edge->setVertex(0, optimizer.vertex(anchor_index * HYPOTHESIS_ANCHOR_NUM + vertex_index));
                edge->setMeasurement( noise_ranges[tag_pos_index][anchor_index] ); // measured range
                edge->setInformation( Eigen::Matrix<double,1,1>::Identity());
                optimizer.addEdge( edge );
            }
        }
    }
    // run the optimization
    cout<<"start optimization"<<endl;
    chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
    optimizer.initializeOptimization();
    optimizer.optimize(100);
    chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
    chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>>( t2-t1 );
    cout<<"solve time cost = "<<time_used.count()<<" seconds. "<<endl;
    // print the optimized estimates
    for (int anchor_index = 0; anchor_index < num_anchors; anchor_index++) {
        for (int vertex_index = 0; vertex_index < HYPOTHESIS_ANCHOR_NUM; vertex_index++) {
            auto v = static_cast<const UWBAnchorVertex*> (optimizer.vertex(anchor_index * HYPOTHESIS_ANCHOR_NUM + vertex_index));
            cout << "Anchor " << anchor_index << ", hypothesis " << vertex_index << ": \n" << v->estimate() << endl;
        }
    }
    // collect all hypotheses into a matrix and print it
    Eigen::Matrix<double, HYPOTHESIS_ANCHOR_NUM * 4, 3> result; // 4 anchors in this demo
    for (int anchor_index = 0; anchor_index < num_anchors; anchor_index++) {
        for (int vertex_index = 0; vertex_index < HYPOTHESIS_ANCHOR_NUM; vertex_index++) {
            auto v = static_cast<const UWBAnchorVertex*> (optimizer.vertex(anchor_index * HYPOTHESIS_ANCHOR_NUM + vertex_index));
            result.block(anchor_index * HYPOTHESIS_ANCHOR_NUM + vertex_index,0,1,3) = v->estimate().transpose();
        }
    }
    cout << result;
    return 0;
}
Run results:
Anchor 0 hypotheses:
-0.0691062 -0.00995211 0.200876
-0.0691109 -0.00993858 0.200882
-0.0691098 -0.00995461 0.200887
-0.0691029 -0.00998294 0.20098
-0.0691155 -0.00994223 0.200907
-0.0691109 -0.00994146 0.200891
-0.0691115 -0.00995253 0.200919
-0.0691146 -0.00994314 0.200901
-0.0691056 -0.00994185 0.200848
-0.0691162 -0.0099313 0.200886
-0.0691029 -0.00997411 0.200945
-0.0691031 -0.00997974 0.200951
Anchor 1 hypotheses:
9999.97 0.340838 -0.85547
9999.97 0.340801 -0.855334
9999.97 0.340814 -0.855169
9999.97 0.340727 -0.855166
9999.97 0.34064 -0.854842
9999.97 0.340732 -0.855145
9999.97 0.340584 -0.85489
9999.97 0.340988 -0.855669
9999.97 0.340906 -0.855541
9999.97 0.340746 -0.855266
9999.97 0.340708 -0.854933
9999.97 0.340858 -0.855323
Anchor 2 hypotheses:
-0.512896 9999.86 0.120668
-0.512774 9999.86 0.120797
-0.512715 9999.86 0.12066
-0.5132 9999.86 0.121011
-0.512709 9999.86 0.120667
-0.512791 9999.86 0.120874
-0.512989 9999.86 0.12099
-0.512929 9999.86 0.120928
-0.512584 9999.86 0.120673
-0.512986 9999.86 0.120801
-0.512803 9999.86 0.121133
-0.512751 9999.86 0.120933
Anchor 3 hypotheses:
10000.4 9998.9 10000.7
10000.4 9998.9 10000.7
10000.4 9998.9 10000.7
10000.4 9998.9 10000.7
10000.4 9998.9 10000.7
10000.4 9998.9 10000.7
10000.4 9998.9 10000.7
10000.4 9998.9 10000.7
10000.4 9998.9 10000.7
10000.4 9998.9 10000.7
10000.4 9998.9 10000.7
10000.4 9998.9 10000.7
Plotting the result with MATLAB (figure not shown here): all the hypotheses of the four anchors converge to the correct positions.
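To put a number on "converged to the correct position", a small check (again my own addition) can compare each anchor's averaged hypothesis against the ground truth the demo itself generated:

// Sketch: distance between the averaged hypotheses and the true anchor positions.
for (int anchor_index = 0; anchor_index < num_anchors; anchor_index++) {
    Eigen::Vector3d mean = Eigen::Vector3d::Zero();
    for (int vertex_index = 0; vertex_index < HYPOTHESIS_ANCHOR_NUM; vertex_index++) {
        auto v = static_cast<const UWBAnchorVertex*>(optimizer.vertex(anchor_index * HYPOTHESIS_ANCHOR_NUM + vertex_index));
        mean += v->estimate();
    }
    mean /= HYPOTHESIS_ANCHOR_NUM;
    cout << "Anchor " << anchor_index << " mean position error: "
         << (mean - actual_anchor_pos[anchor_index]).norm() << endl;
}

Averaging only makes sense when the hypotheses have actually collapsed onto one point, which the straight-line experiment below shows is not guaranteed.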
How the robot trajectory affects the algorithm
vector<Eigen::Vector3d> GetTagPos(int N) {
    vector<Eigen::Vector3d> ret;
    Eigen::Vector3d cur_tag_pos(0, 0, 0);
    Eigen::Vector3d tag_pos_step(STEP_L, STEP_L, STEP_L);
    Eigen::Vector3d tag_pos_step_1(STEP_L, 0, 0);
    Eigen::Vector3d tag_pos_step_2(0, STEP_L, 0);
    Eigen::Vector3d tag_pos_step_3(0, 0, STEP_L);
    for (int i = 0 ; i < N / 3; i++) {
        ret.push_back(cur_tag_pos);
        cur_tag_pos += tag_pos_step_1;
    }
    for (int i = N / 3; i < N * 2 / 3; i++) {
        ret.push_back(cur_tag_pos);
        cur_tag_pos += tag_pos_step_2;
    }
    for (int i = N * 2 / 3; i < N; i++) {
        ret.push_back(cur_tag_pos);
        cur_tag_pos += tag_pos_step_3;
    }
    return ret;
}
Right now the robot walks a straight segment along each of the x, y and z axes in turn. Question: what happens if the robot only walks a single straight line? Do the hypotheses still converge? Let's modify the trajectory:
vector<Eigen::Vector3d> GetTagPos(int N) {
    vector<Eigen::Vector3d> ret;
    Eigen::Vector3d cur_tag_pos(0, 0, 0);
    Eigen::Vector3d tag_pos_step(STEP_L, STEP_L, STEP_L);
    Eigen::Vector3d tag_pos_step_1(STEP_L, 0, 0);
    Eigen::Vector3d tag_pos_step_2(0, STEP_L, 0);
    Eigen::Vector3d tag_pos_step_3(0, 0, STEP_L);
    for (int i = 0 ; i < N; i++) {
        ret.push_back(cur_tag_pos);
        cur_tag_pos += tag_pos_step;
    }
    return ret;
}
Now the robot moves along the segment between anchor 0 and anchor 3. Run results:
Anchor 0:
-0.023611 -0.0201045 -0.0234157
-0.020568 -0.0225429 -0.0240207
-0.022949 -0.0221869 -0.0220391
-0.0229311 -0.0215704 -0.0225475
-0.0221689 -0.022843 -0.0221445
-0.0221116 -0.0230715 -0.0219545
-0.0228027 -0.023407 -0.0208632
-0.0215533 -0.0238267 -0.021658
-0.0219942 -0.0234234 -0.0216693
-0.0236133 -0.0221679 -0.0213049
-0.023557 -0.0200241 -0.0235711
-0.0235677 -0.0227217 -0.0207883
Anchor 1:
9999.93 0.00651112 0.00651112
8787.17 3926.68 -2713.91
3925.86 8787.65 -2713.56
0.583104 9999.93 -0.569974
-1863.13 9548.24 2314.83
-2881.63 8529.73 4351.84
-3333.3 6664.52 6668.72
-2118.99 2737.25 9381.68
2738.61 -2119.78 9381.1
6665.55 -3333.3 6667.69
8529.65 -2881.67 4351.96
9548.63 -1862.44 2313.75
Anchor 2:
9999.93 -0.073328 -0.073328
8787.5 3926.04 -2713.77
3925.69 8787.71 -2713.61
-0.469946 9999.93 0.323545
-1861.69 9549.12 2312.36
-2881.56 8530.02 4351.32
-3333.41 6665.85 6667.34
-2120.39 2739.45 9380.72
2737.69 -2119.37 9381.46
6666.26 -3333.41 6666.92
8529.31 -2881.97 4352.44
9549.09 -1861.74 2312.43
Anchor 3:
10000.3 9999.71 10000.1
10000.5 9999.62 9999.99
9999.69 10000.5 9999.87
9999.96 10000.2 9999.94
9999.67 10000.1 10000.3
9999.75 10000.2 10000.1
9999.7 10000.1 10000.3
9999.75 9999.99 10000.3
10000.1 9999.68 10000.3
10000 9999.79 10000.2
10000 9999.84 10000.2
10000.4 9999.77 9999.92
Plotted with MATLAB (figure not shown here): only the hypotheses of anchor 0 and anchor 3 converge to a single value, while the hypotheses of anchor 1 and anchor 2 do not. My first demo used exactly this trajectory, and at first I thought my program logic was broken; it only worked after I changed the trajectory to several straight lines. I suspect this is related to the circular symmetry, as sketched below.
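A short argument (my own, not from the paper) for why a single straight line cannot pin down anchors that lie off the line: write the trajectory as $P_i = P_0 + t_i\,u$ for a unit direction $u$. For any rotation $R$ about the axis $u$ through $P_0$,

$$ \bigl\lVert P_i - \bigl(P_0 + R\,(b_j - P_0)\bigr) \bigr\rVert = \lVert t_i\,u - R\,(b_j - P_0) \rVert = \lVert t_i\,u - (b_j - P_0) \rVert = \lVert P_i - b_j \rVert \quad \text{for every } i, $$

because the norm is rotation-invariant and $R$ leaves $u$ (hence $t_i\,u$) fixed. Every copy of $b_j$ rotated about the trajectory line therefore explains the range data exactly as well, and the optimizer has no reason to prefer one of them. Anchors 0 and 3 lie on the line itself and are unaffected, while anchors 1 and 2 retain a one-parameter ring of equally good solutions, which matches the spread of their hypotheses in the results above.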