Code Walkthrough: DP-SLAM (3)
Last time, while reading through CheckScore(), I left one question open:
l_particle[ newSample[sampleNum].parent ].ancestryNode->ID
Literally, this reads as the ID of the ancestry-tree node attached to some particle's parent.
What does it really mean? Digging too deep here would keep me from seeing the overall DP-SLAM framework, so I'll set it aside for now.
To summarize how CheckScore() is used:
1. The laser data for one time step is stored in sense. Pick any particle; CheckScore() evaluates how well that particle fits the current data. The higher the score, the more reliable the particle.
2. This raises a new question: how do we evaluate that "fitness"? That is the job of LowLineTrace().
Besides CheckScore there is also QuickScore, which works much the same way.
Next comes the most important function in low.c: Localize(). It is far too long to digest in one piece, so let's analyze it section by section.
//
// Localize
//
// This is where the bulk of evaluating and resampling the particles takes place.
// Also applies the motion model
//
void Localize(TSense sense)
{
  double ftemp;
  double threshold;               // threshold for discarding particles (in log prob.)
  double total;
  double turn, distance, moveAngle;  // The incremental motion reported by the odometer
  double CCenter, DCenter, TCenter, CCoeff, DCoeff, TCoeff;
  double tempC, tempD;            // Temporary variables for the motion model.
  int i, j, k, p, best;           // Incremental counters.
  int keepers = 0;                // How many particles finish all rounds
  int newchildren[SAMPLE_NUMBER]; // Used for resampling

  // Take the odometry readings from both this time step and the last, in order to figure out
  // the base level of incremental motion. Convert our measurements from meters and degrees
  // into terms of map squares and radians
  distance = sqrt( ((odometry.x - lastX) * (odometry.x - lastX))
                   + ((odometry.y - lastY) * (odometry.y - lastY)) ) * MAP_SCALE;
  turn = (odometry.theta - lastTheta);
  // Keep motion bounded between pi and -pi
  if (turn > M_PI/3)
    turn = turn - 2*M_PI;
  else if (turn < -M_PI/3)
    turn = turn + 2*M_PI;
From this section we can see:
1. The double variable threshold is the cutoff (in log probability) for deciding whether to discard a particle.
2. distance and turn are computed from the odometry readings of this time step and the last; moveAngle will later hold the average direction of motion for the step.
3. There are also several parameters belonging to the robot's motion model.
4. The int array newchildren stores resampling information.
5. The incremental turn is wrapped so that a raw heading crossing the ±180° boundary doesn't register as a jump of nearly 2π. (Note that the code triggers the wrap at ±π/3 even though the comment says ±π.)
Now let's look at a section on the motion model:
  // Our motion model consists of motion along three variables; D is the major axis of motion,
  // which is lateral motion along the robot's average facing angle for this time step, C is the
  // minor axis of lateral motion, which is perpendicular to D, and T is the amount of turn in
  // the robot's facing angle.
  // Since the motion model is probablistic, the *Center terms compute the expected center of the
  // distributions of C D and T. Note that these numbers are each a function of the reported
  // odometric distance and turn which have been observed. The constant meanC_D is the amount of
  // effect that the distance reported from the odometry has on our motion model's expected motion
  // along the minor axis. All of these constants are defined at the top of this file.
  CCenter = distance*meanC_D + turn*meanC_T;
  DCenter = distance*meanD_D + turn*meanD_T;
  TCenter = distance*meanT_D + turn*meanT_T;

  // *Coeff computes the standard deviation for each parameter when generating gaussian noise.
  // These numbers are limited to have at least some minimal level of noise, regardless of the
  // reported motion. This is especially important for dealing with a robot skidding or sliding
  // or just general unexpected motion which may not be reported at all by the odometry (it
  // happens more often than we would like)
  CCoeff = MAX((fabs(distance*varC_D) + fabs(turn*varC_T)), 0.8);
  DCoeff = MAX((fabs(distance*varD_D) + fabs(turn*varD_T)), 0.8);
  TCoeff = MAX((fabs(distance*varT_D) + fabs(turn*varT_T)), 0.10);
From this section:
1. CCenter, DCenter, and TCenter are recomputed whenever the odometry yields new values of distance and turn.
2. The comments explain the meaning of D, C, and T, and of the Center and Coeff terms: each Center/Coeff pair characterizes a Gaussian distribution (its mean and standard deviation).
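One axis of this model can be sketched as a small helper. The parameter names here are hypothetical stand-ins for the meanC_D / varC_D family defined at the top of low.c; the point is the shape of the computation: a mean that is linear in the reported distance and turn, plus a standard deviation with a hard noise floor.

```c
#include <math.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

typedef struct {
    double center;  /* mean of the Gaussian for this axis */
    double coeff;   /* standard deviation for this axis */
} AxisModel;

/* mean_d/mean_t and var_d/var_t play the roles of meanC_D, meanC_T,
   varC_D, varC_T; noise_floor is the 0.8 (or 0.10 for turn) minimum. */
AxisModel axis_model(double distance, double turn,
                     double mean_d, double mean_t,
                     double var_d, double var_t,
                     double noise_floor)
{
    AxisModel m;
    m.center = distance * mean_d + turn * mean_t;
    m.coeff  = MAX(fabs(distance * var_d) + fabs(turn * var_t), noise_floor);
    return m;
}
```

Even with zero reported motion, coeff never drops below the floor; that residual noise is what lets the filter recover from skids and slides the odometry never reported.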
Then, on to the next section:
  // To start this function, we have already determined which particles have been resampled, and
  // how many times. What we still need to do is move them from their parent's position, according
  // to the motion model, so that we have the appropriate scatter.
  i = 0;
  // Iterate through each of the old particles, to see how many times it got resampled.
  for (j = 0; j < PARTICLE_NUMBER; j++) {
    // Now create a new sample for each time this particle got resampled (possibly 0)
    for (k = 0; k < children[j]; k++) {
      // We make a sample entry. The first, most important value is which of the old particles
      // is this new sample's parent. This defines which map is being inherited, which will be
      // used during localization to evaluate the "fitness" of that sample.
      newSample[i].parent = j;
      // Randomly calculate the 'probable' trajectory, based on the movement model. The starting
      // point is of course the position of the parent.
      tempC = CCenter + GAUSSIAN(CCoeff);  // The amount of motion along the minor axis of motion
      tempD = DCenter + GAUSSIAN(DCoeff);  // The amount of motion along the major axis of motion
      // Record this actual motion. If we are using hierarchical SLAM, it will be used to keep track
      // of the "corrected" motion of the robot, to define this step of the path.
      newSample[i].C = tempC;
      newSample[i].D = tempD;
      newSample[i].T = TCenter + GAUSSIAN(TCoeff);
      newSample[i].theta = l_particle[j].theta + newSample[i].T;
      // Assuming that the robot turned continuously throughout the time step, the major direction
      // of movement (D) should be the average of the starting angle and the final angle
      moveAngle = (newSample[i].theta + l_particle[j].theta)/2.0;
      // The first term is to correct for the LRF not being mounted on the pivot point of the robot's turns
      // The second term is to allow for movement along the major axis of movement (D)
      // The last term is movement perpendicular to the major axis (C). We add pi/2 to give a consistent
      // "positive" direction for this term. MeanC significantly shifted from 0 would mean that the robot
      // has a distinct drift to one side.
      newSample[i].x = l_particle[j].x + (TURN_RADIUS * (cos(newSample[i].theta) - cos(l_particle[j].theta))) +
                       (tempD * cos(moveAngle)) + (tempC * cos(moveAngle + M_PI/2));
      newSample[i].y = l_particle[j].y + (TURN_RADIUS * (sin(newSample[i].theta) - sin(l_particle[j].theta))) +
                       (tempD * sin(moveAngle)) + (tempC * sin(moveAngle + M_PI/2));
      newSample[i].probability = 0.0;
      i++;
    }
  }
This section is fairly long, but we can work through it:
1. The two for loops are straightforward: the outer loop iterates over all old particles, and the inner loop over each particle's children (the number of times it was resampled).
2. Note that i starts at 0 and increments once per child across all particles, so newSample ends up densely indexed from 0 up to the total number of new samples.
3. GAUSSIAN(CCoeff) is the sampling step: it draws zero-mean Gaussian noise with the given standard deviation.
4. Through this sampling, each child gets its updated pose (x, y, theta) for this time step.
5. Note that every new sample's probability is initialized to 0; since scores are accumulated as sums of log-probabilities, 0 is the neutral starting value.
With each particle's children updated, on to the next section:
  // Go through these particles in a number of passes, in order to find the best particles. This is
  // where we cull out obviously bad particles, by performing evaluation in a number of distinct
  // steps. At the end of each pass, we identify the probability of the most likely sample. Any sample
  // which is not within the defined threshhold of that probability can be removed, and no longer
  // evaluated, since the probability of that sample ever becoming "good" enough to be resampled is
  // negligable.
  // Note: this first evaluation is based solely off of QuickScore- that is, the evaluation is only
  // performed for a short section of the laser trace, centered on the observed endpoint. This can
  // provide a good, quick heuristic for culling off bad samples, but should not be used for final
  // weights. Something which looks good in this scan can very easily turn out to be low probability
  // when the entire laser trace is considered.
  threshold = WORST_POSSIBLE-1;  // ensures that we accept anything in 1st round
  for (p = 0; p < PASSES; p++) {
    best = 0;
    for (i = 0; i < SAMPLE_NUMBER; i++) {
      if (newSample[i].probability >= threshold) {
        for (k = p; k < SENSE_NUMBER; k += PASSES)
          newSample[i].probability = newSample[i].probability + log(QuickScore(sense, k, i));
        if (newSample[i].probability > newSample[best].probability)
          best = i;
      }
      else
        newSample[i].probability = WORST_POSSIBLE;
    }
    threshold = newSample[best].probability - THRESH;
  }

  keepers = 0;
  for (i = 0; i < SAMPLE_NUMBER; i++) {
    if (newSample[i].probability >= threshold) {
      keepers++;
      // Don't let this heuristic evaluation be included in the final eval.
      newSample[i].probability = 0.0;
    }
    else
      newSample[i].probability = WORST_POSSIBLE;
  }
  // Letting the user know how many samples survived this first cut.
  fprintf(stderr, "Better %d ", keepers);

  threshold = -1;
  // Now reevaluate all of the surviving samples, using the full laser scan to look for possible
  // obstructions, in order to get the most accurate weights. While doing this evaluation, we can
  // still keep our eye out for unlikely samples before we are finished.
  keepers = 0;
  for (p = 0; p < PASSES; p++) {
    best = 0;
    for (i = 0; i < SAMPLE_NUMBER; i++) {
      if (newSample[i].probability >= threshold) {
        if (p == PASSES - 1)
          keepers++;
        for (k = p; k < SENSE_NUMBER; k += PASSES)
          newSample[i].probability = newSample[i].probability + log(CheckScore(sense, k, i));
        if (newSample[i].probability > newSample[best].probability)
          best = i;
      }
      else
        newSample[i].probability = WORST_POSSIBLE;
    }
    threshold = newSample[best].probability - THRESH;
  }
  // Report how many samples survived the second cut. These numbers help the user have confidence that
  // the threshhold values used for culling are reasonable.
  fprintf(stderr, "Best of %d ", keepers);
This section is also long. Key points:
1. As the comments explain, there are good particles and bad particles, and the bad ones get culled; the yardstick is threshold.
2. The evaluation is split into PASSES rounds, each scoring a different interleaved subset of the laser readings (k stepping by PASSES). Culling between rounds avoids spending full evaluation effort on clearly bad samples. Note that PASSES is the number of rounds, not a cap on how many particles survive.
3. In the first round, threshold is set to WORST_POSSIBLE-1, which guarantees every sample gets scored at least once. A sample that falls below the threshold has its probability set to WORST_POSSIBLE and is never evaluated again; after each round, the threshold is updated to the best sample's probability minus THRESH.
4. keepers counts how many samples survive each cut. The first count covers the QuickScore (heuristic) cut, after which the survivors' probabilities are reset to 0 so the heuristic scores don't contaminate the final weights; then comes the second cut, using the full CheckScore evaluation.
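The pass-and-cull scheme boils down to a small rule that can be sketched on its own. The constant values here are hypothetical placeholders (the real ones are defined in the DP-SLAM sources); the helper only applies the culling rule: keep whatever is within THRESH of the best sample, mark the rest dead, and return the tightened threshold for the next pass.

```c
#define WORST_POSSIBLE (-10000.0)   /* hypothetical placeholder value */
#define THRESH 13.0                 /* hypothetical culling margin */

/* One culling step over an array of log-probabilities: samples more
   than THRESH below the current best are marked dead (WORST_POSSIBLE)
   and would be skipped in later scoring passes. Returns the new,
   tighter threshold. */
double cull_pass(double *logprob, int n, double threshold)
{
    int best = 0;
    for (int i = 0; i < n; i++) {
        if (logprob[i] >= threshold) {
            if (logprob[i] > logprob[best])
                best = i;
        } else {
            logprob[i] = WORST_POSSIBLE;
        }
    }
    return logprob[best] - THRESH;
}
```

Starting from WORST_POSSIBLE-1, the first call accepts everything (just as in Localize()); each subsequent call prunes relative to the best score seen so far.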
Next time: what the second round of filtering actually accomplishes!