Code Walkthrough: DP-SLAM (5)
Last time we looked at part of the resampling code. It seemed verbose and long-winded at first, but on careful analysis it is not actually that complicated.
Let's continue with the code, starting with the normalization of the resampled probabilities:
// Renormalize to ensure that the total probability is now equal to 1.
for (i = 0; i < SAMPLE_NUMBER; i++)
    newSample[i].probability = newSample[i].probability/total;
There is not much to say about that, so let's keep reading:
total = 0.0;
// Count how many children each particle will get in next generation
// This is done through random resampling.
for (i = 0; i < SAMPLE_NUMBER; i++) {
    newchildren[i] = 0;
    total = total + newSample[i].probability;
}

i = j = 0;  // i = no. of survivors, j = no. of new samples
while ((j < SAMPLE_NUMBER) && (i < PARTICLE_NUMBER)) {
    k = 0;
    ftemp = MTrandDec()*total;
    while (ftemp > (newSample[k].probability)) {
        ftemp = ftemp - newSample[k].probability;
        k++;
    }
    if (newchildren[k] == 0)
        i++;
    newchildren[k]++;
    j++;
}

// Report exactly how many samples are kept as particles, since they were actually
// resampled.
fprintf(stderr, "(%d kept) ", i);
A few things to note in this block:
1. At first glance the inner while loop and its k++ may be puzzling. On closer inspection it is a selection loop: ftemp is a random number drawn uniformly from [0, total), and the loop subtracts one weight after another until ftemp falls inside sample k's share, so this is a random (roulette-wheel) draw over the weighted samples.
2. The inner loop stops at the first k where the remaining ftemp is no larger than newSample[k].probability, and it is exactly that sample which receives a child via newchildren[k]++. Note also that i is incremented only the first time a sample gains a child, so it counts distinct parents; that is why the outer loop is also capped at PARTICLE_NUMBER.
// Do some cleaning up
// Is this even necessary?
for (i = 0; i < PARTICLE_NUMBER; i++) {
    children[i] = 0;
    savedParticle[i].probability = 0.0;
}
// Now copy over new particles to savedParticles
best = 0;
k = 0; // pointer into saved particles
for (i = 0; i < SAMPLE_NUMBER; i++)
    if (newchildren[i] > 0) {
        savedParticle[k].probability = newSample[i].probability;
        savedParticle[k].x = newSample[i].x;
        savedParticle[k].y = newSample[i].y;
        savedParticle[k].theta = newSample[i].theta;
        savedParticle[k].C = newSample[i].C;
        savedParticle[k].D = newSample[i].D;
        savedParticle[k].T = newSample[i].T;
        // For savedParticle, the ancestryNode field actually points to the parent of this saved particle
        savedParticle[k].ancestryNode = l_particle[ newSample[i].parent ].ancestryNode;
        savedParticle[k].ancestryNode->numChildren++;
        children[k] = newchildren[i];
        if (savedParticle[k].probability > savedParticle[best].probability)
            best = k;
        k++;
    }
// This number records how many saved particles we are currently using, so that we can ignore anything beyond this
// in later computations.
cur_saved_particles_used = k;
A few things to note in this block:
1. This is a compaction/copy step: every newSample[i] that received at least one child is copied into savedParticle[k], with k advancing only over the survivors.
2. Since we have not yet looked at the particle data structure, the exact role of ancestryNode is still unclear; for now, just note (per the comment) that for a saved particle it points to the ancestry node of that particle's parent, whose child count is incremented.
3. The meaning of cur_saved_particles_used is easy to see: it records how many saved particles are actually in use, so later computations can ignore everything beyond that index.
Continuing on, we have:
// We might need to continue generating children for particles, if we reach PARTICLE_NUMBER worth of distinct parents early
// We renormalize over the chosen particles, and continue to sample from there.
if (j < SAMPLE_NUMBER) {
    total = 0.0;
    // Normalize particle probabilities. Note that they have already been exponentiated
    for (i = 0; i < cur_saved_particles_used; i++)
        total = total + savedParticle[i].probability;
    for (i = 0; i < cur_saved_particles_used; i++)
        savedParticle[i].probability = savedParticle[i].probability/total;

    total = 0.0;
    for (i = 0; i < cur_saved_particles_used; i++)
        total = total + savedParticle[i].probability;

    while (j < SAMPLE_NUMBER) {
        k = 0;
        ftemp = MTrandDec()*total;
        while (ftemp > (savedParticle[k].probability)) {
            ftemp = ftemp - savedParticle[k].probability;
            k++;
        }
        children[k]++;
        j++;
    }
}
// Some useful information concerning the current generation of particles, and the parameters for the best one.
fprintf(stderr, "-- %.3d (%.4f, %.4f, %.4f) : %.4f\n", curGeneration, savedParticle[best].x, savedParticle[best].y,
savedParticle[best].theta, savedParticle[best].probability);
}
From this block we can see:
1. The savedParticle probabilities are renormalized, this time over the surviving particles only.
2. A random ftemp is drawn again and the remaining children are handed out via children[k]++, using the same roulette-wheel scheme as before, but now restricted to the chosen survivors.
3. Finally, the highest-probability particle of this generation (best) is printed, along with its pose.
That wraps up my analysis of the localize() function. A few questions remain, such as the precise meaning of ftemp in the sampling loops.
Next time, let's dig into the basic structure of particle!