a particle filter (sequential Monte Carlo) and a Kalman filter


What is the difference between a particle filter (sequential Monte Carlo) and a Kalman filter?

From Dan Simon's "Optimal State Estimation":

"In a linear system with Gaussian noise, the Kalman filter is optimal. In a system that is nonlinear, the Kalman filter can be used for state estimation, but the particle filter may give better results at the price of additional computational effort. In a system that has non-Gaussian noise, the Kalman filter is the optimal linear filter, but again the particle filter may perform better. The unscented Kalman filter (UKF) provides a balance between the low computational effort of the Kalman filter and the high performance of the particle filter. "

"The particle filter has some similarities with the UKF in that it transforms a set of points via known nonlinear equations and combines the results to estimate the mean and covariance of the state. However, in the particle filter the points are chosen randomly, whereas in the UKF the points are chosen on the basis of a specific algorithm *****. Because of this, the number of points used in a particle filter generally needs to be much greater than the number of points in a UKF. Another difference between the two filters is that the estimation error in a UKF does not converge to zero in any sense, but the estimation error in a particle filter does converge to zero as the number of particles (and hence the computational effort) approaches infinity.

***** The unscented transformation is a method for calculating the statistics of a random variable which undergoes a nonlinear transformation and uses the intuition (which also applies to the particle filter) that it is easier to approximate a probability distribution than it is to approximate an arbitrary nonlinear function or transformation. See also this as an example of how the points are chosen in UKF."

 Dan Simon, "Optimal State Estimation", section 15.4 (page 480 in my 2006 edition).
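As a minimal, hedged illustration of the trade-off Simon describes, the sketch below runs a Kalman filter and a bootstrap (SIR) particle filter side by side on the same scalar linear-Gaussian random-walk model. On this model the Kalman filter is exactly optimal, and the particle estimate should approach it as the number of particles grows. The model and all parameter values here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar random-walk model (illustrative values):
#   x_t = x_{t-1} + w,  w ~ N(0, Q);   y_t = x_t + v,  v ~ N(0, R)
Q, R = 0.1, 0.5

def kalman_step(m, P, y):
    """One predict + update step of the Kalman filter (optimal for this model)."""
    m_pred, P_pred = m, P + Q
    K = P_pred / (P_pred + R)                      # Kalman gain
    return m_pred + K * (y - m_pred), (1 - K) * P_pred

def particle_step(particles, y):
    """One bootstrap (SIR) step: propagate, weight by likelihood, resample."""
    particles = particles + rng.normal(0.0, np.sqrt(Q), particles.size)
    logw = -0.5 * (y - particles) ** 2 / R         # Gaussian log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(particles.size, particles.size, p=w)
    return particles[idx]

# Simulate the system and run both filters on the same measurements.
x, m, P = 0.0, 0.0, 1.0
particles = rng.normal(0.0, 1.0, 5000)
for _ in range(50):
    x += rng.normal(0.0, np.sqrt(Q))
    y = x + rng.normal(0.0, np.sqrt(R))
    m, P = kalman_step(m, P, y)
    particles = particle_step(particles, y)

print(m, particles.mean())   # the two state estimates should nearly agree
```

On a nonlinear or non-Gaussian model the Kalman update above would no longer be optimal, while the particle step carries over unchanged (only the likelihood and propagation change), which is the point of the quoted passage.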


Bayesian Tracking and Reasoning over Time


The project aims to provide new advances in computational methods for reasoning about many objects that evolve in a scene over time. Information about such objects arrives, typically in a real-time data feed, from sensors such as radar, sonar, LIDAR and video. The new and exciting part of this project is in automated understanding of the `social interactions' that underlie a multi-object scene. The outcomes from this ambitious project could cause a paradigm shift in tracking methodology if successful, moving away from the traditional viewpoint of a scene in which objects move independently of one another, towards an integrated viewpoint where object interactions are automatically learned and used in improved decision-making processes. Applications include vehicle tracking, mapping, animal behaviour modelling, economic models and social network modelling.

These sophisticated and difficult problems can all be posed very elegantly using probability theory, and in particular Bayesian theory. While the Bayesian formulation is generic and straightforward to pose, there are substantial challenges in our problem area: how do we pose the underlying prior models (what is a good way to model the random behaviour of networked objects in a scene?), and how do we carry out the very demanding computations required for many-object scenes? These modelling and computational challenges form a major part of the project and will require substantial new theoretical and applied algorithm development over its course. We will develop novel computational methods based principally on Monte Carlo computing, in which very carefully designed randomised data are used to approximate, very accurately, the integrations and optimisations required in the Bayesian approach.
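As a toy instance of the Monte Carlo computing described above (not the project's actual algorithms), the following self-normalised importance-sampling sketch approximates a Bayesian posterior mean for a conjugate Gaussian model, chosen so the exact answer is known in closed form; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Bayesian model: prior x ~ N(0, 1), likelihood y | x ~ N(x, 0.5).
# The conjugate posterior mean is y * (1/0.5) / (1/1 + 1/0.5), i.e. 0.8 for
# y = 1.2 -- a closed form we can check the Monte Carlo answer against.
y, lik_var = 1.2, 0.5
xs = rng.normal(0.0, 1.0, 200_000)               # draw from the prior (proposal)
w = np.exp(-0.5 * (y - xs) ** 2 / lik_var)       # unnormalised likelihood weights
post_mean = np.sum(w * xs) / np.sum(w)           # self-normalised IS estimate
print(post_mean)                                 # close to the exact value 0.8
```

The randomised samples, reweighted by the likelihood, approximate the posterior integral; the error shrinks like 1/sqrt(N) in the number of samples.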


  • F. Lindsten, M. I. Jordan and T. B. Schön "Particle Gibbs with Ancestor Sampling". Journal of Machine Learning Research (accepted for publication), 2014. (preprint available at [arXiv])

Background Material


Sequential Monte Carlo Methods

Source: http://www.stats.ox.ac.uk/~doucet/samsi_course.html * Check the more recent SMC & Particle...

A plain-language understanding of Kalman and particle filters

1. Particle filter: Sampling Importance Resampling (SIR), resampling according to importance weights. Below is my rough understanding of how a particle filter implements object tracking: 1) Initialisation stage -...

Notes on HMMs, Part 3: Kalman/Particle Filtering

Notes on HMMs: Kalman/Particle Filtering. Last time we covered the Forward-Backward algorithm for HMMs and obtained the recurrence formulas for α and β. However, because the intermediate...

How a Kalman filter works, in pictures | Bzarg


Monte Carlo Methods

This article mainly introduces Monte Carlo methods, some simple applications, and some of their difficulties. Article URL: http://blog.csdn.net/shanglianlm/article/details/4708...

SLAM Notes 4: Extended Kalman Filter

This is the most traditional foundation of SLAM and its earliest method; although rarely used now, it is still worth understanding. What's a Kalman Filter? It is a Bayesian filter that estimates a linear-Gaussian model, an optimal method for linear models with Gaussian distributions...

Introduction to Monte Carlo Tree Search

https://jeffbradberry.com/posts/2015/09/intro-to-monte-carlo-tree-search/ Introduction to Mon...

[MATLAB] Extended Kalman Filter

Formulas from: Kalman filter (Wikipedia). I downloaded many programs, but their f was always fixed; yesterday a senior labmate gave me a paper to follow for its transition matrix, so I computed the Jacobian matrices by hand and wrote the program below. cl...

SLAM Notes 6: Unscented Kalman Filter

Kalman filters all require a linear model; the EKF linearises locally via a Taylor expansion, while the UKF provides a different linearisation. Unscented Transform steps: first choose a set of points, called sigma points, then pass them through...

Kalman Filter Study (include Monte Carlo Position)

Kalman Filter. Preview: Monte Carlo Position. Positioning: iteration of Measurement + Motion (Predictio...