
A particle filter (sequential Monte Carlo) and a Kalman filter


What is the difference between a particle filter (sequential Monte Carlo) and a Kalman filter?


From Dan Simon's "Optimal State Estimation":

"In a linear system with Gaussian noise, the Kalman filter is optimal. In a system that is nonlinear, the Kalman filter can be used for state estimation, but the particle filter may give better results at the price of additional computational effort. In a system that has non-Gaussian noise, the Kalman filter is the optimal linear filter, but again the particle filter may perform better. The unscented Kalman filter (UKF) provides a balance between the low computational effort of the Kalman filter and the high performance of the particle filter. "
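To make the quoted comparison concrete, here is a minimal sketch of the Kalman filter for a scalar linear-Gaussian model. The model and its parameter values (`a`, `q`, `r`) are illustrative assumptions, not taken from Simon's book; in this linear-Gaussian setting the filter below is exactly the optimal estimator the quote describes.

```python
import random

# Scalar linear-Gaussian model (illustrative values):
#   x_k = a * x_{k-1} + process noise,   y_k = x_k + measurement noise
a, q, r = 1.0, 0.01, 0.25   # state transition, process var, measurement var

def kalman_step(x_est, p_est, y):
    """One predict/update cycle of the scalar Kalman filter."""
    # Predict: propagate the estimate and its variance through the dynamics
    x_pred = a * x_est
    p_pred = a * a * p_est + q
    # Update: blend prediction and measurement y via the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (y - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

random.seed(0)
x_true, x_est, p_est = 0.0, 0.0, 1.0
for _ in range(50):
    x_true = a * x_true + random.gauss(0.0, q ** 0.5)
    y = x_true + random.gauss(0.0, r ** 0.5)
    x_est, p_est = kalman_step(x_est, p_est, y)

print(round(p_est, 4))  # posterior variance settles well below the measurement variance
```

Note that the variance recursion does not depend on the data, so `p_est` converges to a fixed steady-state value regardless of the measurements.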

"The particle filter has some similarities with the UKF in that it transforms a set of points via known nonlinear equations and combines the results to estimate the mean and covariance of the state. However, in the particle filter the points are chosen randomly, whereas in the UKF the points are chosen on the basis of a specific algorithm *****. Because of this, the number of points used in a particle filter generally needs to be much greater than the number of points in a UKF. Another difference between the two filters is that the estimation error in a UKF does not converge to zero in any sense, but the estimation error in a particle filter does converge to zero as the number of particles (and hence the computational effort) approaches infinity.

***** The unscented transformation is a method for calculating the statistics of a random variable which undergoes a nonlinear transformation and uses the intuition (which also applies to the particle filter) that it is easier to approximate a probability distribution than it is to approximate an arbitrary nonlinear function or transformation. See also this as an example of how the points are chosen in UKF."
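The randomly chosen points that the quote contrasts with the UKF's sigma points are the "particles" of a bootstrap particle filter. The sketch below shows one propagate/weight/resample step for a scalar model; the transition function, noise variances, and measurement sequence are illustrative assumptions, not from the text.

```python
import math
import random

def particle_filter_step(particles, y, q=0.01, r=0.25, f=lambda x: x):
    """One bootstrap-particle-filter step for a scalar state.

    f is the (possibly nonlinear) state transition, y the new measurement;
    q and r are the process and measurement noise variances."""
    # 1. Propagate each particle through the dynamics plus process noise
    particles = [f(x) + random.gauss(0.0, q ** 0.5) for x in particles]
    # 2. Weight each particle by the Gaussian measurement likelihood p(y | x)
    weights = [math.exp(-(y - x) ** 2 / (2.0 * r)) for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample (multinomial) so high-weight particles survive
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(1)
pts = [random.gauss(0.0, 1.0) for _ in range(500)]   # prior: N(0, 1)
for y in [0.2, 0.3, 0.25, 0.28]:                      # toy measurements
    pts = particle_filter_step(pts, y)
mean = sum(pts) / len(pts)
print(round(mean, 2))  # posterior mean pulled toward the measurements
```

Because the particles are drawn randomly, the estimate is itself random; as the quote notes, its error shrinks toward zero only as the number of particles grows, which is why particle filters typically use far more points than the handful of deterministically placed sigma points in a UKF.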


Dan Simon's "Optimal State Estimation", section 15.4 (page 480 in my 2006 edition).

Source: http://stats.stackexchange.com/questions/2149/what-is-the-difference-between-a-particle-filter-sequential-monte-carlo-and-a



Bayesian Tracking and Reasoning over Time


Background

The project aims to provide new advances in computational methods for reasoning about many objects that evolve in a scene over time. Information about such objects typically arrives in a real-time data feed from sensors such as radar, sonar, LIDAR and video. The new and exciting part of this project is the automated understanding of the 'social interactions' that underlie a multi-object scene. If successful, this ambitious project could cause a paradigm shift in tracking methodology, moving away from the traditional viewpoint of a scene in which objects move independently of one another, towards an integrated viewpoint where object interactions are automatically learned and used in improved decision-making processes. Applications include vehicle tracking, mapping, animal behaviour modelling, economic models and social network modelling.

These sophisticated and difficult problems can all be posed very elegantly using probability theory, and in particular using Bayesian theory. While the problems are generic and straightforward to pose, our problem area faces substantial challenges: how should we pose the underlying prior models (what is a good way to model the random behaviour of networked objects in a scene?), and how do we carry out the very demanding computational calculations required for many-object scenes? These modelling and computational challenges form a major part of the project, and will require substantial new theoretical and applied algorithm development over its course. We will develop novel computational methods based principally around Monte Carlo computing, in which very carefully designed randomised data are used to approximate very accurately the integrations and optimisations required in the Bayesian approach.
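The core idea behind the Monte Carlo computing mentioned above is that an integral (a Bayesian expectation) can be approximated by averaging over random samples. A toy sketch, using a standard Gaussian in place of the posterior distributions the project would actually target:

```python
import random

# Plain Monte Carlo estimate of E[g(X)] for X ~ N(0, 1), with g(x) = x^2.
# The true value is Var(X) = 1; the same sample-average idea approximates
# the Bayesian integrals discussed in the text, just over posteriors.
random.seed(2)
n = 100_000
samples = [random.gauss(0.0, 1.0) for _ in range(n)]
estimate = sum(x * x for x in samples) / n

print(round(estimate, 2))  # close to the exact answer, 1.0
```

The approximation error shrinks at rate O(1/sqrt(n)) regardless of the dimension of the integral, which is what makes Monte Carlo methods attractive for the high-dimensional many-object scenes described here.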



Publications

  • F. Lindsten, M. I. Jordan and T. B. Schön "Particle Gibbs with Ancestor Sampling". Journal of Machine Learning Research (accepted for publication), 2014. (preprint available at [arXiv])

Background Material

