What is the difference between a particle filter (sequential Monte Carlo) and a Kalman filter?


From Dan Simon's "Optimal State Estimation":

"In a linear system with Gaussian noise, the Kalman filter is optimal. In a system that is nonlinear, the Kalman filter can be used for state estimation, but the particle filter may give better results at the price of additional computational effort. In a system that has non-Gaussian noise, the Kalman filter is the optimal linear filter, but again the particle filter may perform better. The unscented Kalman filter (UKF) provides a balance between the low computational effort of the Kalman filter and the high performance of the particle filter."

"The particle filter has some similarities with the UKF in that it transforms a set of points via known nonlinear equations and combines the results to estimate the mean and covariance of the state. However, in the particle filter the points are chosen randomly, whereas in the UKF the points are chosen on the basis of a specific algorithm *****. Because of this, the number of points used in a particle filter generally needs to be much greater than the number of points in a UKF. Another difference between the two filters is that the estimation error in a UKF does not converge to zero in any sense, but the estimation error in a particle filter does converge to zero as the number of particles (and hence the computational effort) approaches infinity.

***** The unscented transformation is a method for calculating the statistics of a random variable which undergoes a nonlinear transformation and uses the intuition (which also applies to the particle filter) that it is easier to approximate a probability distribution than it is to approximate an arbitrary nonlinear function or transformation. See also this as an example of how the points are chosen in UKF."


Dan Simon's "Optimal State Estimation", section 15.4 (page 480 in my 2006 edition).
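The footnote's description of the unscented transformation can be made concrete with a small sketch. This is my own illustration, not from the book: a minimal scalar (1-D) version of the transform using the common symmetric sigma points, with the scaling parameter lambda simply fixed at 2 here. In contrast to a particle filter's random samples, the points are chosen deterministically:

```python
import math

# Minimal 1-D unscented transform (illustrative sketch, not from the book).
# A Gaussian x ~ N(mean, var) is pushed through a nonlinear function g by
# evaluating g at deterministically chosen sigma points, in contrast to the
# random samples of a particle filter.
def unscented_transform_1d(mean, var, g, lam=2.0):
    n = 1  # state dimension
    spread = math.sqrt((n + lam) * var)
    sigma_pts = [mean, mean + spread, mean - spread]
    weights = [lam / (n + lam), 1.0 / (2 * (n + lam)), 1.0 / (2 * (n + lam))]
    ys = [g(x) for x in sigma_pts]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# Sanity check on a linear map, where the transform is exact:
# if y = 3x + 1 and x ~ N(1, 4), then y ~ N(4, 36).
m, v = unscented_transform_1d(1.0, 4.0, lambda x: 3.0 * x + 1.0)
print(m, v)
```

Note that only three sigma points are needed in one dimension, which is the point of the quoted comparison: a particle filter would need far more random samples to represent the same distribution.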

Source: http://stats.stackexchange.com/questions/2149/what-is-the-difference-between-a-particle-filter-sequential-monte-carlo-and-a
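Simon's comparison can also be seen numerically on a toy model. The sketch below is my own illustration (not from the book or the linked answer): it runs a bootstrap particle filter and a Kalman filter on the same 1-D linear-Gaussian system. Since the Kalman filter is exactly optimal in this setting, the particle estimate should approach the Kalman estimate as the number of particles grows:

```python
import math
import random

random.seed(0)
a, q, r = 0.9, 0.5, 1.0  # dynamics coefficient, process var, measurement var

def simulate(steps):
    """Simulate x_k = a*x_{k-1} + w_k,  y_k = x_k + v_k."""
    x, xs, ys = 0.0, [], []
    for _ in range(steps):
        x = a * x + random.gauss(0.0, math.sqrt(q))
        xs.append(x)
        ys.append(x + random.gauss(0.0, math.sqrt(r)))
    return xs, ys

def kalman(ys):
    m, p, est = 0.0, 1.0, []
    for y in ys:
        m, p = a * m, a * a * p + q            # predict
        k = p / (p + r)                        # Kalman gain
        m, p = m + k * (y - m), (1.0 - k) * p  # measurement update
        est.append(m)
    return est

def particle_filter(ys, n=2000):
    particles = [0.0] * n
    est = []
    for y in ys:
        # propagate each particle through the dynamics (with process noise)
        particles = [a * x + random.gauss(0.0, math.sqrt(q)) for x in particles]
        # weight by the Gaussian measurement likelihood
        w = [math.exp(-((y - x) ** 2) / (2.0 * r)) for x in particles]
        total = sum(w)
        w = [wi / total for wi in w]
        est.append(sum(wi * xi for wi, xi in zip(w, particles)))
        # multinomial resampling to combat weight degeneracy
        particles = random.choices(particles, weights=w, k=n)
    return est

def rmse(est, truth):
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth))

truth, ys = simulate(50)
kf_err = rmse(kalman(ys), truth)
pf_err = rmse(particle_filter(ys), truth)
print(kf_err, pf_err)  # the two errors should be close on this linear model
```

On a nonlinear or non-Gaussian variant of `simulate`, the Kalman recursion above would no longer be optimal, while the particle filter needs no structural change, which is the trade-off the quoted passage describes.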



Bayesian Tracking and Reasoning over Time


Background

The project aims to provide new advances in computational methods for reasoning about many objects that evolve in a scene over time. Information about such objects arrives, typically in a real-time data feed, from sensors such as radar, sonar, LIDAR and video. The new and exciting part of this project is the automated understanding of the 'social interactions' that underlie a multi-object scene. If successful, this ambitious project could cause a paradigm shift in tracking methodology, moving away from the traditional viewpoint of a scene in which objects move independently of one another, towards an integrated viewpoint where object interactions are automatically learned and used in improved decision-making processes. Applications include vehicle tracking, mapping, animal behaviour modelling, economic models and social network modelling.

These sophisticated and difficult problems can all be posed very elegantly using probability theory, and in particular using Bayesian theory. Although the Bayesian formulation is generic and straightforward to pose, substantial challenges remain in our problem area: how should the underlying prior models be posed (what is a good way to model the random behaviour of networked objects in a scene?), and how do we carry out the very demanding computations required for many-object scenes? These modelling and computational challenges form a major part of the project, and will require substantial new theoretical and applied algorithm development over its course. We will develop novel computational methods based principally around Monte Carlo computing, in which very carefully designed randomised data are used to approximate very accurately the integrations and optimisations required in the Bayesian approach.
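As a deliberately tiny illustration of the Monte Carlo idea in the last paragraph (my own sketch, not one of the project's actual algorithms): importance sampling approximates a Bayesian posterior expectation by weighting random draws from the prior with the likelihood. On a conjugate Gaussian model the exact answer is known in closed form, so the approximation can be checked:

```python
import math
import random

random.seed(1)

# Toy conjugate model: prior x ~ N(0, 1), likelihood y | x ~ N(x, 1).
# After observing y = 2.0 the posterior is N(1.0, 0.5) in closed form,
# so the exact posterior mean is 1.0.
y = 2.0
n = 100_000

# Importance sampling: draw from the prior, weight by the likelihood.
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
w = [math.exp(-((y - x) ** 2) / 2.0) for x in xs]
post_mean_mc = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
print(post_mean_mc)  # converges to 1.0 as n grows
```

The particle filters discussed above apply this same weighting-and-averaging idea sequentially, one measurement at a time.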



Publications

  • F. Lindsten, M. I. Jordan and T. B. Schön "Particle Gibbs with Ancestor Sampling". Journal of Machine Learning Research (accepted for publication), 2014. (preprint available at [arXiv])
