Dynamic Bayesian Networks: Representation, Inference and Learning: Abstract

ABSTRACT

Dynamic Bayesian Networks: 

Representation, Inference and Learning

Copyright 2002

by

Kevin Patrick Murphy

Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
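To make the HMM baseline the abstract starts from concrete, here is a minimal forward-filtering (alpha recursion) sketch for an HMM; the 2-state toy model and all matrices below are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

# Toy 2-state HMM; these numbers are made up for illustration only.
A = np.array([[0.9, 0.1],    # transition matrix P(x_t | x_{t-1})
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],    # observation matrix P(y_t | x_t)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])    # initial state distribution

def forward_filter(obs):
    """Return filtered posteriors P(x_t | y_{1:t}) for each t, in O(T) time."""
    alpha = pi * B[:, obs[0]]        # condition the prior on the first observation
    alpha /= alpha.sum()
    posteriors = [alpha]
    for y in obs[1:]:
        alpha = (A.T @ alpha) * B[:, y]   # predict one step, then weight by likelihood
        alpha /= alpha.sum()              # normalize to avoid numerical underflow
        posteriors.append(alpha)
    return np.array(posteriors)

post = forward_filter([0, 0, 1, 1])
```

Each row of `post` is a distribution over the two hidden states given the observations so far; a DBN replaces the single state variable here with a set of interacting variables per time slice.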

In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
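One of the contributions listed above applies Rao-Blackwellised particle filtering to DBNs. As background for that family of methods, here is a minimal bootstrap particle filter for a 1-D nonlinear model; the transition and observation models, noise levels, and particle count are toy assumptions for illustration, not the thesis's SLAM formulation (which additionally marginalizes part of the state analytically).

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(obs, n_particles=500):
    """Estimate E[x_t | y_{1:t}] via sample-importance-resample steps."""
    particles = rng.normal(0.0, 1.0, n_particles)   # samples from the prior
    means = []
    for y in obs:
        # Propagate through an assumed nonlinear transition: x_t = sin(x_{t-1}) + noise.
        particles = np.sin(particles) + rng.normal(0.0, 0.3, n_particles)
        # Weight by an assumed Gaussian observation model: y_t = x_t + noise.
        w = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)
        w /= w.sum()
        # Resample to concentrate particles in high-probability regions.
        particles = rng.choice(particles, n_particles, p=w)
        means.append(particles.mean())
    return np.array(means)

est = particle_filter([0.8, 0.9, 0.7])
```

Rao-Blackwellisation improves on this plain filter by integrating out the tractable (e.g. linear-Gaussian) part of the state exactly, so particles are only needed for the remaining variables.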

