GMM-HMM Notes

The well-known HMM is a generative model. Under different scenarios and assumptions, there are different methods to infer its parameters.

Case 1: all parameters are known

When we know the prior probability P(y), the transition probability P(y(i) | y(i-1)), and the emission probability P(x|y),

it is easy to compute the joint probability P(x1, x2, ..., xn, y1, y2, ..., yn) as a product of these factors, and the forward/backward algorithm efficiently sums this joint over all hidden state sequences to obtain the observation likelihood P(x1, ..., xn) and the posteriors of the hidden states.
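A minimal NumPy sketch of the forward recursion, assuming a first-order HMM with discrete observations for concreteness; the names pi, A, B and the rescaling trick are illustrative choices, not part of the note above.

```python
import numpy as np

def forward_loglik(pi, A, B, x):
    """Log-likelihood log P(x1..xT), summing the joint over all hidden paths.

    pi : (K,)   initial state probabilities P(y1)
    A  : (K, K) transition probabilities P(y_t | y_{t-1}), rows sum to 1
    B  : (K, V) emission probabilities P(x_t | y_t) over V discrete symbols
    x  : (T,)   observed symbol indices
    """
    alpha = pi * B[:, x[0]]                      # alpha_1(j) = P(x1, y1 = j)
    log_lik = 0.0
    for t in range(1, len(x)):
        c = alpha.sum()                          # rescale to avoid underflow
        log_lik += np.log(c)
        alpha = ((alpha / c) @ A) * B[:, x[t]]   # sum over previous states
    return log_lik + np.log(alpha.sum())
```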

Moreover, for a first-order HMM the most likely hidden state sequence can be found efficiently with the Viterbi algorithm.

The same applies to 2nd-order, 3rd-order, and higher-order HMMs.
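A sketch of Viterbi decoding for the first-order case, under the same discrete-emission assumptions as above; it replaces the forward algorithm's sum with a max and keeps backpointers to recover the best path.

```python
import numpy as np

def viterbi(pi, A, B, x):
    """Most likely hidden state sequence argmax_y P(x, y), computed in log space."""
    K, T = len(pi), len(x)
    delta = np.log(pi) + np.log(B[:, x[0]])   # best log-score of a path ending in each state
    back = np.zeros((T, K), dtype=int)        # backpointers for path recovery
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)   # scores[i, j]: best path ending in i, then i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, x[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```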


Case 2: X is a discrete random variable

When we can only observe x but not the hidden state y, the EM algorithm can be applied.

The parameters to be estimated include p(y), p(y(i) | y(i-1)), and p(x|y).

Once these parameters are estimated, the Viterbi algorithm can be used for decoding, as in Case 1.

Reference: Unsupervised Learning 101: the EM for the HMM, Karl Stratos
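A sketch of one Baum-Welch (EM) iteration for this discrete-emission case, assuming a single training sequence; the E-step uses scaled forward/backward passes, and the M-step re-estimates p(y), p(y(i) | y(i-1)) and p(x|y) from the expected counts. Names such as gamma and xi are the conventional ones, not from the note above.

```python
import numpy as np

def baum_welch_step(pi, A, B, x):
    """One EM iteration; returns re-estimated (pi, A, B)."""
    x = np.asarray(x)
    K, V, T = len(pi), B.shape[1], len(x)

    # E-step: scaled forward and backward passes.
    alpha = np.zeros((T, K)); beta = np.zeros((T, K)); c = np.zeros(T)
    alpha[0] = pi * B[:, x[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, x[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, x[t + 1]] * beta[t + 1])) / c[t + 1]

    gamma = alpha * beta                      # P(y_t = i | x); each row sums to 1
    xi = np.zeros((K, K))                     # expected transition counts
    for t in range(T - 1):
        xi += np.outer(alpha[t], B[:, x[t + 1]] * beta[t + 1]) * A / c[t + 1]

    # M-step: normalise the expected counts.
    pi_new = gamma[0]
    A_new = xi / xi.sum(axis=1, keepdims=True)
    B_new = np.stack([gamma[x == v].sum(axis=0) for v in range(V)], axis=1)
    B_new /= B_new.sum(axis=1, keepdims=True)
    return pi_new, A_new, B_new
```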


Case 3: X is a continuous random variable

When we can only observe x but not the hidden state y, and we assume that the emission density p(x|y) is a Gaussian mixture distribution. (Because x is continuous, a Gaussian mixture is used to model p(x|y).)

EM can also be used to estimate the parameters in this case.

This kind of model is called a GMM-HMM.

Assumption: p(x|y) is a Gaussian distribution (a single Gaussian per state in the simplest case, a Gaussian mixture in general).

Reference: Hidden Markov Models and Gaussian Mixture Models, Steve Renals and Peter Bell

Why use a GMM to estimate p(x|y)? A single Gaussian is often too restrictive: continuous features (e.g. acoustic feature vectors) typically have multimodal, skewed distributions, while a mixture of Gaussians with enough components can approximate an arbitrary continuous density.
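A sketch of how a GMM emission density replaces the discrete emission table in the sketches above, assuming diagonal covariances; the per-state weights, means and variances (w, mu, var) are illustrative parameters.

```python
import numpy as np

def gmm_log_emission(x_t, w, mu, var):
    """log p(x_t | y = j) for every state j under a diagonal-covariance GMM.

    x_t : (D,)       one continuous observation (e.g. a feature frame)
    w   : (K, M)     mixture weights per state, each row sums to 1
    mu  : (K, M, D)  component means
    var : (K, M, D)  component variances (diagonal covariances)
    """
    diff = x_t - mu                                           # (K, M, D)
    log_comp = -0.5 * (np.log(2 * np.pi * var) + diff ** 2 / var).sum(axis=-1)
    log_w = np.log(w) + log_comp                              # (K, M)
    # log-sum-exp over the M mixture components of each state
    m = log_w.max(axis=-1, keepdims=True)
    return (m + np.log(np.exp(log_w - m).sum(axis=-1, keepdims=True))).squeeze(-1)
```

In the forward, Viterbi and Baum-Welch sketches above, the exponential of this vector would stand in for the column B[:, x[t]].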




Some notes:

1. Differences between the Baum-Welch algorithm and the Viterbi algorithm.

Baum-Welch (via the forward-backward recursions) sums over all hidden state sequences to obtain expected counts for EM, e.g. alpha_t(j) = sum_i alpha_{t-1}(i) * a_ij * b_j(x_t). Viterbi replaces the sum with a max, delta_t(j) = max_i delta_{t-1}(i) * a_ij * b_j(x_t), so it tracks only the single best path. Both use dynamic programming to avoid repeated summation over paths.




2. 


