Week 7-4: HMM (2)

Observation likelihood

  • Given multiple HMMs
    • which one is most likely to generate the observation sequence
  • Naive solution
    • try all possible state sequences
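As a concrete illustration of the naive solution, the sketch below enumerates every possible state sequence for a hypothetical two-state HMM; the model parameters and observation symbols are made up for the example:

```python
import itertools

# Toy two-state HMM (illustrative numbers, not from the slides)
states = [0, 1]
pi = [0.6, 0.4]                       # initial state probabilities
A = [[0.7, 0.3], [0.4, 0.6]]          # A[i][j] = P(q_{t+1}=s_j | q_t=s_i)
B = [[0.9, 0.1], [0.2, 0.8]]          # B[j][k] = P(o_t=v_k | q_t=s_j)

def likelihood_naive(obs):
    """P(O | model): sum the joint probability over every state sequence."""
    total = 0.0
    for path in itertools.product(states, repeat=len(obs)):
        p = pi[path[0]] * B[path[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= A[path[t - 1]][path[t]] * B[path[t]][obs[t]]
        total += p
    return total
```

With N states and T observations this enumerates N^T paths, which is why the naive solution is intractable for realistic sequence lengths.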

Forward algorithm

  • compute a forward trellis that compactly encodes all possible state paths

HMM learning

  • supervised
    • training sequences are labeled with states
  • unsupervised
    • only the observation sequences are available; the number of states is known
  • semi-supervised

Supervised HMM training

  • estimate the state transition probabilities using MLE
    $a_{ij} = \frac{\text{Count}(q_t = s_i,\ q_{t+1} = s_j)}{\text{Count}(q_t = s_i)}$
  • estimate the observation probabilities using MLE
    $b_j(k) = \frac{\text{Count}(q_t = s_j,\ o_t = v_k)}{\text{Count}(q_t = s_j)}$
  • use smoothing to avoid zero probabilities for events unseen in training
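The count-based estimates above, with add-one (Laplace) smoothing, can be sketched like this; the state names, symbols, and tiny labeled corpus are invented for illustration:

```python
from collections import Counter

# Labeled training data: each sequence is a list of (state, observation) pairs.
data = [
    [("H", "a"), ("H", "b"), ("C", "b"), ("C", "a")],
    [("C", "a"), ("H", "b"), ("H", "b")],
]

trans, trans_tot = Counter(), Counter()   # Count(q_t=s_i, q_{t+1}=s_j), Count(q_t=s_i)
emit, emit_tot = Counter(), Counter()     # Count(q_t=s_j, o_t=v_k),     Count(q_t=s_j)
for seq in data:
    for (s, o) in seq:
        emit[(s, o)] += 1
        emit_tot[s] += 1
    for (s1, _), (s2, _) in zip(seq, seq[1:]):
        trans[(s1, s2)] += 1
        trans_tot[s1] += 1

states = sorted(emit_tot)
symbols = sorted({o for seq in data for (_, o) in seq})

def a(i, j):
    """Smoothed MLE transition probability a_ij."""
    return (trans[(i, j)] + 1) / (trans_tot[i] + len(states))

def b(j, k):
    """Smoothed MLE observation probability b_j(k)."""
    return (emit[(j, k)] + 1) / (emit_tot[j] + len(symbols))
```

Add-one smoothing adds a pseudo-count to every event, so transitions or emissions never seen in training still receive nonzero probability.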

Unsupervised HMM training

  • Given
    • observation sequences
  • Goal
    • build the HMM
  • Use EM methods
    • forward-backward (Baum-Welch) algorithm, an iterative approximate solution that locally maximizes P(O | μ)

Outline of Baum-Welch

  • Algorithm
    • randomly initialize the parameters of the HMM
    • until the parameters converge, repeat:
      • E step: determine the probability of the various state sequences for generating the observations
      • M step: reestimate the parameters based on these probabilities

The algorithm guarantees that at each iteration the likelihood of the data P(O | μ) does not decrease.
It converges to a local maximum, not necessarily the global one.
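The outline above can be sketched as a minimal, unvectorized Baum-Welch loop for a discrete HMM; the sizes, random initialization, and observation sequence are arbitrary toy choices, not part of the original notes:

```python
import random

N, M = 2, 2    # number of states / observation symbols (toy sizes)

def forward(obs, pi, A, B):
    """Forward trellis: alpha[t][j] = P(o_1..o_t, q_t = s_j)."""
    alpha = [[pi[j] * B[j][obs[0]] for j in range(N)]]
    for t in range(1, len(obs)):
        alpha.append([sum(alpha[-1][i] * A[i][j] for i in range(N)) * B[j][obs[t]]
                      for j in range(N)])
    return alpha

def backward(obs, A, B):
    """Backward trellis: beta[t][i] = P(o_{t+1}..o_T | q_t = s_i)."""
    beta = [[1.0] * N]
    for t in range(len(obs) - 2, -1, -1):
        beta.insert(0, [sum(A[i][j] * B[j][obs[t + 1]] * beta[0][j] for j in range(N))
                        for i in range(N)])
    return beta

def baum_welch(obs, iters=20, seed=0):
    rng = random.Random(seed)
    def rand_row(n):
        r = [rng.random() for _ in range(n)]
        s = sum(r)
        return [x / s for x in r]
    # random initialization of the HMM parameters
    pi, A, B = rand_row(N), [rand_row(N) for _ in range(N)], [rand_row(M) for _ in range(N)]
    T = len(obs)
    for _ in range(iters):
        alpha, beta = forward(obs, pi, A, B), backward(obs, A, B)
        PO = sum(alpha[-1])                               # P(O | current model)
        # E step: posterior state / transition probabilities given O
        gamma = [[alpha[t][i] * beta[t][i] / PO for i in range(N)] for t in range(T)]
        xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] / PO
                for j in range(N)] for i in range(N)] for t in range(T - 1)]
        # M step: re-estimate parameters from expected counts
        pi = gamma[0][:]
        A = [[sum(xi[t][i][j] for t in range(T - 1)) /
              sum(gamma[t][i] for t in range(T - 1)) for j in range(N)]
             for i in range(N)]
        B = [[sum(gamma[t][j] for t in range(T) if obs[t] == k) /
              sum(gamma[t][j] for t in range(T)) for k in range(M)]
             for j in range(N)]
    return pi, A, B
```

Because each M step uses expected counts from the E step, the re-estimated rows of pi, A, and B stay properly normalized throughout.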
