HMM Examples

# The probability computation problem

$P(O \mid \lambda) = \sum_{S \in \mathcal{S}^T} P(O, S \mid \lambda)$

where the sum runs over all $N^T$ possible state sequences $S$ of length $T$.
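A direct way to see what this sum means is to enumerate every state sequence and add up the joint probabilities. The sketch below does exactly that in plain NumPy, using the $\pi$, $A$, $B$ matrices from the hmmlearn example at the end of this article; it is only feasible because the model is tiny, since the cost grows as $O(T \cdot N^T)$.

```python
import itertools
import numpy as np

# pi, A, B taken from the hmmlearn example at the end of this article.
pi = np.array([0.3, 0.3, 0.4])
A = np.array([[0.1, 0.5, 0.4], [0.4, 0.2, 0.4], [0.5, 0.3, 0.2]])
B = np.array([[0.4, 0.2, 0.2, 0.2],
              [0.25, 0.25, 0.25, 0.25],
              [0.33, 0.33, 0.33, 0.0]])

O = [0, 1, 2]          # observations: Ruby, Pearl, Coral
N, T = len(pi), len(O)

# P(O|lambda) = sum over all N**T state sequences S of P(O, S|lambda)
p = 0.0
for S in itertools.product(range(N), repeat=T):
    joint = pi[S[0]] * B[S[0], O[0]]
    for t in range(1, T):
        joint *= A[S[t - 1], S[t]] * B[S[t], O[t]]
    p += joint

print(p)  # ~0.02179, matching the likelyhood printed by the hmmlearn example
```

Enumerating all $N^T$ sequences is what the forward algorithm below avoids, reducing the cost to $O(T N^2)$.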

# The forward algorithm

Here $S_i$ denotes one concrete state value; the state space of the HMM contains $N$ state values in total: $(S_1, S_2, \dots, S_N)$.

(1) Initialization: $\alpha_1(i) = \pi_i\, b_i(o_1),\quad i = 1, 2, \dots, N$;

(2) Recursion: for $t = 1, 2, \dots, T-1$, $\alpha_{t+1}(i) = \left[\sum_{j=1}^{N} \alpha_t(j)\, a_{ji}\right] b_i(o_{t+1}),\quad i = 1, 2, \dots, N$;

(3) Termination: $P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)$.
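Steps (1) through (3) translate almost line for line into NumPy. The sketch below is vectorized over states and reuses the toy matrices from the hmmlearn example at the end of this article; it is an illustration, not production code (a real implementation would work in log space to avoid underflow on long sequences).

```python
import numpy as np

def forward(pi, A, B, O):
    """Forward algorithm: returns alpha as a (T x N) matrix."""
    T, N = len(O), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]                     # step (1): initialization
    for t in range(T - 1):                         # step (2): recursion
        alpha[t + 1] = (alpha[t] @ A) * B[:, O[t + 1]]
    return alpha

pi = np.array([0.3, 0.3, 0.4])
A = np.array([[0.1, 0.5, 0.4], [0.4, 0.2, 0.4], [0.5, 0.3, 0.2]])
B = np.array([[0.4, 0.2, 0.2, 0.2],
              [0.25, 0.25, 0.25, 0.25],
              [0.33, 0.33, 0.33, 0.0]])

alpha = forward(pi, A, B, [0, 1, 2])               # Ruby, Pearl, Coral
p = alpha[-1].sum()                                # step (3): P(O|lambda)
print(p)  # ~0.02179
```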

# The backward algorithm

(1) $\beta_T(i) = 1,\quad i = 1, 2, \dots, N$: at the final time step, the backward probability is defined to be 1 regardless of the state;

(2) For $t = T-1, T-2, \dots, 1$: $\beta_t(i) = \sum_{j=1}^{N} a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j),\quad i = 1, 2, \dots, N$;

(3) $P(O \mid \lambda) = \sum_{i=1}^{N} \pi_i\, b_i(o_1)\, \beta_1(i)$.
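The backward recursion can be sketched the same way; as a sanity check, step (3) must reproduce the same $P(O \mid \lambda)$ that the forward algorithm gives. The matrices are again the toy ones from the example at the end of this article.

```python
import numpy as np

def backward(pi, A, B, O):
    """Backward algorithm: returns beta as a (T x N) matrix."""
    T, N = len(O), len(pi)
    beta = np.ones((T, N))                         # step (1): beta_T(i) = 1
    for t in range(T - 2, -1, -1):                 # step (2): recurse backwards
        beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])
    return beta

pi = np.array([0.3, 0.3, 0.4])
A = np.array([[0.1, 0.5, 0.4], [0.4, 0.2, 0.4], [0.5, 0.3, 0.2]])
B = np.array([[0.4, 0.2, 0.2, 0.2],
              [0.25, 0.25, 0.25, 0.25],
              [0.33, 0.33, 0.33, 0.0]])
O = [0, 1, 2]

beta = backward(pi, A, B, O)
p = (pi * B[:, O[0]] * beta[0]).sum()              # step (3)
print(p)  # ~0.02179, same as the forward algorithm
```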

# The prediction algorithm

Define $\gamma_t(i)$ as the probability of being in state $S_i$ at time $t$, given the whole observation sequence:

$\gamma_t(i) = P(s_t = S_i \mid O, \lambda)$

Since $\alpha_t(i)\,\beta_t(i) = P(s_t = S_i, O \mid \lambda)$, it follows that $\gamma_t(i) = \dfrac{\alpha_t(i)\,\beta_t(i)}{P(O \mid \lambda)} = \dfrac{\alpha_t(i)\,\beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j)\,\beta_t(j)}$.
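Combining the two recursions gives the posterior state probabilities. The sketch below reuses the same toy matrices; each row of `gamma` is a distribution over the three states at that time step.

```python
import numpy as np

pi = np.array([0.3, 0.3, 0.4])
A = np.array([[0.1, 0.5, 0.4], [0.4, 0.2, 0.4], [0.5, 0.3, 0.2]])
B = np.array([[0.4, 0.2, 0.2, 0.2],
              [0.25, 0.25, 0.25, 0.25],
              [0.33, 0.33, 0.33, 0.0]])
O = [0, 1, 2]
T, N = len(O), len(pi)

# forward pass: alpha_t(i) = P(o_1..o_t, s_t = i | lambda)
alpha = np.zeros((T, N))
alpha[0] = pi * B[:, O[0]]
for t in range(T - 1):
    alpha[t + 1] = (alpha[t] @ A) * B[:, O[t + 1]]

# backward pass: beta_t(i) = P(o_{t+1}..o_T | s_t = i, lambda)
beta = np.ones((T, N))
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])

# gamma_t(i) = alpha_t(i) beta_t(i) / sum_j alpha_t(j) beta_t(j)
gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)

print(gamma)  # each row sums to 1
```

Note that picking the argmax of each row of `gamma` maximizes each state marginally; the Viterbi decoding used by `model.decode` below instead finds the single most probable joint state sequence, which can differ.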

# Learning algorithms

Depending on the training data available, HMM learning algorithms fall into two kinds: supervised learning and unsupervised learning.

## Supervised learning

$\hat{b}_{jk} = \dfrac{B_{jk}}{\sum_{k=1}^{M} B_{jk}},\quad j = 1, 2, \dots, N;\ k = 1, 2, \dots, M$

where $B_{jk}$ counts how many times state $j$ emits observation symbol $k$ in the labeled training data.
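With labeled data, $\hat{b}_{jk}$ is just a normalized count. A minimal sketch, where the labeled (state, observation) pairs are made up purely for illustration:

```python
import numpy as np

N, M = 3, 4  # number of states, number of observation symbols

# Hypothetical labeled corpus: (state, observation) pairs, for illustration only.
labeled = [(0, 0), (0, 0), (0, 1), (1, 2), (1, 3), (2, 0), (2, 2), (2, 2)]

# B_jk = number of times state j emits symbol k
counts = np.zeros((N, M))
for j, k in labeled:
    counts[j, k] += 1

# hat(b)_jk = B_jk / sum_k B_jk: normalize each state's row into a distribution
b_hat = counts / counts.sum(axis=1, keepdims=True)
print(b_hat)
```

The initial-state and transition probabilities are estimated the same way, from normalized counts of starting states and of state-to-state transitions.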

## Unsupervised learning

The Baum-Welch algorithm builds on the forward-backward algorithm and is itself a special case of the EM algorithm. Put simply, it is an EM loop with the forward-backward algorithm nested inside each iteration.
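To make that description concrete, here is a sketch of a single Baum-Welch re-estimation step in plain NumPy, run on the toy matrices from the example below. A real implementation would iterate to convergence, support multiple training sequences, and work in log space; this only shows the shape of one E-step and M-step.

```python
import numpy as np

def baum_welch_step(pi, A, B, O):
    """One EM iteration of Baum-Welch for a single observation sequence."""
    T, N = len(O), len(pi)

    # E-step, part 1: forward and backward passes
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]
    for t in range(T - 1):
        alpha[t + 1] = (alpha[t] @ A) * B[:, O[t + 1]]
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])
    p = alpha[-1].sum()                            # P(O|lambda)

    # E-step, part 2: posteriors over states and transitions
    gamma = alpha * beta / p                       # gamma_t(i) = P(s_t=i | O)
    xi = np.zeros((T - 1, N, N))                   # xi_t(i,j) = P(s_t=i, s_{t+1}=j | O)
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, O[t + 1]] * beta[t + 1])[None, :] / p

    # M-step: re-estimate pi, A, B from expected counts
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[np.array(O) == k].sum(axis=0) / gamma.sum(axis=0)
    return new_pi, new_A, new_B

pi = np.array([0.3, 0.3, 0.4])
A = np.array([[0.1, 0.5, 0.4], [0.4, 0.2, 0.4], [0.5, 0.3, 0.2]])
B = np.array([[0.4, 0.2, 0.2, 0.2],
              [0.25, 0.25, 0.25, 0.25],
              [0.33, 0.33, 0.33, 0.0]])

new_pi, new_A, new_B = baum_welch_step(pi, A, B, [0, 1, 2])
print(new_pi)  # re-estimated parameters are valid probability distributions
```

In practice you would simply call `model.fit(...)` in hmmlearn, which runs exactly this loop for you.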

# HMM example

```python
from __future__ import division
import numpy as np

from hmmlearn import hmm


def calculateLikelyHood(model, X):
    score = model.score(np.atleast_2d(X).T)

    print("\n\n[CalculateLikelyHood]: ")
    print("\nobservations:")
    for observation in list(map(lambda x: observations[x], X)):
        print(" ", observation)

    print("\nlikelyhood:", np.exp(score))


def optimizeStates(model, X):
    Y = model.decode(np.atleast_2d(X).T)
    print("\n\n[OptimizeStates]:")
    print("\nobservations:")
    for observation in list(map(lambda x: observations[x], X)):
        print(" ", observation)

    print("\nstates:")
    for state in list(map(lambda x: states[x], Y[1])):
        print(" ", state)


states = ["Gold", "Silver", "Bronze"]
n_states = len(states)

observations = ["Ruby", "Pearl", "Coral", "Sapphire"]
n_observations = len(observations)

start_probability = np.array([0.3, 0.3, 0.4])

transition_probability = np.array([
    [0.1, 0.5, 0.4],
    [0.4, 0.2, 0.4],
    [0.5, 0.3, 0.2]
])

emission_probability = np.array([
    [0.4, 0.2, 0.2, 0.2],
    [0.25, 0.25, 0.25, 0.25],
    [0.33, 0.33, 0.33, 0]
])

# Note: in newer hmmlearn releases this discrete-emission model is named
# CategoricalHMM; MultinomialHMM is the name used in older versions.
model = hmm.MultinomialHMM(n_components=3)

# Specify pi (start_probability), A (transition_probability)
# and B (emission_probability) directly
model.startprob_ = start_probability
model.transmat_ = transition_probability
model.emissionprob_ = emission_probability

X1 = [0, 1, 2]

calculateLikelyHood(model, X1)
optimizeStates(model, X1)

X2 = [0, 0, 0]

calculateLikelyHood(model, X2)
optimizeStates(model, X2)
```


Running the script prints:

```
[CalculateLikelyHood]:

observations:
  Ruby
  Pearl
  Coral

likelyhood: 0.021792431999999997


[OptimizeStates]:

observations:
  Ruby
  Pearl
  Coral

states:
  Gold
  Silver
  Bronze


[CalculateLikelyHood]:

observations:
  Ruby
  Ruby
  Ruby

likelyhood: 0.03437683199999999


[OptimizeStates]:

observations:
  Ruby
  Ruby
  Ruby

states:
  Bronze
  Gold
  Bronze
```
