# Hidden Markov Model (HMM) Study Notes

### 2017.1.3

Generating Patterns

HMM

HMM Definition

An HMM is a triple (π, A, B):

π — the vector of the initial state probabilities;

A — the state transition matrix;

B — the confusion (emission) matrix.
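As a concrete sketch of the triple, the weather example used throughout this tutorial can be encoded in plain Python. All probability values below are invented for illustration; they are not taken from the text:

```python
# A toy weather HMM matching the tutorial's running example.
# Hidden states: sunny, cloudy, rainy; observations: dry, damp, soggy.
# All numbers are illustrative assumptions.

states = ["sunny", "cloudy", "rainy"]
obs_set = ["dry", "damp", "soggy"]

# pi: vector of initial state probabilities
pi = {"sunny": 0.6, "cloudy": 0.3, "rainy": 0.1}

# A: state transition matrix, A[i][j] = Pr(next state j | current state i)
A = {"sunny":  {"sunny": 0.5, "cloudy": 0.3, "rainy": 0.2},
     "cloudy": {"sunny": 0.3, "cloudy": 0.4, "rainy": 0.3},
     "rainy":  {"sunny": 0.2, "cloudy": 0.3, "rainy": 0.5}}

# B: confusion (emission) matrix, B[i][k] = Pr(observation k | hidden state i)
B = {"sunny":  {"dry": 0.7, "damp": 0.2, "soggy": 0.1},
     "cloudy": {"dry": 0.3, "damp": 0.5, "soggy": 0.2},
     "rainy":  {"dry": 0.1, "damp": 0.3, "soggy": 0.6}}
```

Each row of A and B is a probability distribution, so every row must sum to 1.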

Applications of HMMs

(1) Evaluation

(2) Decoding

The Viterbi algorithm is also widely used in natural language processing, for example in part-of-speech tagging: the written words are the observed states, and the parts of speech are the hidden states. With an HMM we can find the syntactic structure most likely to underlie a sentence given its context.

(3) Learning

1. Matching the most likely system to a sequence of observations (evaluation), solved using the forward algorithm;

2. Determining the hidden sequence most likely to have generated a sequence of observations (decoding), solved using the Viterbi algorithm;

3. Determining the model parameters most likely to have generated a sequence of observations (learning), solved using the forward-backward algorithm.

Finding the probability of an observed sequence

1. Exhaustive search

Sum the joint probability of the observations with every possible hidden-state sequence:

Pr(dry,damp,soggy | HMM) = Pr(dry,damp,soggy, sunny,sunny,sunny) + Pr(dry,damp,soggy, sunny,sunny,cloudy) + Pr(dry,damp,soggy, sunny,sunny,rainy) + … + Pr(dry,damp,soggy, rainy,rainy,rainy)
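The exhaustive sum can be sketched directly in Python by enumerating all n^T hidden paths, which makes the exponential cost visible. The weather probabilities below are invented illustrative numbers:

```python
from itertools import product

states = ["sunny", "cloudy", "rainy"]
pi = {"sunny": 0.6, "cloudy": 0.3, "rainy": 0.1}
A = {"sunny":  {"sunny": 0.5, "cloudy": 0.3, "rainy": 0.2},
     "cloudy": {"sunny": 0.3, "cloudy": 0.4, "rainy": 0.3},
     "rainy":  {"sunny": 0.2, "cloudy": 0.3, "rainy": 0.5}}
B = {"sunny":  {"dry": 0.7, "damp": 0.2, "soggy": 0.1},
     "cloudy": {"dry": 0.3, "damp": 0.5, "soggy": 0.2},
     "rainy":  {"dry": 0.1, "damp": 0.3, "soggy": 0.6}}

def prob_exhaustive(obs):
    """Pr(obs | HMM): sum the joint Pr(obs, path) over all n**T hidden paths."""
    total = 0.0
    for path in product(states, repeat=len(obs)):
        # joint probability of this hidden path and the observations
        p = pi[path[0]] * B[path[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= A[path[t - 1]][path[t]] * B[path[t]][obs[t]]
        total += p
    return total
```

For T observations and n states this loops over n^T paths, exactly the cost the recursive method below avoids.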

2. Reducing the complexity with recursion

2a. Partial probabilities (α's)

α_t(j) denotes the partial probability of state j at time t, computed as:

α_t(j) = Pr(observation | hidden state is j) × Pr(all paths to state j at time t)

Section 3 gives an animated illustration of how these probabilities are computed.

2b. Computing the partial probabilities at the initial time step

At t = 1 there are no paths into a state, so the "all paths" term reduces to the initial state probability π(j):

α_1(j) = π(j) × Pr(observation at t = 1 | hidden state is j) = π(j) · b_j(o_1)

2c. Computing the partial probabilities for t > 1

Every path into state j at time t passes through some state i at time t − 1, so the recursion is:

α_t(j) = b_j(o_t) × Σ_i α_{t−1}(i) · a_{ij}

2d. Complexity comparison

=======================================================

We use the forward algorithm to calculate the probability of an observation sequence of length T,

Y = y_1, …, y_T,

where each y_t is one of the observable set. Intermediate probabilities (α's) are calculated recursively by first calculating α_1(j) for all states j at t = 1.

Then for each time step, t = 2, …, T, the partial probability α_t(j) is calculated for each state j:

α_t(j) = b_j(y_t) × Σ_i α_{t−1}(i) · a_{ij}

that is, the product of the appropriate observation probability and the sum over all possible routes to that state, exploiting recursion by knowing these values already for the previous time step. Finally, the sum of all partial probabilities at t = T gives the probability of the observation sequence given the HMM:

Pr(Y | λ) = Σ_j α_T(j).

=======================================================
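The recursion summarised above can be sketched as a short pure-Python function. The weather model parameters are invented illustrative numbers, not from the text:

```python
from itertools import product

states = ["sunny", "cloudy", "rainy"]
obs_set = ["dry", "damp", "soggy"]
pi = {"sunny": 0.6, "cloudy": 0.3, "rainy": 0.1}
A = {"sunny":  {"sunny": 0.5, "cloudy": 0.3, "rainy": 0.2},
     "cloudy": {"sunny": 0.3, "cloudy": 0.4, "rainy": 0.3},
     "rainy":  {"sunny": 0.2, "cloudy": 0.3, "rainy": 0.5}}
B = {"sunny":  {"dry": 0.7, "damp": 0.2, "soggy": 0.1},
     "cloudy": {"dry": 0.3, "damp": 0.5, "soggy": 0.2},
     "rainy":  {"dry": 0.1, "damp": 0.3, "soggy": 0.6}}

def forward(obs):
    """Pr(obs | HMM) via the alpha recursion, O(T * n^2) instead of O(n^T)."""
    # t = 1: alpha_1(j) = pi(j) * b_j(o_1)
    alpha = {j: pi[j] * B[j][obs[0]] for j in states}
    # t > 1: alpha_t(j) = b_j(o_t) * sum_i alpha_{t-1}(i) * a_ij
    for o in obs[1:]:
        alpha = {j: B[j][o] * sum(alpha[i] * A[i][j] for i in states)
                 for j in states}
    # Pr(obs | HMM) = sum over all final states of alpha_T(j)
    return sum(alpha.values())
```

A useful sanity check: summing `forward` over every possible observation sequence of a fixed length must give exactly 1, since the HMM defines a probability distribution over such sequences.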

Forward Algorithm (Done)

Finding the most probable sequence of hidden states (Viterbi Algorithm)

1. Exhaustive search

Pr(observed sequence | hidden state combination).

Pr(dry,damp,soggy | sunny,sunny,sunny), Pr(dry,damp,soggy | sunny,sunny,cloudy), Pr(dry,damp,soggy | sunny,sunny,rainy), …, Pr(dry,damp,soggy | rainy,rainy,rainy)

2. Reducing the complexity with recursion

2a. Partial probabilities and partial best paths

δ(i, t) is the maximal probability over all sequences that end in state i at time t; the path achieving it is the partial best path. δ(i, t) exists for every i and t, so at time T (the last step of the sequence) we can recover the best path for the entire sequence.

2b. Computing the initial values of the δ's at t = 1

2c. Computing the δ's (partial probabilities) for t > 1

The most probable path to state X at time t ends in one of:

(sequence of states), …, A, X
(sequence of states), …, B, X
(sequence of states), …, C, X

and the probability of, say, the path ending A → X is:

Pr(most probable path to A) × Pr(X | A) × Pr(observation | X)

2d. Back pointers (φ's)

2e. Two advantages of the Viterbi algorithm

1) Like the forward algorithm, it greatly reduces the computational complexity.

2) Viterbi reads the observation sequence "left to right" and produces the interpretation that is best given the whole context. Because it considers the entire observation sequence before committing to a final answer, a momentary burst of noise cannot pull the decision away from the correct answer, a situation that occurs frequently in real data.

==================================================

1. Formal definition of the algorithm

The algorithm may be summarised formally as follows.

For each i = 1, …, n, let:

δ_1(i) = π(i) · b_i(o_1)

- this initialises the probability calculations by taking the product of the initial hidden state probabilities with the associated observation probabilities.

For t = 2, …, T, and i = 1, …, n, let:

δ_t(i) = max_j ( δ_{t−1}(j) · a_{ji} ) · b_i(o_t)
φ_t(i) = argmax_j ( δ_{t−1}(j) · a_{ji} )

- thus determining the most probable route to the next state, and remembering how to get there. This is done by considering all products of transition probabilities with the maximal probabilities already derived for the preceding step. The largest such product is remembered, together with the state that provoked it.

Let:

i_T = argmax_i δ_T(i)

- thus determining which state at system completion (t = T) is the most probable.

For t = T − 1, …, 1, let:

i_t = φ_{t+1}(i_{t+1})

- thus backtracking through the trellis, following the most probable route. On completion, the sequence i_1 … i_T will hold the most probable sequence of hidden states for the observation sequence in hand.

==================================================
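The formal Viterbi definition above can be sketched in pure Python, with δ held as a dict per time step and the φ back pointers kept in a list for the final backtracking pass. The weather model numbers are invented for illustration:

```python
states = ["sunny", "cloudy", "rainy"]
pi = {"sunny": 0.6, "cloudy": 0.3, "rainy": 0.1}
A = {"sunny":  {"sunny": 0.5, "cloudy": 0.3, "rainy": 0.2},
     "cloudy": {"sunny": 0.3, "cloudy": 0.4, "rainy": 0.3},
     "rainy":  {"sunny": 0.2, "cloudy": 0.3, "rainy": 0.5}}
B = {"sunny":  {"dry": 0.7, "damp": 0.2, "soggy": 0.1},
     "cloudy": {"dry": 0.3, "damp": 0.5, "soggy": 0.2},
     "rainy":  {"dry": 0.1, "damp": 0.3, "soggy": 0.6}}

def viterbi(obs):
    """Most probable hidden path for obs, with its probability."""
    # Initialisation: delta_1(i) = pi(i) * b_i(o_1)
    delta = {i: pi[i] * B[i][obs[0]] for i in states}
    phi = []  # back pointers, one dict per time step t = 2..T
    for t in range(1, len(obs)):
        new_delta, back = {}, {}
        for i in states:
            # delta_t(i) = max_j(delta_{t-1}(j) * a_ji) * b_i(o_t)
            best_j = max(states, key=lambda j: delta[j] * A[j][i])
            back[i] = best_j
            new_delta[i] = delta[best_j] * A[best_j][i] * B[i][obs[t]]
        delta, _ = new_delta, phi.append(back)
    # Termination: most probable final state, then backtrack through phi
    last = max(states, key=lambda s: delta[s])
    path = [last]
    for back in reversed(phi):
        path.append(back[path[-1]])
    path.reverse()
    return path, delta[last]
```

With these illustrative parameters, three "dry" days decode to three "sunny" hidden states, as intuition suggests.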

The third application of HMMs is learning. That algorithm is not covered in detail here, not because it is hard to understand, but because it is considerably more involved than the previous two. Learning plays an important role in speech processing, because it lets us find suitable HMM parameters (initial state probabilities, transition probabilities, confusion matrix, and so on) even when the state space is large and the observation sequences are long.
