Reference paper
I. Original Text
HMM is also a maneuver-based method that uses a Markov chain.
state transition probability: $P(S_{n+1} = s \mid S_n = s_n)$
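As a minimal sketch of what this transition probability means in practice (using a hypothetical two-state chain with made-up numbers), $P(S_{n+1}=s \mid S_n=s_n)$ is just a lookup in a row-stochastic matrix:

```python
import numpy as np

# Hypothetical 2-state Markov chain (numbers made up for illustration);
# rows index the current state s_n, columns the next state s,
# so each row sums to 1.
P = np.array([
    [0.9, 0.1],  # P(S_{n+1} = . | S_n = 0)
    [0.3, 0.7],  # P(S_{n+1} = . | S_n = 1)
])

def transition_prob(s_next, s_now):
    """Return P(S_{n+1} = s_next | S_n = s_now): a single row/column lookup."""
    return P[s_now, s_next]

print(transition_prob(1, 0))  # → 0.1
```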
In real life, we can only observe the distinct states exposed on the surface; the hidden states have no intuitive representation and cannot be observed directly.
II. Hidden Markov Model
HMM is represented by $(S, O, A, B, \pi)$:
- $S = \{S_1, S_2, ..., S_N\}$: hidden state sequence
- $O = \{O_1, O_2, ..., O_N\}$: observation sequence
- $A$: transition probability matrix between hidden states
- $B$: output (emission) matrix, representing the probability of each hidden state producing each output state
- $\pi$: initial probability matrix, representing the initial probability distribution over the hidden states
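The five components above can be sketched as NumPy arrays; this is a toy model with two hidden states and two observation symbols, and all probabilities are made up for illustration:

```python
import numpy as np

# Toy HMM: N = 2 hidden states, M = 2 observation symbols (numbers made up).
A = np.array([[0.7, 0.3],    # A[i, j]: transition prob. from hidden state i to j
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],    # B[i, k]: prob. that hidden state i emits symbol k
              [0.2, 0.8]])
pi = np.array([0.6, 0.4])    # initial distribution over the hidden states

# One generation step shows how the pieces interact:
rng = np.random.default_rng(0)
state = rng.choice(2, p=pi)        # draw the initial hidden state from pi
obs = rng.choice(2, p=B[state])    # emit an observation using row B[state]
state = rng.choice(2, p=A[state])  # transition to the next hidden state via A
```

Only the emitted `obs` values would be visible to an observer; the `state` variable is the hidden part.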
Application in trajectory prediction
$O$: historical states of traffic participants
The paper formulates intent recognition as an HMM problem and solves it with the forward algorithm.
III. Forward Algorithm
Reference link: https://zhuanlan.zhihu.com/p/359831957
Notation
$Q$: the set of possible hidden states (the states $q_i$ used below are drawn from $Q$)
Three basic problems
(1) Probability computation problem: given the model $\lambda = (A, B, \pi)$ and an observation sequence $O$, compute the probability $P(O \mid \lambda)$ that the model produces $O$.
(2) Learning problem
(3) Prediction (decoding) problem
Definition
Given a hidden Markov model $\lambda$, the forward probability is defined as the probability of the partial observation sequence $o_1, o_2, ..., o_t$ up to time $t$ with the state at time $t$ being $q_i$, written $\alpha_t(i) = P(o_1, o_2, ..., o_t, i_t = q_i \mid \lambda)$.
The forward probabilities $\alpha_t(i)$ and the observation sequence probability $P(O \mid \lambda)$ can then be computed recursively.
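The recursion can be sketched in a few lines: initialize $\alpha_1(i) = \pi_i \, b_i(o_1)$, recurse with $\alpha_{t+1}(j) = \big[\sum_i \alpha_t(i) a_{ij}\big] b_j(o_{t+1})$, and terminate with $P(O \mid \lambda) = \sum_i \alpha_T(i)$. The parameters below are made-up toy values, not the paper's:

```python
import numpy as np

def forward(A, B, pi, obs):
    """Forward algorithm: return the alpha table and P(O | lambda).

    alpha[t, i] = P(o_1, ..., o_{t+1}, state at step t+1 is q_i | lambda)
    (0-based t; obs is a list of observation-symbol indices).
    """
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                   # initialization
    for t in range(1, T):                          # recursion over time steps
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha, alpha[-1].sum()                  # termination: sum over final states

# Toy parameters (made up) and the observation sequence [0, 1, 0]:
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
alpha, p_obs = forward(A, B, pi, [0, 1, 0])
```

The matrix form `alpha[t-1] @ A` computes $\sum_i \alpha_t(i) a_{ij}$ for all $j$ at once, which is why the whole recursion fits in one line per time step.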
Explanation: