An example of using the HMM functions in MATLAB

This post is aimed at beginners. It walks through how to use the HMM functions in MATLAB, in particular hmmdecode, and adds my own understanding and notes on the explanations in the official MATLAB documentation of hidden Markov models.

Anyone just starting out with MATLAB's HMM functions is bound to have questions about their parameters and return values, and the official MATLAB documentation http://www.mathworks.cn/cn/help/stats/hidden-markov-models-hmm.html is rather terse on the subject. This post writes up my understanding of these functions; questions and corrections are welcome O(∩_∩)O~

1. hmmdecode

1.1 Running in MATLAB

>> t = [.7, .3; .4, .6]
t =   % transition matrix
    0.7000    0.3000
    0.4000    0.6000

>> e = [.1, .1, .8; .8, .1, .1]
e =  %emission matrix
    0.1000    0.1000    0.8000
    0.8000    0.1000    0.1000

>> s = [1, 2, 3]  %sequence
s =
     1     2     3

>> [p,logPseq,fs,bs,scale] = hmmdecode(s,t,e)
p =  % posterior state probabilities of the sequence seq
    0.2488    0.5771    0.9039
    0.7512    0.4229    0.0961

logPseq =  % the logarithm of the probability of sequence seq
   -4.2114

fs =  % fs and bs are the forward and backward probabilities of the sequence, scaled by scale
    1.0000    0.2258    0.4677    0.9039
         0    0.7742    0.5323    0.0961

bs =
    1.0000    1.1020    1.2337    1.0000
    1.6445    0.9703    0.7946    1.0000

scale =
    1.0000    0.3100    0.1000    0.4782

1.2 Verifying the return values

I wrote a program to verify these return values; see the appendix at the end of this post.
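As one way to do that check (this is a sketch of mine, not the author's appendix program), here is a minimal scaled forward-backward pass in Python/NumPy that reproduces p, logPseq, fs, bs, and scale from the transcript above. Note that MATLAB's hmmdecode assumes the chain starts in state 1 with certainty before the first emission, which is why fs and scale have one extra leading column.

```python
import numpy as np

# Model from the transcript above
T = np.array([[0.7, 0.3],
              [0.4, 0.6]])        # transition matrix
E = np.array([[0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])   # emission matrix
seq = [0, 1, 2]                   # symbols 1, 2, 3 in 0-based indexing

n, L = T.shape[0], len(seq)
fs = np.zeros((n, L + 1))
scale = np.zeros(L + 1)
fs[0, 0], scale[0] = 1.0, 1.0     # start in state 1 with certainty

# scaled forward pass: transition, then emit, then renormalize
for t, o in enumerate(seq, start=1):
    f = E[:, o] * (T.T @ fs[:, t - 1])
    scale[t] = f.sum()
    fs[:, t] = f / scale[t]

# scaled backward pass
bs = np.ones((n, L + 1))
for t in range(L - 1, -1, -1):
    bs[:, t] = T @ (E[:, seq[t]] * bs[:, t + 1]) / scale[t + 1]

p = fs[:, 1:] * bs[:, 1:]         # posterior state probabilities
logPseq = np.log(scale).sum()     # log P(seq | model)
# p[:, 0] ≈ [0.2488, 0.7512] and logPseq ≈ -4.2114, matching hmmdecode
```

The last line also shows where logPseq comes from: the probability of the whole sequence is the product of the scaling factors, so its logarithm is the sum of their logs.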
An HMM algorithm implementation and example in MATLAB: hmm_em.m

function [LL, prior, transmat, obsmat, nrIterations] = ...
    dhmm_em(data, prior, transmat, obsmat, varargin)
% LEARN_DHMM Find the ML/MAP parameters of an HMM with discrete outputs using EM.
% [ll_trace, prior, transmat, obsmat, iterNr] = learn_dhmm(data, prior0, transmat0, obsmat0, ...)
%
% Notation: Q(t) = hidden state, Y(t) = observation
%
% INPUTS:
% data{ex} or data(ex,:) if all sequences have the same length
% prior(i)
% transmat(i,j)
% obsmat(i,o)
%
% Optional parameters may be passed as 'param_name', param_value pairs.
% Parameter names are shown below; default values in [] - if none, argument is mandatory.
%
% 'max_iter' - max number of EM iterations [10]
% 'thresh' - convergence threshold [1e-4]
% 'verbose' - if 1, print out loglik at every iteration [1]
% 'obs_prior_weight' - weight to apply to uniform dirichlet prior on observation matrix [0]
%
% To clamp some of the parameters, so learning does not change them:
% 'adj_prior' - if 0, do not change prior [1]
% 'adj_trans' - if 0, do not change transmat [1]
% 'adj_obs' - if 0, do not change obsmat [1]
%
% Modified by Herbert Jaeger so xi are not computed individually
% but only their sum (over time) as xi_summed; this is the only way how they are used
% and it saves a lot of memory.

[max_iter, thresh, verbose, obs_prior_weight, adj_prior, adj_trans, adj_obs] = ...
    process_options(varargin, 'max_iter', 10, 'thresh', 1e-4, 'verbose', 1, ...
                    'obs_prior_weight', 0, 'adj_prior', 1, 'adj_trans', 1, 'adj_obs', 1);

previous_loglik = -inf;
loglik = 0;
converged = 0;
num_iter = 1;
LL = [];

if ~iscell(data)
  data = num2cell(data, 2); % each row gets its own cell
end

while (num_iter <= max_iter) & ~converged
  % E step
  [loglik, exp_num_trans, exp_num_visits1, exp_num_emit] = ...
      compute_ess_dhmm(prior, transmat, obsmat, data, obs_prior_weight);

  % M step
  if adj_prior
    prior = normalise(exp_num_visits1);
  end
  if adj_trans & ~isempty(exp_num_trans)
    transmat = mk_stochastic(exp_num_trans);
  end
  if adj_obs
    obsmat = mk_stochastic(exp_num_emit);
  end

  if verbose, fprintf(1, 'iteration %d, loglik = %f\n', num_iter, loglik); end
  num_iter = num_iter + 1;
  converged = em_converged(loglik, previous_loglik, thresh);
  previous_loglik = loglik;
  LL = [LL loglik];
end
nrIterations = num_iter - 1;
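The MATLAB routine depends on toolbox helpers (process_options, normalise, mk_stochastic, compute_ess_dhmm, em_converged) that are not shown. For readers without the toolbox, here is a rough self-contained NumPy sketch of the same Baum-Welch loop: a scaled forward-backward E step that accumulates expected counts (with xi summed over time, as the header comment describes), followed by a renormalizing M step. Function names, argument names, and defaults here are mine, not the toolbox's.

```python
import numpy as np

def normalize(a, axis=None):
    """Scale so entries sum to 1 (along axis for a stochastic matrix)."""
    s = a.sum(axis=axis, keepdims=axis is not None)
    return a / np.where(s == 0, 1, s)

def dhmm_em(seqs, prior, T, E, max_iter=10, thresh=1e-4):
    """EM for a discrete-output HMM over several observation sequences."""
    LL, prev = [], -np.inf
    for _ in range(max_iter):
        loglik = 0.0
        exp_visits1 = np.zeros_like(prior)   # expected counts of starting state
        exp_trans = np.zeros_like(T)         # xi summed over time and sequences
        exp_emit = np.zeros_like(E)          # expected emission counts
        for seq in seqs:
            n, L = len(prior), len(seq)
            alpha = np.zeros((n, L)); beta = np.ones((n, L)); c = np.zeros(L)
            # scaled forward pass
            alpha[:, 0] = prior * E[:, seq[0]]
            c[0] = alpha[:, 0].sum(); alpha[:, 0] /= c[0]
            for t in range(1, L):
                alpha[:, t] = E[:, seq[t]] * (T.T @ alpha[:, t - 1])
                c[t] = alpha[:, t].sum(); alpha[:, t] /= c[t]
            # scaled backward pass
            for t in range(L - 2, -1, -1):
                beta[:, t] = T @ (E[:, seq[t + 1]] * beta[:, t + 1]) / c[t + 1]
            loglik += np.log(c).sum()
            gamma = alpha * beta             # posterior state probabilities
            exp_visits1 += gamma[:, 0]
            for t in range(L - 1):           # accumulate xi summed over time
                exp_trans += T * np.outer(alpha[:, t],
                                          E[:, seq[t + 1]] * beta[:, t + 1]) / c[t + 1]
            for t in range(L):
                exp_emit[:, seq[t]] += gamma[:, t]
        # M step: renormalize the expected counts
        prior = normalize(exp_visits1)
        T = normalize(exp_trans, axis=1)
        E = normalize(exp_emit, axis=1)
        LL.append(loglik)
        if loglik - prev < thresh:
            break
        prev = loglik
    return LL, prior, T, E
```

By EM theory the returned log-likelihood trace LL should be non-decreasing, which is a quick sanity check on any implementation of this loop.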
