Chapter 2: How to Use the HMM Toolbox Commands (MATLAB)

HOW TO USE THE HMM TOOLBOX (MATLAB)

1. HMM with discrete outputs (DHMM)

Maximum likelihood parameter estimation using EM (Baum-Welch)

The script dhmm_em_demo.m gives an example of how to learn an HMM with discrete outputs. Let there be Q=2 states and O=3 output symbols. We create random stochastic matrices as follows.

O = 3;
Q = 2;
prior0 = normalise(rand(Q,1));
transmat0 = mk_stochastic(rand(Q,Q));
obsmat0 = mk_stochastic(rand(Q,O));

Now we sample nex=20 sequences of length T=10 each from this model, to use as training data.

T = 10;    % sequence length
nex = 20;  % number of training sequences
data = dhmm_sample(prior0, transmat0, obsmat0, nex, T);

Here data is 20x10. Now we make a random guess as to what the parameters are,

prior1 = normalise(rand(Q,1));        % initial state distribution
transmat1 = mk_stochastic(rand(Q,Q)); % initial state-transition matrix
obsmat1 = mk_stochastic(rand(Q,O));   % initial emission (observation) probability matrix

and improve our guess using 5 iterations of EM...

[LL, prior2, transmat2, obsmat2] = dhmm_em(data, prior1, transmat1, obsmat1, 'max_iter', 5);
% prior2, transmat2, obsmat2 are the trained initial-state distribution,
% state-transition matrix and emission probability matrix

LL(t) is the log-likelihood after iteration t, so we can plot the learning curve.

Sequence classification

To evaluate the log-likelihood of a trained model given test data, proceed as follows:

loglik = dhmm_logprob(data, prior2, transmat2, obsmat2)   % score the data under the trained HMM

Note: the discrete alphabet is assumed to be {1, 2, ..., O}, where O = size(obsmat, 2). Hence data cannot contain any 0s.

To classify a sequence into one of k classes, train up k HMMs, one per class, and then compute the log-likelihood that each model gives to the test sequence; if the i'th model is the most likely, then declare the class of the sequence to be class i. (A short end-to-end sketch of this recipe is given below.)

Computing the most probable sequence (Viterbi)

First you need to evaluate B(i,t) = P(y_t | Q_t=i) for all t,i:

B = multinomial_prob(data, obsmat);

Then you can use

[path] = viterbi_path(prior, transmat, B)

2. HMM with mixture-of-Gaussians outputs (GHMM)

Maximum likelihood parameter estimation using EM (Baum-Welch)

Let us generate nex=50 vector-valued sequences of length T=50 each.
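Returning to the discrete-output classification recipe above: the snippet below spells it out for k=2 classes. It is a minimal sketch, assuming two labelled training sets data1 and data2 and a test sequence testdata (hypothetical variable names); it uses only the toolbox functions already introduced (normalise, mk_stochastic, dhmm_em, dhmm_logprob).

% Minimal sketch of k-class sequence classification with k = 2.
% data1, data2, testdata are hypothetical; each dataX is nex-by-T as above.
O = 3; Q = 2;

% Train one DHMM per class from a random initial guess.
[LL1, prior_c1, trans_c1, obs_c1] = dhmm_em(data1, normalise(rand(Q,1)), ...
    mk_stochastic(rand(Q,Q)), mk_stochastic(rand(Q,O)), 'max_iter', 10);
[LL2, prior_c2, trans_c2, obs_c2] = dhmm_em(data2, normalise(rand(Q,1)), ...
    mk_stochastic(rand(Q,Q)), mk_stochastic(rand(Q,O)), 'max_iter', 10);

% Score the test sequence under each trained model.
loglik1 = dhmm_logprob(testdata, prior_c1, trans_c1, obs_c1);
loglik2 = dhmm_logprob(testdata, prior_c2, trans_c2, obs_c2);

% Declare the class whose model assigns the highest log-likelihood.
[best, class] = max([loglik1 loglik2]);

In practice the per-class training sequences would come from labelled data rather than from dhmm_sample.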

HMM algorithm in MATLAB: implementation and example (dhmm_em.m)

function [LL, prior, transmat, obsmat, nrIterations] = ...
    dhmm_em(data, prior, transmat, obsmat, varargin)
% LEARN_DHMM Find the ML/MAP parameters of an HMM with discrete outputs using EM.
% [ll_trace, prior, transmat, obsmat, iterNr] = learn_dhmm(data, prior0, transmat0, obsmat0, ...)
%
% Notation: Q(t) = hidden state, Y(t) = observation
%
% INPUTS:
% data{ex} or data(ex,:) if all sequences have the same length
% prior(i)
% transmat(i,j)
% obsmat(i,o)
%
% Optional parameters may be passed as 'param_name', param_value pairs.
% Parameter names are shown below; default values in [] - if none, argument is mandatory.
%
% 'max_iter' - max number of EM iterations [10]
% 'thresh' - convergence threshold [1e-4]
% 'verbose' - if 1, print out loglik at every iteration [1]
% 'obs_prior_weight' - weight to apply to uniform dirichlet prior on observation matrix [0]
%
% To clamp some of the parameters, so learning does not change them:
% 'adj_prior' - if 0, do not change prior [1]
% 'adj_trans' - if 0, do not change transmat [1]
% 'adj_obs' - if 0, do not change obsmat [1]
%
% Modified by Herbert Jaeger so xi are not computed individually
% but only their sum (over time) as xi_summed; this is the only way they are used
% and it saves a lot of memory.

[max_iter, thresh, verbose, obs_prior_weight, adj_prior, adj_trans, adj_obs] = ...
    process_options(varargin, 'max_iter', 10, 'thresh', 1e-4, 'verbose', 1, ...
                    'obs_prior_weight', 0, 'adj_prior', 1, 'adj_trans', 1, 'adj_obs', 1);

previous_loglik = -inf;
loglik = 0;
converged = 0;
num_iter = 1;
LL = [];

if ~iscell(data)
  data = num2cell(data, 2); % each row gets its own cell
end

while (num_iter <= max_iter) & ~converged
  % E step: expected sufficient statistics under the current parameters
  [loglik, exp_num_trans, exp_num_visits1, exp_num_emit] = ...
      compute_ess_dhmm(prior, transmat, obsmat, data, obs_prior_weight);

  % M step: re-estimate any parameters that are not clamped
  if adj_prior
    prior = normalise(exp_num_visits1);
  end
  if adj_trans & ~isempty(exp_num_trans)
    transmat = mk_stochastic(exp_num_trans);
  end
  if adj_obs
    obsmat = mk_stochastic(exp_num_emit);
  end

  % Log-likelihood bookkeeping and convergence check
  % (em_converged is assumed from the same toolbox)
  if verbose, fprintf(1, 'iteration %d, loglik = %f\n', num_iter, loglik); end
  num_iter = num_iter + 1;
  converged = em_converged(loglik, previous_loglik, thresh);
  previous_loglik = loglik;
  LL = [LL loglik];
end
nrIterations = num_iter - 1;
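As a usage note on the optional arguments documented in the header above: the 'adj_*' switches clamp selected parameters so EM does not update them. The snippet below is a minimal sketch, reusing data, prior1, transmat1 and obsmat1 from the demo earlier in this chapter, and only using parameter names that appear in the help text above.

% Sketch: run up to 20 EM iterations with a tighter convergence threshold,
% while clamping the transition matrix at its initial value.
[LL, prior2, transmat2, obsmat2] = dhmm_em(data, prior1, transmat1, obsmat1, ...
    'max_iter', 20, 'thresh', 1e-6, 'verbose', 1, 'adj_trans', 0);
% With 'adj_trans' set to 0, transmat2 equals transmat1; the prior and the
% emission matrix are still re-estimated. LL traces the learning curve:
plot(LL); xlabel('EM iteration'); ylabel('log-likelihood');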
