20140620——HCRF

1. Why introduce the HCRF?

It is well known that models which include latent, or hidden-state, structure may be more expressive than fully observable models, and can often find relevant substructure in a given domain.

However, fully observable models such as the CRF are limited in that they cannot capture intermediate structure using hidden-state variables.

In contrast, an HCRF models the distribution P(c, h | x) directly, where c is a category label and h is a set of intermediate hidden variables modeled as a Markov random field globally conditioned on the observation x. The parameters θ of the model are trained discriminatively to optimize P(c | x).
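Written out, this is the standard HCRF formulation, where Ψ is the potential function scoring a class label c and a hidden-state assignment h against the observation x (its exact feature decomposition is defined in the paper and omitted here):

$$
P(c \mid x; \theta) \;=\; \sum_{h} P(c, h \mid x; \theta)
\;=\; \frac{\sum_{h} e^{\Psi(c, h, x; \theta)}}{\sum_{c'} \sum_{h} e^{\Psi(c', h, x; \theta)}}
$$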

The main limitation of latent generative approaches is that they require a model of the local features given the underlying variables, and they generally presume independence of the observations.

Again, a significant difference between their approach and ours is that we do not perform a pre-selection of discriminative parts, but rather incorporate such a step during training.

In our approach the category labels are observed, but an additional layer of subordinate labels is learned. These intermediate hidden variables model the latent structure of the input domain; our model defines the joint probability of a class label and the hidden-state labels conditioned on the observations, with dependencies between the hidden variables expressed by an undirected graph (see the sketch below).
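To make the definition concrete, here is a minimal brute-force sketch that computes P(c | x) for a chain-structured HCRF by enumerating all hidden-state assignments. This is not the paper's inference procedure, which exploits the graph structure with dynamic programming / belief propagation; the parameter names (node_w, trans_w, class_w) and the particular potential decomposition are illustrative assumptions.

```python
# Brute-force HCRF class posterior for a chain of hidden states (illustration only).
import itertools
import numpy as np

def hcrf_class_posterior(x, node_w, trans_w, class_w):
    """Compute P(c | x) for a chain-structured HCRF by summing over all
    hidden-state sequences h. x has shape (T, D): T observations, D features.

    node_w:  (n_classes, n_hidden, D)         hidden-state / observation weights
    trans_w: (n_classes, n_hidden, n_hidden)  weights on edges between hidden states
    class_w: (n_classes, n_hidden)            hidden-state / class-label weights
    """
    T, _ = x.shape
    n_classes, n_hidden, _ = node_w.shape
    scores = np.zeros(n_classes)
    for c in range(n_classes):
        total = 0.0
        # Enumerate every hidden-state assignment h = (h_1, ..., h_T);
        # this is exponential in T and only workable for tiny examples.
        for h in itertools.product(range(n_hidden), repeat=T):
            psi = 0.0
            for t in range(T):
                psi += node_w[c, h[t]] @ x[t]          # observation potential
                psi += class_w[c, h[t]]                # hidden-state / class potential
                if t > 0:
                    psi += trans_w[c, h[t - 1], h[t]]  # edge potential on (h_{t-1}, h_t)
            total += np.exp(psi)
        scores[c] = total
    return scores / scores.sum()  # normalize over classes: P(c | x)
```

In a real implementation the enumeration over h would be replaced by a forward-backward recursion along the chain, reducing the cost from O(|H|^T) to O(T·|H|^2) per class while computing the same sums.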



