Maximum Entropy Model Literature Reading Guide

The Maximum Entropy Model is a machine learning method that has been applied with good results in many areas of natural language processing, such as part-of-speech tagging, Chinese word segmentation, sentence boundary detection, shallow parsing, and text classification. The manual of Dr. Le Zhang's maximum entropy modeling toolkit contains a well-written "Further Reading" section, which is reproduced here as a reading guide to the maximum entropy model literature.
  Unlike the "Statistical Machine Translation Literature Reading Guide", I am still working through the Maximum Entropy Model myself and have little standing to comment, so I will keep the remarks short. These papers are all easy to find on Google, but most of them are fairly long (30-odd pages), and two are doctoral dissertations of over 100 pages. I hope beginners will not be scared off: the classics reward careful, repeated reading!

Maximum Entropy Model Tutorial Reading

  This section lists some recommended papers for your further reference.

1. A Maximum Entropy Approach to Natural Language Processing [Berger et al., 1996]
  (Must read) This paper applies the maxent technique to natural language processing. It describes maxent in detail, presents an Incremental Feature Selection algorithm for incrementally constructing a maxent model, and works through several examples from statistical machine translation; the log-linear form it builds on is recalled below.
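
For orientation, the conditional log-linear form used throughout this literature is the standard one below (restated from the general maxent framework, not quoted from any single paper):

  p_\lambda(y \mid x) = \frac{1}{Z_\lambda(x)} \exp\Big(\sum_i \lambda_i f_i(x, y)\Big),
  \qquad
  Z_\lambda(x) = \sum_{y'} \exp\Big(\sum_i \lambda_i f_i(x, y')\Big)

where the f_i(x, y) are (typically binary) feature functions and the \lambda_i are the weights to be estimated from data.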

2. Inducing Features of Random Fields [Della Pietra et al., 1997]
  (Must read) Another must-read paper on maxent. It deals with a more general framework, random fields, and proposes an Improved Iterative Scaling (IIS) algorithm for estimating the parameters of random fields. The paper gives the theoretical background of random fields (and hence of maxent models). A greedy field induction method is presented for automatically constructing detailed random fields from a set of atomic features, and a word morphology application for English is developed.

3. Adaptive Statistical Language Modeling: A Maximum Entropy Approach [Rosenfeld, 1996]
  This paper applied the ME technique to statistical language modeling. More specifically, it built a conditional maximum entropy model incorporating traditional N-gram features, distant N-gram features, and trigger-pair features (in which a word observed earlier in a document raises the probability of related words appearing later). A significant perplexity reduction over the baseline trigram model was reported. Later, Rosenfeld and his group proposed a Whole Sentence Exponential Model that overcomes the computational bottleneck of conditional ME models.

4. Maximum Entropy Models for Natural Language Ambiguity Resolution [Ratnaparkhi, 1998]
  This dissertation discusses in detail the application of maxent models to various natural language disambiguation tasks. Several problems are attacked within the ME framework: sentence boundary detection, part-of-speech tagging, shallow parsing, and text categorization. Comparisons with other machine learning techniques (Naive Bayes, transformation-based learning, decision trees, etc.) are given.

5. The Improved Iterative Scaling Algorithm: A Gentle Introduction [Berger, 1997]
  This paper describes the IIS algorithm in detail. The presentation is easier to follow than that of [Della Pietra et al., 1997], which involves heavier mathematical notation; the core update it derives is sketched below.
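
As a preview of what the paper derives, each IIS iteration solves a one-dimensional equation per feature and adds the solution to that feature's weight (this is the standard presentation in the conditional-model notation above; consult the paper for the full derivation):

  \sum_{x, y} \tilde{p}(x)\, p_\lambda(y \mid x)\, f_i(x, y)\, e^{\delta_i f^\#(x, y)} = \sum_{x, y} \tilde{p}(x, y)\, f_i(x, y),
  \qquad
  f^\#(x, y) = \sum_i f_i(x, y)

then updates \lambda_i \leftarrow \lambda_i + \delta_i. When f^\#(x, y) is constant, \delta_i has a closed form and IIS reduces to GIS.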

6. Stochastic Attribute-Value Grammars [Abney, 1997]
  Abney applied the Improved Iterative Scaling algorithm to parameter estimation for attribute-value grammars, whose parameters cannot be correctly computed by the ERF method (though that method works for PCFGs). Random fields are the model of choice here, with a general Metropolis-Hastings sampler used to calculate feature expectations under the newly constructed model; a toy version of that sampling idea is sketched below.
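
To make the sampling idea concrete, here is a minimal Python sketch (not Abney's actual setup; the toy state space, features, and weights are all invented for illustration) that uses Metropolis-Hastings to estimate a feature expectation under an unnormalized exponential model:

  import math
  import random

  random.seed(0)

  # Toy state space: binary strings of length 5.
  # Two illustrative binary features (assumptions, not from the papers).
  def f0(x):
      return 1.0 if x[0] == 1 else 0.0  # fires if the string starts with 1

  def f1(x):
      # fires if the string contains two adjacent 1s
      return 1.0 if any(a == b == 1 for a, b in zip(x, x[1:])) else 0.0

  FEATURES = [f0, f1]
  LAMBDAS = [0.7, -0.3]  # arbitrary illustrative weights

  def score(x):
      # Unnormalized log-probability: sum_i lambda_i * f_i(x).
      return sum(l * f(x) for l, f in zip(LAMBDAS, FEATURES))

  def mh_expectation(feature, n_samples=50000, burn_in=5000):
      """Estimate E[feature] under p(x) proportional to exp(score(x))."""
      x = [random.randint(0, 1) for _ in range(5)]
      total = 0.0
      for step in range(n_samples + burn_in):
          y = list(x)
          j = random.randrange(len(y))
          y[j] ^= 1  # symmetric proposal: flip one random bit
          # Accept with probability min(1, p(y)/p(x)); Z cancels out.
          delta = score(y) - score(x)
          if random.random() < math.exp(min(0.0, delta)):
              x = y
          if step >= burn_in:
              total += feature(x)
      return total / n_samples

  print(mh_expectation(f0))  # Monte Carlo estimate of E[f0]

Because the acceptance ratio depends only on a difference of unnormalized scores, the intractable partition function Z never has to be computed, which is exactly what makes sampling attractive for random fields.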

7. A Comparison of Algorithms for Maximum Entropy Parameter Estimation [Malouf, 2003]
  Four iterative parameter estimation algorithms were compared on several NLP tasks. L-BFGS was found to be the most effective parameter estimation method for maximum entropy models, much better than IIS and GIS. [Wallach, 2002] reported similar results for parameter estimation of conditional random fields. A toy illustration of L-BFGS-based fitting follows.
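
As a minimal sketch of what "fitting a maxent model with L-BFGS" means in practice (assuming NumPy and SciPy are available; the toy data and feature tables are invented for illustration), one can hand the negative log-likelihood and its gradient to a generic quasi-Newton optimizer:

  import numpy as np
  from scipy.optimize import minimize

  # Toy conditional maxent: 4 contexts x, 3 classes y, 6 invented features.
  rng = np.random.default_rng(0)
  n_x, n_y, n_feat = 4, 3, 6
  F = rng.integers(0, 2, size=(n_x, n_y, n_feat)).astype(float)  # f_i(x, y)
  counts = rng.integers(1, 10, size=(n_x, n_y)).astype(float)    # empirical counts
  p_tilde = counts / counts.sum()      # empirical joint p~(x, y)
  p_tilde_x = p_tilde.sum(axis=1)      # empirical marginal p~(x)

  def neg_log_likelihood(lam):
      scores = F @ lam                               # shape (n_x, n_y)
      log_Z = np.logaddexp.reduce(scores, axis=1)    # log Z(x)
      log_p = scores - log_Z[:, None]                # log p(y | x)
      nll = -np.sum(p_tilde * log_p)
      # Gradient: model expectation minus empirical expectation of each f_i.
      p_model = p_tilde_x[:, None] * np.exp(log_p)
      grad = np.einsum('xy,xyi->i', p_model - p_tilde, F)
      return nll, grad

  result = minimize(neg_log_likelihood, np.zeros(n_feat),
                    method='L-BFGS-B', jac=True)
  print(result.x)  # fitted feature weights lambda_i

Unlike GIS, which effectively requires the feature values to sum to a constant, L-BFGS needs nothing beyond the objective and its gradient, which is part of why the quasi-Newton methods fared so well in Malouf's comparison.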

Appendix:
Dr. Le Zhang's maximum entropy model toolkit:
 http://homepages.inf.ed.ac.uk/lzhang10/maxent_toolkit.html
Two reference pages on the maximum entropy model (the latter is also a reading list, though an older one):
 1. MaxEnt and Exponential Models
 2. A maxent reading list

Note: when reprinting, please credit the source "我爱自然语言处理" (52nlp): www.52nlp.cn

From: http://www.52nlp.cn/maximum-entropy-model-tutorial-reading
