Why do you Hate CRFs?

After my talk at Columbia, a grad student asked me “Why do you hate CRFs?”.  This is a tough question to answer because of the failed presupposition, which assumes I hate CRFs and asks me to explain why. I want to defeat the question by saying that I don’t hate CRFs.  The right question is:

Why isn’t there a CRF package in LingPipe?

This is a good question because if you look at any of the recent tagging/chunking bakeoffs (CoNLL, Biocreative, etc.) you’ll see that the top-scoring systems are CRFs or similar richly featured conditional models.

The short, though highly idiomatic, answer is “horses for courses”.  (Etymology hint: racing horses specialize in muddy or other conditions, as explained in Abbott and Costello’s classic “Mudder and Fodder” sketch.)

To put this in context, we’re looking for high-recall taggers that maintain decent precision at very high recall.  Our chunk-level forward-backward HMM tagger achieves 99.95% recall at 8% precision, even though it’s a whopping 15% behind the best systems in first-best performance.
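As a concrete, if toy, illustration of where those confidence scores come from, here's a minimal sketch of computing per-tag posterior marginals with forward-backward over an HMM lattice. The tag set, probabilities and sentence are all invented; this shows the general idea, not LingPipe's actual implementation.

// Minimal sketch of per-tag posterior marginals via forward-backward.
// NOT LingPipe's implementation; tags, probabilities and the sentence
// are invented for illustration.
public class ForwardBackwardSketch {

    static final String[] TAGS = { "O", "GENE" };   // hypothetical tag set
    static final double[] INIT = { 0.9, 0.1 };      // P(tag_0)
    static final double[][] TRANS = {               // P(tag_t | tag_{t-1})
        { 0.8, 0.2 },
        { 0.4, 0.6 }
    };

    // P(word | tag): a real system would back off to character LMs here.
    static double emit(int tag, String word) {
        boolean geneLike = word.matches(".*\\d.*"); // crude stand-in
        if (tag == 1) return geneLike ? 0.7 : 0.3;
        return geneLike ? 0.1 : 0.9;
    }

    public static void main(String[] args) {
        String[] words = { "the", "p53", "protein" };
        int n = words.length, k = TAGS.length;

        double[][] fwd = new double[n][k];
        double[][] bwd = new double[n][k];

        // Forward pass: fwd[t][j] = P(words_0..t, tag_t = j)
        for (int j = 0; j < k; j++)
            fwd[0][j] = INIT[j] * emit(j, words[0]);
        for (int t = 1; t < n; t++)
            for (int j = 0; j < k; j++) {
                double sum = 0.0;
                for (int i = 0; i < k; i++)
                    sum += fwd[t - 1][i] * TRANS[i][j];
                fwd[t][j] = sum * emit(j, words[t]);
            }

        // Backward pass: bwd[t][i] = P(words_{t+1}..n-1 | tag_t = i)
        for (int i = 0; i < k; i++)
            bwd[n - 1][i] = 1.0;
        for (int t = n - 2; t >= 0; t--)
            for (int i = 0; i < k; i++) {
                double sum = 0.0;
                for (int j = 0; j < k; j++)
                    sum += TRANS[i][j] * emit(j, words[t + 1]) * bwd[t + 1][j];
                bwd[t][i] = sum;
            }

        // Total probability of the word sequence.
        double z = 0.0;
        for (int j = 0; j < k; j++)
            z += fwd[n - 1][j];

        // Posterior marginals; thresholding these low is what buys recall.
        for (int t = 0; t < n; t++)
            for (int j = 0; j < k; j++)
                System.out.printf("P(%s=%s) = %.4f%n",
                                  words[t], TAGS[j], fwd[t][j] * bwd[t][j] / z);
    }
}

Thresholding those marginals low, rather than taking the single best path, is what buys recall: anything the model considers even remotely plausible stays on the table.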

There are three main problems with using CRFs for the kinds of problems in which we’re interested:

  1. CRFs need fairly extensive feature engineering to outperform a good HMM baseline,
  2. large feature sets and discriminative training can lead to very high model complexity and variance, and
  3. these features and training regimes make estimation (training) and inference (decoding/tagging) computationally intensive.

1. Portability: Unfortunately, the performance of CRFs is directly attributable to their large sets of hand-tuned features.  Consider the following quote from (Finkel, Dingare, Nguyen, Nissim, Manning and Sinclair 2004):

Using the set of features designed for that task in CoNLL 2003 [24], our system achieves an f-score of 0.76 on the BioCreative development data, a dramatic ten points lower than its f-score of 0.86 on the CoNLL newswire data. Despite the massive size of the final feature set (almost twice as many features as used for CoNLL), its final performance of 0.83 is still below its performance on the CoNLL data.

The baseline features are listed in section 3 (below).  Surprisingly, LingPipe’s HMM performed similarly to baseline CRFs out of the box on BioCreative data.  Finkel et al.’s paper describes the wealth of features they applied to port their CRF system from CoNLL to BioCreative.  They also describe heuristics, such as mismatched paren balancing, pruning given various known suffixes, etc.

2. Bias-Variance Tradeoff: What you see in heavily tuned conditional models such as CRFs is a strongly attenuated probability distribution.  Such models are usually much more confident in their decisions than they should be.  This overconfidence stems from a lack of modeling of dependencies, and shows up in our models, too, especially in classification tasks, where we’re surprised every time the document mentions “baseball”.  This is why speech recognition is so bad: the acoustic context-dependent phoneme mixture models are surprised every 1/100th of a second that the speaker’s still using a Texas accent.  Interestingly, 2005’s winning spam detection entry for TREC (Bratko and Filipic 2005) mitigated this problem in an interesting way by learning on the doc being classified.  Topic-level models (such as LDA) and more highly dispersed frequency models (such as zero-inflated, hierarchical or simple over-dispersed models) are interesting approaches to this problem.  There’s also a nice discussion of this attenuation problem in section 1.2.3 of a classification overview paper by Nigam, McCallum and Mitchell (2006):

"The faulty word independence assumption exacerbates the tendency of naive Bayes to produce extreme (almost 0 or 1) class probability estimates. However, classification accuracy can be quite high even when these estimates are inappropriately extreme."

In statistical terms, CRFs and other conditional models are often unbiased, but high variance.  As usual, this arises from high model complexity (large numbers of parameters with free structure).  That is, small changes in the training data or test samples lead to wide differences in the resulting models.  Our language-model based approaches, on the other hand, tend to have lower variance and higher bias.  They’re not nearly as sensitive to training data, but they tend to have built-in biases.  Another way to say this is that CRFs are much more tightly fit to their training data than the language-model based generative approaches we’ve adopted in LingPipe.

Many models, including support vector machines, boosting, perceptrons and even active learning, are explicitly designed to bias the global statistics in such a way as to emphasize the cases near the decision boundary.  While this helps with first-best decisions, it tends to hurt any confidence-based decisions.

We see this same bias/variance tradeoff within the models supplied by LingPipe.  Our rescoring chunkers are nearly useless for confidence-based extraction due to their overly attenuated models.

3. Efficiency: The current algorithms for training CRFs are slow.  As in hours, if not days, of CPU time.  Run time is much better, but still slow.  For instance, Finkel et al. (2004) report beam-based decoding speeds of roughly 10KB/second.  Ben Wellner’s Carafe: CRFs for IE implementation (in OCaml, of all things!) is reportedly very fast, around 50K words/sec, with beam-based pruning, feature caching and bigram sequence stats.  So it’s clear that careful engineering of features with pruning and caching can be very useful.  In fact, this result is surprising enough that it makes me want to do more exploration of pruning in our own first-best decoders (remember, “horses for courses”).

The problem with pruning in a high-recall tagger is that it explicitly eliminates hypotheses that the model estimates to be less likely than the first-best hypothesis.  For high-recall applications, exactly these hypotheses are relevant.
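For concreteness, here's a sketch of the beam-pruning rule at issue: at each position, any state whose log score falls more than a fixed beam width below the best state's is dropped. The tags and scores are invented.

// Sketch of beam pruning inside a Viterbi-style decoder: states whose
// log score falls more than beamWidth below the best state at each
// position are dropped. Everything here is invented for illustration.
import java.util.ArrayList;
import java.util.List;

public class BeamPruneSketch {

    record State(int tag, double logProb) {}

    // Keep only states within beamWidth (in log space) of the best state.
    static List<State> prune(List<State> states, double beamWidth) {
        double best = Double.NEGATIVE_INFINITY;
        for (State s : states)
            best = Math.max(best, s.logProb());
        List<State> kept = new ArrayList<>();
        for (State s : states)
            if (s.logProb() >= best - beamWidth)
                kept.add(s);
        return kept;
    }

    public static void main(String[] args) {
        List<State> lattice = List.of(
            new State(0, -2.0),    // likely tag
            new State(1, -9.5));   // long-shot tag a high-recall system needs
        // A tight beam keeps only the near-best hypotheses; any entity
        // whose correct tag sits on a pruned path can never be recalled,
        // no matter how low the confidence threshold is set downstream.
        System.out.println(prune(lattice, 5.0));   // [State[tag=0, logProb=-2.0]]
    }
}

Once a hypothesis falls off the beam, no downstream confidence threshold can bring it back, which is exactly the problem for high-recall extraction.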

The reason training and decoding are fairly slow is simple: lots of features.  For decoding, time is pretty much wholly dependent on the number of features looked up per character or token of input.  Because even our models are larger than existing CPU caches, and because the models are accessed randomly, the time is mostly determined by front-side bus speed, which determines how fast data shuttles between memory and the outermost cache of the CPU.

Here’s a list of “baseline” CRF features from (Krishnan and Manning 2006):

Our baseline CRF is a sequence model in which labels for tokens directly depend only on the labels corresponding to the previous and next tokens. We use features that have been shown to be effective in NER, namely the current, previous and next words, character n-grams of the current word, Part of Speech tag of the current word and surrounding words, the shallow parse chunk of the current word, shape of the current word, the surrounding word shape sequence, the presence of a word in a left window of size 5 around the current word and the presence of a word in a right window of size 5 around the current word. This gives us a competitive baseline CRF using local information alone, whose performance is close to the best published local CRF models, for Named Entity Recognition.
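To make that concrete, here's a sketch of extracting a subset of those feature templates for a single token. The feature names and shape function are invented, and a real system would add part-of-speech and chunk features on top.

// Sketch of per-token feature extraction along the lines of the baseline
// quoted above. Feature names and the shape function are invented.
import java.util.ArrayList;
import java.util.List;

public class CrfFeatureSketch {

    // Collapse characters into a coarse "word shape", e.g. "p53" -> "aDD".
    static String shape(String w) {
        StringBuilder sb = new StringBuilder();
        for (char c : w.toCharArray()) {
            if (Character.isUpperCase(c)) sb.append('A');
            else if (Character.isLowerCase(c)) sb.append('a');
            else if (Character.isDigit(c)) sb.append('D');
            else sb.append(c);
        }
        return sb.toString();
    }

    static List<String> features(String[] words, int i) {
        List<String> feats = new ArrayList<>();
        feats.add("w=" + words[i]);                                 // current word
        feats.add("w-1=" + (i > 0 ? words[i - 1] : "<S>"));         // previous word
        feats.add("w+1=" + (i + 1 < words.length ? words[i + 1] : "</S>")); // next word
        feats.add("shape=" + shape(words[i]));                      // word shape
        String w = words[i].toLowerCase();
        for (int n = 2; n <= 4; n++)                                // char n-grams
            for (int s = 0; s + n <= w.length(); s++)
                feats.add("ng=" + w.substring(s, s + n));
        for (int j = Math.max(0, i - 5); j < i; j++)                // left window
            feats.add("lw=" + words[j]);
        for (int j = i + 1; j <= Math.min(words.length - 1, i + 5); j++)
            feats.add("rw=" + words[j]);                            // right window
        return feats;
    }

    public static void main(String[] args) {
        String[] sent = { "the", "p53", "protein", "binds", "DNA" };
        System.out.println(features(sent, 1));  // many features for one token
    }
}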

That is a lot of features.  LingPipe’s HMM feature set includes exactly two components to predict the tag for the current word: the current word itself and the tag of the previous word.  In the interest of full disclosure, we estimate the word probabilities a character at a time using language models, which is where the work in our decoder lies.
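Here's a sketch of that character-level estimation idea, with a per-tag character bigram model and add-one smoothing standing in for the more refined language models LingPipe actually uses.

// Sketch of estimating P(word | tag) one character at a time with a
// per-tag character bigram model plus add-one smoothing. This is only
// the general idea, not LingPipe's actual language models.
import java.util.HashMap;
import java.util.Map;

public class CharLmEmissionSketch {

    static final int ALPHABET = 256;                  // crude byte alphabet
    final Map<String, int[]> bigramCounts = new HashMap<>();

    // Count character bigrams in a training word for this tag's model.
    void train(String word) {
        String padded = "^" + word + "$";             // boundary markers
        for (int i = 1; i < padded.length(); i++) {
            String ctx = padded.substring(i - 1, i);
            bigramCounts.computeIfAbsent(ctx, k -> new int[ALPHABET])
                        [padded.charAt(i) % ALPHABET]++;
        }
    }

    // log P(word | tag) under the character bigram model, add-one smoothed.
    double logProb(String word) {
        String padded = "^" + word + "$";
        double lp = 0.0;
        for (int i = 1; i < padded.length(); i++) {
            int[] counts = bigramCounts.getOrDefault(
                padded.substring(i - 1, i), new int[ALPHABET]);
            int total = 0;
            for (int c : counts) total += c;
            int c = counts[padded.charAt(i) % ALPHABET];
            lp += Math.log((c + 1.0) / (total + ALPHABET));  // add-one smoothing
        }
        return lp;
    }

    public static void main(String[] args) {
        CharLmEmissionSketch geneModel = new CharLmEmissionSketch();
        for (String w : new String[] { "p53", "BRCA1", "kinase" })
            geneModel.train(w);
        // Unseen words still get nonzero mass, character by character.
        System.out.println(geneModel.logProb("p21"));
    }
}

The payoff is that unseen words, such as novel gene names, still get reasonable emission probabilities without any hand-engineered features.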

Theoretically, if CRFs provide much sharper estimates and much better tuning, they could turn out to be faster than the kind of simple HMMs we use for first-best decoding.  This counterintuitive outcome arises in many search settings, where more expensive but tighter pruning leads to overall performance improvements.

 

Source: http://lingpipe-blog.com/2006/11/22/why-do-you-hate-crfs/
