"Pattern Recognition and Machine Learning" Study Notes: Chapter 1 (Part 3)

Chapter 1 (Part 3)

1.3. Model Selection

    The polynomial curve-fitting example discussed earlier showed that the order of the polynomial controls how well the resulting model performs on test data: if the order is too low, the fit is poor; if it is too high, over-fitting occurs. This raises the problem of model selection.

    If data is plentiful, we can train a range of candidate models on one subset and then compare their performance on an independent validation set, selecting the model and parameters that perform best. If instead a small dataset is reused over many iterations of model comparison and selection, the selection itself can over-fit. In many situations, however, the data available for training and testing is quite limited, and we still want to build a good model. What can we do?

    One way out of this dilemma is cross-validation: partition the available data into S groups (typically of equal size), train each model on S-1 of the groups, and evaluate it on the group that was held out. Repeating this S times so that every group is held out once, and averaging the S performance scores, we select the model and parameters with the best average performance (Figure 1.18 in the book illustrates the procedure for S = 4).

If the available dataset is particularly small, we can take S = N, where N is the number of data points; this limit is known as leave-one-out cross-validation.
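
To make the procedure concrete, here is a minimal Python sketch of S-fold cross-validation used to choose the polynomial order; the toy sine-curve data, the candidate degrees, and the RMSE score are illustrative assumptions of mine, not from the book:

import numpy as np

def s_fold_cv_rmse(x, t, degree, S=5, seed=0):
    # Average held-out RMSE of a degree-`degree` polynomial,
    # estimated by S-fold cross-validation.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), S)  # S roughly equal groups
    errors = []
    for k in range(S):
        val = folds[k]                                   # held-out group
        train = np.concatenate([folds[j] for j in range(S) if j != k])
        w = np.polyfit(x[train], t[train], degree)       # least-squares fit
        resid = np.polyval(w, x[val]) - t[val]
        errors.append(np.sqrt(np.mean(resid ** 2)))
    return np.mean(errors)

# Toy data: a noisy sine curve, echoing the book's curve-fitting example.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 30)
t = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=30)

scores = {d: s_fold_cv_rmse(x, t, d) for d in range(10)}
print("best order by 5-fold CV:", min(scores, key=scores.get))

Setting S equal to len(x) in the sketch gives exactly the leave-one-out case described above.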

 

    The main drawback of cross-validation is that S determines the number of training runs in the model-comparison process, so a large S is expensive when each run is costly; moreover, a single model may have several complexity parameters whose settings must all be explored, which multiplies the computational cost further.

    Ideally, then, we would like to compare multiple models and parameter settings in a single training run, using the training data alone. We therefore need a performance measure that depends only on the training data and yet does not suffer from over-fitting. Historically, one such measure is the Akaike information criterion, or AIC (Akaike, 1974), which chooses the model that maximizes the quantity:


\ln p(\mathcal{D} \mid \mathbf{w}_{\mathrm{ML}}) - M

where \ln p(\mathcal{D} \mid \mathbf{w}_{\mathrm{ML}}) is the best-fit log likelihood and M is the number of adjustable parameters in the model. Another example is the Bayesian information criterion, or BIC, which the book covers in Section 4.4.1; this chapter is titled Introduction and is deliberately introductory, with the details left to later chapters.
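
To make the criterion concrete, here is a minimal sketch of my own (not from the book) that scores polynomial models of different orders by the AIC quantity above, assuming Gaussian observation noise; counting the noise precision as one of the M parameters is a modelling choice on my part:

import numpy as np

def aic_quantity(x, t, degree):
    # ln p(D | w_ML) - M for a polynomial model with Gaussian noise;
    # under AIC, larger values are preferred.
    N = len(x)
    w = np.polyfit(x, t, degree)              # maximum-likelihood coefficients
    resid = t - np.polyval(w, x)
    beta = N / np.sum(resid ** 2)             # ML estimate of the noise precision
    log_lik = 0.5 * N * (np.log(beta) - np.log(2 * np.pi)) \
              - 0.5 * beta * np.sum(resid ** 2)
    M = (degree + 1) + 1                      # coefficients plus noise precision
    return log_lik - M

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 30)
t = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=30)

scores = {d: aic_quantity(x, t, d) for d in range(10)}
print("best order by AIC:", max(scores, key=scores.get))

Because the penalty M grows with the order while the best-fit likelihood can only improve, the criterion trades fit quality against model complexity.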

 

To be continued…

