Surrogate Loss Functions in Machine Learning


TL;DR: These are some notes on the calibration of surrogate loss functions in the context of machine learning. But mostly it is an excuse to post some images I made.

In the binary classification setting we are given $n$ training samples $\{(X_1, Y_1), \ldots, (X_n, Y_n)\}$, where $X_i$ belongs to some sample space $\mathcal{X}$, usually $\mathbb{R}^p$, but for the purpose of this post we can keep it abstract, and $Y_i \in \{-1, 1\}$ is an integer representing the class label.

We are also given a loss function $\ell: \{-1, 1\} \times \{-1, 1\} \to \mathbb{R}$ that measures the error of a given prediction. The value of the loss function $\ell$ at an arbitrary point $(y, \hat{y})$ is interpreted as the cost incurred by predicting $\hat{y}$ when the true label is $y$. In classification this function is often the zero-one loss, that is, $\ell(y, \hat{y})$ is zero when $y = \hat{y}$ and one otherwise.
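
As a concrete illustration, here is a minimal Python sketch of the zero-one loss (the function name is my own, not from the original post):

```python
def zero_one_loss(y, y_hat):
    """Zero-one loss: 0 if the prediction matches the true label, 1 otherwise."""
    return 0.0 if y == y_hat else 1.0

# Predicting -1 when the true label is 1 costs 1; a correct prediction costs 0.
print(zero_one_loss(1, -1))  # 1.0
print(zero_one_loss(1, 1))   # 0.0
```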

The goal is to find a function $h: \mathcal{X} \to \{-1, 1\}$, the classifier, with the smallest expected loss on a new sample. In other words, we seek the function $h$ that minimizes the expected $\ell$-risk, given by

$$\mathcal{R}_{\ell}(h) = \mathbb{E}_{X \times Y}\left[\ell(Y, h(X))\right]$$

In theory, we could directly minimize the $\ell$-risk and we would have the optimal classifier, also known as the Bayes predictor. However, there are several problems associated with this approach. One is that the probability distribution of $X \times Y$ is unknown, so computing the exact expected value is not feasible; it must be approximated by the empirical risk. Another issue is that this quantity is difficult to optimize because the function $\ell$ is discontinuous. Take for example a problem in which $\mathcal{X} = \mathbb{R}^2$ and we seek the linear function $f(X) = \operatorname{sign}(X w)$, $w \in \mathbb{R}^2$, that minimizes the $\ell$-risk. As a function of the parameter $w$ this risk looks something like

[Figure: the zero-one loss as a function of the parameter $w$]
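
To get a feel for why this surface is so unfriendly, one can evaluate the empirical zero-one risk of $f(x) = \operatorname{sign}(x \cdot w)$ on synthetic data over a grid of parameter values. The sketch below is my own illustration (the data, noise level, and grid are arbitrary choices), not the code behind the figure above:

```python
import numpy as np

rng = np.random.RandomState(0)
n = 100
X = rng.randn(n, 2)                            # samples in R^2
w_true = np.array([1.0, -0.5])
y = np.sign(X @ w_true + 0.3 * rng.randn(n))   # noisy binary labels in {-1, 1}

def empirical_zero_one_risk(w):
    """Average zero-one loss of the linear classifier sign(X w)."""
    predictions = np.sign(X @ w)
    return np.mean(predictions != y)

# Evaluate the risk on a grid of parameter values; the resulting surface is
# piecewise constant: large flat regions separated by jumps.
w0, w1 = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
risk = np.array([empirical_zero_one_risk(np.array([a, b]))
                 for a, b in zip(w0.ravel(), w1.ravel())]).reshape(w0.shape)
print(risk.min(), risk.max())
```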

This function is discontinuous with large, flat regions and is thus extremely hard to optimize using gradient-based methods. For this reason it is usual to consider a proxy to the loss called a surrogate loss function. For computational reasons this is usually a convex function $\Psi: \mathbb{R} \to \mathbb{R}_+$. An example of such a surrogate loss function is the hinge loss, $\Psi(t) = \max(1 - t, 0)$, which is the loss used by Support Vector Machines (SVMs). Another example is the logistic loss, $\Psi(t) = \log(1 + \exp(-t))$, used by the logistic regression model. If we consider the logistic loss, minimizing the $\Psi$-risk, given by $\mathbb{E}_{X \times Y}[\Psi(Y f(X))]$, of the function $f(X) = X w$ becomes a much more tractable optimization problem.
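
As a rough sketch of these two surrogates in code (the function names and the toy data are my own, not from the original post):

```python
import numpy as np

def hinge_loss(t):
    """Hinge loss max(1 - t, 0), used by SVMs."""
    return np.maximum(1.0 - t, 0.0)

def logistic_loss(t):
    """Logistic loss log(1 + exp(-t)), used by logistic regression."""
    return np.log1p(np.exp(-t))

def surrogate_risk(w, X, y, surrogate):
    """Empirical Psi-risk: average surrogate loss over the margins y * (x . w)."""
    return np.mean(surrogate(y * (X @ w)))

rng = np.random.RandomState(0)
X = rng.randn(100, 2)
y = np.sign(X @ np.array([1.0, -0.5]))  # linearly separable toy labels

# Unlike the zero-one risk, both surrogate risks vary smoothly (and convexly)
# with w, so they can be minimized with gradient-based methods.
for w in (np.zeros(2), np.array([1.0, -0.5])):
    print(surrogate_risk(w, X, y, hinge_loss),
          surrogate_risk(w, X, y, logistic_loss))
```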

In short, we have replaced the $\ell$-risk, which is computationally difficult to optimize, with the $\Psi$-risk, which has more advantageous properties. A natural question to ask is how much we have lost by this change. The property of whether minimizing the $\Psi$-risk leads to a function that also minimizes the $\ell$-risk is often referred to as consistency or calibration. For a more formal definition see [1] and [2]. This property depends on the surrogate function $\Psi$: some functions $\Psi$ satisfy the consistency property and some do not. One of the most useful characterizations was given in [1] and states that if $\Psi$ is convex then it is consistent if and only if it is differentiable at zero and $\Psi'(0) < 0$. This includes most of the commonly used surrogate loss functions, including the hinge, logistic, and Huber loss functions.
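
One way to see this characterization in action is to check numerically that the convex surrogates above are differentiable at zero with a negative derivative there. The finite-difference sketch below is only illustrative (the step size is an arbitrary choice of mine):

```python
import numpy as np

def hinge(t):
    """Hinge loss max(1 - t, 0)."""
    return np.maximum(1.0 - t, 0.0)

def logistic(t):
    """Logistic loss log(1 + exp(-t))."""
    return np.log1p(np.exp(-t))

def derivative_at_zero(psi, eps=1e-6):
    """Central finite-difference estimate of psi'(0)."""
    return (psi(eps) - psi(-eps)) / (2.0 * eps)

# Both surrogates are convex and differentiable at zero with a negative
# derivative, so by the characterization in [1] they are calibrated for
# the zero-one loss.
for name, psi in [("hinge", hinge), ("logistic", logistic)]:
    print(name, derivative_at_zero(psi))  # hinge -> -1.0, logistic -> -0.5
```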


  1. P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe, "Convexity, Classification, and Risk Bounds," J. Am. Stat. Assoc., pp. 1–36, 2003.

  2. A. Tewari and P. L. Bartlett, “On the Consistency of Multiclass Classification Methods,” J. Mach. Learn. Res., vol. 8, pp. 1007–1025, 2007. 
