
Surrogate Loss Functions in Machine Learning


TL; DR These are some notes on calibration of surrogate loss functions in the context of machine learning. But mostly it is an excuse to post some images I made.

In the binary classification setting we are given $n$ training samples $\{(X_1, Y_1), \ldots, (X_n, Y_n)\}$, where $X_i$ belongs to some sample space $\mathcal{X}$, usually $\mathbb{R}^p$ but for the purpose of this post we can keep it abstract, and $Y_i \in \{-1, 1\}$ is an integer representing the class label.

We are also given a loss function $\ell: \{-1,1\} \times \{-1,1\} \to \mathbb{R}$ that measures the error of a given prediction. The value of the loss function $\ell$ at an arbitrary point $(y, \hat{y})$ is interpreted as the cost incurred by predicting $\hat{y}$ when the true label is $y$. In classification this function is often the zero-one loss, that is, $\ell(y, \hat{y})$ is zero when $y = \hat{y}$ and one otherwise.
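
As a quick illustration, here is a minimal sketch of the zero-one loss in NumPy; the function name and the example labels are my own choices, not anything defined in the text:

```python
import numpy as np

def zero_one_loss(y, y_hat):
    """Zero-one loss: 0 where the prediction matches the true label, 1 otherwise."""
    return np.where(y == y_hat, 0, 1)

# Example: four samples, two of them misclassified.
y_true = np.array([1, -1, 1, 1])
y_pred = np.array([1, 1, 1, -1])
print(zero_one_loss(y_true, y_pred))  # [0 1 0 1]
```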

The goal is to find a function $h: \mathcal{X} \to \{-1, 1\}$, the classifier, with the smallest expected loss on a new sample. In other words, we seek the function $h$ that minimizes the expected $\ell$-risk, given by

$$R_{\ell}(h) = \mathbb{E}_{X \times Y}\left[\ell(Y, h(X))\right]$$

In theory, we could directly minimize the $\ell$-risk and we would have the optimal classifier, also known as the Bayes predictor. However, there are several problems associated with this approach. One is that the probability distribution of $X \times Y$ is unknown, so computing the exact expected value is not feasible; it must be approximated by the empirical risk. Another is that this quantity is difficult to optimize because the function $\ell$ is discontinuous. Take for example a problem in which $\mathcal{X} = \mathbb{R}^2$ and we seek the linear function $f(X) = \operatorname{sign}(X^T w)$, $w \in \mathbb{R}^2$, that minimizes the $\ell$-risk. As a function of the parameter $w$ this risk looks something like

[Figure: the zero-one risk as a function of the parameter $w$]
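
A surface like the one in the figure can be reproduced by replacing the $\ell$-risk with the empirical risk and evaluating it on a grid of parameter values. The sketch below does exactly that; the synthetic data, grid ranges, and helper names are my own illustrative choices:

```python
import numpy as np

# Synthetic 2-D binary data with labels in {-1, 1}; the data-generating
# process is an arbitrary choice, only meant to produce a plottable surface.
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = np.sign(X.dot(np.array([1.0, -2.0])))

def empirical_zero_one_risk(w, X, y):
    """Average zero-one loss of the linear classifier f(x) = sign(x . w)."""
    return np.mean(np.sign(X.dot(w)) != y)

# Evaluate the empirical risk on a grid of parameters w = (w1, w2).
w1, w2 = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
risk = np.array([empirical_zero_one_risk(np.array([a, b]), X, y)
                 for a, b in zip(w1.ravel(), w2.ravel())]).reshape(w1.shape)
```

Plotting `risk` over the grid (for instance with matplotlib's `contourf`) shows a piecewise-constant surface: small changes in $w$ usually change no predictions at all, so the gradient is zero almost everywhere.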

This function is discontinuous with large, flat regions and is thus extremely hard to optimize using gradient-based methods. For this reason it is usual to consider a proxy to the loss called a surrogate loss function. For computational reasons this is usually a convex function $\Psi: \mathbb{R} \to \mathbb{R}_+$. An example of such a surrogate loss function is the hinge loss, $\Psi(t) = \max(1 - t, 0)$, which is the loss used by Support Vector Machines (SVMs). Another example is the logistic loss, $\Psi(t) = \log(1 + \exp(-t))$, used by the logistic regression model. If we consider the logistic loss, minimizing the $\Psi$-risk, given by $\mathbb{E}_{X \times Y}[\Psi(Y f(X))]$, of the function $f(X) = X^T w$ becomes a much more tractable optimization problem.
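
To make this concrete, here is a sketch of the two surrogates and of minimizing the empirical $\Psi$-risk for the logistic loss by plain gradient descent. The step size, iteration count, and function names are arbitrary illustrative choices, not a reference implementation:

```python
import numpy as np

def hinge_loss(t):
    """Hinge loss: max(1 - t, 0), used by SVMs."""
    return np.maximum(1 - t, 0)

def logistic_loss(t):
    """Logistic loss: log(1 + exp(-t)), used by logistic regression."""
    return np.log1p(np.exp(-t))

def empirical_logistic_risk(w, X, y):
    """Empirical Psi-risk of f(x) = x . w under the logistic loss."""
    return np.mean(logistic_loss(y * X.dot(w)))

def logistic_risk_gradient(w, X, y):
    """Gradient of the empirical logistic risk with respect to w."""
    margins = y * X.dot(w)
    return -np.mean((y / (1 + np.exp(margins)))[:, None] * X, axis=0)

def fit_logistic(X, y, step=0.5, n_iter=200):
    """Minimize the smooth, convex empirical logistic risk by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w -= step * logistic_risk_gradient(w, X, y)
    return w
```

Reusing the synthetic `X, y` from the previous sketch, `fit_logistic(X, y)` drives the empirical zero-one risk close to zero on that separable toy data even though it never optimizes the zero-one loss directly, which is the behaviour the calibration results below are about.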

In short, we have replaced the $\ell$-risk, which is computationally difficult to optimize, with the $\Psi$-risk, which has more advantageous properties. A natural question to ask is how much we have lost by this change. The property of whether minimizing the $\Psi$-risk leads to a function that also minimizes the $\ell$-risk is often referred to as consistency or calibration. For a more formal definition see [1] and [2]. This property depends on the surrogate function $\Psi$: some choices of $\Psi$ satisfy the consistency property and some do not. One of the most useful characterizations was given in [1] and states that if $\Psi$ is convex then it is consistent if and only if it is differentiable at zero and $\Psi'(0) < 0$. This includes most of the commonly used surrogate losses, including the hinge, logistic, and Huber loss functions.
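
As a rough numerical sanity check of this characterization, the sketch below estimates $\Psi'(0)$ by a central finite difference for a few common surrogates; it only probes the derivative at zero and does not, of course, verify convexity or differentiability rigorously:

```python
import numpy as np

def derivative_at_zero(psi, eps=1e-6):
    """Central finite-difference estimate of Psi'(0)."""
    return (psi(eps) - psi(-eps)) / (2 * eps)

surrogates = {
    "hinge":    lambda t: np.maximum(1 - t, 0),
    "logistic": lambda t: np.log1p(np.exp(-t)),
    "squared":  lambda t: (1 - t) ** 2,
}

for name, psi in surrogates.items():
    d = derivative_at_zero(psi)
    print(f"{name:8s} Psi'(0) ~ {d:+.3f} -> calibrated: {d < 0}")
```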


  1. P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe, “Convexity, Classification, and Risk Bounds,” J. Am. Stat. Assoc., pp. 1–36, 2003.

  2. A. Tewari and P. L. Bartlett, “On the Consistency of Multiclass Classification Methods,” J. Mach. Learn. Res., vol. 8, pp. 1007–1025, 2007. 
