This note distinguishes three closely related terms: the objective function, the loss function, and the cost function.
A loss function is usually defined on a single sample and measures the penalty for that sample. For example:
- Square loss: $l(f(x_i|\theta), y_i) = (f(x_i|\theta) - y_i)^2$, used in linear regression
- Hinge loss: $l(f(x_i|\theta), y_i) = \max(0,\ 1 - f(x_i|\theta)\,y_i)$, used in SVMs
- 0/1 loss: $l(f(x_i|\theta), y_i) = 1 \iff f(x_i|\theta) \neq y_i$, used in theoretical analysis and in defining accuracy
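The three per-sample losses above can be sketched in plain Python (the function names are my own, chosen for illustration):

```python
def square_loss(f_xi, y_i):
    """Squared loss for one sample, as used in linear regression."""
    return (f_xi - y_i) ** 2

def hinge_loss(f_xi, y_i):
    """Hinge loss for one sample, as used in SVMs; labels y_i are in {-1, +1}."""
    return max(0.0, 1.0 - f_xi * y_i)

def zero_one_loss(f_xi, y_i):
    """0/1 loss: 1 when the prediction disagrees with the label, else 0."""
    return 1 if f_xi != y_i else 0
```

Note that the hinge loss is exactly zero whenever the signed prediction clears the margin, i.e. $f(x_i|\theta)\,y_i \ge 1$.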
A cost function is usually defined over the whole training set; it may be the sum of the losses over the training set plus a model-complexity penalty (regularization). For example:
- Mean squared error: $MSE(\theta) = \frac{1}{N}\sum_{i=1}^{N} (f(x_i|\theta) - y_i)^2$
- SVM cost function: $SVM(\theta) = \|\theta\|^2 + C\sum_{i=1}^{N} \xi_i$ (there are additional constraints connecting $\xi_i$ with $C$ and with the training set)
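Both cost functions above aggregate over all $N$ training samples. A minimal sketch, assuming a linear model $f(x|\theta) = X\theta$ (the names and the model form are my own, not from the source):

```python
import numpy as np

def mse_cost(theta, X, y):
    """Mean squared error of the linear model f(x|theta) = X @ theta over all N samples."""
    residuals = X @ theta - y
    return np.mean(residuals ** 2)

def svm_primal_cost(theta, C, xi):
    """SVM primal cost: squared norm of theta plus C times the total slack.
    The margin constraints tying each slack xi_i to the training set are omitted here."""
    return np.dot(theta, theta) + C * np.sum(xi)
```

The regularization term $\|\theta\|^2$ is what makes this a cost function rather than a bare sum of per-sample losses.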
- Objective function is the most general term for any function you optimize during training. For example, the probability of generating the training set, maximized in maximum likelihood estimation, is a well-defined objective function.
- Maximum likelihood estimation (MLE) is a type of objective function (one that is maximized)
- Divergence between classes can be an objective function, but it is hardly a cost function, unless you define something artificial like 1-Divergence and name it a cost
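As a concrete example of a maximized objective: for Gaussian data with a known, fixed variance, maximizing the likelihood of the training set is equivalent to minimizing the negative log-likelihood, and the resulting MLE of the mean is the sample mean. A hypothetical sketch (function name and setup are my own):

```python
import numpy as np

def neg_log_likelihood(mu, data, sigma=1.0):
    """Negative log-likelihood of data under N(mu, sigma^2).
    Maximizing the likelihood is the same as minimizing this objective."""
    n = len(data)
    return 0.5 * n * np.log(2 * np.pi * sigma ** 2) \
        + np.sum((data - mu) ** 2) / (2 * sigma ** 2)

data = np.array([1.0, 2.0, 3.0])
mu_hat = data.mean()  # for known sigma, the MLE of mu is the sample mean
```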
Long story short: a loss function is a part of a cost function, which in turn is a type of objective function.