Background:

The L1 loss function:

The simplest loss function is the L1 loss: it computes the absolute difference between the predicted value (ŷ) and the true value (y), and the accumulated sum serves as the model's cost function. An optimization algorithm such as gradient descent then minimizes this cost to fit the model.

Definition of the L1 loss:

L1(ŷ, y) = ∑_{i=0}^{m} |y^{(i)} − ŷ^{(i)}|    (1)
Code implementation:

import numpy as np

# GRADED FUNCTION: L1
def L1(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L1 loss function defined above
    """
    ### START CODE HERE ### (≈ 1 line of code)
    # If no axis is given, np.sum sums over all elements of the array
    loss = np.sum(np.abs(yhat - y))
    ### END CODE HERE ###
    return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
Output:
L1 = 1.1
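The comment in the code notes that `np.sum` without an `axis` argument sums over all elements. A small sketch of how the `axis` argument changes this behavior (the matrix values here are arbitrary examples):

```python
import numpy as np

# A 2x3 matrix to illustrate np.sum's axis argument
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

total    = np.sum(A)          # no axis: sum of all elements -> 21.0
col_sums = np.sum(A, axis=0)  # sum down each column -> [5., 7., 9.]
row_sums = np.sum(A, axis=1)  # sum across each row  -> [6., 15.]
```

For the loss functions in this section the inputs are 1-D vectors, so the default (sum everything) is exactly what we want.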
The L2 loss function:

- Definition of the L2 loss:

L2(ŷ, y) = ∑_{i=0}^{m} (y^{(i)} − ŷ^{(i)})²    (2)

In the implementation we can use np.dot(): if x = [x₁, x₂, ..., xₙ], then np.dot(x, x) = ∑_{j=0}^{n} x_j².
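A quick check of this identity on a small vector (the values are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])

# np.dot of a 1-D vector with itself returns the scalar sum of squares
dot_result  = np.dot(x, x)    # 1 + 4 + 9 = 14.0
# Equivalent elementwise form: square each entry, then sum
sum_result  = np.sum(x ** 2)  # also 14.0
```

Both expressions give the same value; the `np.dot` form is the more idiomatic (and typically faster) way to write a sum of squares.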
Code implementation:

# GRADED FUNCTION: L2
def L2(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L2 loss function defined above
    """
    ### START CODE HERE ### (≈ 1 line of code)
    # np.dot of the error vector with itself already returns the scalar
    # sum of squares, so wrapping it in np.sum is unnecessary
    loss = np.dot(yhat - y, yhat - y)
    ### END CODE HERE ###
    return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
Output:
L2 = 0.43
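A practical difference between the two losses is how they react to a single large error: because L2 squares each residual, one outlier dominates the cost far more than under L1. A minimal sketch of this, reusing the functions above (the outlier value −4.0 is an arbitrary choice for illustration):

```python
import numpy as np

def L1(yhat, y):
    # sum of absolute errors
    return np.sum(np.abs(yhat - y))

def L2(yhat, y):
    # sum of squared errors via the dot product
    return np.dot(yhat - y, yhat - y)

y       = np.array([1.0, 0.0, 0.0, 1.0, 1.0])
good    = np.array([0.9, 0.2, 0.1, 0.4, 0.9])
outlier = good.copy()
outlier[0] = -4.0  # inject one large error

# How much does the single bad prediction inflate each loss?
ratio_l1 = L1(outlier, y) / L1(good, y)
ratio_l2 = L2(outlier, y) / L2(good, y)
```

Here `ratio_l2` is much larger than `ratio_l1`, which is why L1 is often described as more robust to outliers than L2.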