Introduction
Today let's talk about linear_svm, the linear support vector machine. The first thing to understand is the SVM loss function, which is the main part the assignment asks us to implement.
Linear Support Vector Machine
The SVM loss function is essentially the hinge loss. For a sample with score vector s and true class y_i, every incorrect class j contributes loss = max(0, s_j - s_{y_i} + 1).
The 1 here is the margin of the loss function: it is the score gap we demand between the true class and every other class. Once an incorrect class's score comes within 1 of the true class's score (or exceeds it), that class starts contributing to the loss and the sample is counted as an error for that class.
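To make this concrete, here is a tiny sketch for a single sample over three classes; the scores and the label are made up purely for illustration:

import numpy as np

scores = np.array([3.2, 5.1, -1.7])  # hypothetical scores for 3 classes
y_i = 0                              # assume class 0 is the true label
# sum the hinge loss over the incorrect classes
loss = sum(max(0.0, scores[j] - scores[y_i] + 1)
           for j in range(len(scores)) if j != y_i)
print(loss)  # max(0, 5.1-3.2+1) + max(0, -1.7-3.2+1) = 2.9 + 0 = 2.9

Note that class 2 scores far below the true class, so it contributes nothing; only class 1, which beats the true class, produces loss.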
The assignment asks for two implementations. The first is the naive version: loop over every sample, compute its score on every class, then subtract the true class's score from each other class's score and add 1; that gives the inner part of the hinge loss. The outer part is simply a comparison against 0.
The second method computes the same differences in vectorized form and solves for the loss and dW all at once. For each sample, every class whose margin is positive adds +X[i] to its own column of dW, while the true class's column subtracts X[i] once for each violating class, so the better a sample is classified, the less its weights get updated.
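Written out (this is the standard derivation; nothing here goes beyond what the code below implements), with $s = x_i W$ the score row of sample $i$ and $\mathbb{1}(\cdot)$ the indicator function:

$$L = \frac{1}{N}\sum_{i=1}^{N}\sum_{j \neq y_i}\max\bigl(0,\; s_j - s_{y_i} + 1\bigr) + \lambda\sum W^2$$

$$\frac{\partial L_i}{\partial w_j} = \mathbb{1}\bigl(s_j - s_{y_i} + 1 > 0\bigr)\, x_i, \qquad \frac{\partial L_i}{\partial w_{y_i}} = -\Bigl(\sum_{j \neq y_i}\mathbb{1}\bigl(s_j - s_{y_i} + 1 > 0\bigr)\Bigr)\, x_i$$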
I'll explain the details of the code in inline comments.
Code
import numpy as np
from random import shuffle
def svm_loss_naive(W, X, y, reg):
    """
    Structured SVM loss function, naive implementation (with loops).

    Inputs have dimension D, there are C classes, and we operate on minibatches
    of N examples.

    Inputs:
    - W: A numpy array of shape (D, C) containing weights.
    - X: A numpy array of shape (N, D) containing a minibatch of data.
    - y: A numpy array of shape (N,) containing training labels; y[i] = c means
      that X[i] has label c, where 0 <= c < C.
    - reg: (float) regularization strength

    Returns a tuple of:
    - loss as single float
    - gradient with respect to weights W; an array of same shape as W
    """
    dW = np.zeros(W.shape)  # initialize the gradient as zero

    # compute the loss and the gradient
    num_classes = W.shape[1]
    num_train = X.shape[0]
    loss = 0.0
    for i in range(num_train):
        scores = X[i].dot(W)
        correct_class_score = scores[y[i]]
        for j in range(num_classes):
            if j == y[i]:
                continue
            margin = scores[j] - correct_class_score + 1  # note delta = 1
            if margin > 0:
                loss += margin
                # the violating class j receives gradient +X[i],
                # and the true class receives -X[i]
                dW[:, j] += X[i].T
                dW[:, y[i]] -= X[i].T

    # Right now the loss is a sum over all training examples, but we want it
    # to be an average instead so we divide by num_train.
    loss /= num_train
    dW /= num_train

    # Add regularization to the loss.
    loss += reg * np.sum(W * W)
    # gradient of the regularization term
    dW += 2 * reg * W

    #############################################################################
    # TODO:                                                                     #
    # Compute the gradient of the loss function and store it in dW.             #
    # Rather than first computing the loss and then computing the derivative,   #
    # it may be simpler to compute the derivative at the same time that the     #
    # loss is being computed. As a result you may need to modify some of the    #
    # code above to compute the gradient.                                       #
    #############################################################################
    return loss, dW
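With the naive version in place, it's good practice to sanity-check the analytic gradient numerically. This is only a minimal sketch with made-up shapes and a hand-rolled one-sided finite difference; the assignment itself runs this check on CIFAR-10 data with its grad_check_sparse helper:

np.random.seed(0)
W = np.random.randn(3073, 10) * 0.0001  # D = 3073, C = 10 (CIFAR-10-like shapes)
X = np.random.randn(5, 3073)            # N = 5 made-up samples
y = np.random.randint(10, size=5)
loss, dW = svm_loss_naive(W, X, y, reg=0.0)

# numerically estimate one partial derivative and compare against dW
h = 1e-5
W[0, 0] += h
loss_plus, _ = svm_loss_naive(W, X, y, reg=0.0)
W[0, 0] -= h
print(dW[0, 0], (loss_plus - loss) / h)  # the two values should be close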
def svm_loss_vectorized(W, X, y, reg):
    """
    Structured SVM loss function, vectorized implementation.

    Inputs and outputs are the same as svm_loss_naive.
    """
    loss = 0.0
    dW = np.zeros(W.shape)  # initialize the gradient as zero
    num_classes = W.shape[1]
    num_train = X.shape[0]

    #############################################################################
    # TODO:                                                                     #
    # Implement a vectorized version of the structured SVM loss, storing the    #
    # result in loss.                                                           #
    #############################################################################
    scores = X.dot(W)                          # (N, C): all samples scored at once
    score_y = scores[np.arange(num_train), y]  # (N,): each sample's true-class score
    # expand the true-class scores to the same (N, C) shape as scores
    score_y = np.tile(score_y.reshape(num_train, 1), (1, num_classes))
    margin = scores - score_y + 1              # note delta = 1
    margin[np.arange(num_train), y] = 0        # the true class contributes no loss
    margin[margin < 0] = 0                     # hinge: clamp negative margins to zero
    loss += np.sum(margin) / num_train
    loss += reg * np.sum(W * W)
    #############################################################################
    #                            END OF YOUR CODE                               #
    #############################################################################

    #############################################################################
    # TODO:                                                                     #
    # Implement a vectorized version of the gradient for the structured SVM     #
    # loss, storing the result in dW.                                           #
    #                                                                           #
    # Hint: Instead of computing the gradient from scratch, it may be easier    #
    # to reuse some of the intermediate values that you used to compute the     #
    # loss.                                                                     #
    #############################################################################
    # mask_margin[i, j] = 1 wherever margin[i, j] > 0
    mask_margin = np.zeros((num_train, num_classes))
    mask_margin[margin > 0] = 1
    mask_XW = mask_margin.copy()
    # count how many classes violate the margin for each sample
    y_sum = np.sum(mask_margin, axis=1)
    # the true class's column receives -X[i] once per violating class
    mask_XW[np.arange(num_train), y] -= y_sum
    dW = np.dot(X.T, mask_XW) / num_train
    # gradient of the regularization term
    dW += 2 * reg * W
    #############################################################################
    #                            END OF YOUR CODE                               #
    #############################################################################
    return loss, dW
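Finally, the two implementations should agree up to floating-point error. Reusing the made-up W, X, y from the gradient check above:

loss_naive, dW_naive = svm_loss_naive(W, X, y, reg=0.5)
loss_vec, dW_vec = svm_loss_vectorized(W, X, y, reg=0.5)
print(abs(loss_naive - loss_vec))                    # expect ~0
print(np.linalg.norm(dW_naive - dW_vec, ord='fro'))  # expect ~0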
Summary
The first two assignments are fairly simple. Most of the work is computing the loss function plus a bit of data handling, such as setting every positive margin to 1 so the mask matrix contains only 0s and 1s. The other takeaway is the two ways of computing the loss: either score each sample on each class one at a time, or vectorize everything by multiplying all training samples with W at once, expanding the true-class scores into a matrix of the same shape, subtracting to get the margins, and processing from there, so all samples are handled together instead of one by one.
That's it for this assignment. Next time we'll look at another classifier, softmax.