cs231n assignment1_Q2_SVM


The hard part of this assignment is computing dW. Before diving into that, here is a summary of the overall pipeline.

1. (Data loading details omitted.) Split the dataset into training, validation, and test sets.
2. Preprocess the images by subtracting the mean image.
3. Handle W with the f(x) = Wx + b trick: fold b into W so both are applied in a single matrix multiply.
4. Compute the loss and the gradient dW.
5. Check the analytic dW against a numerical gradient.
6. Tune the learning rate and regularization strength on the validation set.
7. Optimize the loss with SGD (stochastic gradient descent).
8. Evaluate the results on the training and validation sets.
9. Visualize the learned W matrix.

Tackling the hard part

Let's single out the hard part, computing dW, and work through it.

$f(x_i, W, b) = W x_i + b$
In the formula above, each image is assumed to be stretched out into a column vector of length D, with size [D x 1]. The matrix W of size [K x D] and the column vector b of size [K x 1] are the parameters of the function. Taking CIFAR-10 as the example again, x_i contains all the pixels of the i-th image stretched into a [3072 x 1] column vector, W is [10 x 3072], and b is [10 x 1]. So 3072 numbers (the raw pixel values) go into the function and 10 numbers come out (the scores for the different classes). W is called the weights, and b is called the bias vector because it influences the output scores without interacting with the actual data x_i. In practice, the terms weights and parameters are often used interchangeably.
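As a quick shape sanity check, here is a minimal sketch using the CIFAR-10 sizes from the paragraph above (note that the assignment code itself uses the transposed convention, X of shape (N, D) and W of shape (D, C), so that scores = X.dot(W)):

import numpy as np

D, K = 3072, 10                       # pixels per image, number of classes
x_i = np.random.randn(D, 1)           # one image stretched into a [D x 1] column
W = 0.001 * np.random.randn(K, D)     # weights, [K x D]
b = np.zeros((K, 1))                  # bias vector, [K x 1]

scores = W.dot(x_i) + b               # [K x 1]: one score per class
print(scores.shape)                   # (10, 1)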
The loss function:
$L_i = \sum_{j \neq y_i} \max\!\left(0,\; s_j - s_{y_i} + \Delta\right)$
The loss with the regularization penalty added:
$L = \frac{1}{N} \sum_i \sum_{j \neq y_i} \max\!\left(0,\; s_j - s_{y_i} + \Delta\right) + \lambda \sum_k \sum_l W_{k,l}^2$
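A tiny worked example of the hinge loss for a single sample (toy numbers, not from the assignment data):

import numpy as np

scores = np.array([3.2, 5.1, -1.7])   # toy scores for 3 classes
y_i = 0                                # index of the correct class
delta = 1.0

margins = np.maximum(0, scores - scores[y_i] + delta)
margins[y_i] = 0                       # the correct class never contributes
L_i = margins.sum()
print(L_i)                             # ≈ 2.9: only the second class violates the margin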

def svm_loss_vectorized(W, X, y, reg):
  """
  Structured SVM loss function, vectorized implementation.

  Inputs and outputs are the same as svm_loss_naive.
  """
  loss = 0.0
  dW = np.zeros(W.shape) # initialize the gradient as zero
  num_train = X.shape[0]                   # e.g. 500 for the dev batch
  scores = np.dot(X, W)                    # (num_train, 10): one score per class
  correct_class_scores = scores[np.arange(num_train), y]                    # (num_train,): score of the correct class
  correct_class_scores = np.reshape(correct_class_scores, (num_train, -1))  # (num_train, 1) for broadcasting

  margin = scores - correct_class_scores + 1.0
  margin[np.arange(num_train), y] = 0.0    # the correct class never contributes
  margin[margin <= 0] = 0.0                # implements max(0, ...)

  loss += np.sum(margin) / num_train       # data loss, averaged over the batch
  loss += 0.5 * reg * np.sum(W * W)        # regularization loss

  # entries with s_j - s_{y_i} + 1 > 0 and j != y_i become 1
  margin[margin > 0] = 1.0

  row_sum = np.sum(margin, axis=1)            # how many classes violated the margin for each example
  margin[np.arange(num_train), y] = -row_sum  # the correct-class column gets -row_sum (the -x_i * row_sum term)

  # margin is now the "update matrix": multiply by X.T to get dW
  dW = 1.0 / num_train * np.dot(X.T, margin) + reg * W


  return loss, dW

The score computation written as a matrix product looks like this:
[Figure: the scores for all N examples computed at once as the matrix product X · W.]
With the figure and the code side by side, the loss computation should be clear.
To make the loss as small as possible, we take the gradient and use gradient descent to drive it down.

$\nabla_{w_{y_i}} L_i = -\Big( \sum_{j \neq y_i} \mathbb{1}\!\left( w_j^T x_i - w_{y_i}^T x_i + \Delta > 0 \right) \Big)\, x_i, \qquad \nabla_{w_j} L_i = \mathbb{1}\!\left( w_j^T x_i - w_{y_i}^T x_i + \Delta > 0 \right) x_i \quad (j \neq y_i)$

Once the method is clear, the key to the code is understanding margin: after the thresholding above, the margin matrix is exactly the coefficient matrix that, multiplied by X.T, produces dW. There is a write-up that explains this quite clearly: https://blog.csdn.net/AlexXie1996/article/details/79184596?utm_source=blogxgwz9
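Step 5 of the pipeline checks this analytic dW against a numerical gradient. Below is a minimal sketch in the spirit of the assignment's grad_check_sparse; the tiny random problem at the bottom is only there to make the snippet self-contained:

import numpy as np
from cs231n.classifiers.linear_svm import svm_loss_vectorized

def grad_check_sparse(f, W, analytic_grad, num_checks=5, h=1e-5):
  # compare the analytic gradient to a centered finite difference at a few random entries
  for _ in range(num_checks):
    ix = tuple(np.random.randint(d) for d in W.shape)
    old = W[ix]
    W[ix] = old + h
    fxph = f(W)            # f(W + h)
    W[ix] = old - h
    fxmh = f(W)            # f(W - h)
    W[ix] = old            # restore the entry
    grad_numeric = (fxph - fxmh) / (2 * h)
    rel_error = abs(grad_numeric - analytic_grad[ix]) / (abs(grad_numeric) + abs(analytic_grad[ix]) + 1e-12)
    print('numerical: %f analytic: %f, relative error: %e'
          % (grad_numeric, analytic_grad[ix], rel_error))

# tiny random problem just to exercise the check
X_dev = np.random.randn(50, 3073)
y_dev = np.random.randint(10, size=50)
W = 0.001 * np.random.randn(3073, 10)

loss, dW = svm_loss_vectorized(W, X_dev, y_dev, reg=0.0)
grad_check_sparse(lambda w: svm_loss_vectorized(w, X_dev, y_dev, 0.0)[0], W, dW)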

Point 2: subtracting the mean

Image data preprocessing: in the example above, all images use their raw pixel values (0 to 255). In machine learning it is standard practice to normalize the input features, and in image classification every pixel can be treated as a feature. In practice it is important to center the data by subtracting the mean of every feature. For images this means computing a mean image over the training set and subtracting it from every image, after which the pixel values lie roughly in [-127, 127]. A further common step is to scale the values into [-1, 1]. Zero-centering matters; the full reason becomes clear once gradient descent is understood.
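A minimal sketch of that preprocessing step (the stand-in arrays below just mimic the CIFAR-10 shapes; in the notebook they come from the actual data split):

import numpy as np

# stand-in data with CIFAR-10 shapes; in the notebook these come from the dataset
X_train = np.random.randint(0, 256, size=(49000, 3072)).astype(np.float64)
X_val   = np.random.randint(0, 256, size=(1000, 3072)).astype(np.float64)
X_test  = np.random.randint(0, 256, size=(1000, 3072)).astype(np.float64)

# the mean image is computed from the training set only, then subtracted everywhere
mean_image = np.mean(X_train, axis=0)   # shape (3072,), one mean per pixel
X_train -= mean_image
X_val   -= mean_image
X_test  -= mean_image
# pixel values are now roughly centered in [-127, 127]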

Point 3: the trick of folding the bias into the weights

The usual approach is to put the two parameters into a single matrix and to extend every input vector by one extra dimension that always holds the constant 1, the default bias dimension. The formula then simplifies to:
$f(x_i, W) = W x_i$
With CIFAR-10, x_i then becomes [3073 x 1] instead of [3072 x 1] (the extra dimension holds the constant 1), and W becomes [10 x 3073]. The extra column of W is the bias, as shown below:
[Figure: illustration of the bias trick, described in the caption below.]

Illustration of the bias trick. On the left, a matrix multiply followed by an addition; on the right, every input vector gets one extra dimension holding the constant 1 and the weight matrix gets one extra bias column, so a single matrix multiply does the whole job. The two are equivalent. With the right-hand version we only have to learn one weight matrix instead of separate weight and bias matrices.
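A minimal numpy sketch of the bias trick (the notebook applies it with np.hstack right after the mean subtraction; the shapes below assume CIFAR-10):

import numpy as np

X = np.random.randn(500, 3072)             # 500 images, already preprocessed
ones = np.ones((X.shape[0], 1))
X = np.hstack([X, ones])                   # now (500, 3073): the last column is the constant 1
W = 0.001 * np.random.randn(3073, 10)      # the last row of W plays the role of the bias b

scores = X.dot(W)                          # (500, 10), no separate "+ b" needed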

The code:

svm.ipynb (excerpt)

#  Tune the hyperparameters.
#  Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.4 on the validation set.
learning_rates = [1e-7, 5e-5]
regularization_strengths = [2.5e4, 5e4]

# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1   # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.

################################################################################
# TODO:                                                                        #
# Write code that chooses the best hyperparameters by tuning on the validation #
# set. For each combination of hyperparameters, train a linear SVM on the      #
# training set, compute its accuracy on the training and validation sets, and  #
# store these numbers in the results dictionary. In addition, store the best   #
# validation accuracy in best_val and the LinearSVM object that achieves this  #
# accuracy in best_svm.                                                        #
#                                                                              #
# Hint: You should use a small value for num_iters as you develop your         #
# validation code so that the SVMs don't take much time to train; once you are #
# confident that your validation code works, you should rerun the validation   #
# code with a larger value for num_iters.                                      #
################################################################################

for learning_rate in learning_rates :
    for regularization_strength in regularization_strengths :
        svm = LinearSVM()
        loss_hist = svm.train(X_train, y_train, learning_rate=learning_rate, 
                              reg=regularization_strength,num_iters=1500)
        
        y_train_pred = svm.predict(X_train)
        y_val_pred = svm.predict(X_val)
        
        y_train_acc = np.mean(y_train_pred == y_train)
        y_val_acc = np.mean(y_val_pred == y_val)
        
        results[(learning_rate,regularization_strength)] = (y_train_acc, y_val_acc)
        if y_val_acc > best_val:
            best_val = y_val_acc
            best_svm = svm
        

################################################################################
#                              END OF YOUR CODE                                #
################################################################################
    
# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
                lr, reg, train_accuracy, val_accuracy))
    
print('best validation accuracy achieved during cross-validation: %f' % best_val)
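After tuning, the notebook evaluates best_svm on the held-out test set (step 8 of the pipeline). A minimal sketch, assuming X_test and y_test come from the split in step 1:

y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('linear SVM on raw pixels final test set accuracy: %f' % test_accuracy)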

linear_classifier.py

from __future__ import print_function

import numpy as np
from cs231n.classifiers.linear_svm import *
from cs231n.classifiers.softmax import *
from past.builtins import xrange


class LinearClassifier(object):

  def __init__(self):
    self.W = None

  def train(self, X, y, learning_rate=1e-3, reg=1e-5, num_iters=100,
            batch_size=200, verbose=False):
    """
    Because the training set X is large, computing the full gradient for every
    single parameter update is wasteful. A common fix is to compute on a small
    batch of training data per step: X_batch and y_batch are the samples drawn
    for each update of W. The loss after each update is stored in loss_history
    so it can be plotted against the iteration count.
    """
    """
    Train this linear classifier using stochastic gradient descent.

    Inputs:
    - X: A numpy array of shape (N, D) containing training data; there are N
      training samples each of dimension D.
    - y: A numpy array of shape (N,) containing training labels; y[i] = c
      means that X[i] has label 0 <= c < C for C classes.
    - learning_rate: (float) learning rate for optimization.
    - reg: (float) regularization strength.
    - num_iters: (integer) number of steps to take when optimizing.
    - batch_size: (integer) number of training examples to use at each step.
    - verbose: (boolean) If true, print progress during optimization.

    Outputs:
    A list containing the value of the loss function at each training iteration.
    """
    num_train, dim = X.shape
    num_classes = np.max(y) + 1 # assume y takes values 0...K-1 where K is number of classes
    if self.W is None:
      # lazily initialize W
      self.W = 0.001 * np.random.randn(dim, num_classes)

    # Run stochastic gradient descent to optimize W
    loss_history = []
    for it in xrange(num_iters):
      #X_batch = None
      #y_batch = None

      #########################################################################
      # TODO:                                                                 #
      # Sample batch_size elements from the training data and their           #
      # corresponding labels to use in this round of gradient descent.        #
      # Store the data in X_batch and their corresponding labels in           #
      # y_batch; after sampling X_batch should have shape (batch_size, dim)   #
      # and y_batch should have shape (batch_size,)                           #
      #                                                                       #
      # Hint: Use np.random.choice to generate indices. Sampling with         #
      # replacement is faster than sampling without replacement.              #
      #########################################################################

      # the hint suggests replace=True (faster); replace=False also works when batch_size << num_train
      sample_index = np.random.choice(num_train, batch_size, replace=False)
      X_batch = X[sample_index, :]
      y_batch = y[sample_index]
      #########################################################################
      #                       END OF YOUR CODE                                #
      #########################################################################

      # evaluate loss and gradient
      loss, grad = self.loss(X_batch, y_batch, reg)
      loss_history.append(loss)

      # perform parameter update
      #########################################################################
      # TODO:                                                                 #
      #
      # Update the weights using the gradient and the learning rate.
      #########################################################################

      self.W = self.W - learning_rate*grad

      #########################################################################
      #                       END OF YOUR CODE                                #
      #########################################################################

      if verbose and it % 100 == 0:
        print('iteration %d / %d: loss %f' % (it, num_iters, loss))

    return loss_history

  def predict(self, X):
    """
    Use the trained weights of this linear classifier to predict labels for
    data points.

    Inputs:
    - X: A numpy array of shape (N, D) containing training data; there are N
      training samples each of dimension D.

    Returns:
    - y_pred: Predicted labels for the data in X. y_pred is a 1-dimensional
      array of length N, and each element is an integer giving the predicted
      class.
    """
    y_pred = np.zeros(X.shape[0])
    ###########################################################################
    # TODO:                                                                   #
    # Implement this method. Store the predicted labels in y_pred.            #
    ###########################################################################

    score = X.dot(self.W)
    y_pred = np.argmax(score, axis=1)  # pick the class with the highest score for each example


    ###########################################################################
    #                           END OF YOUR CODE                              #
    ###########################################################################
    return y_pred
  
  def loss(self, X_batch, y_batch, reg):
    """
    Compute the loss function and its derivative. 
    Subclasses will override this.

    Inputs:
    - X_batch: A numpy array of shape (N, D) containing a minibatch of N
      data points; each point has dimension D.
    - y_batch: A numpy array of shape (N,) containing labels for the minibatch.
    - reg: (float) regularization strength.

    Returns: A tuple containing:
    - loss as a single float
    - gradient with respect to self.W; an array of the same shape as W
    """
    pass


class LinearSVM(LinearClassifier):
  """ A subclass that uses the Multiclass SVM loss function """

  def loss(self, X_batch, y_batch, reg):
    return svm_loss_vectorized(self.W, X_batch, y_batch, reg)


class Softmax(LinearClassifier):
  """ A subclass that uses the Softmax + Cross-entropy loss function """

  def loss(self, X_batch, y_batch, reg):
    return softmax_loss_vectorized(self.W, X_batch, y_batch, reg)

linear_svm.py

import numpy as np
from random import shuffle
from past.builtins import xrange

def svm_loss_naive(W, X, y, reg):
  """
  Structured SVM loss function, naive implementation (with loops).

  Inputs have dimension D, there are C classes, and we operate on minibatches
  of N examples.

  Inputs:
  - W: A numpy array of shape (D, C) containing weights.
  - X: A numpy array of shape (N, D) containing a minibatch of data.
  - y: A numpy array of shape (N,) containing training labels; y[i] = c means
    that X[i] has label c, where 0 <= c < C.
  - reg: (float) regularization strength

  Returns a tuple of:
  - loss as single float
  - gradient with respect to weights W; an array of same shape as W
  """
  dW = np.zeros(W.shape) # initialize the gradient as zero

  # compute the loss and the gradient
  num_classes = W.shape[1]  # number of columns, e.g. 10 classes
  num_train = X.shape[0]    # number of rows, e.g. 500 training samples
  loss = 0.0
  for i in xrange(num_train):
    scores = X[i].dot(W)                 # X[i] is (3073,), W is (3073, 10) -> 10 class scores
    correct_class_score = scores[y[i]]   # score of the correct class for sample i
    for j in xrange(num_classes):
      if j == y[i]:
        continue                         # skip the correct class
      margin = scores[j] - correct_class_score + 1  # note delta = 1
      if margin > 0:                     # sum of max(0, s_j - s_{y_i} + delta)
        loss += margin                   # accumulate the data loss

        # the gradient can be accumulated alongside the loss:
        dW[:, j] += X[i, :].T            # derivative w.r.t. w_j is x_i
        dW[:, y[i]] += -X[i, :].T        # derivative w.r.t. w_{y_i} is -x_i

  # Right now the loss is a sum over all training examples, but we want it
  # to be an average instead so we divide by num_train.
  loss /= num_train
  dW /= num_train   # average the gradient as well

  # Add regularization to the loss and to the gradient (the 0.5 factor makes
  # the gradient of the penalty exactly reg * W, matching the vectorized version).
  loss += 0.5 * reg * np.sum(W * W)
  dW += reg * W

  #############################################################################
  # TODO:                                                                     #
  # Compute the gradient of the loss function and store it dW.                #
  # Rather that first computing the loss and then computing the derivative,   #
  # it may be simpler to compute the derivative at the same time that the     #
  # loss is being computed. As a result you may need to modify some of the    #
  # code above to compute the gradient.                                       #
  #############################################################################


  return loss, dW


def svm_loss_vectorized(W, X, y, reg):
  """
  Structured SVM loss function, vectorized implementation.

  Inputs and outputs are the same as svm_loss_naive.
  """
  loss = 0.0
  dW = np.zeros(W.shape) # initialize the gradient as zero

  #############################################################################
  # TODO:                                                                     #
  # Implement a vectorized version of the structured SVM loss, storing the    #
  # result in loss.                                                           #
  #############################################################################
  num_train = X.shape[0]                   # e.g. 500 for the dev batch
  scores = np.dot(X, W)                    # (num_train, 10): one score per class
  correct_class_scores = scores[np.arange(num_train), y]                    # (num_train,): score of the correct class
  correct_class_scores = np.reshape(correct_class_scores, (num_train, -1))  # (num_train, 1) for broadcasting

  margin = scores - correct_class_scores + 1.0
  margin[np.arange(num_train), y] = 0.0    # the correct class never contributes
  margin[margin <= 0] = 0.0                # implements max(0, ...)

  loss += np.sum(margin) / num_train       # data loss, averaged over the batch
  loss += 0.5 * reg * np.sum(W * W)        # regularization loss



  #############################################################################
  #                             END OF YOUR CODE                              #
  #############################################################################


  #############################################################################
  # TODO:                                                                     #
  # Implement a vectorized version of the gradient for the structured SVM     #
  # loss, storing the result in dW.                                           #
  #                                                                           #
  # Hint: Instead of computing the gradient from scratch, it may be easier    #
  # to reuse some of the intermediate values that you used to compute the     #
  # loss.                                                                     #
  #############################################################################
  # entries with s_j - s_{y_i} + 1 > 0 and j != y_i become 1
  margin[margin > 0] = 1.0

  row_sum = np.sum(margin, axis=1)            # how many classes violated the margin for each example
  margin[np.arange(num_train), y] = -row_sum  # the correct-class column gets -row_sum (the -x_i * row_sum term)

  # margin is now the "update matrix": multiply by X.T to get dW
  dW = 1.0 / num_train * np.dot(X.T, margin) + reg * W

  #############################################################################
  #                             END OF YOUR CODE                              #
  #############################################################################

  return loss, dW

Finally, these first two assignments mostly exercise numpy: getting comfortable with array and matrix operations, and replacing explicit loops with vectorized code, is most of the work.
