CS231n Assignment One (Part 6): Summary

Summary

1. Theory Analysis

1.1 KNN

1) Compute the distance between each test image and each training image pixel by pixel; Euclidean distance is used here.
2) Sort the distances and take the classes of the k training images closest to the test image.
3) Vote among these k labels and take the most frequent class as the final prediction (see the sketch below); when k = 1, closest_y = y_pred.
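The voting step itself is not listed in Section 2.1 below, so here is a minimal sketch of it, assuming `dists` and `self.y_train` are laid out as in the k-NN code later (the graded version lives in `predict_labels` of the assignment):

```python
import numpy as np

def predict_labels(self, dists, k=1):
    # dists[i, j] is the distance between test point i and training point j
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test)
    for i in range(num_test):
        # labels of the k nearest training points
        closest_y = self.y_train[np.argsort(dists[i])[:k]]
        # majority vote; np.bincount + argmax breaks ties toward the smaller label
        y_pred[i] = np.argmax(np.bincount(closest_y))
    return y_pred
```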

1.2 SVM

For every image sample, the score of the correct class should be at least Δ higher than the score of every other class (Δ = 1 in this experiment).
The main difficulty of the exercise is deriving the gradient of the loss function with respect to the weight matrix.
Formula derivation: the loss and its gradient are summarized below.
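The original derivation figures are not reproduced here; as a reference, the multiclass hinge loss and its gradient (consistent with the code in Section 2.2) are:

$$L_i = \sum_{j\neq y_i}\max\left(0,\; s_j - s_{y_i} + \Delta\right),\qquad s = W^{\top}x_i$$

$$\nabla_{w_{y_i}} L_i = -\Big(\sum_{j\neq y_i}\mathbb{1}\big[s_j - s_{y_i} + \Delta > 0\big]\Big)\, x_i,\qquad
\nabla_{w_j} L_i = \mathbb{1}\big[s_j - s_{y_i} + \Delta > 0\big]\, x_i \quad (j\neq y_i)$$

The total loss averages $L_i$ over the batch and adds the regularization term $\lambda\sum W^2$, whose gradient contributes the $2\lambda W$ term seen in the code.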

1.3 Softmax

Softmax expresses how likely each class is as a probability, which makes the classification result quantifiable.
It keeps the SVM pipeline but swaps in a different loss function.
The gradient derivation for this loss is somewhat more involved; the loss and its per-score gradient are summarized below.
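For reference (matching the code in Section 2.3), the softmax loss of one sample and its gradient with respect to the scores $s$ are:

$$L_i = -\log\frac{e^{s_{y_i}}}{\sum_j e^{s_j}} = \log\sum_j e^{s_j} - s_{y_i},\qquad
\frac{\partial L_i}{\partial s_j} = \frac{e^{s_j}}{\sum_k e^{s_k}} - \mathbb{1}[\,j = y_i\,]$$

In practice the maximum score is subtracted from every $s_j$ before exponentiating, which leaves the loss unchanged but keeps the computation numerically stable.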

1.4 Two-Layer Neural Network

For the basic theory, see the companion article on fully connected neural networks.
The differentiation here essentially combines the softmax and SVM pieces, i.e. the derivative of the softmax loss and the derivative of the max (ReLU) function; a compact summary is given below.
See also: gradient computation for matrices.
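With ReLU hidden output $h = \max(0, XW_1 + b_1)$, scores $s = hW_2 + b_2$ and softmax probabilities $p$, the chained gradients (matching the backward pass implemented in Section 2.4) are:

$$\frac{\partial L}{\partial s} = \frac{1}{N}\,(p - \mathbb{1}_{y}),\qquad
\frac{\partial L}{\partial W_2} = h^{\top}\frac{\partial L}{\partial s} + 2\lambda W_2,\qquad
\frac{\partial L}{\partial h} = \frac{\partial L}{\partial s}\,W_2^{\top},\qquad
\frac{\partial L}{\partial W_1} = X^{\top}\!\Big(\frac{\partial L}{\partial h}\odot \mathbb{1}[h>0]\Big) + 2\lambda W_1$$

The bias gradients $\partial L/\partial b_2$ and $\partial L/\partial b_1$ are obtained by summing the corresponding upstream gradients over the batch.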

2. Code Analysis (Key Points)

2.1 KNN

The tricky part of the KNN code is implementing the same distance computation three times: first with two explicit loops, then with one loop, and finally with pure numpy matrix operations and no loops at all.

1) Euclidean distance with two loops

    def compute_distances_two_loops(self, X):

        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        for i in range(num_test):
            for j in range(num_train):
                # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
                dists[i, j] = np.sqrt(np.dot(X[i] - self.X_train[j], X[i] - self.X_train[j]))
                pass
                # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        return dists

Code analysis:

Input X: a numpy array of shape (num_test, D) containing the test data.
Returns: dists, a numpy array of shape (num_test, num_train),
where dists[i, j] is the Euclidean distance between the i-th test point and the j-th training point.

Using the training data stored in self.X_train together with the test data, compute the distance between each test point and each training point.

2) Euclidean distance with one loop

    def compute_distances_one_loop(self, X):

        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        for i in range(num_test):
            # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
            dists[i, :] = np.sqrt(np.sum(np.square(X[i] - self.X_train), axis = 1))
            pass
            # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        return dists

Code analysis:

The principle is exactly the same as in 1), but summing over axis = 1 vectorizes the inner loop over the training samples, removing one level of looping.

3) Euclidean distance with no loops

    def compute_distances_no_loops(self, X):

        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
    
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        #dists = np.sqrt(self.getNormMatrix(X, num_train).T + self.getNormMatrix(self.X_train, num_test) - 2 * np.dot(X, self.X_train.T))
        dists += np.sum(np.multiply(X, X), axis = 1, keepdims = True).reshape(num_test, 1) 
        dists += np.sum(np.multiply(self.X_train, self.X_train), axis = 1, keepdims = True).reshape(1, num_train)
        dists += -2 * np.dot(X, self.X_train.T)
        dists = np.sqrt(dists)
        pass
        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        return dists

Code analysis:

Since only simple numpy/scipy operations may be used and no loops are allowed, the computation has to rely entirely on matrix multiplication and broadcasting, which requires a small mathematical rearrangement first; see [矩阵间欧式距离计算](https://blog.csdn.net/IT_forlearn/article/details/100022244?utm_medium=distribute.pc_relevant_t0.none-task-blog-BlogCommendFromMachineLearnPai2-1.add_param_isCf&depth_1-utm_source=distribute.pc_relevant_t0.none-task-blog-BlogCommendFromMachineLearnPai2-1.add_param_isCf).
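The rearrangement the no-loop code relies on is the expansion of the squared Euclidean distance:

$$\|x_i - x_j\|_2^2 = \|x_i\|_2^2 + \|x_j\|_2^2 - 2\,x_i^{\top}x_j$$

so the whole (num_test, num_train) matrix of squared distances can be assembled from the row-wise squared norms of X (a column vector), the row-wise squared norms of X_train (a row vector, broadcast across columns), and $-2\,X X_{\text{train}}^{\top}$.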
Once this identity is clear, the code is a direct translation of it.

Testing shows that all three versions achieve identical accuracy, but their runtimes differ considerably.
Output:

Two loop version took 38.029159 seconds
One loop version took 84.845219 seconds
No loop version took 0.500713 seconds

Clearly, the fully vectorized (no-loop) version is by far the most efficient.

4) Choosing a suitable k

num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []

# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
X_train_folds=np.array_split(X_train,num_folds)
y_train_folds=np.array_split(y_train,num_folds)
pass

# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

k_to_accuracies = {}

# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for validation_loc in range(0, num_folds):
    # use every fold except validation_loc as training data
    select_folds = list(range(0, validation_loc)) + list(range(validation_loc + 1, num_folds))
    train = np.vstack([X_train_folds[ii] for ii in select_folds])
    label = np.hstack([y_train_folds[ii] for ii in select_folds])

    classifier.train(train, label)
    dists = classifier.compute_distances_no_loops(X_train_folds[validation_loc])
    for k in k_choices:
        y_test_pred = classifier.predict_labels(dists, k=k)
        num_correct = np.sum(y_test_pred == y_train_folds[validation_loc])
        accuracy = float(num_correct) / len(y_train_folds[validation_loc])
        if k in k_to_accuracies:
            k_to_accuracies[k].append(accuracy)
        else:
            k_to_accuracies[k] = [accuracy]
pass

# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))

Code analysis:

The first block uses numpy's array_split to split the training data into num_folds folds. After splitting, X_train_folds and y_train_folds are both lists of length num_folds, where y_train_folds[i] holds the labels of the points in X_train_folds[i]. This sets up the cross-validation.
The second block performs the cross-validation to find the best value of k. For each candidate k, the KNN classifier is run num_folds times, each time using all folds except one as training data and the held-out fold as the validation set. The accuracies for every fold and every k are stored in the k_to_accuracies dictionary.

With the k_to_accuracies dictionary in hand, plot accuracy against k (or simply compare the mean accuracy per k, as sketched below) to find the best k.
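A minimal sketch of that selection step (the notebook actually plots the accuracies with error bars; picking the k with the highest mean accuracy is an assumption about the selection rule):

```python
# hypothetical sketch: choose the k whose mean cross-validation accuracy is highest
best_k = max(k_to_accuracies, key=lambda k: np.mean(k_to_accuracies[k]))
print('best_k =', best_k)
```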
This experiment obtained best_k = 8 with an accuracy of 0.273.
See: KNN experiment walkthrough

2.2 SVM

1) Naive implementation of the SVM loss

def svm_loss_naive(W, X, y, reg):
   
    dW = np.zeros(W.shape) # initialize the gradient as zero

    # compute the loss and the gradient
    num_classes = W.shape[1]
    num_train = X.shape[0]
    loss = 0.0
    for i in range(num_train):
        scores = X[i].dot(W)
        correct_class_score = scores[y[i]]
        for j in range(num_classes):
            if j == y[i]:
                continue
            margin = scores[j] - correct_class_score + 1 # note delta = 1
            if margin > 0:
                loss += margin
                dW[:, y[i]] += -X[i,:].T     
                dW[:,j] += X[i,:].T
    # Right now the loss is a sum over all training examples, but we want it
    # to be an average instead so we divide by num_train.
    loss /= num_train
    dW /= num_train
    # Add regularization to the loss.
    loss += reg * np.sum(W * W)

    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    dW += 2 * reg * W
    pass

    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    
    return loss, dW

Code explanation:

Structured SVM loss function, naive implementation (with loops).
Inputs have dimension D, there are C classes, and we operate on minibatches of N samples.
Inputs:
	- W: numpy array of shape (D, C) containing weights; each column holds the weights of one class
	- X: numpy array of shape (N, D) containing data; one sample per row
	- y: numpy array of shape (N,) containing training labels
	- reg: float, regularization strength

Returns: a tuple of
	- the loss as a single float
	- the gradient with respect to W, an array of the same shape as W

The outer loop multiplies each sample by the weight matrix to obtain the class scores and the correct-class score.
The inner loop checks the margin for every class and accumulates the loss (and the corresponding gradient columns).
Finally the regularization term and its derivative are added to the loss and the gradient.
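Before moving on to the vectorized version, the notebook compares this analytic gradient against a numerical estimate. A hedged usage sketch (grad_check_sparse is the helper shipped with the assignment in cs231n/gradient_check.py; X_dev and y_dev denote the small development subset used in the notebook):

```python
from cs231n.gradient_check import grad_check_sparse

# analytic gradient from the naive implementation (regularization switched off)
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)

# numerically check a few randomly chosen entries of the gradient
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_check_sparse(f, W, grad)
```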

2) Vectorized implementation of the SVM loss

def svm_loss_vectorized(W, X, y, reg):

    loss = 0.0
    dW = np.zeros(W.shape) # initialize the gradient as zero
   
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    num_train = X.shape[0]  # number of samples
    scores = np.dot(X, W)  # all class scores, shape (N, C)
    y_score = scores[np.arange(num_train), y].reshape((-1, 1))  # score of the correct label for each sample
    mask = (scores - y_score + 1) > 0  # positions with a positive margin (each label position is included here as well)
    scores = (scores - y_score + 1) * mask  # keep only the positive margins
    loss = (np.sum(scores) - num_train * 1) / num_train  # remove the extra margin of 1 counted at each label position, then average
    loss += reg * np.sum(W * W)

    pass

    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    # dW = X.T * dL/ds
    ds = np.ones_like(scores)  # initialize dL/ds
    ds *= mask  # positions with a positive margin get gradient 1, the rest 0
    ds[np.arange(num_train), y] = -1 * (np.sum(mask, axis=1) - 1)  # the label column gets minus the number of positive margins (excluding itself)
    dW = np.dot(X.T, ds) / num_train   # average over the batch
    dW += 2 * reg * W  # gradient of the regularization term

    pass
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    return loss, dW

Code analysis:

The logic is unchanged; the vectorized version is essentially a direct translation of the formulas, whose derivation is given in the theory section above.
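As a quick consistency check (a sketch in the spirit of the notebook cell; X_dev and y_dev again denote the small development subset), the naive and vectorized versions should agree up to floating-point error:

```python
loss_naive, grad_naive = svm_loss_naive(W, X_dev, y_dev, 5e-6)
loss_vec, grad_vec = svm_loss_vectorized(W, X_dev, y_dev, 5e-6)
print('loss difference:', abs(loss_naive - loss_vec))
print('gradient difference:', np.linalg.norm(grad_naive - grad_vec, ord='fro'))
```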

3) SGD

class LinearClassifier(object):

    def __init__(self):
        self.W = None

    def train(self, X, y, learning_rate=1e-3, reg=1e-5, num_iters=100,
              batch_size=200, verbose=False):

        num_train, dim = X.shape
        num_classes = np.max(y) + 1 # assume y takes values 0...K-1 where K is number of classes
        if self.W is None:
            # lazily initialize W
            self.W = 0.001 * np.random.randn(dim, num_classes)

        # Run stochastic gradient descent to optimize W
        loss_history = []
        for it in range(num_iters):  # at each iteration, sample a random batch and take one gradient-descent step
            X_batch = None
            y_batch = None
            # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
            
            batch_idx = np.random.choice(num_train, batch_size, replace=False)
            X_batch = X[batch_idx, :]   # batch_size by D
            y_batch = y[batch_idx]      # 1 by batch_size

            pass

            # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

            # evaluate loss and gradient
            loss, grad = self.loss(X_batch, y_batch, reg)
            loss_history.append(loss)

            # perform parameter update
            
            # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
            # perform parameter update: gradient descent step
            self.W += -learning_rate * grad
            pass
            # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

            if verbose and it % 100 == 0:
                print('iteration %d / %d: loss %f' % (it, num_iters, loss))

        return loss_history

Code analysis:

Inputs:
     - X: a numpy array of shape (N, D) containing training data; there are N
       training samples, each of dimension D.
     - y: a numpy array of shape (N,) containing training labels; y[i] = c
       means that X[i] has label 0 <= c < C for C classes.
     - learning_rate: the learning rate for optimization.
     - num_iters: the number of optimization steps to take.
     - batch_size: the number of training examples to use at each step.
Output:
     A list containing the value of the loss function at each training iteration.
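For completeness, the companion predict method of LinearClassifier (not shown above) simply takes the highest-scoring class per sample; a minimal sketch assuming the same self.W layout:

```python
    def predict(self, X):
        # scores has shape (N, C); the prediction is the column with the largest score
        scores = X.dot(self.W)
        y_pred = np.argmax(scores, axis=1)
        return y_pred
```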

See: SVM experiment walkthrough

2.3 Softmax

1) Naive implementation of Softmax

def softmax_loss_naive(W, X, y, reg):

    # Initialize the loss and gradient to zero.
    loss = 0.0
    dW = np.zeros_like(W)
    
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    num_train = X.shape[0]
    num_class = W.shape[1]
    for i in range(num_train):
        score = X[i].dot(W)
        score -= np.max(score)  # shift the scores for numerical stability
        correct_score = score[y[i]]  # score of the correct class
        exp_sum = np.sum(np.exp(score))
        loss += np.log(exp_sum) - correct_score
        for j in range(num_class):
            if j == y[i]:
                dW[:, j] += np.exp(score[j]) / exp_sum * X[i] - X[i]
            else:
                dW[:, j] += np.exp(score[j]) / exp_sum * X[i]
    loss /= num_train
    loss += reg * np.sum(W * W)
    dW /= num_train
    dW += 2 * reg * W
    pass
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    return loss, dW

Code analysis:

 Inputs:
 - W: a numpy array of shape (D, C) containing weights.
 - X: a numpy array of shape (N, D) containing a minibatch of data.
 - y: a numpy array of shape (N,) containing training labels; y[i] = c means
   that X[i] has label c, where 0 <= c < C.
 - reg: (float) regularization strength

 Returns: a tuple of
 	- the loss as a single float
 	- the gradient with respect to W, an array of the same shape as W

Compute the softmax loss and its gradient using explicit loops, storing the loss in loss and the gradient in dW.
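A common sanity check at this point (sketch; X_dev and y_dev are the small development subset): with a small random W and 10 classes, every class is predicted with roughly uniform probability, so the loss should be close to -log(0.1) ≈ 2.302:

```python
loss, _ = softmax_loss_naive(W, X_dev, y_dev, 0.0)
print('loss: %f, expected about: %f' % (loss, -np.log(0.1)))
```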

2) Vectorized implementation of Softmax

def softmax_loss_vectorized(W, X, y, reg):
    # Initialize the loss and gradient to zero.
    loss = 0.0
    dW = np.zeros_like(W)
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    num_train = X.shape[0]
    score = X.dot(W)
    # subtract each row's maximum (axis=1) for numerical stability; score keeps shape (N, C)
    score -= np.max(score, axis=1)[:, np.newaxis]
    # correct_score has shape (N,): the shifted score of the true class for each sample
    correct_score = score[range(num_train), y]
    exp_score = np.exp(score)
    # sum_exp_score has shape (N,)
    sum_exp_score = np.sum(exp_score, axis=1)
    # compute the loss
    loss = np.sum(np.log(sum_exp_score) - correct_score)
    loss /= num_train
    loss += reg * np.sum(W * W)

    # compute the gradient
    margin = np.exp(score) / sum_exp_score.reshape(num_train, 1)
    margin[np.arange(num_train), y] += -1
    dW = X.T.dot(margin)
    dW /= num_train
    dW += 2 * reg * W
    
    pass
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    return loss, dW

See: Softmax experiment walkthrough

2.4 Two-Layer Neural Network

    def loss(self, X, y=None, reg=0.0):

        # Unpack variables from the params dictionary
        W1, b1 = self.params['W1'], self.params['b1']
        W2, b2 = self.params['W2'], self.params['b2']
        N, D = X.shape
        # Compute the forward pass
        scores = None
       
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        h_output = np.maximum(0, X.dot(W1) + b1)  # first-layer output (N, H) with ReLU activation
        scores = h_output.dot(W2) + b2  # second-layer linear scores (N, C), fed into softmax
        pass
        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        # If the targets are not given then jump out, we're done
        if y is None:
            return scores

        # Compute the loss
        loss = None
        
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****        
        # compute the loss through the softmax layer
        shift_scores = scores - np.max(scores,axis=1).reshape((-1,1))
        softmax_output = np.exp(shift_scores)/np.sum(np.exp(shift_scores),axis=1).reshape(-1,1)
        loss = -np.sum(np.log(softmax_output[range(N),list(y)]))
        loss/=N
        loss+=reg*(np.sum(W1*W1)+np.sum(W2*W2))
        
        pass

        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        # Backward pass: compute gradients
        grads = {}
        
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        dscores = softmax_output.copy()
        dscores[range(N),list(y)]-=1
        dscores/=N
        grads['W2'] = h_output.T.dot(dscores) + 2*reg*W2
        grads['b2'] = np.sum(dscores,axis=0)

        dh = dscores.dot(W2.T)
        dh_ReLu = (h_output>0)*dh
        grads['W1'] = X.T.dot(dh_ReLu) + 2*reg*W1
        grads['b1'] = np.sum(dh_ReLu,axis = 0)
        pass

        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        return loss, grads

Code analysis:

The key point is whether y is None: if no labels are given, the function simply returns the scores; otherwise it computes the loss and the gradient of every parameter. In effect it chains the two affine layers (with a ReLU in between) and the final softmax layer together.
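A hedged sketch of how one plain SGD step would use this function (the assignment's TwoLayerNet.train additionally handles mini-batching and learning-rate decay; the variable names here are assumptions):

```python
# assuming net is an instance of the assignment's TwoLayerNet class
loss, grads = net.loss(X_batch, y=y_batch, reg=0.25)
for name in net.params:          # 'W1', 'b1', 'W2', 'b2'
    net.params[name] -= learning_rate * grads[name]
```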

See: two-layer neural network experiment walkthrough

3. Performance Comparison

3.1 KNN

Output: Got 147 / 500 correct => accuracy: 0.294000
This is above 28%.

3.2 SVM

Output: linear SVM on raw pixels final test set accuracy: 0.370000
(best validation accuracy achieved during cross-validation: 0.391000)

3.3 Softmax

Output: softmax on raw pixels final test set accuracy: 0.340000
(best validation accuracy achieved during cross-validation: 0.353000)

3.4 Two-Layer Neural Network

Output: Test accuracy: 0.5

4. Performance Analysis

4.1 KNN

Advantages:

1. Simple: there is essentially no training phase, and all computation happens directly at test time.
2. Suitable when the training samples cannot all be obtained at once, since new samples can simply be added to the reference set.
3. KNN classifies from the labels of nearby samples, so it copes reasonably well when classes intersect or overlap heavily.

Disadvantages:

1. Test time is very long, since the distance to every training sample must be computed; samples that contribute little to the decision should ideally be removed in advance.
2. It provides no probabilistic score; decisions are made purely from the neighbours' labels.
3. When class sizes are very unbalanced, the larger class dominates the neighbourhood and can lead to misclassification.
4. It does not cope well with high-dimensional data.

4.2 SVM

Compared with KNN, the SVM actually learns from the training data, classifies quickly at test time, makes its decision from the learned score for each class, and is reasonably robust. However, as implemented here it only produces linear decision boundaries.

4.3 Softmax

Softmax behaves similarly to SVM, but the SVM objective is more local: once the correct class outscores the others by the margin Δ, that sample contributes zero loss and the SVM is satisfied, whereas softmax always wants to push the correct-class probability higher, so in that sense it is never satisfied.

4.4 两层神经网络

Unsurprisingly the best performer of the four; as Assignment 2 goes deeper, even higher accuracy should be attainable.
