cs231n assignment1-knn

Reference 1
Reference 2

Starter code:

import numpy as np

class KNearestNeighbor(object):
  """ a kNN classifier with L2 distance """

  def __init__(self):
    pass

  def train(self, X, y):
    """
    Train the classifier. For k-nearest neighbors this is just 
    memorizing the training data.

    Inputs:
    - X: A numpy array of shape (num_train, D) containing the training data
      consisting of num_train samples each of dimension D.
    - y: A numpy array of shape (N,) containing the training labels, where
         y[i] is the label for X[i].
    """
    self.X_train = X
    self.y_train = y
    
  def predict(self, X, k=1, num_loops=0):
    """
    Predict labels for test data using this classifier.

    Inputs:
    - X: A numpy array of shape (num_test, D) containing test data consisting
         of num_test samples each of dimension D.
    - k: The number of nearest neighbors that vote for the predicted labels.
    - num_loops: Determines which implementation to use to compute distances
      between training points and testing points.

    Returns:
    - y: A numpy array of shape (num_test,) containing predicted labels for the
      test data, where y[i] is the predicted label for the test point X[i].  
    """
    if num_loops == 0:
      dists = self.compute_distances_no_loops(X)
    elif num_loops == 1:
      dists = self.compute_distances_one_loop(X)
    elif num_loops == 2:
      dists = self.compute_distances_two_loops(X)
    else:
      raise ValueError('Invalid value %d for num_loops' % num_loops)

    return self.predict_labels(dists, k=k)
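
For orientation, a minimal usage sketch of the class on random toy data (the shapes, label range, and variable names below are made up for illustration, not part of the assignment code):

import numpy as np

np.random.seed(0)
X_train = np.random.randn(50, 3072)           # 50 fake flattened images, D = 3072
y_train = np.random.randint(0, 10, size=50)   # 10 fake class labels
X_test = np.random.randn(5, 3072)

classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=5, num_loops=0)  # fully vectorized path
print(y_test_pred.shape)  # (5,)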

Computing the L2 distance with two nested loops

    def compute_distances_two_loops(self, X):
        """
    Compute the distance between each test point in X and each training point
    in self.X_train using a nested loop over both the training data and the
    test data.

    Inputs:
    - X: A numpy array of shape (num_test, D) containing test data.

    Returns:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      is the Euclidean distance between the ith test point and the jth training
      point.
    """
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        for i in range(num_test):
            for j in range(num_train):
                #####################################################################
                # TODO:                                                             #
                # Compute the l2 distance between the ith test point and the jth    #
                # training point, and store the result in dists[i, j]. You should   #
                # not use a loop over dimension.                                    #
                #####################################################################
                dists[i][j] = np.sqrt(np.sum(np.square(self.X_train[j, :] - X[i, :])))
                #####################################################################
                #                       END OF YOUR CODE                            #
                #####################################################################
        return dists
  • Row i of dists holds the L2 distances between the ith test sample and every training sample.
  • dists[i][j] = np.sqrt(np.sum(np.square(self.X_train[j, :] - X[i, :]))) computes the L2 distance
    between the ith test sample and the jth training sample (a small sanity check follows below).
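
As a quick sanity check of the nested-loop formula, it can be compared with np.linalg.norm on tiny made-up arrays (purely illustrative):

import numpy as np

np.random.seed(1)
A = np.random.randn(4, 6)   # 4 hypothetical test points
B = np.random.randn(3, 6)   # 3 hypothetical training points

d_loop = np.zeros((4, 3))
for i in range(4):
    for j in range(3):
        d_loop[i, j] = np.sqrt(np.sum(np.square(B[j, :] - A[i, :])))

# reference computed with a library call
d_ref = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
print(np.allclose(d_loop, d_ref))  # True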

Computing the L2 distance with a single loop

    def compute_distances_one_loop(self, X):
        """
    Compute the distance between each test point in X and each training point
    in self.X_train using a single loop over the test data.

    Input / Output: Same as compute_distances_two_loops
    """
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        for i in range(num_test):
            #######################################################################
            # TODO:                                                               #
            # Compute the l2 distance between the ith test point and all training #
            # points, and store the result in dists[i, :].                        #
            #######################################################################
            dists[i, :] = np.sqrt(np.sum(np.square(self.X_train - X[i, :]), axis=1))
            #######################################################################
            #                         END OF YOUR CODE                            #
            #######################################################################
        return dists

  • np.square(self.X_train - X[i, :]) relies on NumPy broadcasting: the single test row is subtracted from every training row at once, giving the element-wise squared differences between the ith test sample and all training samples.
  • In np.sum(np.square(self.X_train - X[i, :]), axis=1), axis=1 sums each row, returning a 1-D vector with one squared distance per training sample; without the axis argument np.sum would collapse the whole array to a single scalar. Taking the square root then yields the ith row of dists (see the small example below).
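
A small illustration, with toy numbers, of the broadcast and of what axis=1 changes:

import numpy as np

X_train = np.array([[1., 2.], [3., 4.], [5., 6.]])  # 3 hypothetical training points
x = np.array([1., 1.])                              # one test point

diff_sq = np.square(X_train - x)          # broadcast to shape (3, 2)
print(np.sum(diff_sq))                    # no axis: a single scalar, 55.0
print(np.sum(diff_sq, axis=1))            # axis=1: [ 1. 13. 41.], one value per training point
print(np.sqrt(np.sum(diff_sq, axis=1)))   # the three L2 distances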

Computing the L2 distance with no loops

    def compute_distances_no_loops(self, X):
        """
    Compute the distance between each test point in X and each training point
    in self.X_train using no explicit loops.

    Input / Output: Same as compute_distances_two_loops
    """
        #########################################################################
        # TODO:                                                                 #
        # Compute the l2 distance between all test points and all training      #
        # points without using any explicit loops, and store the result in      #
        # dists.                                                                #
        #                                                                       #
        # You should implement this function using only basic array operations; #
        # in particular you should not use functions from scipy.                #
        #                                                                       #
        # HINT: Try to formulate the l2 distance using matrix multiplication    #
        #       and two broadcast sums.                                         #
        #########################################################################
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        d1 = np.multiply(np.dot(X, self.X_train.T), -2)  # shape (num_test, num_train)
        d2 = np.sum(np.square(X), axis=1, keepdims=True)  # shape (num_test, 1)
        d3 = np.sum(np.square(self.X_train), axis=1)  # shape (num_train,)

        dists = np.sqrt(d1 + d2 + d3)
        #########################################################################
        #                         END OF YOUR CODE                              #
        #########################################################################
        return dists

(The figures here showed the derivation of the fully vectorized distance. The key identity is
$\lVert x_i - x'_j \rVert^2 = \lVert x_i \rVert^2 - 2\, x_i \cdot x'_j + \lVert x'_j \rVert^2$,
applied to every test/train pair at once.)

  • Following the formula above, the squared distance matrix is the sum of three terms. Broadcasting does the alignment: the (m, 1) test-norm term expands across the n columns and the (1, n) train-norm term expands across the m rows, so the three pieces add elementwise (the sketch below checks this against a reference computation).
  • keepdims=True keeps the summed array two-dimensional (shape (m, 1) instead of (m,)), which is what makes it broadcast as a column.
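
A minimal sketch, on made-up toy shapes, checking that the three broadcast terms reproduce the pairwise L2 distances:

import numpy as np

np.random.seed(0)
X_test = np.random.randn(4, 5)    # 4 hypothetical test points, D = 5
X_train = np.random.randn(6, 5)   # 6 hypothetical training points

d1 = -2.0 * X_test.dot(X_train.T)                       # shape (4, 6)
d2 = np.sum(np.square(X_test), axis=1, keepdims=True)   # shape (4, 1)
d3 = np.sum(np.square(X_train), axis=1)                 # shape (6,)
dists = np.sqrt(d1 + d2 + d3)                           # broadcasts to (4, 6)

# reference: explicit pairwise differences
d_ref = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
print(np.allclose(dists, d_ref))  # True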

Predicting labels

    def predict_labels(self, dists, k=1):
        """
    Given a matrix of distances between test points and training points,
    predict a label for each test point.

    Inputs:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      gives the distance betwen the ith test point and the jth training point.

    Returns:
    - y: A numpy array of shape (num_test,) containing predicted labels for the
      test data, where y[i] is the predicted label for the test point X[i].
    """
        num_test = dists.shape[0]
        y_pred = np.zeros(num_test)
        for i in range(num_test):
            # A list of length k storing the labels of the k nearest neighbors to
            # the ith test point.
            closest_y = []
            #########################################################################
            # TODO:                                                                 #
            # Use the distance matrix to find the k nearest neighbors of the ith    #
            # testing point, and use self.y_train to find the labels of these       #
            # neighbors. Store these labels in closest_y.                           #
            # Hint: Look up the function numpy.argsort.                             #
            #########################################################################
            a = np.argsort(dists[i])[:k]     # indices of the k nearest training points
            closest_y = self.y_train[a]      # their labels
            #########################################################################
            # TODO:                                                                 #
            # Now that you have found the labels of the k nearest neighbors, you    #
            # need to find the most common label in the list closest_y of labels.   #
            # Store this label in y_pred[i]. Break ties by choosing the smaller     #
            # label.                                                                #
            #########################################################################
            closest_y = np.sort(closest_y)   # sort so that equal labels are adjacent
            num_same = 0
            num_temp = 0
            for j in range(k):
                if j == 0:
                    num_same = num_temp = 1
                    y_pred[i] = closest_y[0]
                else:
                    if closest_y[j] == closest_y[j - 1]:
                        num_temp = num_temp + 1
                    else:
                        num_temp = 1
                    if num_temp > num_same:
                        num_same = num_temp
                        y_pred[i] = closest_y[j]
            #########################################################################
            #                           END OF YOUR CODE                            #
            #########################################################################

        return y_pred

The code above follows reference link 2. The second step can also be simplified using NumPy's bincount and argmax:

m = np.bincount(closest_y)
y_pred[i] = np.argmax(m)
  • argsort: returns the indices that would sort the array in ascending order (smallest value first).

  • So a = np.argsort(dists[i])[:k] gives the indices of the k training points with the smallest distances, and closest_y = self.y_train[a] looks up their labels.

  • sort: sorts in ascending order; note that np.sort returns a sorted copy, so the result has to be assigned back (or arr.sort() used) for the array itself to change.

  • bincount: counts how many times each non-negative integer occurs in the input array. For example, for the input [1, 1, 1, 2, 5] it returns [0, 3, 1, 0, 0, 1] (0 occurs 0 times, 1 occurs 3 times, 2 occurs once, ...).

  • argmax: returns the index of the largest element of a NumPy array (the first such index if there are several), as the small example below shows.
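
A small illustration of the vote with bincount and argmax (hypothetical neighbor labels). Because np.argmax returns the first index of the maximum, a tie is automatically broken toward the smaller label, which matches the assignment's requirement:

import numpy as np

closest_y = np.array([2, 5, 2, 5, 1])  # hypothetical labels of the k = 5 nearest neighbors
counts = np.bincount(closest_y)        # [0, 1, 2, 0, 0, 2]
print(np.argmax(counts))               # 2 -> labels 2 and 5 tie, the smaller one wins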

Summary

This is the simplest problem of the first assignment, and it still took some time, mainly because I am not yet familiar with NumPy: broadcasting and the related functions need more study. A long road ahead~
