CS231n Assignment 1: kNN

I have recently been working through Stanford's CS231n course; this post covers the kNN part of assignment 1. My skills are limited and this is my first blog post recording what I've learned, so please be kind. My answers to the other assignments will follow in later posts.

Links to the lecture videos and notes:

NetEase Cloud Classroom videos and notes: http://study.163.com/course/courseMain.htm?courseId=1003223001

If you want the original (English) lectures, head over to Bilibili.

Assignment page: http://cs231n.stanford.edu/

The idea behind kNN is simple, and most readers will already know it: birds of a feather flock together. I won't walk through the theory here.
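For completeness, though, here is the one-line version: to classify a point, find the k training points closest to it (L2 distance in this assignment) and take a majority vote over their labels. Below is a minimal sketch of that idea on made-up toy arrays; nothing here comes from the assignment code:

import numpy as np

# Toy data: six 2-D training points with labels 0/1, purely illustrative.
X_train = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
y_train = np.array([0, 0, 0, 1, 1, 1])
x = np.array([4.5, 5.0])  # query point
k = 3

dists = np.sqrt(np.sum((X_train - x) ** 2, axis=1))  # L2 distance to every training point
nearest = np.argsort(dists)[:k]                      # indices of the k closest points
print(np.argmax(np.bincount(y_train[nearest])))      # majority vote -> 1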

My environment is Spyder with Python 2.7, though other setups should work as well. The official knn.ipynb is very useful; I borrowed a lot from it and strongly recommend reading it carefully, since its code is concise and easy to follow.

NOTE: For convenience, data_utils.py and k_nearest_neighbor.py sit in the same directory, and I added a knntest.py there for training and testing on the CIFAR-10 dataset.

Enough talk; here is the code.

k_nearest_neighbor.py

import numpy as np
#from past.builtins import xrange


class KNearestNeighbor(object):
  """ a kNN classifier with L2 distance """

  def __init__(self):
    pass

  def train(self, X, y):
    """
    Train the classifier. For k-nearest neighbors this is just 
    memorizing the training data.

    Inputs:
    - X: A numpy array of shape (num_train, D) containing the training data
      consisting of num_train samples each of dimension D.
    - y: A numpy array of shape (N,) containing the training labels, where
         y[i] is the label for X[i].
    """
    self.X_train = X
    self.y_train = y
    
  def predict(self, X, k=1, num_loops=0):
    """
    Predict labels for test data using this classifier.

    Inputs:
    - X: A numpy array of shape (num_test, D) containing test data consisting
         of num_test samples each of dimension D.
    - k: The number of nearest neighbors that vote for the predicted labels.
    - num_loops: Determines which implementation to use to compute distances
      between training points and testing points.

    Returns:
    - y: A numpy array of shape (num_test,) containing predicted labels for the
      test data, where y[i] is the predicted label for the test point X[i].  
    """
    if num_loops == 0:
      dists = self.compute_distances_no_loops(X)
    elif num_loops == 1:
      dists = self.compute_distances_one_loop(X)
    elif num_loops == 2:
      dists = self.compute_distances_two_loops(X)
    else:
      raise ValueError('Invalid value %d for num_loops' % num_loops)

    return self.predict_labels(dists, k=k)

  def compute_distances_two_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a nested loop over both the training data and the 
    test data.

    Inputs:
    - X: A numpy array of shape (num_test, D) containing test data.

    Returns:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      is the Euclidean distance between the ith test point and the jth training
      point.
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in xrange(num_test):
      for j in xrange(num_train):
        #####################################################################
        # TODO:                                                             #
        # Compute the l2 distance between the ith test point and the jth    #
        # training point, and store the result in dists[i, j]. You should   #
        # not use a loop over dimension.                                    #
        #####################################################################
        dists[i, j] = np.sqrt(np.sum(np.square(self.X_train[j, :] - X[i, :])))
        pass
        #####################################################################
        #                       END OF YOUR CODE                            #
        #####################################################################
    return dists

  def compute_distances_one_loop(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a single loop over the test data.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in xrange(num_test):
      #######################################################################
      # TODO:                                                               #
      # Compute the l2 distance between the ith test point and all training #
      # points, and store the result in dists[i, :].                        #
      #######################################################################
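      # Broadcasting: self.X_train is (num_train, D) and X[i, :] is (D,), so
      # the subtraction pairs this test row with every training row at once.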
      dists[i, :] = np.sqrt(np.sum(np.square(self.X_train - X[i, :]), axis=1))
      pass
      #######################################################################
      #                         END OF YOUR CODE                            #
      #######################################################################
    return dists

  def compute_distances_no_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using no explicit loops.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train)) 
    #########################################################################
    # TODO:                                                                 #
    # Compute the l2 distance between all test points and all training      #
    # points without using any explicit loops, and store the result in      #
    # dists.                                                                #
    #                                                                       #
    # You should implement this function using only basic array operations; #
    # in particular you should not use functions from scipy.                #
    #                                                                       #
    # HINT: Try to formulate the l2 distance using matrix multiplication    #
    #       and two broadcast sums.                                         #
    #########################################################################
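    # Expand ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2 for all pairs at once:
    # X.dot(X_train.T) has shape (num_test, num_train); Xsq is (num_test, 1)
    # and broadcasts across columns; X_trainsq is (num_train,) and broadcasts
    # across rows.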
    dists = np.multiply(np.dot(X, self.X_train.T), -2)
    Xsq = np.sum(np.square(X), axis=1, keepdims=True)
    X_trainsq = np.sum(np.square(self.X_train), axis=1)
    dists = np.add(dists, Xsq)
    dists = np.add(dists, X_trainsq)
    dists = np.sqrt(dists)
    pass
    #########################################################################
    #                         END OF YOUR CODE                              #
    #########################################################################
    return dists

  def predict_labels(self, dists, k=1):
    """
    Given a matrix of distances between test points and training points,
    predict a label for each test point.

    Inputs:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      gives the distance between the ith test point and the jth training point.

    Returns:
    - y: A numpy array of shape (num_test,) containing predicted labels for the
      test data, where y[i] is the predicted label for the test point X[i].  
    """
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test)
    for i in xrange(num_test):
      # A list of length k storing the labels of the k nearest neighbors to
      # the ith test point.
      closest_y = []
      #########################################################################
      # TODO:                                                                 #
      # Use the distance matrix to find the k nearest neighbors of the ith    #
      # testing point, and use self.y_train to find the labels of these       #
      # neighbors. Store these labels in closest_y.                           #
      # Hint: Look up the function numpy.argsort.                             #
      #########################################################################
      # Sort all training indices by distance and keep the labels of the
      # k closest ones.
      closest_y = np.argsort(dists[i, :])
      closest_y = self.y_train[closest_y[:k]]
      pass
      #########################################################################
      # TODO:                                                                 #
      # Now that you have found the labels of the k nearest neighbors, you    #
      # need to find the most common label in the list closest_y of labels.   #
      # Store this label in y_pred[i]. Break ties by choosing the smaller     #
      # label.                                                                #
      #########################################################################
      # Vote: count how many of the k nearest neighbors carry each label.
      d = dict.fromkeys(closest_y, 0)
      for key in closest_y:
        d[key] += 1
      # Break ties by choosing the smaller label, as the comment above asks;
      # a bare max() over the dict would return an arbitrary label on ties.
      best_count = max(d.values())
      y_pred[i] = min(label for label in d if d[label] == best_count)
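      # An equivalent one-liner, valid because CIFAR-10 labels are small
      # non-negative integers: y_pred[i] = np.argmax(np.bincount(closest_y));
      # np.argmax returns the first maximum, i.e. the smaller label on ties.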
      #########################################################################
      #                           END OF YOUR CODE                            # 
      #########################################################################

    return y_pred

There is not much to say about k_nearest_neighbor.py: the official scaffolding is already detailed and well commented, so only a few lines need to be filled in. Read through it yourself; my own additions are modest, so forgive any clumsy code.
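Before wiring everything into knntest.py, it may be worth a quick sanity check that the three distance implementations agree; the official knn.ipynb does essentially this with a Frobenius-norm comparison. The tiny random arrays below are placeholders of my own, not CIFAR-10 data:

import numpy as np
from k_nearest_neighbor import KNearestNeighbor

# Placeholder data: 50 training and 10 test points of dimension 8.
X_tr = np.random.randn(50, 8)
y_tr = np.random.randint(0, 3, 50)
X_te = np.random.randn(10, 8)

clf = KNearestNeighbor()
clf.train(X_tr, y_tr)
d_two = clf.compute_distances_two_loops(X_te)
d_one = clf.compute_distances_one_loop(X_te)
d_none = clf.compute_distances_no_loops(X_te)

# All three should match up to floating-point error (differences near 0).
print(np.linalg.norm(d_two - d_one, ord='fro'))
print(np.linalg.norm(d_two - d_none, ord='fro'))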

knntest.py

from k_nearest_neighbor import KNearestNeighbor
#import random
import numpy as np
from data_utils import load_CIFAR10
import matplotlib.pyplot as plt

# Load the raw CIFAR-10 data.
# NOTE: change this to the path of your own cifar-10-batches-py directory
cifar10_dir = 'C:/path/to/your/assignment1/cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# I only use 1/10 of the dataset here; to run on all of it, simply comment
# out the next 4 lines.
X_train=X_train[:5000,]
y_train=y_train[:5000]
X_test=X_test[:1000,]
y_test=y_test[:1000]

num_test=X_test.shape[0]
num_train=X_train.shape[0]
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape

# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape

# Create a kNN classifier instance. 
# Remember that training a kNN classifier is a noop: 
# the Classifier simply remembers the data and does no further processing 
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
'''
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
dists = classifier.compute_distances_no_loops(X_test)
y_test_pred = classifier.predict_labels(dists, k=1)

# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
'''
### Cross-validation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
#k_choices=range(1,11)

X_train_folds = []
y_train_folds = []
################################################################################
# TODO:                                                                        #
# Split up the training data into folds. After splitting, X_train_folds and    #
# y_train_folds should each be lists of length num_folds, where                #
# y_train_folds[i] is the label vector for the points in X_train_folds[i].     #
# Hint: Look up the numpy array_split function.                                #
################################################################################
X_train_folds = np.array_split(X_train, num_folds, axis=0)
y_train_folds = np.array_split(y_train, num_folds, axis=0)
################################################################################
#                                 END OF YOUR CODE                             #
################################################################################
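# With the 5000-sample subset above and num_folds = 5, each entry of
# X_train_folds is a (1000, 3072) array and each entry of y_train_folds
# is a (1000,) array.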

# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}


################################################################################
# TODO:                                                                        #
# Perform k-fold cross validation to find the best value of k. For each        #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times,   #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all     #
# values of k in the k_to_accuracies dictionary.                               #
################################################################################
for k in k_choices:
    
    k_to_accuracies[k] = []
    for val_idx in range(num_folds):
        
        X_val = X_train_folds[val_idx]
        y_val = y_train_folds[val_idx]
        
        X_tra = X_train_folds[:val_idx] + X_train_folds[val_idx + 1:]
        X_tra = np.reshape(X_tra, (X_train.shape[0] - X_val.shape[0], -1))
        y_tra = y_train_folds[:val_idx] + y_train_folds[val_idx + 1:]
        y_tra = np.reshape(y_tra, (X_train.shape[0] - X_val.shape[0],))
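        # The reshape works because np.reshape first converts the list of
        # remaining folds into one stacked array and then flattens it back;
        # np.concatenate over the same list would be an equivalent (and
        # perhaps clearer) way to stitch the training folds together.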
        
        classifier.train(X_tra, y_tra)
        y_val_pre = classifier.predict(X_val, k, 0)
        
        right_arr = y_val_pre == y_val
        accuracy = float(np.sum(right_arr)) / y_val.shape[0]
        k_to_accuracies[k].append(accuracy)
        
    
################################################################################
#                                 END OF YOUR CODE                             #
################################################################################



# plot the raw observations
print k_choices
print k_to_accuracies
for k in k_choices:
  accuracies = k_to_accuracies[k]
  plt.scatter([k] * len(accuracies), accuracies)

# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()

# The loop below is my own addition: it picks the best k found by
# cross-validation (ties go to the smaller k, since we iterate in sorted
# order and only update on a strict improvement).
temp = 0
best_k = 0
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    sumaccu = 0
    for accuracy in k_to_accuracies[k]:
        sumaccu += accuracy
        # print 'k = %d, accuracy = %f' % (k, accuracy)
    if sumaccu > temp:
        best_k = k
        temp = sumaccu
print 'the best K is ', best_k
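# An equivalent one-liner (the sort still sends ties to the smaller k):
# best_k = max(sorted(k_to_accuracies), key=lambda k: sum(k_to_accuracies[k]))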

# Based on the cross-validation results above, choose the best value for k,   
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
# best_k = 3  # the k value given in knn.ipynb

classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)

# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
NOTE: remember to point cifar10_dir at your own copy of the CIFAR-10 dataset.

Finally, my results (the cross-validation plot and the final accuracy printout):