[CS231n@Stanford] Assignment1-Q1

Assignment page: http://cs231n.github.io/assignment1/

k_nearest_neighbor.py:

import numpy as np

class KNearestNeighbor:
  """ a kNN classifier with L2 distance """

  def __init__(self):
    pass

  def train(self, X, y):
    """
    Train the classifier. For k-nearest neighbors this is just 
    memorizing the training data.

    Input:
    X - A num_train x dimension array where each row is a training point.
    y - A vector of length num_train, where y[i] is the label for X[i, :]
    """
    self.X_train = X
    self.y_train = y
    
  def predict(self, X, k=1, num_loops=0):
    """
    Predict labels for test data using this classifier.

    Input:
    X - A num_test x dimension array where each row is a test point.
    k - The number of nearest neighbors that vote for predicted label
    num_loops - Determines which method to use to compute distances
                between training points and test points.

    Output:
    y - A vector of length num_test, where y[i] is the predicted label for the
        test point X[i, :].
    """
    if num_loops == 0:
      dists = self.compute_distances_no_loops(X)
    elif num_loops == 1:
      dists = self.compute_distances_one_loop(X)
    elif num_loops == 2:
      dists = self.compute_distances_two_loops(X)
    else:
      raise ValueError('Invalid value %d for num_loops' % num_loops)

    return self.predict_labels(dists, k=k)

  def compute_distances_two_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a nested loop over both the training data and the 
    test data.

    Input:
    X - A num_test x dimension array where each row is a test point.

    Output:
    dists - A num_test x num_train array where dists[i, j] is the distance
            between the ith test point and the jth training point.
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
      for j in range(num_train):
        #####################################################################
        # TODO:                                                             #
        # Compute the l2 distance between the ith test point and the jth    #
        # training point, and store the result in dists[i, j]               #
        #####################################################################
        dists[i, j] = np.sqrt(np.sum(np.square(self.X_train[j, :] - X[i, :])))

        #####################################################################
        #                       END OF YOUR CODE                            #
        #####################################################################
    return dists

  def compute_distances_one_loop(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a single loop over the test data.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
      #######################################################################
      # TODO:                                                               #
      # Compute the l2 distance between the ith test point and all training #
      # points, and store the result in dists[i, :].                        #
      #######################################################################
      dists[i, :] = np.sqrt(np.sum(np.square(self.X_train - X[i, :]), axis=1))
      
      #######################################################################
      #                         END OF YOUR CODE                            #
      #######################################################################
    return dists

  def compute_distances_no_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using no explicit loops.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train)) 
    #########################################################################
    # TODO:                                                                 #
    # Compute the l2 distance between all test points and all training      #
    # points without using any explicit loops, and store the result in      #
    # dists.                                                                #
    # HINT: Try to formulate the l2 distance using matrix multiplication    #
    #       and two broadcast sums.                                         #
    #########################################################################
    # Expand ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2 for all pairs at once:
    # one matrix product plus two broadcast sums.
    dists = -2.0 * np.dot(X, self.X_train.T)
    sq1 = np.sum(np.square(X), axis=1, keepdims=True)   # shape (num_test, 1)
    sq2 = np.sum(np.square(self.X_train), axis=1)       # shape (num_train,)
    dists = np.sqrt(dists + sq1 + sq2)
    #########################################################################
    #                         END OF YOUR CODE                              #
    #########################################################################
    return dists

  def predict_labels(self, dists, k=1):
    """
    Given a matrix of distances between test points and training points,
    predict a label for each test point.

    Input:
    dists - A num_test x num_train array where dists[i, j] gives the distance
            between the ith test point and the jth training point.

    Output:
    y - A vector of length num_test where y[i] is the predicted label for the
        ith test point.
    """
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test)
    for i in range(num_test):
      # A list of length k storing the labels of the k nearest neighbors to
      # the ith test point.
      closest_y = []
      #########################################################################
      # TODO:                                                                 #
      # Use the distance matrix to find the k nearest neighbors of the ith    #
      # training point, and use self.y_train to find the labels of these      #
      # neighbors. Store these labels in closest_y.                           #
      # Hint: Look up the function numpy.argsort.                             #
      #########################################################################
      # Indices of the k smallest distances give the k nearest neighbors.
      closest_y = self.y_train[np.argsort(dists[i, :])[:k]]
 
      #########################################################################
      # TODO:                                                                 #
      # Now that you have found the labels of the k nearest neighbors, you    #
      # need to find the most common label in the list closest_y of labels.   #
      # Store this label in y_pred[i]. Break ties by choosing the smaller     #
      # label.                                                                #
      #########################################################################
      # np.bincount tallies the votes; np.argmax returns the first maximum,
      # so ties break toward the smaller label, as required.
      y_pred[i] = np.argmax(np.bincount(closest_y))

      #########################################################################
      #                           END OF YOUR CODE                            # 
      #########################################################################

    return y_pred
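
The vectorized version works because the squared distance expands to ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2, so a single matrix product plus two broadcast sums covers every test/train pair. Below is a quick sanity check of my own (not part of the assignment) that the three implementations agree, followed by a toy prediction; note the labels must be non-negative integers for np.bincount to work.

import numpy as np

np.random.seed(0)
knn = KNearestNeighbor()
knn.train(np.random.randn(20, 5), np.random.randint(0, 3, 20))
X_test = np.random.randn(7, 5)

d2 = knn.compute_distances_two_loops(X_test)
d1 = knn.compute_distances_one_loop(X_test)
d0 = knn.compute_distances_no_loops(X_test)
print(np.linalg.norm(d2 - d1), np.linalg.norm(d2 - d0))  # both should be ~0

# A toy prediction: two well-separated clusters, k=3 majority vote.
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 5.1]])
y = np.array([0, 0, 1, 1])
knn.train(X, y)
print(knn.predict(np.array([[0.05, 0.0], [5.05, 5.0]]), k=3))  # -> [0. 1.]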

Cross-validation:


num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO:                                                                        #
# Split up the training data into folds. After splitting, X_train_folds and    #
# y_train_folds should each be lists of length num_folds, where                #
# y_train_folds[i] is the label vector for the points in X_train_folds[i].     #
# Hint: Look up the numpy array_split function.                                #
################################################################################

X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)

################################################################################
#                                 END OF YOUR CODE                             #
################################################################################
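
# A quick shape check of my own (not part of the assignment): with the
# notebook's usual subsampling, X_train is (5000, 3072), so each fold should
# be a (1000, 3072) array with a matching (1000,) label vector.
assert len(X_train_folds) == num_folds
for f in range(num_folds):
    print(X_train_folds[f].shape, y_train_folds[f].shape)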

# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}


################################################################################
# TODO:                                                                        #
# Perform k-fold cross validation to find the best value of k. For each        #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times,   #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all     #
# values of k in the k_to_accuracies dictionary.                               #
################################################################################
for k in k_choices:
    k_to_accuracies[k] = np.zeros(num_folds)
    for i in range(num_folds):
        # All folds except the ith form the training set.
        X_tr = np.concatenate(X_train_folds[0:i] + X_train_folds[(i+1):num_folds])
        y_tr = np.concatenate(y_train_folds[0:i] + y_train_folds[(i+1):num_folds])
        # The ith fold is held out as the validation set.
        X_val = X_train_folds[i]
        y_val = y_train_folds[i]
        classifier.train(X_tr, y_tr)
        y_val_pred = classifier.predict(X_val, k=k)
        num_correct = np.sum(y_val_pred == y_val)
        k_to_accuracies[k][i] = float(num_correct) / y_val.shape[0]
################################################################################
#                                 END OF YOUR CODE                             #
################################################################################

# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))
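
The notebook then asks you to pick the best value of k from these results; one straightforward way (my own sketch, not prescribed by the assignment) is to average the accuracies over the folds:

mean_accs = {k: np.mean(accs) for k, accs in k_to_accuracies.items()}
best_k = max(mean_accs, key=mean_accs.get)
print('best k = %d (mean cross-validation accuracy = %f)' % (best_k, mean_accs[best_k]))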


(This was my first time using Python and I'm not familiar with it yet, so the code quality is not guaranteed; for reference only!)
