That wraps up the explanation of the k-nearest neighbors algorithm. As is customary, here is the formal definition, copied from Baidu Baike:
k-nearest neighbors algorithm: in feature space, if the majority of the k nearest samples to a given sample (i.e., the closest samples in feature space) belong to a certain class, then the sample also belongs to that class.
Below we implement the k-nearest neighbors algorithm step by step in a Jupyter Notebook.
The notebook file can be downloaded from my GitHub.
The code in knn.py is as follows:
# -*- coding: utf-8 -*-
import numpy as np
from math import sqrt
from collections import Counter

def knn_classify(k, X_train, y_train, x):
    assert 1 <= k <= X_train.shape[0], \
        "k must be at least 1 and at most the number of rows of X_train"
    assert X_train.shape[0] == y_train.shape[0], \
        "X_train and y_train must have the same number of samples"
    assert X_train.shape[1] == x.shape[0], \
        "the number of features in X_train must equal the dimension of x"

    # Euclidean distance from x to every training sample
    distances = [sqrt(np.sum((dot - x) ** 2)) for dot in X_train]
    # indices of training samples sorted by ascending distance
    nearest = np.argsort(distances)
    # labels of the k nearest neighbors
    top_k_y = [y_train[i] for i in nearest[:k]]
    # majority vote among those k labels
    votes = Counter(top_k_y)
    return votes.most_common(1)[0][0]
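To see the classifier in action, here is a small usage sketch. The 2-D points and labels below are made up purely for illustration (two clusters, class 0 near the origin and class 1 near (3, 3)); they are not the data from the notebook. The function body is the same as in knn.py above, repeated so the snippet runs on its own:

```python
import numpy as np
from math import sqrt
from collections import Counter

def knn_classify(k, X_train, y_train, x):
    # same logic as knn.py: distances, sort, take k labels, majority vote
    distances = [sqrt(np.sum((dot - x) ** 2)) for dot in X_train]
    nearest = np.argsort(distances)
    top_k_y = [y_train[i] for i in nearest[:k]]
    votes = Counter(top_k_y)
    return votes.most_common(1)[0][0]

# illustrative training data: class 0 near (0, 0), class 1 near (3, 3)
X_train = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.3],
                    [3.0, 3.1], [2.9, 3.0], [3.2, 2.8]])
y_train = np.array([0, 0, 0, 1, 1, 1])

x = np.array([2.8, 3.2])  # query point close to the second cluster
print(knn_classify(3, X_train, y_train, x))  # → 1
```

With k=3, all three nearest neighbors of the query point come from the second cluster, so the majority vote returns label 1.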