The kNN algorithm works well and is conceptually simple, which makes it a great entry point for anyone just getting started with machine learning algorithms.
The core idea of kNN is this: if the majority of the k nearest neighbors of a sample in feature space belong to a certain class, then the sample belongs to that class as well and shares the characteristics of that class.
kNN is mainly used for classification problems in supervised learning.
Let's walk through an example to see how the kNN algorithm works step by step.
First we define a dataset: raw_data_X holds the sample features, and raw_data_y holds each sample's class label:
raw_data_X = [[3.393533211, 2.331273381],
[3.110073483, 1.781539638],
[1.343808831, 3.368360954],
[3.582294042, 4.679179110],
[2.280362439, 2.866990263],
[7.423436942, 4.696522875],
[5.745051997, 3.533989803],
[9.172168622, 2.511101045],
[7.792783481, 3.424088941],
[7.939820817, 0.791637231]
]
raw_data_y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
Here we use the raw dataset directly as the training set:
import numpy as np

X_train = np.array(raw_data_X)
y_train = np.array(raw_data_y)
Now, given a new sample x to classify, what kNN does is compute the Euclidean distance between x and every sample in X_train, and then see which training samples x is closest to:
from math import sqrt

x = np.array([8.093607318, 3.365731514])
distances = [sqrt(np.sum((x_train - x)**2)) for x_train in X_train]
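As a side note, the same Euclidean distances can be computed in a single vectorized call using NumPy broadcasting. This is a minimal sketch, not part of the original walkthrough; the toy arrays below are shortened stand-ins for the article's data:

```python
import numpy as np
from math import sqrt

# Toy training samples and a new point (values shortened for illustration)
X_train = np.array([[3.39, 2.33], [7.42, 4.70], [7.79, 3.42]])
x = np.array([8.093607318, 3.365731514])

# Broadcasting: X_train - x subtracts x from every row at once;
# norm(..., axis=1) then yields one Euclidean distance per row
distances = np.linalg.norm(X_train - x, axis=1)

# Equivalent to the explicit comprehension used in the article
loop_distances = [sqrt(np.sum((x_train - x) ** 2)) for x_train in X_train]
```

The vectorized form avoids the Python-level loop, which matters once the training set gets large.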
Next, find the y values of the 6 training points nearest to the new point x:
k = 6
nearest = np.argsort(distances)
topK_y = [y_train[i] for i in nearest[:k]]
Then vote to determine the class of x:
from collections import Counter

votes = Counter(topK_y)
predict_y = votes.most_common(1)[0][0]
The result is 1, meaning that kNN classifies the new sample x as class 1.
The argsort used above is a NumPy function that returns the indices that would sort an array in ascending order, which is exactly what we need for the voting step. The complete Python code follows:
import numpy as np
from math import sqrt
from collections import Counter


def kNN_classify(k, X_train, y_train, x):
    assert 1 <= k <= X_train.shape[0], "k must be valid"
    assert X_train.shape[0] == y_train.shape[0], \
        "the size of X_train must equal to the size of y_train"
    assert X_train.shape[1] == x.shape[0], \
        "the feature number of x must be equal to X_train"

    distances = [sqrt(np.sum((x_train - x) ** 2)) for x_train in X_train]
    nearest = np.argsort(distances)

    topK_y = [y_train[i] for i in nearest[:k]]
    votes = Counter(topK_y)

    return votes.most_common(1)[0][0]
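To sanity-check the function end to end, here is a self-contained run on the article's dataset. The definitions are repeated (with the asserts dropped for brevity) so the snippet executes on its own:

```python
import numpy as np
from math import sqrt
from collections import Counter


def kNN_classify(k, X_train, y_train, x):
    """Plain kNN: compute distances, sort, take top k, majority vote."""
    distances = [sqrt(np.sum((x_train - x) ** 2)) for x_train in X_train]
    nearest = np.argsort(distances)
    topK_y = [y_train[i] for i in nearest[:k]]
    return Counter(topK_y).most_common(1)[0][0]


raw_data_X = [[3.393533211, 2.331273381],
              [3.110073483, 1.781539638],
              [1.343808831, 3.368360954],
              [3.582294042, 4.679179110],
              [2.280362439, 2.866990263],
              [7.423436942, 4.696522875],
              [5.745051997, 3.533989803],
              [9.172168622, 2.511101045],
              [7.792783481, 3.424088941],
              [7.939820817, 0.791637231]]
raw_data_y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

X_train = np.array(raw_data_X)
y_train = np.array(raw_data_y)
x = np.array([8.093607318, 3.365731514])

print(kNN_classify(6, X_train, y_train, x))  # prints 1
```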
We can also wrap this logic in a class:
import numpy as np
from math import sqrt
from collections import Counter


class KNNClassifier:

    def __init__(self, k):
        """Initialize the kNN classifier"""
        assert k >= 1, "k must be valid"
        self.k = k
        self._X_train = None
        self._y_train = None

    def fit(self, X_train, y_train):
        """Train the kNN classifier on the training sets X_train and y_train"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"
        assert self.k <= X_train.shape[0], \
            "the size of X_train must be at least k."

        self._X_train = X_train
        self._y_train = y_train
        return self

    def predict(self, X_predict):
        """Given a dataset X_predict to predict, return a vector of predicted labels"""
        assert self._X_train is not None and self._y_train is not None, \
            "must fit before predict!"
        assert X_predict.shape[1] == self._X_train.shape[1], \
            "the feature number of X_predict must be equal to X_train"

        y_predict = [self._predict(x) for x in X_predict]
        return np.array(y_predict)

    def _predict(self, x):
        """Given a single sample x to predict, return its predicted label"""
        assert x.shape[0] == self._X_train.shape[1], \
            "the feature number of x must be equal to X_train"

        distances = [sqrt(np.sum((x_train - x) ** 2))
                     for x_train in self._X_train]
        nearest = np.argsort(distances)

        topK_y = [self._y_train[i] for i in nearest[:self.k]]
        votes = Counter(topK_y)

        return votes.most_common(1)[0][0]

    def __repr__(self):
        return "KNN(k=%d)" % self.k
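The class follows the familiar fit/predict pattern, and predict accepts a whole matrix of new samples at once. The usage sketch below repeats a condensed version of the class (asserts and docstrings omitted) so it runs standalone, with rounded data values for brevity:

```python
import numpy as np
from math import sqrt
from collections import Counter


class KNNClassifier:
    def __init__(self, k):
        self.k = k
        self._X_train = None
        self._y_train = None

    def fit(self, X_train, y_train):
        # Lazy learner: "training" just stores the data
        self._X_train = X_train
        self._y_train = y_train
        return self

    def _predict(self, x):
        distances = [sqrt(np.sum((xt - x) ** 2)) for xt in self._X_train]
        nearest = np.argsort(distances)
        topK_y = [self._y_train[i] for i in nearest[:self.k]]
        return Counter(topK_y).most_common(1)[0][0]

    def predict(self, X_predict):
        return np.array([self._predict(x) for x in X_predict])


X_train = np.array([[3.39, 2.33], [3.11, 1.78], [1.34, 3.37], [3.58, 4.68],
                    [2.28, 2.87], [7.42, 4.70], [5.75, 3.53], [9.17, 2.51],
                    [7.79, 3.42], [7.94, 0.79]])
y_train = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

clf = KNNClassifier(k=6)
clf.fit(X_train, y_train)

# Two new points: one deep in the class-1 cluster, one in the class-0 cluster
X_new = np.array([[8.09, 3.37], [2.5, 2.5]])
print(clf.predict(X_new))  # prints [1 0]
```

Because kNN stores the entire training set and defers all work to prediction time, fit is trivial here; that is the usual trade-off of a lazy learner.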