k-Nearest Neighbor Algorithm
k-nearest neighbor (k-Nearest Neighbor, kNN) is a supervised learning method.
Advantages: high accuracy, insensitive to outliers, no assumptions about the input data
Disadvantages: high computational complexity, high space complexity
How It Works
Given a test sample, find the k training samples in the training set that are closest to it under some distance metric, then make a prediction based on the information carried by these k neighbors.
The general procedure is:
- Compute the distance between every point in the labeled dataset and the query point
- Sort the points in order of increasing distance
- Select the k points closest to the query point
- Count the frequency of each class label among these k points
- Return the most frequent class among the k points as the predicted class of the query point
For classification, the most common class label among the k samples is taken as the prediction (majority voting).
For regression, the average of the real-valued outputs of the k samples is taken as the prediction.
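The steps above can be sketched in a few lines of NumPy. The tiny dataset, the names `X_train`, `y_train` and `query`, and the choice k = 3 are all illustrative:

```python
import numpy as np
from collections import Counter

# Hand-made toy data: two 'A' points near (1, 1), two 'B' points near (0, 0)
X_train = np.array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
y_train = np.array(['A', 'A', 'B', 'B'])
query = np.array([0.1, 0.1])
k = 3

# Steps 1-3: compute distances, sort, take the k nearest indices
distances = np.sqrt(np.sum((X_train - query) ** 2, axis=1))
nearest = np.argsort(distances)[:k]

# Classification: majority vote among the k nearest neighbors
label = Counter(y_train[nearest]).most_common(1)[0][0]
print(label)  # 'B'

# Regression variant: average a real-valued target over the same neighbors
y_reg = np.array([2.0, 2.2, 0.1, 0.0])
prediction = y_reg[nearest].mean()  # average of the 3 nearest targets
```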
Choosing k
Different values of k can produce significantly different classification results.
If k is too small, the feature space is partitioned into more subregions and the overall model becomes more complex; the prediction is highly sensitive to the nearest neighbor points, so noise in them directly causes wrong predictions, and the model easily overfits.
If k is too large, the neighborhood error grows: points far from the query also influence the prediction, producing a large bias, and the model easily underfits.
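A toy illustration (with made-up points) of this sensitivity: a single nearby outlier decides the vote at k = 1, while the surrounding majority wins at k = 5:

```python
import numpy as np
from collections import Counter

# Four 'B' points surround the query; one 'A' outlier sits very close to it
X_train = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3],
                    [0.3, 0.3], [2.0, 2.0], [0.12, 0.1]])
y_train = np.array(['B', 'B', 'B', 'B', 'A', 'A'])
query = np.array([0.1, 0.1])

def knn_predict(k):
    d = np.sqrt(((X_train - query) ** 2).sum(axis=1))
    top = y_train[np.argsort(d)[:k]]
    return Counter(top).most_common(1)[0][0]

print(knn_predict(1))  # 'A' -- the single nearest point is the 'A' outlier
print(knn_predict(5))  # 'B' -- the broader neighborhood outvotes it 4 to 1
```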
Distance Metrics
Using different distance measures can also lead to significantly different classification results.
For a function $\operatorname{dist}(\cdot, \cdot)$ to be a distance metric, it must satisfy some basic properties:
Non-negativity: $\operatorname{dist}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right) \geqslant 0$
Identity: $\operatorname{dist}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right) = 0$ if and only if $\boldsymbol{x}_{i} = \boldsymbol{x}_{j}$
Symmetry: $\operatorname{dist}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right) = \operatorname{dist}\left(\boldsymbol{x}_{j}, \boldsymbol{x}_{i}\right)$
Triangle inequality: $\operatorname{dist}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right) \leqslant \operatorname{dist}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{k}\right) + \operatorname{dist}\left(\boldsymbol{x}_{k}, \boldsymbol{x}_{j}\right)$
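As a sanity check, the four properties can be verified numerically for the Euclidean distance on a few made-up points:

```python
import numpy as np

# Euclidean distance between two vectors
def dist(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

# Arbitrary sample points for the check
xi = np.array([1.0, 2.0])
xj = np.array([4.0, 6.0])
xk = np.array([0.0, -1.0])

assert dist(xi, xj) >= 0                            # non-negativity
assert dist(xi, xi) == 0                            # identity
assert dist(xi, xj) == dist(xj, xi)                 # symmetry
assert dist(xi, xj) <= dist(xi, xk) + dist(xk, xj)  # triangle inequality
print(dist(xi, xj))  # 5.0 (the classic 3-4-5 triangle)
```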
Given samples $\boldsymbol{x}_{i}=\left(x_{i 1} ; x_{i 2} ; \ldots ; x_{i n}\right)$ and $\boldsymbol{x}_{j}=\left(x_{j 1} ; x_{j 2} ; \ldots ; x_{j n}\right)$, the most commonly used metric is the Minkowski distance:
$$\operatorname{dist}_{\mathrm{mk}}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right)=\left(\sum_{u=1}^{n}\left|x_{i u}-x_{j u}\right|^{p}\right)^{\frac{1}{p}}$$
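A direct translation of this formula into NumPy might look as follows; the feature vectors and the choices of p are illustrative:

```python
import numpy as np

# Minkowski distance of order p, as defined above
def minkowski(xi, xj, p):
    return np.sum(np.abs(xi - xj) ** p) ** (1.0 / p)

xi = np.array([0.0, 0.0])
xj = np.array([3.0, 4.0])
print(minkowski(xi, xj, 1))  # 7.0 (p=1)
print(minkowski(xi, xj, 2))  # 5.0 (p=2)
```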
When $p=2$, the Minkowski distance reduces to the Euclidean distance: the straight-line (true) distance between two points in Euclidean space, i.e. the natural length of their difference vector.
$$\operatorname{dist}_{\mathrm{ed}}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right)=\left\|\boldsymbol{x}_{i}-\boldsymbol{x}_{j}\right\|_{2}=\sqrt{\sum_{u=1}^{n}\left|x_{i u}-x_{j u}\right|^{2}}$$
When $p=1$, the Minkowski distance reduces to the Manhattan distance, also called city-block distance: the sum of the lengths of the projections onto the coordinate axes of the segment joining the two points in a fixed Cartesian coordinate system.
$$\operatorname{dist}_{\operatorname{man}}\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right)=\left\|\boldsymbol{x}_{i}-\boldsymbol{x}_{j}\right\|_{1}=\sum_{u=1}^{n}\left|x_{i u}-x_{j u}\right|$$
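Both special cases can be cross-checked against the corresponding vector norms of $\boldsymbol{x}_i - \boldsymbol{x}_j$ in NumPy; the vectors here are made up for illustration:

```python
import numpy as np

xi = np.array([1.0, 2.0, 3.0])
xj = np.array([4.0, 0.0, 3.0])

# p=1 and p=2 written out from the definitions above
manhattan = np.sum(np.abs(xi - xj))            # L1 norm of the difference
euclidean = np.sqrt(np.sum((xi - xj) ** 2))    # L2 norm of the difference

# np.linalg.norm computes the same quantities via its ord parameter
print(np.isclose(manhattan, np.linalg.norm(xi - xj, ord=1)))  # True
print(np.isclose(euclidean, np.linalg.norm(xi - xj, ord=2)))  # True
```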
Code Implementation
From the IMOOC machine learning course
import numpy as np
from math import sqrt
from collections import Counter


class KNNClassifier:

    def __init__(self, k):
        """Initialize the kNN classifier"""
        assert k >= 1, "k must be valid"
        self.k = k
        self._X_train = None
        self._y_train = None

    def fit(self, X_train, y_train):
        """Train the kNN classifier on the datasets X_train and y_train"""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"
        assert self.k <= X_train.shape[0], \
            "the size of X_train must be at least k."
        self._X_train = X_train
        self._y_train = y_train
        return self

    def predict(self, X_predict):
        """Given a dataset X_predict, return the vector of predictions for X_predict"""
        assert self._X_train is not None and self._y_train is not None, \
            "must fit before predict!"
        assert X_predict.shape[1] == self._X_train.shape[1], \
            "the feature number of X_predict must be equal to X_train"
        y_predict = [self._predict(x) for x in X_predict]
        return np.array(y_predict)

    def _predict(self, x):
        """Given a single sample x, return the predicted label for x"""
        assert x.shape[0] == self._X_train.shape[1], \
            "the feature number of x must be equal to X_train"
        # Euclidean distance from x to every training sample
        distances = [sqrt(np.sum((x_train - x) ** 2))
                     for x_train in self._X_train]
        nearest = np.argsort(distances)
        # Majority vote among the k nearest neighbors
        topK_y = [self._y_train[i] for i in nearest[:self.k]]
        votes = Counter(topK_y)
        return votes.most_common(1)[0][0]

    def score(self, X_test, y_test):
        """Measure the accuracy of the current model on the test sets X_test and y_test"""
        y_predict = self.predict(X_test)
        # accuracy_score is a module-level function, not a method
        return accuracy_score(y_test, y_predict)

    def __repr__(self):
        return "KNN(k=%d)" % self.k


def accuracy_score(y_true, y_predict):
    """Compute the accuracy between y_true and y_predict"""
    assert len(y_true) == len(y_predict), \
        "the size of y_true must be equal to the size of y_predict"
    return np.sum(y_true == y_predict) / len(y_true)
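A usage sketch of the classifier above. To keep the example self-contained, a compact functional equivalent of the same distance-and-vote logic is inlined here; the dataset and train/test split are made up for illustration:

```python
import numpy as np
from collections import Counter

# Same logic as KNNClassifier._predict, as a standalone function
def knn_predict(X_train, y_train, x, k=3):
    distances = np.sqrt(np.sum((X_train - x) ** 2, axis=1))
    top_k = y_train[np.argsort(distances)[:k]]
    return Counter(top_k).most_common(1)[0][0]

# Toy data: class 1 clustered near (1, 1), class 0 near (0, 0)
X_train = np.array([[1.0, 1.1], [1.0, 1.0], [0.9, 1.2],
                    [0.0, 0.0], [0.0, 0.1], [0.1, 0.0]])
y_train = np.array([1, 1, 1, 0, 0, 0])

X_test = np.array([[0.9, 0.9], [0.1, 0.2]])
y_test = np.array([1, 0])

y_pred = np.array([knn_predict(X_train, y_train, x) for x in X_test])
accuracy = np.sum(y_pred == y_test) / len(y_test)
print(y_pred, accuracy)  # [1 0] 1.0
```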
References
Machine Learning, Zhou Zhihua
Machine Learning in Action, Peter Harrington