KNN
KNN classifies using labeled data: compute the distance from x to every sample, take the k samples closest to x (k is a hyperparameter), and assign x to whichever class holds the majority among those neighbors.
k is usually chosen to be odd so that the majority vote cannot end in a tie.
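The procedure above can be sketched from scratch with numpy; the two toy clusters below are made-up data for illustration:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify one point x by majority vote among its k nearest neighbors."""
    # Euclidean distance from x to every training sample
    dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    # Indices of the k closest samples
    nearest = np.argsort(dists)[:k]
    # Majority class among those neighbors
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Two well-separated toy clusters
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
              [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([1.1, 0.9]), k=3))  # → 0
print(knn_predict(X, y, np.array([5.1, 5.0]), k=3))  # → 1
```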
Euclidean distance:

d = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2 + \dots + (x_n - y_n)^2}
Other distance metrics: cosine similarity, correlation, Manhattan distance, etc.
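For reference, these metrics can be computed directly with numpy; the two vectors are arbitrary examples (note b = 2a, so their cosine similarity is exactly 1):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])

# Euclidean (L2) distance
euclidean = np.linalg.norm(a - b)
# Manhattan (L1) distance: sum of absolute coordinate differences
manhattan = np.abs(a - b).sum()
# Cosine similarity: 1 means the vectors point in the same direction
cosine = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(euclidean)  # sqrt(1 + 4 + 9) ≈ 3.742
print(manhattan)  # 1 + 2 + 3 = 6.0
print(cosine)     # b = 2a, so exactly 1.0
```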
Drawbacks
- High computational cost: classifying an unknown point requires computing its distance to every training sample.
- When the class distribution is imbalanced (one class has far more samples than another), an unknown point is easily misclassified toward the majority class.
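One common mitigation for the imbalance problem is distance-weighted voting, where closer neighbors get larger votes; scikit-learn exposes this through the `weights` parameter of `KNeighborsClassifier`. A minimal sketch on a deliberately imbalanced toy set (the data is made up for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Imbalanced toy data: 8 samples of class 0 near the origin,
# only 3 samples of class 1 clustered near (3, 3)
X = np.array([[0.0, 0.0], [0.5, 0.2], [-0.3, 0.4], [0.2, -0.5],
              [0.7, 0.1], [-0.6, -0.2], [0.1, 0.6], [0.4, 0.4],
              [3.0, 3.1], [3.1, 2.9], [2.9, 3.0]])
y = np.array([0] * 8 + [1] * 3)

query = np.array([[3.0, 3.0]])  # sits inside the minority cluster

# Plain majority vote: with k=7, the 4 distant class-0 neighbors
# outvote the 3 very close class-1 neighbors
uniform = KNeighborsClassifier(n_neighbors=7, weights='uniform').fit(X, y)
# Weighting votes by 1/distance lets the close minority samples win
weighted = KNeighborsClassifier(n_neighbors=7, weights='distance').fit(X, y)

print(uniform.predict(query))   # → [0]  (swamped by the majority class)
print(weighted.predict(query))  # → [1]  (correct)
```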
Example
Data visualization
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets
iris = datasets.load_iris()
print('feature:\n', iris.data[:5])
print('target:\n', iris.target)
# output
feature:
[[5.1 3.5 1.4 0.2]
[4.9 3. 1.4 0.2]
[4.7 3.2 1.3 0.2]
[4.6 3.1 1.5 0.2]
[5. 3.6 1.4 0.2]]
target:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
# Plot the three iris species using only sepal length and sepal width
# plt.figure(figsize=(16,5))
# ax = plt.subplot(122)
# ax1 = plt.subplot(121)
# ax1.scatter(iris.data[:50,0], iris.data[:50,1], alpha=.6, label='0')
# ax1.scatter(iris.data[50:100,0], iris.data[50:100,1], alpha=.6, label='1')
# ax1.scatter(iris.data[100:150,0], iris.data[100:150,1], alpha=.6, label='2')
# ax1.set_xlabel('sepal length(cm)')
# ax1.set_ylabel('sepal width(cm)')
# ax1.legend();
# Plot the three iris species using only petal length and petal width
ax2 = plt.subplot(111)
ax2.scatter(iris.data[:50,2], iris.data[:50,3], alpha=.6, label='0')
ax2.scatter(iris.data[50:100,2], iris.data[50:100,3], alpha=.6, label='1')
ax2.scatter(iris.data[100:150,2], iris.data[100:150,3], alpha=.6, label='2')
ax2.set_xlabel('petal length(cm)')
ax2.set_ylabel('petal width(cm)')
ax2.legend();
sklearn implementation
# Load the data and split off a test set
> from sklearn.model_selection import train_test_split
> from sklearn.neighbors import KNeighborsClassifier
> iris = datasets.load_iris()
> X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)
> print('feature:\n', X_train[:5])
> print('target:\n', y_train[:10])
feature:
[[5.5 2.4 3.7 1. ]
[6.3 2.8 5.1 1.5]
[6.4 3.1 5.5 1.8]
[6.6 3. 4.4 1.4]
[7.2 3.6 6.1 2.5]]
target:
[1 2 2 1 2 1 2 1 0 2]
> neigh = KNeighborsClassifier(n_neighbors=5)
> neigh.fit(X_train, y_train)
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=None, n_neighbors=5, p=2,
weights='uniform')
# Predict on the test set
> neigh.predict(X_test)
array([1, 0, 2, 1, 1, 0, 1, 2, 1, 1, 2, 0, 0, 0, 0, 1, 2, 1, 1, 2, 0, 2,
0, 2, 2, 2, 2, 2, 0, 0, 0, 0, 1, 0, 0, 2, 1, 0, 0, 0, 2, 1, 1, 0,
0])
# Accuracy on the test set
> neigh.score(X_test, y_test)
1.0
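A perfect score on one 30% hold-out split says little about how to choose k; cross-validation gives a more robust estimate. A sketch using `cross_val_score` (the candidate range 1–15 is an arbitrary choice for illustration):

```python
import numpy as np
from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

iris = datasets.load_iris()

# Mean 5-fold cross-validated accuracy for each candidate k
ks = list(range(1, 16))
scores = [cross_val_score(KNeighborsClassifier(n_neighbors=k),
                          iris.data, iris.target, cv=5).mean()
          for k in ks]

best_k = ks[int(np.argmax(scores))]
print(best_k, max(scores))
```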