Introduction
In the figure below, the green circle must be assigned to one of two classes: the red triangles or the blue squares. If K=3, the red triangles make up 2/3 of its nearest neighbors, so the green circle is assigned to the red-triangle class; if K=5, the blue squares make up 3/5 of the neighbors, so the green circle is assigned to the blue-square class.
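The voting logic described above can be sketched in a few lines of Python (the neighbor labels below are hypothetical, mirroring the figure):

```python
from collections import Counter

# hypothetical neighbor labels, ordered from nearest to farthest,
# matching the figure: 2 triangles among the 3 nearest, 3 squares among the 5 nearest
neighbor_labels = ['triangle', 'square', 'triangle', 'square', 'square']

for k in (3, 5):
    votes = Counter(neighbor_labels[:k])
    print(k, '->', votes.most_common(1)[0][0])
# 3 -> triangle
# 5 -> square
```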
Algorithm Steps
1. Compute the distance between the point to be classified and every point in the training set.
2. Sort the distances and take the K nearest training points.
3. Count the labels among those K neighbors and assign the majority label to the new point.
Code Implementation
import numpy as np
from math import sqrt
raw_data_x=[
[3.393533211, 2.331273381],
[3.110073483, 1.781539638],
[1.343808831, 3.368360954],
[3.582294042, 4.679179110],
[2.280362439, 2.866990263],
[7.423436942, 4.696522875],
[5.745051997, 3.533989803],
[9.172168622, 2.511101045],
[7.792783481, 3.424088941],
[7.939820817, 0.791637231]
]
raw_data_y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
x_train=np.array(raw_data_x)
y_train=np.array(raw_data_y)
x=np.array([8.093607318,3.365731514])
# Euclidean distance from x to every training point
distances = [sqrt(np.sum((i - x) ** 2)) for i in x_train]
# indices of the training points, sorted by increasing distance
nearest = np.argsort(distances)
k = 6
# labels of the k nearest points
topK_y = [y_train[i] for i in nearest[:k]]
from collections import Counter
# tally the votes among the k nearest labels
votes=Counter(topK_y)
predict_y=votes.most_common(1)[0][0]
print(predict_y)
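The steps above can be collected into a small reusable helper (a sketch; the `knn_predict` name is my own, not from any library):

```python
import numpy as np
from collections import Counter

def knn_predict(x_train, y_train, x, k=6):
    """Majority-vote prediction for a single sample x."""
    # Euclidean distance from x to every training point
    distances = np.sqrt(np.sum((x_train - x) ** 2, axis=1))
    # labels of the k nearest training points
    top_k = y_train[np.argsort(distances)[:k]]
    return Counter(top_k).most_common(1)[0][0]
```

Calling `knn_predict(x_train, y_train, x)` with the arrays defined above reproduces the inline prediction.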
sklearn Implementation
The KNeighborsClassifier class
Use KNeighborsClassifier to create a k-nearest-neighbors classifier:
sklearn.neighbors.KNeighborsClassifier(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30,
p=2, metric='minkowski', metric_params=None, n_jobs=None, **kwargs)
Parameter notes:
1. n_neighbors
The number of neighbors to use; the default is 5.
2. weights
The weight function; the default is 'uniform'.
uniform: every neighbor carries equal weight;
distance: neighbors closer to the query point get higher weight (the inverse of their distance);
callable: a user-defined function that computes each neighbor's weight.
3. algorithm
auto: choose the most appropriate algorithm based on the training data;
ball_tree: use a BallTree;
kd_tree: use a KDTree;
brute: use a brute-force search.
4. leaf_size
Passed to BallTree or KDTree; it controls the leaf size of the constructed tree, which affects both the speed of building and querying the tree and the memory it requires. The best value depends on the data; the default is 30.
5. p, metric, metric_params
p sets the power parameter of the Minkowski distance: p=1 is equivalent to the Manhattan distance, p=2 to the Euclidean distance, and other values of p give the general Minkowski distance.
metric: the distance metric to use.
metric_params: extra arguments passed to the metric function.
6. n_jobs
The number of parallel jobs used for the neighbor search. The default is None (i.e. 1); setting -1 uses all CPU cores.
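The role of p can be illustrated with a plain-Python version of the Minkowski distance (a hypothetical helper for illustration, not part of scikit-learn):

```python
def minkowski(a, b, p):
    # Minkowski distance: (sum |a_i - b_i|^p) ** (1/p)
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

a, b = (0, 0), (3, 4)
print(minkowski(a, b, 1))  # 7.0 -> Manhattan distance
print(minkowski(a, b, 2))  # 5.0 -> Euclidean distance
```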
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
# data set
raw_data_x=[
[3.393533211, 2.331273381],
[3.110073483, 1.781539638],
[1.343808831, 3.368360954],
[3.582294042, 4.679179110],
[2.280362439, 2.866990263],
[7.423436942, 4.696522875],
[5.745051997, 3.533989803],
[9.172168622, 2.511101045],
[7.792783481, 3.424088941],
[7.939820817, 0.791637231]
]
raw_data_y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
new_data=[8.093607318,3.365731514]
# prepare the data
x_train=np.array(raw_data_x)
y_train=np.array(raw_data_y)
x = np.array(new_data)
# predict() expects a 2-D array of samples, so reshape the single sample
x_predict = x.reshape(1, -1)
# classify using the 6 nearest points (k=6)
knn_clf = KNeighborsClassifier(n_neighbors=6)
# fit the classifier on the training data
knn_clf.fit(x_train, y_train)
# predict the label of the new point
y_predict = knn_clf.predict(x_predict)
# prediction result
print(y_predict)
Iris Example
from sklearn import datasets
from sklearn.model_selection import train_test_split
# the kNN classifier
from sklearn.neighbors import KNeighborsClassifier
# accuracy_score computes the prediction accuracy
from sklearn.metrics import accuracy_score
iris=datasets.load_iris()
x=iris.data
y=iris.target
# split the data into training and test sets
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=666)
# create the classifier with k=3
knn_clf = KNeighborsClassifier(n_neighbors=3)
# fit the model on the training data
knn_clf.fit(x_train, y_train)
# predict labels for the test set
y_predict = knn_clf.predict(x_test)
# evaluate: compare the true and predicted labels; returns the accuracy
score = accuracy_score(y_test, y_predict)
print(score)
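For reference, accuracy_score with default arguments is simply the fraction of matching labels, so the same number can be computed with NumPy directly (a small illustrative sketch with made-up labels):

```python
import numpy as np

y_true = np.array([0, 1, 2, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0])
# fraction of positions where the prediction equals the true label
print(np.mean(y_true == y_pred))  # 0.8
```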
Choosing the Best K (Number of Neighbors)
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
# load the Iris data set
iris = load_iris()
x = iris.data
y = iris.target
k_range = range(1, 31)
k_error = []
# try k = 1 to 30 and record the cross-validated error for each
for k in k_range:
    knn = KNeighborsClassifier(n_neighbors=k)
    # cv=6 means 6-fold cross-validation: each fold uses a 5:1 train/test split
    scores = cross_val_score(knn, x, y, cv=6, scoring='accuracy')
    k_error.append(1 - scores.mean())
# plot the error as a function of k
plt.plot(k_range, k_error)
plt.xlabel('Value of K for KNN')
plt.ylabel('Error')
plt.show()
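Rather than reading the best k off the plot, it can also be extracted programmatically; a sketch with a hypothetical error list standing in for the k_error computed above:

```python
# hypothetical error rates for k = 1..5, standing in for k_error above
ks = range(1, 6)
errs = [0.08, 0.04, 0.05, 0.03, 0.07]
# pick the k with the smallest cross-validated error
best_k, best_err = min(zip(ks, errs), key=lambda t: t[1])
print(best_k)  # 4
```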