K-Nearest Neighbor (KNN), the k-nearest-neighbors algorithm, is one of the simplest classification algorithms in machine learning. To build a better understanding of how it works, this post implements KNN from scratch in Python and then briefly compares the result against scikit-learn's k-nearest-neighbors implementation.
Basic idea of the KNN algorithm
Suppose we have the following dataset containing two groups of points:
dataset = {'black':[ [1,2], [2,3], [3,1] ], 'red':[ [6,5], [7,7], [8,6] ] }
Plotting these gives two clusters of points: black and red. Now add an arbitrary new point, say (3.5, 5.2); KNN's job is to decide which group this new point (drawn in green) belongs to.
The KNN classification rule is extremely simple: using the two-point distance formula from middle-school geometry (the Euclidean distance), compute the distance from the green point to every point in the dataset and see which group it is closer to. K is the number of nearest neighbors to consider: if a majority of those k points are red, we assign the green point to the red group; otherwise, to the black group.
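As a minimal sketch of this rule (using the toy dataset above and np.linalg.norm for the Euclidean distance), the whole classification fits in a few lines:

import numpy as np
from collections import Counter

dataset = {'black': [[1, 2], [2, 3], [3, 1]], 'red': [[6, 5], [7, 7], [8, 6]]}
point = np.array([3.5, 5.2])  # the green point

# Distance from the green point to every point, tagged with that point's group
distances = sorted((np.linalg.norm(point - np.array(p)), group)
                   for group in dataset for p in dataset[group])
votes = [group for _, group in distances[:3]]  # the k = 3 nearest neighbors
print(Counter(votes).most_common(1)[0][0])     # -> 'red' (votes: red, black, red)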
How large should k be? With two groups of data (as above), k should be at least 3: an odd k guarantees a clear majority between the two groups. With three groups, k should be at least 5 to make ties less likely. scikit-learn's default is k = 5. The snippet below shows why an even k is risky.
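With an even k and two groups, the vote can end 2-2; Counter.most_common then simply returns the tied group it encountered first (on modern Python, where tied counts keep insertion order), so the outcome depends on list order rather than on any principled tie-break:

from collections import Counter

votes = ['red', 'black', 'red', 'black']  # k = 4 neighbors, a 2-2 tie
print(Counter(votes).most_common(1))      # -> [('red', 2)]: 'red' wins only because it appears first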
The example above uses two-dimensional data, but the same logic extends unchanged to three or any number of dimensions.
Besides K-Nearest Neighbor there are other neighbor-based classification methods, such as Radius-Based Neighbor.
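For reference, scikit-learn ships the radius-based variant as RadiusNeighborsClassifier: instead of voting over a fixed number of neighbors, it votes over every training point within a fixed radius. A minimal sketch on the toy data above (the radius of 3.0 is an arbitrary choice for illustration):

from sklearn.neighbors import RadiusNeighborsClassifier

X = [[1, 2], [2, 3], [3, 1], [6, 5], [7, 7], [8, 6]]
y = ['black', 'black', 'black', 'red', 'red', 'red']

clf = RadiusNeighborsClassifier(radius=3.0)
clf.fit(X, y)
print(clf.predict([[3.5, 5.2]]))  # majority vote among training points within radius 3.0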
Implementing KNN in Python
# -*- coding:utf-8 -*-
import warnings
from collections import Counter

import numpy as np
from matplotlib import pyplot


# k-Nearest Neighbor algorithm
def k_nearest_neighbors(data, predict, k=5):
    if len(data) >= k:
        warnings.warn("k is set to a value less than the number of groups")

    # Compute the distance from `predict` to every point in every group
    distances = []
    for group in data:
        for features in data[group]:
            # euclidean_distance = np.sqrt(np.sum((np.array(features)-np.array(predict))**2))  # same Euclidean distance, but slower than the next line
            euclidean_distance = np.linalg.norm(np.array(features) - np.array(predict))
            distances.append([euclidean_distance, group])

    # print(sorted(distances))  # uncomment to inspect all [distance, group] pairs
    sorted_distances = [i[1] for i in sorted(distances)]
    top_nearest = sorted_distances[:k]
    # print(top_nearest)  # e.g. ['red', 'black', 'red']

    group_res = Counter(top_nearest).most_common(1)[0][0]
    confidence = Counter(top_nearest).most_common(1)[0][1] * 1.0 / k
    # confidence measures how certain the classification is: votes of (red, red, red)
    # and (red, red, black) both pick the red group, but the former is more confident
    return group_res, confidence


if __name__ == '__main__':
    dataset = {'black': [[1, 2], [2, 3], [3, 1]], 'red': [[6, 5], [7, 7], [8, 6]]}
    new_features = [3.5, 5.2]  # which group does this sample belong to?

    for i in dataset:
        for ii in dataset[i]:
            pyplot.scatter(ii[0], ii[1], s=50, color=i)

    which_group, confidence = k_nearest_neighbors(dataset, new_features, k=3)
    print(which_group, confidence)

    pyplot.scatter(new_features[0], new_features[1], s=100, color=which_group)
    pyplot.show()
Output:
(Figure: the script prints red 0.6666666666666666 and shows the scatter plot, with the new point drawn in red, the group it was assigned to.)
Testing our KNN implementation on real data
Dataset (Breast Cancer Wisconsin, Original): https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Original%29
Dataset fields:
#   Attribute                     Domain
--  ----------------------------  ------------------------------
1.  Sample code number            id  # this column is not useful for classification
2.  Clump Thickness               1 - 10
3.  Uniformity of Cell Size       1 - 10
4.  Uniformity of Cell Shape      1 - 10
5.  Marginal Adhesion             1 - 10
6.  Single Epithelial Cell Size   1 - 10
7.  Bare Nuclei                   1 - 10
8.  Bland Chromatin               1 - 10
9.  Normal Nucleoli               1 - 10
10. Mitoses                       1 - 10
11. Class                         2 for benign, 4 for malignant
Our task is to classify these records with KNN and predict whether a tumor is benign or malignant.
Code:
# -*- coding:utf-8 -*-
import random
import warnings
from collections import Counter

import numpy as np
import pandas as pd


# k-Nearest Neighbor algorithm
def k_nearest_neighbors(data, predict, k=5):
    if len(data) >= k:
        warnings.warn("k is set to a value less than the number of groups")

    # Compute the distance from `predict` to every point in every group
    distances = []
    for group in data:
        for features in data[group]:
            euclidean_distance = np.linalg.norm(np.array(features) - np.array(predict))
            distances.append([euclidean_distance, group])

    sorted_distances = [i[1] for i in sorted(distances)]
    top_nearest = sorted_distances[:k]

    group_res = Counter(top_nearest).most_common(1)[0][0]
    confidence = Counter(top_nearest).most_common(1)[0][1] * 1.0 / k
    return group_res, confidence


if __name__ == '__main__':
    # The raw UCI file has no header row, so name the columns explicitly
    names = ['id', 'clump_thickness', 'cell_size', 'cell_shape', 'adhesion',
             'epithelial_size', 'bare_nuclei', 'chromatin', 'nucleoli',
             'mitoses', 'class']
    df = pd.read_csv('breast-cancer-wisconsin.data', names=names)  # load the data
    # print(df.head())
    print(df.shape)

    # inplace=True modifies the original object in place;
    # inplace=False leaves it untouched and returns a new, modified object
    df.replace('?', np.nan, inplace=True)  # '?' marks missing values (an alternative is an outlier such as -99999)
    df.dropna(inplace=True)  # drop the invalid rows
    print(df.shape)
    df.drop(['id'], axis=1, inplace=True)

    # Split the data into a training set and a test set
    full_data = df.astype(float).values.tolist()  # convert to float, then to a list of rows
    random.shuffle(full_data)

    test_size = 0.2  # use 20% of the data for testing
    train_data = full_data[:-int(test_size * len(full_data))]
    test_data = full_data[-int(test_size * len(full_data)):]
    # print(test_data)

    train_set = {2: [], 4: []}
    test_set = {2: [], 4: []}
    for i in train_data:
        train_set[i[-1]].append(i[:-1])
    for i in test_data:
        test_set[i[-1]].append(i[:-1])

    correct = 0
    total = 0
    for group in test_set:
        for data in test_set[group]:
            # Try different values of k and watch how the accuracy changes; you can also
            # plot accuracy against k with matplotlib to find the best k (see the sketch
            # after the output below)
            res, confidence = k_nearest_neighbors(train_set, data, k=5)
            if group == res:
                correct += 1
            else:
                print(confidence)
            total += 1

    print(correct / total)  # accuracy
    print(k_nearest_neighbors(train_set, [4, 2, 1, 1, 1, 2, 3, 2, 1], k=5))  # predict a single record
Output:
$ python breast_cancer_knn.py
1.0  # confidence on a misclassified sample: 100% confident, yet wrong; something to watch out for
0.6  #
0.6  #
0.9779411764705882  # prediction accuracy
(2, 1.0)  # benign
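As the comment in the code suggests, we can scan several values of k, measure the accuracy of each, and plot the curve to pick a good k. A minimal sketch, assuming the k_nearest_neighbors function and the train_set/test_set dictionaries from the script above are in scope (accuracy_for_k is a hypothetical helper, not part of the original code):

from matplotlib import pyplot

def accuracy_for_k(train_set, test_set, k):
    # Fraction of test samples whose predicted group matches their true group
    correct = total = 0
    for group in test_set:
        for data in test_set[group]:
            res, _ = k_nearest_neighbors(train_set, data, k=k)
            correct += (res == group)
            total += 1
    return correct / total

ks = range(3, 26, 2)  # odd values of k to avoid ties between the two classes
accuracies = [accuracy_for_k(train_set, test_set, k) for k in ks]
pyplot.plot(ks, accuracies, marker='o')
pyplot.xlabel('k')
pyplot.ylabel('accuracy')
pyplot.show()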
Using scikit-learn's k-nearest neighbors algorithm
# -*- coding:utf-8 -*-
import numpy as np
import pandas as pd
# cross_validation is deprecated; use model_selection instead
from sklearn import model_selection, neighbors

# The raw UCI file has no header row, so name the columns explicitly
names = ['id', 'clump_thickness', 'cell_size', 'cell_shape', 'adhesion',
         'epithelial_size', 'bare_nuclei', 'chromatin', 'nucleoli',
         'mitoses', 'class']
df = pd.read_csv('breast-cancer-wisconsin.data', names=names)
# print(df.head())
# print(df.shape)

df.replace('?', np.nan, inplace=True)  # '?' marks missing values
df.dropna(inplace=True)
# print(df.shape)
df.drop(['id'], axis=1, inplace=True)

X = np.array(df.drop(['class'], axis=1))
Y = np.array(df['class'])

X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size=0.2)

clf = neighbors.KNeighborsClassifier()  # n_neighbors defaults to 5
clf.fit(X_train, Y_train)

accuracy = clf.score(X_test, Y_test)
print(accuracy)

sample = np.array([4, 2, 1, 1, 1, 2, 3, 2, 1])
print(sample.reshape(1, -1))  # predict expects a 2D array, hence the reshape
print(clf.predict(sample.reshape(1, -1)))
Output:
$ python breast_cancer.py
0.970802919708  # prediction accuracy
[2]  # benign
scikit-learn's implementation follows exactly the same principle as the one we wrote above; it is simply faster and supports many more parameters.
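For example, KNeighborsClassifier accepts parameters such as n_neighbors and weights (with weights='distance', closer neighbors count more), and scikit-learn's GridSearchCV can search over them with cross-validation. A minimal sketch, assuming X_train and Y_train from the script above:

from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

param_grid = {'n_neighbors': [3, 5, 7, 9, 11],
              'weights': ['uniform', 'distance']}
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
search.fit(X_train, Y_train)
print(search.best_params_, search.best_score_)  # best parameters and their cross-validated accuracy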
Source: http://blog.topspeedsnail.com/archives/10287#more-10287