I. Theory
1. Introduction
1) K-nearest-neighbor (KNN) classification: Cover and Hart proposed the original nearest-neighbor rule in 1967;
2) a classification algorithm;
3) instance-based learning, also called lazy learning: there is no explicit training step, the stored instances themselves are the model.
2. Example
3. The algorithm in detail
1) Steps
To classify an unknown instance, all instances with known class labels serve as reference points:
choose the parameter K;
compute the distance from the unknown instance to every known instance;
select the K nearest known instances;
by majority vote, assign the unknown instance to the class held by most of its K nearest neighbors.
2) Implementation details
Choosing K: typically tuned by trying several values and keeping the one that performs best on held-out data.
Choosing the distance measure: Euclidean distance is the usual default.
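The steps above can be sketched in a few lines of Python (a minimal illustration, not the implementations that follow; assumes Python 3.8+ for math.dist):

```python
import math
from collections import Counter

def knn_classify(unknown, known, labels, k=3):
    """Classify `unknown` by majority vote among its k nearest known instances."""
    # distance from the unknown instance to every known instance
    dists = [math.dist(unknown, x) for x in known]
    # indices of the k smallest distances
    nearest = sorted(range(len(known)), key=lambda i: dists[i])[:k]
    # majority vote over the neighbors' class labels
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

known = [[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]]
labels = ['A', 'A', 'B', 'B']
print(knn_classify([0.1, 0.1], known, labels, k=3))  # prints B
```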
4. Strengths and weaknesses
1) Strengths: simple, easy to understand, easy to implement; with a suitable choice of K it can be robust to noisy data.
2) Weaknesses: all known instances must be stored, which takes a lot of space; classifying each new instance is computationally expensive; when the class distribution is imbalanced, new samples tend to be assigned to the dominant class, pulling predictions away from the true classes.
5. An improved version
Take distance into account and weight each neighbor's vote by it, for example with weight 1/d (d: the distance to that neighbor).
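The distance-weighted voting of the improved version can be sketched as follows (illustrative names; each of the K neighbors votes with weight 1/d instead of 1, and a small eps guards against division by zero when d = 0):

```python
import math
from collections import defaultdict

def weighted_knn(unknown, known, labels, k=3, eps=1e-9):
    """KNN where each of the k nearest neighbors votes with weight 1/d."""
    dists = sorted((math.dist(unknown, x), y) for x, y in zip(known, labels))
    votes = defaultdict(float)
    for d, label in dists[:k]:
        votes[label] += 1.0 / (d + eps)
    return max(votes, key=votes.get)

# One very close 'B' against two distant 'A's: plain majority voting with
# k=3 would answer 'A', but 1/d weighting lets the close neighbor dominate.
known = [[0.05, 0.0], [1.0, 0.0], [0.0, 1.0]]
labels = ['B', 'A', 'A']
print(weighted_knn([0.0, 0.0], known, labels, k=3))  # prints B
```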
II. Practice
1. Implementation using scikit-learn's built-in KNN module
1) Code
from sklearn import neighbors
from sklearn import datasets

# create the KNN classifier
knn = neighbors.KNeighborsClassifier()
# load the iris dataset
iris = datasets.load_iris()
print(iris)
# fit the model
knn.fit(iris.data, iris.target)
# predict the class of a new instance
predictedLabel = knn.predict([[0.1, 0.2, 0.3, 0.4]])
# print the prediction
print(predictedLabel)
2) Result
{'data': array([[ 5.1,  3.5,  1.4,  0.2],
       [ 4.9,  3. ,  1.4,  0.2],
       [ 4.7,  3.2,  1.3,  0.2],
       ...
       [ 5.9,  3. ,  5.1,  1.8]]),
 'target': array([0, 0, 0, ..., 2, 2, 2]),
 'target_names': array(['setosa', 'versicolor', 'virginica'], dtype='<U10'),
 'DESCR': 'Iris Plants Database\n==================== ...',
 'feature_names': ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']}
[0]

(The dataset dump from print(iris) is abridged above; the final line [0] is the prediction: class index 0, i.e. setosa.)
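Note that the model above is fitted on the full dataset and then queried on a hand-made input, so the result says little about generalization. A more informative check holds out a test set; a minimal sketch (the test_size, random_state, and n_neighbors values are arbitrary choices for illustration):

```python
from sklearn import datasets, neighbors
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
# hold out a third of the data for testing
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.33, random_state=42)

knn = neighbors.KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
# fraction of held-out instances classified correctly
print("Test accuracy:", knn.score(X_test, y_test))
```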
2. A custom KNN implementation
1) Data preparation (E:\MachineLearning-data\iris.data.txt)
A few rows of the data look like this (four numeric features followed by the class label):
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
2) Code
'''
KNN: a from-scratch implementation
'''
import csv
import random
import math
import operator

# 1. Load the dataset and split it into training and test sets
def loadDataset(filename, split, trainingSet, testSet):
    with open(filename, "r") as csvfile:
        lines = csv.reader(csvfile)
        # materialize the CSV reader as a list of rows
        dataset = list(lines)
        # x runs over the data rows (the -1 skips the file's final line)
        for x in range(len(dataset) - 1):
            # y runs over the four numeric feature columns 0-3
            for y in range(4):
                # convert the feature strings to floats
                dataset[x][y] = float(dataset[x][y])
            # random.random() returns a value in [0, 1): put the row in the
            # training set with probability `split`, else in the test set
            if random.random() < split:
                trainingSet.append(dataset[x])
            else:
                testSet.append(dataset[x])
# 2. Euclidean distance between two instances over the first `length` features
def euclideanDistance(instance1, instance2, length):
    distance = 0
    for x in range(length):
        distance += pow(instance1[x] - instance2[x], 2)
    return math.sqrt(distance)
# 3. Collect the k training instances nearest to the test instance
def getNeighbors(trainingSet, testInstance, k):
    distances = []
    # the last column is the class label, so exclude it from the distance
    length = len(testInstance) - 1
    for x in range(len(trainingSet)):
        dist = euclideanDistance(testInstance, trainingSet[x], length)
        distances.append((trainingSet[x], dist))
    # sort ascending by distance
    distances.sort(key=operator.itemgetter(1))
    neighbors = []
    for x in range(k):
        neighbors.append(distances[x][0])
    return neighbors
# 4. Classify by majority vote among the neighbors' class labels
def getResponse(neighbors):
    classVotes = {}
    for x in range(len(neighbors)):
        # the class label is the last element of each instance
        response = neighbors[x][-1]
        if response in classVotes:
            classVotes[response] += 1
        else:
            classVotes[response] = 1
    sortedVotes = sorted(classVotes.items(), key=operator.itemgetter(1), reverse=True)
    return sortedVotes[0][0]
# 5. Accuracy: percentage of test instances classified correctly
def getAccuracy(testSet, predictions):
    correct = 0
    for x in range(len(testSet)):
        if testSet[x][-1] == predictions[x]:
            correct += 1
    return (correct / float(len(testSet))) * 100.0
# 6. Main
def main():
    trainingSet = []
    testSet = []
    # roughly two thirds for training, one third for testing
    split = 0.67
    # the r prefix makes this a raw string, so backslashes in the
    # Windows path are not treated as escape characters
    loadDataset(r"E:\MachineLearning-data\iris.data.txt", split, trainingSet, testSet)
    print("Train set:", repr(len(trainingSet)))
    print("Test set:", repr(len(testSet)))
    predictions = []
    k = 3
    for x in range(len(testSet)):
        neighbors = getNeighbors(trainingSet, testSet[x], k)
        result = getResponse(neighbors)
        predictions.append(result)
        print(">predicted=" + repr(result) + ", actual=" + repr(testSet[x][-1]))
    accuracy = getAccuracy(testSet, predictions)
    print("Accuracy:" + repr(accuracy) + "%")

main()
3) Result
Train set: 101
Test set: 48
>predicted='Iris-setosa', actual='Iris-setosa'
>predicted='Iris-setosa', actual='Iris-setosa'
...
>predicted='Iris-virginica', actual='Iris-versicolor'
...
>predicted='Iris-virginica', actual='Iris-virginica'
Accuracy:95.83333333333334%

A second run (the train/test split is random, so the numbers differ from run to run):

Train set: 102
Test set: 47
...
Accuracy:95.74468085106383%
4) Summary
When implementing an algorithm by hand, the details matter: small slips in how the code is written (indentation, variable names, argument order) are easy to make and easy to miss.