The concept of kNN
kNN is one of the simpler supervised learning methods. Given a new, unlabeled input, it compares the new sample's features against those of every sample in the training set, selects the k closest samples, and derives a prediction from those k neighbors. For classification the decision rule is typically majority voting or distance-weighted voting; for regression, averaging the neighbors' target values.
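The three decision rules just mentioned can be sketched in a few lines each (the function names here are my own, not from any library):

```python
from collections import Counter

def majority_vote(neighbor_labels):
    """Classification: the most frequent label among the k neighbors wins."""
    return Counter(neighbor_labels).most_common(1)[0][0]

def weighted_vote(neighbor_labels, distances):
    """Distance-weighted voting: closer neighbors get larger weights (1/d)."""
    scores = {}
    for label, d in zip(neighbor_labels, distances):
        scores[label] = scores.get(label, 0.0) + 1.0 / (d + 1e-12)
    return max(scores, key=scores.get)

def knn_regress(neighbor_values):
    """Regression: average the k neighbors' target values."""
    return sum(neighbor_values) / len(neighbor_values)
```

Note how weighted voting can overturn a plain majority: two distant neighbors of one class can lose to a single very close neighbor of another.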
Experiment
This experiment uses the simplest measure of similarity between feature vectors, the Euclidean distance. The dataset is the "Optical Recognition of Handwritten Digits" dataset.
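As a quick illustration, the vectorized Euclidean distance between one query point and every row of a dataset can be computed with NumPy broadcasting, which is the same computation classify0 below performs with tile():

```python
import numpy as np

query = np.array([0.0, 0.0])
data = np.array([[3.0, 4.0],
                 [0.0, 1.0]])
# Subtract, square, sum along each row, then take the square root
dists = np.sqrt(((data - query) ** 2).sum(axis=1))
# dists is [5.0, 1.0]
```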
# -*- coding: utf-8 -*-
"""
kNN.py
Created on Thu May 04 12:43:21 2017
@author: holy
"""
from numpy import *
import operator
from os import listdir
def classify0(inX, dataSet, labels, k):
    # Vectorized Euclidean distance from inX to every training sample
    dataSetSize = dataSet.shape[0]
    diffMat = tile(inX, (dataSetSize, 1)) - dataSet
    sqDiffMat = diffMat ** 2
    sqDistances = sqDiffMat.sum(axis=1)
    distances = sqDistances ** 0.5
    sortedDistIndicies = distances.argsort()
    # Majority vote among the k nearest neighbors
    classCount = {}
    for i in range(k):
        voteIlabel = labels[sortedDistIndicies[i]]
        classCount[voteIlabel] = classCount.get(voteIlabel, 0) + 1
    # items() replaces the Python-2-only iteritems()
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]
def img2vector(filename):
    # Flatten a 32x32 text image of '0'/'1' characters into a 1x1024 vector
    returnVect = zeros((1, 1024))
    with open(filename) as fr:
        for i in range(32):
            lineStr = fr.readline()
            for j in range(32):
                returnVect[0, 32 * i + j] = int(lineStr[j])
    return returnVect
def handwritingClassTest():
    hwLabels = []
    trainingFileList = listdir('trainingDigits')  # load the training set
    m = len(trainingFileList)
    trainingMat = zeros((m, 1024))
    for i in range(m):
        # file names look like "3_45.txt": class label, underscore, sample index
        fileNameStr = trainingFileList[i]
        fileStr = fileNameStr.split('.')[0]       # take off .txt
        classNumStr = int(fileStr.split('_')[0])
        hwLabels.append(classNumStr)
        trainingMat[i, :] = img2vector('trainingDigits/%s' % fileNameStr)
    testFileList = listdir('testDigits')          # iterate through the test set
    errorCount = 0.0
    mTest = len(testFileList)
    for i in range(mTest):
        fileNameStr = testFileList[i]
        fileStr = fileNameStr.split('.')[0]       # take off .txt
        classNumStr = int(fileStr.split('_')[0])
        vectorUnderTest = img2vector('testDigits/%s' % fileNameStr)
        classifierResult = classify0(vectorUnderTest, trainingMat, hwLabels, 3)
        print("the classifier came back with: %d, the real answer is: %d" % (classifierResult, classNumStr))
        if classifierResult != classNumStr:
            errorCount += 1.0
    print("\nthe total number of errors is: %d" % errorCount)
    print("\nthe total error rate is: %f" % (errorCount / float(mTest)))
The test driver below simply imports the module and runs the experiment:
# -*- coding: utf-8 -*-
"""
Created on Thu May 04 15:31:17 2017
@author: holy
"""
import kNN
kNN.handwritingClassTest()
On the test set of about 1,000 samples, the measured error rate was 0.011.
Problem analysis
The experiment makes one cost obvious: every single prediction scans the entire training set, computing a distance to each stored sample. There is no training phase to speak of, but prediction is expensive in both time and memory.
So what are kNN's strengths and weaknesses?
Some further open questions:
How should the value of k be chosen?
Which similarity measure should be used?
Which decision rule should be applied?
Are there improved variants of kNN?
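One way to make these questions concrete is to expose them as parameters of a single hand-rolled classifier (everything below is my own sketch, not a library API): k, the distance metric, and the decision rule can each be varied independently and compared on held-out data.

```python
import math
from collections import Counter

def knn_predict(query, X, y, k=3, metric=None, weighted=False):
    """Hand-rolled kNN exposing the knobs discussed above: the value of k,
    the distance metric, and the decision rule (plain vs weighted voting)."""
    if metric is None:
        metric = math.dist  # Euclidean by default
    # Sort the whole training set by distance and keep the k nearest
    neighbors = sorted(zip(X, y), key=lambda p: metric(query, p[0]))[:k]
    if not weighted:
        return Counter(label for _, label in neighbors).most_common(1)[0][0]
    # Distance-weighted voting: each neighbor contributes weight 1/d
    scores = {}
    for point, label in neighbors:
        scores[label] = scores.get(label, 0.0) + 1.0 / (metric(query, point) + 1e-12)
    return max(scores, key=scores.get)
```

The brute-force sort here has the same linear-scan cost noted above; the classic speed-ups (kd-trees, ball trees) replace exactly that step with a spatial index.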