Machine Learning in Action, Classification Part 1: the k-Nearest Neighbors Algorithm (from Movie Classification to Helen's Dating Problem)

The k-Nearest Neighbors algorithm (kNN):

There is a collection of samples, also called the training set (trainSet), and every sample in it carries a label, i.e., we know which class each sample belongs to. When new, unlabeled data (testSet) arrives, we compare each feature of the new data against the corresponding features of the samples in the training set, and the algorithm extracts the class labels of the most similar samples (the nearest neighbors). In general we only look at the k most similar samples in the training set, which is where the k in "k-nearest neighbors" comes from; k is usually an integer no larger than 20. Finally, the class that appears most often among those k most similar samples is taken as the class of the new data.

kNN algorithm workflow:

  1. Compute the Euclidean distance between the unknown point and every sample in the training set (the distance formula is given right after this list);
  2. Sort the resulting distances in ascending order;
  3. Take the labels of the k closest samples;
  4. Return the most frequent class among these k labels as the predicted class.
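For step 1, the Euclidean distance between an unknown point x and a training sample y with n features is d(x, y) = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}; with only two features this reduces to \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2}.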

Properties of the kNN algorithm:

Advantages: high accuracy, insensitive to outliers, no assumptions about the input data;

Disadvantages: computationally expensive, since every prediction compares the query against the entire training set;

Applicable data: numeric and nominal values.

Movie classification problem:

Given the feature data of several known movies (numbers of fighting scenes and kissing scenes) and their classes (romance / action), plus the feature data of an unknown movie, use the data to determine the class of the unknown movie.

The Euclidean distance from the unknown movie (18 fighting scenes and 90 kissing scenes, the test vector used in the code below) to each training movie is computed first. With k = 3, the three smallest distances are [18.7, 19.2, 20.5] and their labels are [Romance, Romance, Romance]; the frequency count is Romance 100%, Action 0%, so the predicted class of the unknown movie is Romance.

The Python 3 implementation is as follows:

import numpy as np
import operator

'''
Function : 	createDataSet()
	Args: 	None
	Rets: 	group, the training data matrix
			labels, the corresponding class labels
'''
def createDataSet():
	group = np.array([[3, 104], [2, 100], [1, 81], [101, 10], [99, 5], [98, 2]])
	#one label per training sample: the first three movies are romances, the last three are action films
	labels = ['Romance', 'Romance', 'Romance', 'Action', 'Action', 'Action']
	return group, labels

'''
Function : 	kNN(test, group, labels, k)
	Args :	test, the test vector to classify
			group, the training data matrix
			labels, the class labels of the training samples
			k, the number of nearest neighbors to use
	Rets :	pred[0][0], the predicted class
'''
def kNN(test, group, labels, k):
	dataSize = group.shape[0]
	#tile: repeat the test vector dataSize times so it has the same shape as group
	diff = np.tile(test, (dataSize, 1)) - group
	sqrdiff = diff**2
	#sum(axis = 1): sum over each row, giving one squared distance per training sample
	sumdiff = sqrdiff.sum(axis = 1)
	dist = sumdiff**0.5
	#argsort: return the indices that would sort dist in ascending order, e.g. [3, 1, 2] -> [1, 2, 0]
	dist_order = dist.argsort()
	classes = {}
	for i in range(k):
		voteLabel = labels[dist_order[i]]
		classes[voteLabel] = classes.get(voteLabel, 0) + 1
	#sorted: sort the (class, count) pairs by count in descending order
	pred = sorted(classes.items(), key = operator.itemgetter(1), reverse = True)
	return pred[0][0] 

if __name__ == '__main__':
	group, labels = createDataSet()
	test = [18, 90]
	print('test : ', test)
	pred_class = kNN(test, group, labels, 3)
	print('predict class : ', pred_class)
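As a quick cross-check of the hand-rolled implementation above, the same decision rule is available as KNeighborsClassifier; this is only a sketch and assumes scikit-learn is installed:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

group = np.array([[3, 104], [2, 100], [1, 81], [101, 10], [99, 5], [98, 2]])
labels = ['Romance', 'Romance', 'Romance', 'Action', 'Action', 'Action']

#Euclidean distance is the default metric; n_neighbors plays the role of k
clf = KNeighborsClassifier(n_neighbors = 3)
clf.fit(group, labels)
print(clf.predict([[18, 90]]))	#should agree with the kNN() result above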

A slightly more complex example: Helen's dating problem

Helen has been using an online dating site to look for suitable dates. Although the site recommends different candidates, she does not like every one of them. After summarizing her experience, she found that the people she has dated fall into the following categories:

  1. People she did not like (didntLike)
  2. People of average charm (smallDoses)
  3. People of great charm (largeDoses)

Helen has been collecting dating data for a while and stores it in the text file datingTestSet.txt; each sample occupies one line, and there are 1000 lines in total. The fields are: frequent flyer miles earned per year, percentage of time spent playing video games, liters of ice cream consumed per week, and the preference label. Dataset download: info.txt
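Each line of the file holds four tab-separated fields, the three feature values followed by the label; an illustrative line (made-up values, same format) would look like:

40920	8.326976	0.953952	largeDoses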

Since the features differ greatly in magnitude, each feature needs to be normalized, using x' = \frac{x - \min(x)}{\max(x) - \min(x)}.
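For example, a value of 25,000 on a feature ranging from 0 to 100,000 becomes (25000 - 0) / (100000 - 0) = 0.25, so every feature ends up in [0, 1] and contributes comparably to the distance. A minimal vectorized sketch of the same rescaling (the helper name min_max_normalize is only for illustration; NumPy broadcasting makes the explicit tile calls used in the code below optional):

import numpy as np

def min_max_normalize(X):
	#rescale every column of X into [0, 1]
	minVal = X.min(axis = 0)
	maxVal = X.max(axis = 0)
	return (X - minVal) / (maxVal - minVal)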

Following the kNN procedure above, the Python 3 implementation is as follows:

import numpy as np
import operator
from matplotlib.font_manager import FontProperties
import matplotlib.lines as mlines
import matplotlib.pyplot as plt

'''
Function : file2matrix(filename)
	Description : 	to convert the data file into a feature matrix and a label list
	Args:	filename
	Rets:	featureMatrix, the feature matrix parsed from the file
			labels, the class label (1/2/3) for each sample
'''

def file2matrix(filename):
	fread = open(filename)
	info = fread.readlines()
	featureMatrix = np.zeros((len(info), 3))
	labels = []
	index = 0
	for line in info:
		line = line.strip()
		listline = line.split('\t')
		featureMatrix[index, :] = listline[0:3]
		if listline[-1] == 'didntLike':
			labels.append(1)
		if listline[-1] == 'smallDoses':
			labels.append(2)
		if listline[-1] == 'largeDoses':
			labels.append(3)
		index += 1
	return featureMatrix, labels

'''
Function : normalize(featureMatrix)
	Description : to normalize data
	Args :	featureMatrix
	Rets : normFeatureMatrix
'''
def normalize(featureMatrix):
	#column-wise minimum of each feature
	minVal = featureMatrix.min(0)
	#column-wise maximum of each feature
	maxVal = featureMatrix.max(0)
	ranges = maxVal - minVal
	normFeatureMatrix = np.zeros(np.shape(featureMatrix))
	row = normFeatureMatrix.shape[0]
	normFeatureMatrix = featureMatrix - np.tile(minVal, (row, 1))
	normFeatureMatrix = normFeatureMatrix / np.tile(ranges, (row, 1))
	return normFeatureMatrix

'''
Function : visualize(featureMatrix, labels)
	Description : to visualize data
	Args :	featureMatrix
			labels
	Rets :	None
'''

def visualize(featureMatrix, labels):
	font = FontProperties(size = 14)
	#fig: figure object, axs : subplot
	fig, axs = plt.subplots(nrows = 2, ncols = 2, sharex = False, sharey = False, figsize = (10, 10))
	labelColors = []
	#map each label (1/2/3) to a plot color
	for i in range(len(labels)):
		if labels[i] == 1:
			labelColors.append('black')
		if labels[i] == 2:
			labelColors.append('orange')
		if labels[i] == 3:
			labelColors.append('red')
	#subplot(0,0) scatter, s : marker size, alpha : transparency
	axs[0][0].scatter(x = featureMatrix[:,0], y = featureMatrix[:, 1], color = labelColors, s = 15, alpha = 0.5)
	axs_0_title = axs[0][0].set_title(u'Flight miles vs Game time', fontproperties = font)
	axs_0_x = axs[0][0].set_xlabel(u'Flight miles per year', fontproperties = font)
	axs_0_y = axs[0][0].set_ylabel(u'Video game time (%)', fontproperties = font)
	plt.setp(axs_0_title, size = 9, weight = 'bold', color = 'red')
	plt.setp(axs_0_x, size = 7, weight = 'bold', color = 'black')
	plt.setp(axs_0_y, size = 7, weight = 'bold', color = 'black')
	
	#subplot(0,1) scatter, s : marker size, alpha : transparency
	axs[0][1].scatter(x = featureMatrix[:,0], y = featureMatrix[:, 2], color = labelColors, s = 15, alpha = 0.5)
	axs_1_title = axs[0][1].set_title(u'Flight miles vs Ice cream', fontproperties = font)
	axs_1_x = axs[0][1].set_xlabel(u'Flight miles per year', fontproperties = font)
	axs_1_y = axs[0][1].set_ylabel(u'Ice cream (liters/week)', fontproperties = font)
	plt.setp(axs_1_title, size = 9, weight = 'bold', color = 'red')
	plt.setp(axs_1_x, size = 7, weight = 'bold', color = 'black')
	plt.setp(axs_1_y, size = 7, weight = 'bold', color = 'black')
	
	#subplot(1,0) scatter, s : marker size, alpha : transparency
	axs[1][0].scatter(x = featureMatrix[:,1], y = featureMatrix[:, 2], color = labelColors, s = 15, alpha = 0.5)
	axs_2_title = axs[1][0].set_title(u'Game time vs Ice cream', fontproperties = font)
	axs_2_x = axs[1][0].set_xlabel(u'Video game time (%)', fontproperties = font)
	axs_2_y = axs[1][0].set_ylabel(u'Ice cream (liters/week)', fontproperties = font)
	plt.setp(axs_2_title, size = 9, weight = 'bold', color = 'red')
	plt.setp(axs_2_x, size = 7, weight = 'bold', color = 'black')
	plt.setp(axs_2_y, size = 7, weight = 'bold', color = 'black')
	
	#set legend
	didntLike = mlines.Line2D([], [], color = 'black', marker = '.', markersize = 6, label = 'didntLike')
	smallDoses = mlines.Line2D([], [], color = 'orange', marker = '.', markersize = 6, label = 'smallDoses')
	largeDoses = mlines.Line2D([], [], color = 'red', marker = '.', markersize = 6, label = 'largeDoses')
	axs[0][0].legend(handles = [didntLike, smallDoses, largeDoses])
	axs[0][1].legend(handles = [didntLike, smallDoses, largeDoses])
	axs[1][0].legend(handles = [didntLike, smallDoses, largeDoses])
	plt.show()

'''
Function : kNN(test, featureMatrix, labels, k)
	Description : to use kNN algorithm predict test result
	Args:	test, the test vector to classify
			featureMatrix, the (normalized) training feature matrix
			labels, the class labels of the training samples
			k, the number of nearest neighbors to use
	Rets :	pred[0][0], the predicted class label (1/2/3)
'''
def kNN(test, featureMatrix, labels, k):
	row = featureMatrix.shape[0]
	diff = np.tile(test, (row, 1)) - featureMatrix
	sqdiff = diff**2
	dist = sqdiff.sum(axis = 1)
	dist = dist**0.5
	dist_order = dist.argsort()
	classes = {}
	for i in range(k):
		voteLabel = labels[dist_order[i]]
		classes[voteLabel] = classes.get(voteLabel, 0) + 1
	pred = sorted(classes.items(), key = operator.itemgetter(1), reverse = True)
	return pred[0][0]

'''
Function : train()
	Description : to evaluate the classifier on a 10% hold-out set and report the error rate
	Args : None
	Rets : None
'''
def train():
	filename = 'info.txt'
	featureMatrix, labels = file2matrix(filename)
	normFeatureMatrix = normalize(featureMatrix)
	#fraction of the samples held out as the test set
	holdRatio = 0.1
	row = normFeatureMatrix.shape[0]
	numTest = int(holdRatio * row)
	errorcount = 0.0
	for i in range(numTest):
		result = kNN(normFeatureMatrix[i,:], normFeatureMatrix[numTest:row, :], labels[numTest:row], 4)
		print('pred : %d vs real : %d'%(result, labels[i]))
		if result != labels[i]:
			errorcount += 1.0
	print('Error rate : %.2f%%'%(errorcount / float(numTest) * 100))

'''
Function : score()
	Description :	to classify a new person from user input
	Args :	None
	Rets :	None
'''
def score():
	filename = 'info.txt'
	featureMatrix, labels = file2matrix(filename)
	#column-wise minimum of each feature
	minVal = featureMatrix.min(0)
	#column-wise maximum of each feature
	maxVal = featureMatrix.max(0)
	resultList = ['didntLike', 'smallDoses', 'largeDoses']
	normFeatureMatrix = normalize(featureMatrix)
	miles = float(input('Frequent flyer miles earned per year: '))
	game = float(input('Percentage of time spent playing video games: '))
	iceCream = float(input('Liters of ice cream consumed per week: '))
	test = np.array([miles, game, iceCream])
	normTest = (test - minVal) / (maxVal - minVal)
	result = kNN(normTest, normFeatureMatrix, labels, 3)
	print('Score : %s'%(resultList[result - 1]))

if __name__ == '__main__':
	#featureMatrix, labels = file2matrix('info.txt')
	#visualize(featureMatrix, labels)
	#train()
	score()

The experimental results:

The predicted class agrees with the actual label. Data visualization is implemented in the visualize() function, and the hold-out test is carried out in train(); the measured error rate is 3.00%, i.e. an accuracy of 97.00% (quite good).
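To see how sensitive the hold-out error is to the choice of k (train() above uses k = 4), a small sweep can reuse the functions already defined; this is only a sketch and assumes info.txt sits in the working directory:

featureMatrix, labels = file2matrix('info.txt')
normFeatureMatrix = normalize(featureMatrix)
numTest = int(0.1 * normFeatureMatrix.shape[0])
for k in range(1, 11):
	errors = 0
	for i in range(numTest):
		#classify each held-out sample against the remaining 90% of the data
		if kNN(normFeatureMatrix[i, :], normFeatureMatrix[numTest:, :], labels[numTest:], k) != labels[i]:
			errors += 1
	print('k = %d, error rate = %.2f%%' % (k, errors / numTest * 100))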

 

 

 
