k-means is an unsupervised clustering algorithm whose idea is intuitive and easy to implement. This post walks through how the algorithm works and then implements it in Python.
1. How k-means works
The computation is straightforward:
1. Based on the per-column minimum and maximum of the dataset dataSet, randomly generate k elements to serve as the initial centers of the k clusters;
2. For each remaining element, compute its dissimilarity (here, Euclidean distance) to each of the k centers, and assign it to the cluster with the lowest dissimilarity;
3. From the resulting clusters, recompute each of the k centers as the arithmetic mean, per dimension, of all elements in that cluster;
4. Re-cluster every element of dataSet against the new centers;
5. Repeat steps 3 and 4 until the cluster assignments stop changing;
6. Output the result.
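Steps 2 and 3, the assignment and update that get repeated, can be sketched in a few lines of NumPy on a toy 2-D dataset. This is only an illustration of one iteration, separate from the full implementation in section 3; the starting centers are fixed by hand here rather than drawn at random, to keep it deterministic:

```python
import numpy as np

# Toy data: two obvious groups, plus two hand-picked starting centers
# (step 1 normally draws these at random from the data's range).
data = np.array([[1.0, 2.0], [2.0, 3.0], [9.0, 10.0], [10.0, 11.0]])
centers = np.array([[0.0, 0.0], [10.0, 10.0]])

# Step 2: assign each point to the center with the smallest Euclidean distance.
dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
labels = dists.argmin(axis=1)

# Step 3: move each center to the mean of the points assigned to it.
centers = np.array([data[labels == k].mean(axis=0) for k in range(2)])
print(labels)    # [0 0 1 1]
print(centers)   # centers move to [1.5, 2.5] and [9.5, 10.5]
```

Repeating the two steps on this data changes nothing further, so the algorithm would stop here.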
2. Strengths and weaknesses
Strengths:
1. The idea is easy to understand, the implementation is simple, and convergence is fast;
2. Clustering quality is good and the results are easy to interpret;
3. The only parameter to tune is k.
Weaknesses:
1. k is hard to choose in advance;
2. Convergence is slow on large datasets;
3. As an iterative method, it only finds a local optimum;
4. It is sensitive to outliers.
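Weakness 1, choosing k, is commonly handled with the elbow heuristic: run k-means for several values of k and watch where the within-cluster sum of squares (WCSS) stops dropping sharply. Below is a minimal self-contained sketch; the tiny kmeans and wcss helpers are written just for this illustration and are not the implementation from section 3:

```python
import numpy as np

# Minimal k-means used only to demonstrate the elbow heuristic.
def kmeans(data, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]  # init from k distinct points
    for _ in range(iters):
        d = np.linalg.norm(data[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([data[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers, labels

def wcss(data, centers, labels):
    # Within-cluster sum of squared distances to the assigned center.
    return sum(((data[labels == j] - c) ** 2).sum()
               for j, c in enumerate(centers))

data = np.array([[1, 2], [2, 3], [9, 10], [10, 11]], dtype=float)
for k in (1, 2, 3):
    c, lab = kmeans(data, k)
    print(k, wcss(data, c, lab))
# WCSS drops steeply from k=1 to k=2, then flattens: the "elbow" is at k=2.
```

On this toy data the drop from k=1 to k=2 is large, while going to k=3 gains little, which matches the two visible groups.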
3. Python implementation
#python3.6
#Randomly generate k initial cluster centers within the data's range
from numpy import *
from pandas import *

def randomCenters(dataSet, k):
    n = shape(dataSet)[1]                 # number of features
    centers = mat(zeros((k, n)))
    for j in range(n):
        minJ = min(dataSet[:, j])
        rangeJ = float(max(dataSet[:, j]) - minJ)
        centers[:, j] = minJ + rangeJ * random.rand(k, 1)   # k uniform draws per column
    return centers
#python3.6
#Euclidean distance between two row vectors
def eucliDist(A, B):
    return sqrt(sum(power((A - B), 2)))

# K-means clustering
def whlKmeans(dataSet, k, dist = eucliDist, centers = randomCenters):
    n = shape(dataSet)[0]              # number of samples
    m = shape(dataSet)[1]              # number of features
    distMat = mat(zeros((n, 2)))       # per sample: [assigned cluster, distance to its center]
    distMat[:, 0] = -1                 # invalid index, so the first pass always counts as a change
    randomCents = centers(dataSet, k)
    centerChanged = True
    while centerChanged:
        centerChanged = False
        for i in range(n):
            minDist = inf
            minIndex = -1
            for j in range(k):
                distToCent = dist(dataSet[i, :], randomCents[j, :])
                if distToCent < minDist:
                    minDist = distToCent
                    minIndex = j
            if distMat[i, 0] != minIndex:   # loop again while any assignment still changes
                centerChanged = True
            distMat[i, 0] = minIndex
            distMat[i, 1] = minDist         # distance to the nearest center, not the last one tried
        print(randomCents)                  # show the current cluster centers
        # Append the labels to the data and recompute each center as its cluster's mean
        # (assumes no cluster ends up empty)
        dataNew = column_stack((dataSet, distMat[:, 0]))
        dataUse = DataFrame(dataNew)
        for i in range(k):
            dataMean = dataUse[dataUse[m] == i]
            l = []
            for j in range(m):
                l.append(mean(dataMean[j]))
            randomCents[i] = l
    return randomCents, distMat

Test:
#python3.6
c1 = [1, 2]
c2 = [10, 11]
c3 = [2, 3]
c4 = [9, 10]
dataTest = mat([c1, c2, c3, c4])
whlKmeans(dataTest, 2, dist = eucliDist, centers = randomCenters)
# Sample output (the first matrix is the random initial centers, so it varies from run to run):
[[4.13037972 8.55935784]
[3.49351157 5.39618773]]
[[ 9.5 10.5]
[ 1.5 2.5]]
(matrix([[ 9.5, 10.5],
[ 1.5, 2.5]]),
matrix([[ 1. , 0.70710678],
[ 0. , 0.70710678],
[ 1. , 0.70710678],
[ 0. , 0.70710678]]))
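As a sanity check, the same four test points can be run through scikit-learn's KMeans (assuming scikit-learn is installed); it converges to the same two centers:

```python
import numpy as np
from sklearn.cluster import KMeans  # assumes scikit-learn is available

# The same four test points as above.
data = np.array([[1, 2], [10, 11], [2, 3], [9, 10]], dtype=float)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(km.cluster_centers_)  # the centers [1.5, 2.5] and [9.5, 10.5], in some order
print(km.labels_)
```

Which center gets which index depends on the random initialization, so only the set of centers is comparable, not the label values themselves.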