Choosing K in K-Means

I have never been very familiar with this problem. Somewhere between 1 and infinity there must be a few values of k that give a reasonably good clustering of the data; how do we find them?

[Figure: example of the elbow criterion (DataClustering_ElbowCriterion.JPG)]

First, let me collect some classic existing methods.

1. An answer from Stack Overflow

You can maximize the Bayesian Information Criterion (BIC):
BIC(C | X) = L(X | C) - (p / 2) * log(n)
where L(X | C) is the log-likelihood of the dataset X according to model C, p is the number of parameters in the model C, and n is the number of points in the dataset. See "X-means: extending K-means with efficient estimation of the number of clusters" by Dan Pelleg and Andrew Moore in ICML 2000.
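As a rough illustration of this criterion, here is a minimal sketch (my own, not code from the X-means paper) that scores a k-means solution with BIC under a spherical-Gaussian approximation. The helper name `kmeans_bic` and the parameter count p = k(d + 1) are assumptions of this sketch:

```python
import numpy as np

def kmeans_bic(X, labels, centers):
    """BIC = logL - (p/2)*log(n) for a k-means solution, using a
    spherical-Gaussian likelihood (assumes every cluster is nonempty)."""
    n, d = X.shape
    k = len(centers)
    # pooled within-cluster variance estimate from the total SSE
    sse = sum(np.sum((X[labels == j] - centers[j]) ** 2) for j in range(k))
    var = sse / (d * (n - k))
    counts = np.array([np.sum(labels == j) for j in range(k)])
    # log-likelihood: mixing proportions + Gaussian density terms
    log_lik = (np.sum(counts * np.log(counts / n))
               - 0.5 * n * d * np.log(2 * np.pi * var)
               - 0.5 * d * (n - k))
    p = k * (d + 1)  # assumed parameter count: k centers (d each) + k sizes
    return log_lik - 0.5 * p * np.log(n)
```

On two well-separated blobs, the k = 2 solution should score a higher BIC than lumping everything into one cluster, which is exactly how the criterion is used to pick k.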

Another approach is to start with a large value for k and keep removing centroids (reducing k) until it no longer reduces the description length. See "MDL principle for robust vector quantisation" by Horst Bischof, Ales Leonardis, and Alexander Selb in Pattern Analysis and Applications vol. 2, p. 59-72, 1999.

Finally, you can start with one cluster, then keep splitting clusters until the points assigned to each cluster have a Gaussian distribution. In "Learning the k in k-means" (NIPS 2003), Greg Hamerly and Charles Elkan show some evidence that this works better than BIC, and that BIC does not penalize the model's complexity strongly enough.
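The splitting test above can be sketched as follows. This is a loose paraphrase of the Hamerly and Elkan idea, not their exact procedure: the helper names (`should_split`, `looks_gaussian`) and the 1% Anderson-Darling significance level are my assumptions.

```python
import numpy as np
from scipy.stats import anderson

def looks_gaussian(x):
    """1-D Anderson-Darling normality check: True if we fail to
    reject normality at the 1% level (index 4 of critical_values)."""
    result = anderson(x, dist='norm')
    return result.statistic < result.critical_values[4]

def should_split(cluster_points, child_centers):
    # Project the cluster's points onto the axis connecting the two
    # candidate child centers, then test the 1-D projection for
    # normality; a non-Gaussian projection suggests splitting.
    v = child_centers[1] - child_centers[0]
    proj = cluster_points @ v / np.dot(v, v)
    return not looks_gaussian(proj)
```

In a full algorithm the two child centers would come from running 2-means inside the cluster; clusters keep splitting until every projection passes the normality test.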

Bayesian k-means may be a solution when you don't know the number of clusters. A related paper and the corresponding MATLAB code are available on the project website.

2. Wikipedia has a dedicated article on this topic:

Determining the number of clusters in a data set

From the ITPUB blog: http://blog.itpub.net/28624243/viewspace-758957/ (please credit the source when reposting).
