I've always found this problem puzzling: somewhere between 1 and positive infinity there must be a few values of k that give reasonably good clustering results, but how do you actually find them?
First, let me collect some classic existing methods.
1、You can maximize the Bayesian Information Criterion (BIC):
BIC(C|X) = L(X|C) − (p/2) · log n
where L(X|C) is the log-likelihood of the dataset X according to model C, p is the number of parameters in model C, and n is the number of points in the dataset. See "X-means: extending K-means with efficient estimation of the number of clusters" by Dan Pelleg and Andrew Moore in ICML 2000.
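As a rough illustration of that criterion (my own sketch, not the X-means authors' code), one can fit k-means for several values of k under a spherical-Gaussian model of each cluster, compute BIC(C|X) = L(X|C) − (p/2)·log n, and keep the k that maximizes it. All function names here are my own, and the parameter count is simplified (mixing weights are omitted):

```python
import math
import random

def dist2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on a list of coordinate tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest centre
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centers[c]))
            groups[j].append(p)
        # move each centre to the mean of its assigned points
        for j, g in enumerate(groups):
            if g:
                centers[j] = tuple(sum(v) / len(g) for v in zip(*g))
    return centers, groups

def bic(points, centers, groups):
    """BIC(C|X) = L(X|C) - (p/2) log n, spherical Gaussians, pooled variance."""
    n, d, k = len(points), len(points[0]), len(centers)
    sse = sum(dist2(p, centers[j]) for j, g in enumerate(groups) for p in g)
    var = max(sse / (n * d), 1e-12)        # pooled spherical variance
    loglik = 0.0
    for j, g in enumerate(groups):
        m = len(g)
        if m == 0:
            continue
        loglik += m * math.log(m / n)                  # log mixing proportion
        loglik -= m * d / 2 * math.log(2 * math.pi * var)
        loglik -= sum(dist2(p, centers[j]) for p in g) / (2 * var)
    p_params = k * d + 1                   # k centres + one shared variance
    return loglik - p_params / 2 * math.log(n)
```

On data with two well-separated blobs, the BIC at k=2 should exceed the BIC at k=1, so looping over a range of k and taking the argmax recovers a sensible cluster count.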
Another approach is to start with a large value for k and keep removing centroids (reducing k) until it no longer reduces the description length. See "MDL principle for robust vector quantisation" by Horst Bischof, Ales Leonardis, and Alexander Selb in Pattern Analysis and Applications vol. 2, p. 59-72, 1999.
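A crude two-part-code sketch of that idea (my own simplification, not the paper's actual criterion): start with too many centroids and greedily delete one as long as deletion shortens the total description length, i.e. bits for the model plus bits for the data. The code-length formulas below are standard stand-ins (½·log₂n bits per parameter, log₂k bits to name each point's centroid, Gaussian residual coding), and the paper's re-fitting of the quantiser after each deletion is omitted:

```python
import math

def dist2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def description_length(points, centers):
    """Two-part code: bits for the model plus bits for the data."""
    n, d, k = len(points), len(points[0]), len(centers)
    model_bits = k * d * 0.5 * math.log2(n)   # 1/2 log2(n) bits per parameter
    sse = sum(min(dist2(p, c) for c in centers) for p in points)
    var = max(sse / (n * d), 1e-12)
    index_bits = n * math.log2(k) if k > 1 else 0.0
    # differential-entropy coding cost of the residuals under a Gaussian
    residual_bits = n * d / 2 * math.log2(2 * math.pi * math.e * var)
    return model_bits + index_bits + residual_bits

def prune_centroids(points, centers):
    """Greedily delete centroids while deletion shortens the code."""
    centers = list(centers)
    best = description_length(points, centers)
    while len(centers) > 1:
        # try deleting each centroid; keep the deletion that helps most
        trials = [(description_length(points, centers[:i] + centers[i + 1:]), i)
                  for i in range(len(centers))]
        dl, i = min(trials)
        if dl >= best:
            break
        best = dl
        del centers[i]
    return centers
```

By construction the loop only ever accepts deletions that reduce the description length, so the final configuration never codes worse than the initial, deliberately redundant one.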
Finally, you can start with one cluster, then keep splitting clusters until the points assigned to each cluster have a Gaussian distribution. In "Learning the k in k-means" (NIPS 2003), Greg Hamerly and Charles Elkan show some evidence that this works better than BIC, and that BIC does not penalize the model's complexity strongly enough.
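The splitting decision above hinges on a Gaussianity test. A minimal sketch of such a test (my own illustrative code): in the G-means variant of this idea, a cluster's points are projected onto the axis joining two candidate child centres and the 1-D projection is checked with the Anderson–Darling statistic; the split is kept only when normality is rejected. The correction factor and the ≈1% critical value 1.092 come from Stephens' standard tables for the case where mean and variance are estimated:

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def anderson_darling(xs):
    """Corrected A^2 statistic for the hypothesis that xs is Gaussian."""
    n = len(xs)
    mean = sum(xs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    zs = sorted((x - mean) / std for x in xs)
    s = 0.0
    for i, z in enumerate(zs, start=1):
        # clamp CDF values away from 0 and 1 so the logs stay finite
        f = min(max(normal_cdf(z), 1e-12), 1 - 1e-12)
        fr = min(max(normal_cdf(zs[n - i]), 1e-12), 1 - 1e-12)
        s += (2 * i - 1) * (math.log(f) + math.log(1 - fr))
    a2 = -n - s / n
    return a2 * (1 + 0.75 / n + 2.25 / (n * n))  # small-sample correction

def looks_gaussian(xs, critical=1.092):
    """True if normality cannot be rejected at roughly the 1% level."""
    return anderson_darling(xs) < critical
```

A clearly bimodal projection (e.g. two modes far apart) produces a statistic far above the critical value, which is exactly the signal that the cluster should be split.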
Bayesian k-means may also be a solution when you don't know the number of clusters; its website provides a related paper along with the corresponding MATLAB code.
2、Wikipedia also has a dedicated article on this topic.
From the "ITPUB blog", link: http://blog.itpub.net/28624243/viewspace-758957/. Please credit the source when reposting; unauthorized reproduction may incur legal liability.