Spectral Clustering
Spectral clustering uses the numerical range (field of values) of the graph Laplacian to measure the RatioCut, i.e. the degree of association between the two sides of a partition. Taking small association as the objective, the problem relaxes into ordering the eigenvalues from small to large and using the corresponding eigenvectors as the input features (k_features) for K-means. Since the eigenvectors of the small eigenvalues measure the "flat" directions of the data (its coordinate axes), and those directions correspond exactly to weak association, the entries of these eigenvectors act as indicators of which class each sample belongs to; clustering these coordinates with K-means is therefore clearly reasonable.
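The pipeline described above can be sketched directly: build an affinity matrix, form the unnormalized Laplacian L = D - W, take the eigenvectors of the smallest eigenvalues as the spectral embedding, and run K-means on those coordinates. This is a minimal sketch, assuming an RBF affinity; the function name `spectral_embed_kmeans` and the `gamma` value are illustrative, not part of the original notes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def spectral_embed_kmeans(X, k, gamma=1.0):
    # Affinity (similarity) matrix W from an RBF kernel; zero the diagonal
    W = rbf_kernel(X, gamma=gamma)
    np.fill_diagonal(W, 0.0)
    # Degree matrix D and unnormalized Laplacian L = D - W
    D = np.diag(W.sum(axis=1))
    L = D - W
    # eigh returns eigenvalues in ascending order; keep the k smallest.
    # Each row of U is the spectral embedding of one sample.
    vals, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]
    # Cluster the embedded points with K-means
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```

On well-separated data the second eigenvector (the Fiedler vector) already encodes the split, which is why K-means on these few coordinates works.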
The drawback of this algorithm is equally clear: computing eigenvalues and eigenvectors is a nonlinear operation, so when the sample size is large the solve is very slow (the high-degree characteristic polynomial makes the iteration jump between roots and converge slowly).
np.indices(shape): returns the grid coordinates for an array of the given shape, stacked dimension by dimension along a leading axis.
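A quick illustration of np.indices: for shape (2, 3) it returns one coordinate array per dimension, so the result has shape (2, 2, 3).

```python
import numpy as np

# np.indices(shape) stacks one coordinate array per dimension
# along a leading axis: result.shape == (len(shape),) + shape
grid = np.indices((2, 3))
print(grid[0])  # row indices:    [[0 0 0], [1 1 1]]
print(grid[1])  # column indices: [[0 1 2], [0 1 2]]
```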
Slightly modifying the Affinity Propagation example gives the sample code below:
Note that the similarity matrix here is built from the negated Euclidean distances, shifted up so that all entries are positive; passing a matrix that is not positive semi-definite causes an error.
# from sklearn.cluster import AffinityPropagation
from sklearn.cluster import spectral_clustering
from sklearn import metrics
from sklearn.datasets import make_blobs
import numpy as np

centers = np.array([[1, 1], [-1, -1], [1, -1]])
X, labels_true = make_blobs(n_samples=3000, centers=centers, cluster_std=0.5, random_state=0)

# Similarity matrix: negated Euclidean distances, shifted so all entries are non-negative
metrics_metrix = -1 * metrics.pairwise.pairwise_distances(X)
metrics_metrix += -1 * metrics_metrix.min()
labels = spectral_clustering(metrics_metrix, n_clusters=3)
n_clusters_ = 3

print("Estimated number of clusters: %d" % n_clusters_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f" % metrics.adjusted_rand_score(labels_true, labels))
print("Adjusted Mutual Information: %0.3f" % metrics.adjusted_mutual_info_score(labels_true, labels))
print("Silhouette Coefficient: %0.3f" % metrics.silhouette_score(X, labels, metric='sqeuclidean'))
import matplotlib.pyplot as plt
from itertools import cycle

plt.close('all')
plt.figure(1)
plt.clf()

colors = cycle('bgrcmyk')
for k, col in zip(range(n_clusters_), colors):
    class_members = labels == k
    plt.plot(X[class_members, 0], X[class_members, 1], col + '.')
plt.title('Number of clusters: %d' % n_clusters_)
plt.show()
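As a side note, scikit-learn also exposes a SpectralClustering estimator that constructs the affinity matrix internally (an RBF kernel by default), so the manual distance negation and shifting above are unnecessary. A minimal sketch; the gamma value here is an illustrative assumption, not a tuned choice:

```python
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=300, centers=[[1, 1], [-1, -1], [1, -1]],
                  cluster_std=0.3, random_state=0)
# affinity='rbf' lets the estimator build the similarity matrix itself
model = SpectralClustering(n_clusters=3, affinity='rbf', gamma=1.0, random_state=0)
labels = model.fit_predict(X)
```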