I tried to implement the k-means algorithm for the MNIST data set. But since the results are not as expected, there may be one (or several) bugs that I currently don't see. The code is quite simple. Here is what I have so far:

import numpy as np
# Load images
I = np.load("mnist_test_images.npy").astype(float)  # (10000, 784)
L = np.load("mnist_test_labels.npy").astype(int)    # (10000, 1)

# Scale to [-1, 1]
I = 2.0 * (I / 255.0 - 0.5)

images = len(I)

# Random initialization of centers for k=10 clusters
M = np.random.randn(10, 28 * 28)
guess = np.zeros((len(I), 1))

step = 0
while True:
    # Compute the distance of every image i to the center of every cluster k;
    # image i belongs to the cluster with the smallest distance
    for i in range(images):
        d = np.sum((M - I[i])**2, axis=1)
        guess[i] = np.argmin(d)

    # Update the centers for all clusters:
    # the new center is the mean of all images i which belong to cluster k
    for k in range(10):
        idx, _ = np.where(guess == k)
        if len(idx) > 0:
            M[k] = np.mean(I[idx], axis=0)

    # Test how well the algorithm works
    # (very similar to the first step)
    if step % 10 == 0:
        fitness = 0
        for i in range(images):
            dist = np.sum((M - I[i])**2, axis=1)
            if L[i] == np.argmin(dist):
                fitness += 1
        print("%d" % fitness, flush=True)

    step += 1
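For reference, the per-image loop in the assignment step can be expressed in one shot with NumPy broadcasting. A minimal sketch, using small random arrays as stand-ins for the MNIST images and the centers (shapes and names are illustrative, not taken from the data files):

```python
import numpy as np

rng = np.random.default_rng(0)
I = rng.standard_normal((100, 784))  # stand-in for the MNIST images
M = rng.standard_normal((10, 784))   # stand-in for the 10 cluster centers

# Squared distance of every image to every center, shape (100, 10)
d = ((I[:, None, :] - M[None, :, :]) ** 2).sum(axis=2)
guess = d.argmin(axis=1)  # cluster index per image, shape (100,)

# Update step: each center becomes the mean of its assigned images
for k in range(10):
    mask = guess == k
    if mask.any():
        M[k] = I[mask].mean(axis=0)
```

This computes the same assignments as the loop above, just without iterating over images in Python.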
The code looks quite simple, but there may be a bug somewhere. When I test it, the accuracy either drops from 10-20% down to 5-10%, or it converges to something above 30% almost instantly. I can't see any sign of learning. Could the random initialization of the cluster centers cause this behavior?
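One thing I'm unsure about in the fitness check: the index of a k-means cluster has no inherent correspondence to the digit label, so comparing L[i] directly against the argmin index may only count matches by coincidence. A minimal sketch of majority-vote mapping from cluster index to label, with synthetic stand-ins for the true labels and assignments:

```python
import numpy as np

rng = np.random.default_rng(1)
labels = rng.integers(0, 10, size=200)  # stand-in for the true labels L
guess = rng.integers(0, 10, size=200)   # stand-in for the cluster assignments

# Map each cluster index to the most common true label among its members
mapping = {}
for k in range(10):
    members = labels[guess == k]
    if len(members) > 0:
        mapping[k] = np.bincount(members).argmax()

# Accuracy after relabeling; empty clusters map to -1 and never match
accuracy = np.mean([mapping.get(g, -1) == l for g, l in zip(guess, labels)])
```

With random stand-in data the accuracy is of course near chance; the point is only the relabeling step before comparing against the ground truth.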
Thank you!