The idea of the k-medoids algorithm

From Wikipedia, the free encyclopedia

The k-medoids algorithm is a clustering algorithm related to the k-means algorithm and the medoidshift algorithm. Both the k-means and k-medoids algorithms are partitional (breaking the data set up into groups) and both attempt to minimize the distance between points labeled to be in a cluster and a point designated as the center of that cluster. In contrast to the k-means algorithm, k-medoids chooses data points as centers (medoids or exemplars) and works with an arbitrary matrix of distances between data points instead of the l_2 norm. The method was proposed in 1987[1] for work with the l_1 norm and other distances.

k-medoids is a classical partitioning technique of clustering that groups a data set of n objects into k clusters, with the number of clusters k assumed known a priori. A useful tool for determining k is the silhouette.
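
As a quick, hedged illustration of how the silhouette can be used to judge a choice of k, the following sketch assumes NumPy and scikit-learn are available and reuses the ten-point data set and the k = 2 partition from the PAM demonstration below; higher scores (closer to 1) indicate a better clustering.

    # Sketch: silhouette score of a k = 2 partition of the demonstration data.
    # Assumes numpy and scikit-learn are installed.
    import numpy as np
    from sklearn.metrics import silhouette_score

    X = np.array([(2, 6), (3, 4), (3, 8), (4, 7), (6, 2),
                  (6, 4), (7, 3), (7, 4), (8, 5), (7, 6)])
    labels = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])  # Cluster1 vs. Cluster2

    print(silhouette_score(X, labels, metric="manhattan"))

Repeating this for several candidate values of k and keeping the one with the highest score is the usual way the silhouette is applied.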

It is more robust to noise and outliers than k-means because it minimizes a sum of pairwise dissimilarities instead of a sum of squared Euclidean distances.

A medoid can be defined as the object of a cluster whose average dissimilarity to all the objects in the cluster is minimal, i.e. it is the most centrally located point in the cluster.
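
In code, this definition amounts to picking the cluster member with the smallest summed (equivalently, average) dissimilarity to the other members. A minimal sketch, assuming Manhattan dissimilarity (the helper names are ours, not from any library):

    # The medoid is the cluster member whose summed dissimilarity
    # to all other members is smallest.
    def manhattan(a, b):
        return sum(abs(ai - bi) for ai, bi in zip(a, b))

    def medoid(cluster):
        return min(cluster, key=lambda c: sum(manhattan(c, x) for x in cluster))

    # Example: for Cluster2 of the demonstration below, the medoid is (7, 4), i.e. c2.
    print(medoid([(7, 4), (6, 2), (6, 4), (7, 3), (8, 5), (7, 6)]))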

The most common realisation of k-medoid clustering is the Partitioning Around Medoids (PAM) algorithm, which proceeds as follows (a minimal code sketch is given after the steps):[2]

  1. Initialize: randomly select k of the n data points as the medoids
  2. Associate each data point to the closest medoid. ("Closest" here is defined using any valid distance metric, most commonly Euclidean distance, Manhattan distance or Minkowski distance.)
  3. For each medoid m
    1. For each non-medoid data point o
      1. Swap m and o and compute the total cost of the configuration
  4. Select the configuration with the lowest cost.
  5. Repeat steps 2 to 4 until there is no change in the medoids.
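
The steps above translate fairly directly into code. The following is a minimal, illustrative sketch of a PAM-style procedure in plain Python with the Manhattan distance; all names are ours and nothing here is taken from an existing library.

    import random

    def manhattan(a, b):
        return sum(abs(ai - bi) for ai, bi in zip(a, b))

    def total_cost(points, medoids, dist):
        # Step 2: assign each point to its closest medoid and sum the costs.
        return sum(min(dist(p, m) for m in medoids) for p in points)

    def pam(points, k, dist=manhattan, seed=0):
        random.seed(seed)
        medoids = random.sample(points, k)              # step 1: random initialisation
        best = total_cost(points, medoids, dist)
        improved = True
        while improved:                                 # step 5: repeat until nothing changes
            improved = False
            for i in range(k):                          # step 3: for each medoid
                for o in points:                        # step 3.1: for each non-medoid point
                    if o in medoids:
                        continue
                    candidate = medoids[:i] + [o] + medoids[i + 1:]   # swap m and o
                    cost = total_cost(points, candidate, dist)
                    if cost < best:                     # step 4: keep the cheapest configuration
                        best, medoids, improved = cost, candidate, True
        clusters = {m: [] for m in medoids}
        for p in points:                                # final assignment to the closest medoid
            clusters[min(medoids, key=lambda m: dist(p, m))].append(p)
        return medoids, clusters, best

For example, pam(data, 2) on a list of (x, y) tuples returns the chosen medoids, the induced clusters and the final configuration cost.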

Demonstration of PAM

Cluster the following data set of ten objects into two clusters, i.e. k = 2.

Consider a data set of ten objects as follows:

Figure 1.1 – distribution of the data

Object   x   y
X1       2   6
X2       3   4
X3       3   8
X4       4   7
X5       6   2
X6       6   4
X7       7   3
X8       7   4
X9       8   5
X10      7   6


Step 1

Figure 1.2 – clusters after step 1

Initialize k centers.

Let us assume c1 = (3,4) and c2 = (7,4).

So here c1 and c2 are selected as medoids.

Calculate the distances so as to associate each data object with its nearest medoid. The cost is calculated using the Manhattan distance (the Minkowski distance metric with r = 1). Each object is assigned to the medoid with the smaller cost in the tables below.

i    c1       Data object (Xi)   Cost (distance)
1    (3, 4)   (2, 6)             3
3    (3, 4)   (3, 8)             4
4    (3, 4)   (4, 7)             4
5    (3, 4)   (6, 2)             5
6    (3, 4)   (6, 4)             3
7    (3, 4)   (7, 3)             5
9    (3, 4)   (8, 5)             6
10   (3, 4)   (7, 6)             6

i    c2       Data object (Xi)   Cost (distance)
1    (7, 4)   (2, 6)             7
3    (7, 4)   (3, 8)             8
4    (7, 4)   (4, 7)             6
5    (7, 4)   (6, 2)             3
6    (7, 4)   (6, 4)             1
7    (7, 4)   (7, 3)             1
9    (7, 4)   (8, 5)             2
10   (7, 4)   (7, 6)             2

(Rows for i = 2 and i = 8 are omitted because X2 = (3,4) and X8 = (7,4) are the medoids themselves.)

Then the clusters become:

Cluster1 = {(3,4), (2,6), (3,8), (4,7)}

Cluster2 = {(7,4), (6,2), (6,4), (7,3), (8,5), (7,6)}

Since the points (2,6), (3,8) and (4,7) are closer to c1, they form one cluster, while the remaining points form the other.

So the total cost involved is 20.

The cost between any two points is found using the formula

\mbox{cost}(x,c) = \sum_{i=1}^d | x_{i} - c_{i} |

where x is any data object, c is the medoid, and d is the dimension of the objects, which in this case is 2.

The total cost is the sum of the costs of the data objects to the medoids of their clusters, so here:


\begin{align}
\mbox{total cost} & = \{\mbox{cost}((3,4),(2,6)) + \mbox{cost}((3,4),(3,8)) + \mbox{cost}((3,4),(4,7))\} \\
& \quad + \{\mbox{cost}((7,4),(6,2)) + \mbox{cost}((7,4),(6,4)) + \mbox{cost}((7,4),(7,3)) \\
& \quad + \mbox{cost}((7,4),(8,5)) + \mbox{cost}((7,4),(7,6))\} \\
& = (3 + 4 + 4) + (3 + 1 + 1 + 2 + 2) \\
& = 20
\end{align}
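
This arithmetic is easy to check mechanically. A small self-contained sketch (plain Python, names are ours) that reproduces the total cost of 20 for the medoids (3,4) and (7,4):

    def manhattan(a, b):
        return sum(abs(ai - bi) for ai, bi in zip(a, b))

    data = [(2, 6), (3, 4), (3, 8), (4, 7), (6, 2),
            (6, 4), (7, 3), (7, 4), (8, 5), (7, 6)]
    medoids = [(3, 4), (7, 4)]

    # Assign every point to its nearest medoid and accumulate the cost.
    total = sum(min(manhattan(p, m) for m in medoids) for p in data)
    print(total)  # 20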

Step 2

Figure 1.3 – clusters after step 2

Select one of the non-medoids, O′.

Let us assume O′ = (7,3).

So now the medoids are c1 = (3,4) and O′ = (7,3).

With c1 and O′ as the new medoids, calculate the total cost involved, using the formula from step 1.

i    c1       Data object (Xi)   Cost (distance)
1    (3, 4)   (2, 6)             3
3    (3, 4)   (3, 8)             4
4    (3, 4)   (4, 7)             4
5    (3, 4)   (6, 2)             5
6    (3, 4)   (6, 4)             3
8    (3, 4)   (7, 4)             4
9    (3, 4)   (8, 5)             6
10   (3, 4)   (7, 6)             6

i    O′       Data object (Xi)   Cost (distance)
1    (7, 3)   (2, 6)             8
3    (7, 3)   (3, 8)             9
4    (7, 3)   (4, 7)             7
5    (7, 3)   (6, 2)             2
6    (7, 3)   (6, 4)             2
8    (7, 3)   (7, 4)             1
9    (7, 3)   (8, 5)             3
10   (7, 3)   (7, 6)             3

(Rows for i = 2 and i = 7 are omitted because X2 = (3,4) and X7 = (7,3) are now the medoids.)


Figure 2. k-medoids versus k-means. Figures 2.1a–2.1f present a typical example of k-means converging to a local minimum; this result of k-means clustering contradicts the obvious cluster structure of the data set. In this example the k-medoids algorithm (Figures 2.2a–2.2h), with the same initial positions of the medoids (Figure 2.2a), converges to the obvious cluster structure. The small circles are data points, the four-ray stars are centroids (means), and the nine-ray stars are medoids.[3]

The total cost with this configuration is

\begin{align}
\mbox{total cost} & = 3 + 4 + 4 + 2 + 2 + 1 + 3 + 3 \\
& = 22
\end{align}

So the cost of swapping the medoid from c2 to O′ is

\begin{align}
S & = \mbox{current total cost} - \mbox{past total cost} \\
& = 22 - 20 \\
& = 2 > 0.
\end{align}

Since the swap increases the total cost, moving to O′ would be a bad idea, so the previous choice was good. Trying the other non-medoids in the same way shows that the first choice was the best, so the configuration does not change and the algorithm terminates here (i.e. there is no change in the medoids).
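
The swap test can be checked the same way; a self-contained sketch (assumptions as before) comparing the candidate configuration {(3,4), (7,3)} against the current {(3,4), (7,4)}:

    def manhattan(a, b):
        return sum(abs(ai - bi) for ai, bi in zip(a, b))

    data = [(2, 6), (3, 4), (3, 8), (4, 7), (6, 2),
            (6, 4), (7, 3), (7, 4), (8, 5), (7, 6)]

    def total_cost(medoids):
        return sum(min(manhattan(p, m) for m in medoids) for p in data)

    current = total_cost([(3, 4), (7, 4)])   # 20
    swapped = total_cost([(3, 4), (7, 3)])   # 22
    print(swapped - current)                 # S = 2 > 0, so the swap is rejected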

Some data points may shift from one cluster to another during this process, depending on their closeness to the medoids.

In some standard situations k-medoids demonstrates better performance than k-means; an example is presented in Fig. 2. The most time-consuming part of the k-medoids algorithm is the calculation of the distance matrix between objects. Recently a new algorithm for k-medoids clustering was proposed which runs no slower than k-means; it calculates the distance matrix once and uses it for finding new medoids at every iterative step.[4] A comparative study of the k-means and k-medoids algorithms was performed for normal and for uniform distributions of data points.[5] It demonstrated that, asymptotically for large data sets, the k-medoids algorithm takes less time.
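
The remark about the distance matrix can be illustrated briefly: if all pairwise dissimilarities are computed once up front, every candidate configuration can afterwards be evaluated by table look-ups alone. The sketch below shows only that caching idea (it is not the published algorithm of [4]) and assumes NumPy.

    import numpy as np

    def pairwise_manhattan(points):
        # Compute the full n x n dissimilarity matrix once.
        pts = np.asarray(points)
        return np.abs(pts[:, None, :] - pts[None, :, :]).sum(axis=-1)

    def config_cost(D, medoid_idx):
        # Cost of a configuration read off the cached matrix: each point's
        # distance to its nearest medoid, summed.
        return D[:, medoid_idx].min(axis=1).sum()

    data = [(2, 6), (3, 4), (3, 8), (4, 7), (6, 2),
            (6, 4), (7, 3), (7, 4), (8, 5), (7, 6)]
    D = pairwise_manhattan(data)
    print(config_cost(D, [1, 7]))   # medoids X2 = (3,4) and X8 = (7,4) -> 20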
