Lecture 8: Scalable k-means Clustering
1. Cluster analysis
Unsupervised learning; the output (cluster membership) is discrete
Anomaly detection:
- cluster the data points
- small clusters -> candidate outliers
- compute the distance between candidate points and non-candidate clusters
- candidate points far from all other non-candidate points -> outliers
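A minimal sketch of this procedure, assuming NumPy and scikit-learn are available (neither is named in the lecture, and the size/distance thresholds here are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))          # inliers
X = np.vstack([X, [[8.0, 8.0]]])       # one far-away point

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
labels, centres = km.labels_, km.cluster_centers_

# small clusters -> candidate outliers (2% threshold is illustrative)
sizes = np.bincount(labels, minlength=k)
candidate_clusters = np.where(sizes < 0.02 * len(X))[0]
candidates = np.isin(labels, candidate_clusters)

# distance from each candidate point to every non-candidate centre
non_cand = centres[np.setdiff1d(np.arange(k), candidate_clusters)]
d = np.linalg.norm(X[candidates, None, :] - non_cand[None, :, :], axis=2)

# candidates far from all non-candidate clusters -> flagged as outliers
outliers = X[candidates][d.min(axis=1) > 3.0]
print(outliers)
```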
2. Hierarchical vs Partitional Clustering
Hierarchical: nested clusters as a hierarchical tree
- Each node in the tree is the union of its children
- The root of the tree -> the cluster containing all data points
Partitional: non-overlapping clusters
- each data point is in exactly one cluster
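To make the contrast concrete, a small sketch using SciPy for the hierarchical tree and scikit-learn for a flat partition (both libraries are assumptions, not part of the lecture):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])

# Hierarchical: a tree of nested merges; the root contains all points
Z = linkage(X, method='average')                 # (n-1) x 4 merge history
nested = fcluster(Z, t=2, criterion='maxclust')  # cut the tree into 2 clusters

# Partitional: each point ends up in exactly one of k clusters
flat = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(nested[:5], flat[:5])
```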
3. Centre/prototype-based Clusters vs. Density-based Clusters
Centre/prototype-based clusters: each cluster is represented by a centre, which can be:
- centroid: the average of all the points in a cluster
- medoid: the most representative point of a cluster (see the NumPy sketch at the end of this section)
Density-based Clusters:
- cluster: a dense region of points separated by low-density regions from other regions of high density
- Used when the clusters are irregular/intertwined, and when noise and outliers are present
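A quick NumPy illustration of the centroid/medoid distinction mentioned above (the example data is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
cluster = rng.normal(size=(10, 2))

# centroid: coordinate-wise mean; need not be an actual data point
centroid = cluster.mean(axis=0)

# medoid: the data point minimising total distance to the other points
pairwise = np.linalg.norm(cluster[:, None, :] - cluster[None, :, :], axis=2)
medoid = cluster[pairwise.sum(axis=1).argmin()]
print(centroid, medoid)
```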
4. k-means clustering
A centre-based, partitional clustering approach
Compute the distance from every point to its nearest centre and sum the squared distances as the SSE:
SSE = sum over clusters C_i of sum over points x in C_i of d(x, c_i)^2, where c_i is the centre of C_i.
The goal is to choose the k centres that minimise the SSE.
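Computing the SSE for a given set of centres, as a minimal NumPy sketch:

```python
import numpy as np

def sse(X, centres):
    """Sum of squared distances from each point to its nearest centre."""
    # d: (n_points, k) matrix of point-to-centre distances
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()
```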
5. Lloyd's Algorithm for k-means
Start with k centres chosen uniformly at random from data points
Alternate between assigning each point to its nearest centre and recomputing the centroids, until convergence
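A minimal NumPy sketch of Lloyd's algorithm (the convergence test and empty-cluster handling are kept deliberately simple):

```python
import numpy as np

def lloyd(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # start with k centres chosen uniformly at random from the data points
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # assignment step: each point joins its nearest centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: recompute each centroid as the mean of its points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):   # converged: centroids stopped moving
            break
        centres = new
    return centres, labels
```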
Limitations:
- Many iterations to converge
- Sensitive to initialisation
- Random initialisation can place two centres in the same natural cluster -> stuck in a poor local optimum
6. k-means++
key idea: spread out the centres
steps:
- choose the first centre uniformly at random
- then repeat until k centres are chosen: sample the next centre from the data points, picking x with probability proportional to d(x, C)^2, the squared distance from x to its nearest already-chosen centre in C
limitation: initialisation is expensive, since each of the k centres requires a full pass over the data to compute the sampling distribution
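A minimal sketch of the k-means++ seeding step in NumPy (the resulting centres would then be handed to Lloyd's algorithm):

```python
import numpy as np

def kmeanspp_init(X, k, seed=0):
    rng = np.random.default_rng(seed)
    # first centre: chosen uniformly at random from the data
    centres = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # d2[i] = squared distance from X[i] to its nearest chosen centre
        d2 = np.min(np.linalg.norm(X[:, None, :] - np.array(centres)[None, :, :],
                                   axis=2) ** 2, axis=1)
        # sample the next centre with probability proportional to d(x, C)^2
        centres.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centres)
```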
7. Pre-processing and post-processing
Pre-processing:
- normalise the data
- eliminate outliers
Post-processing:
- eliminate small clusters that may represent outliers
- split "loose" clusters (clusters with relatively high SSE)
- merge clusters that are close (and have relatively low SSE)
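As an example of the normalisation step, a z-score sketch in NumPy (a common choice; the lecture does not prescribe a specific scheme):

```python
import numpy as np

def normalise(X):
    # z-score each feature so no single attribute dominates the distances
    # (assumes no feature has zero variance)
    return (X - X.mean(axis=0)) / X.std(axis=0)
```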