Study directory for the Andrew Ng Machine Learning series → series content summary.
1. Unsupervised Learning: Introduction
In typical supervised learning we have a labeled training set, and the goal is to find a decision boundary that separates the positive and negative examples. In unsupervised learning, by contrast, we feed a set of unlabeled training data into an algorithm and ask the algorithm to find structure inherent in the data. The unlabeled dataset in the figure below appears to split into two separate sets of points (called clusters); an algorithm that finds such groupings is called a clustering algorithm.
- Supervised learning
  Labeled training set: $\left \{ (x^{(1)},y^{(1)}),(x^{(2)},y^{(2)}),(x^{(3)},y^{(3)}),...,(x^{(m)},y^{(m)}) \right \}$
- Unsupervised learning
  Unlabeled training set: $\left \{ x^{(1)},x^{(2)},x^{(3)},...,x^{(m)} \right \}$
Clustering algorithms are used in market segmentation, social network analysis, organizing computer clusters, astronomical data analysis, and other areas.
2. The K-Means Algorithm
K-means is an iterative clustering algorithm: it takes an unlabeled dataset and groups the data into clusters. Concretely, to partition the data into $K$ groups, it first picks $K$ objects at random as the initial cluster centroids, then computes the distance from every object to each centroid and assigns each object to the centroid closest to it.
Each iteration of K-means does two things: 1. cluster assignment; 2. moving the cluster centroids.
Suppose we have an unlabeled dataset that we want to split into two clusters, and we run K-means on it. As shown in the figure below, we first generate two points at random; these are the cluster centroids. We then repeatedly assign points by distance and move the centroids, until the centroids no longer change.
The K-means algorithm:

Input:
- $K$ (number of clusters)
- Training set $\left \{ x^{(1)}, x^{(2)},...,x^{(m)} \right \}$, $x^{(i)} \in \mathbb{R}^{n}$ (drop the $x_0 = 1$ convention)

Randomly initialize $K$ cluster centroids $\mu_1, \mu_2, ..., \mu_K \in \mathbb{R}^{n}$

Repeat {
    for $i$ = 1 to $m$
        $c^{(i)}$ := index (from 1 to $K$) of the cluster centroid closest to $x^{(i)}$
    for $k$ = 1 to $K$
        $\mu_k$ := average (mean) of the points assigned to cluster $k$
}
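The two loops above translate almost line-for-line into NumPy. A minimal sketch, with function and variable names of my own choosing (not from the course):

```python
import numpy as np

def k_means(X, K, n_iters=100, rng=None):
    """Minimal K-means sketch. X is an (m, n) float array, K the cluster count.

    Returns (c, centroids): c[i] is the cluster index of X[i]."""
    rng = np.random.default_rng(rng)
    m = X.shape[0]
    # Randomly initialize centroids as K distinct training examples.
    centroids = X[rng.choice(m, size=K, replace=False)]
    for _ in range(n_iters):
        # Cluster assignment: c[i] = index of the centroid closest to X[i].
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        c = dists.argmin(axis=1)
        # Move centroids: mean of the points assigned to each cluster.
        for k in range(K):
            members = X[c == k]
            if len(members) > 0:
                centroids[k] = members.mean(axis=0)
            # An empty cluster's centroid is left in place here; it could
            # instead be removed or re-initialized at random.
    return c, centroids
```

This runs a fixed number of iterations for simplicity; a fuller implementation would stop once the assignments stop changing.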
In the algorithm above, the first for-loop is the cluster-assignment step: $c^{(i)}$ stores the index of the cluster centroid closest to $x^{(i)}$. The second for-loop is the move-centroid step: $\mu_k$ is set to the mean of all points currently assigned to cluster $k$, which becomes the new position of the $k$-th centroid.
Since $\mu_k$ is the mean of a cluster, what happens if some centroid ends up with no points assigned to it? The most common practice is simply to remove that centroid, but doing so leaves $K-1$ clusters instead of $K$. If exactly $K$ clusters are needed, the empty centroid can instead be re-initialized at random.
3. Optimization Objective
The K-means minimization problem is to minimize the sum of squared distances between every data point and the cluster centroid it is assigned to. The K-means cost function (also called the distortion function) is therefore
$$J(c^{(1)},...,c^{(m)},\mu_{1},...,\mu_{K})=\frac{1}{m}\sum_{i=1}^{m}\left \| x^{(i)}-\mu _{c^{(i)}} \right \|^{2}$$
where $\mu_{c^{(i)}}$ denotes the cluster centroid closest to $x^{(i)}$, i.e. the one to which $x^{(i)}$ has been assigned.
Our optimization objective is to find the $c^{(1)},...,c^{(m)}$ and $\mu_{1},...,\mu_{K}$ that minimize the cost function:
$$\min_{\begin{matrix} c^{(1)},...,c^{(m)}\\ \mu_{1},...,\mu_{K} \end{matrix}}J(c^{(1)},...,c^{(m)},\mu_{1},...,\mu_{K})$$
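Given an assignment vector and a centroid matrix, the distortion $J$ is a one-liner. A small sketch, assuming NumPy arrays (the function name is mine, not from the course):

```python
import numpy as np

def distortion(X, c, centroids):
    """J(c, mu): mean squared Euclidean distance from each point x^(i)
    to its assigned centroid mu_{c^(i)}."""
    diffs = X - centroids[c]          # row i is x^(i) - mu_{c^(i)}
    return float(np.mean(np.sum(diffs ** 2, axis=1)))
```

Note that the cluster-assignment step minimizes $J$ with respect to the $c^{(i)}$ while holding the centroids fixed, and the move-centroid step minimizes $J$ with respect to the $\mu_k$, so $J$ never increases across iterations.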
4. Random Initialization
How should the K-means algorithm be initialized? And how can it avoid bad local optima?
Random initialization:
For $i$ = 1 to 100 {
    Randomly initialize K-means
    Run K-means. Get $c^{(1)},...,c^{(m)},\mu_{1},...,\mu_{K}$
    Compute the cost function (distortion) $J(c^{(1)},...,c^{(m)},\mu_{1},...,\mu_{K})$
}
Pick the clustering that gave the lowest cost $J(c^{(1)},...,c^{(m)},\mu_{1},...,\mu_{K})$
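The restart loop above can be sketched directly in NumPy. This is a self-contained illustration under my own naming and default choices (100 restarts, a fixed iteration budget), not the course's reference code:

```python
import numpy as np

def run_k_means_once(X, K, n_iters, rng):
    """One K-means run from one random initialization.

    Returns (assignments, centroids, J) for that run."""
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(n_iters):
        # Cluster assignment, then centroid update (empty clusters stay put).
        c = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        for k in range(K):
            if np.any(c == k):
                centroids[k] = X[c == k].mean(axis=0)
    # Final assignment and distortion J for this run.
    c = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
    J = float(np.mean(np.sum((X - centroids[c]) ** 2, axis=1)))
    return c, centroids, J

def best_of_restarts(X, K, n_restarts=100, n_iters=50, seed=0):
    """Run K-means n_restarts times; keep the clustering with the lowest J."""
    rng = np.random.default_rng(seed)
    return min((run_k_means_once(X, K, n_iters, rng) for _ in range(n_restarts)),
               key=lambda run: run[2])
```

Each restart draws a fresh random initialization from the shared generator, so the runs explore different local optima and the `min` over distortion implements the "pick the lowest-cost clustering" step.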
In the loop above the number of restarts is set to 100; in practice a value between 50 and 1000 is typical, and we should choose $K < m$. For $K = 2$ to $10$, multiple random initializations usually find a fairly good local optimum; when $K$ is much larger than 10, additional random initializations are unlikely to improve the clustering much.
"Local optimum" here means a local optimum of the cost (distortion) function $J$. To avoid poor local optima, we can try many random initializations: initialize and run K-means many times, keeping the best result, so that we end up with a sufficiently good solution, the best local (and ideally global) optimum we can find.
5. Choosing the Number of Clusters
Often the best way to choose the number of clusters is to pick it by hand for each particular problem, choosing the number that best serves the purpose of the clustering.
The elbow method:
Most of the time, the number of clusters $K$ is still chosen manually, from domain input or experience. One method worth trying is the "elbow method": plot the distortion $J$ against $K$ and look for the point where the curve's rapid decrease levels off. It cannot be expected to work every time, however. A better way to choose $K$ is to ask what purpose running K-means serves, and then pick the value of $K$ that best serves that downstream purpose.
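The course does not prescribe a formula for locating the elbow; one simple heuristic of my own (an assumption, not from the lecture) is to pick the $K$ where the curve $J(K)$ bends most sharply, i.e. where its discrete second difference is largest:

```python
def elbow_k(distortions):
    """Heuristic elbow pick for a distortion curve J(K), K = 1, 2, ...

    distortions[k-1] holds J for K = k. Returns the K whose discrete
    second difference J(K-1) - 2*J(K) + J(K+1) is largest, i.e. the
    sharpest bend in the curve. This is one common heuristic, not a rule."""
    bends = [distortions[k - 1] - 2 * distortions[k] + distortions[k + 1]
             for k in range(1, len(distortions) - 1)]
    return 2 + max(range(len(bends)), key=bends.__getitem__)
```

For a curve like $J = 100, 60, 15, 12, 10, 9$ for $K = 1..6$, the bend is sharpest at $K = 3$, which matches what the eye picks out. When the curve decreases smoothly with no clear bend, the heuristic (like the elbow method itself) gives no reliable answer.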