The C-means clustering algorithm is a type of dynamic clustering algorithm.
Dynamic clustering combines ideas from both hierarchical (agglomerative) clustering and divisive clustering.
The text below is mostly prose with only a few formulas, so it should be easy to digest.
First, the clustering criterion is the sum-of-squared-error criterion:

$$J_{c}=\sum_{j=1}^{c}\sum_{k=1}^{n_{j}}||x_{k}-m_{j}||^{2}$$
where $c$ is the number of clusters, $n_{j}$ is the number of samples in cluster $j$, and $m_{j}$ is the mean of cluster $j$.
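As a quick illustration, $J_{c}$ can be computed directly from this definition (a minimal NumPy sketch; the names below are invented for the example):

```python
import numpy as np

def sse_criterion(X, labels, centers):
    """Sum-of-squared-error criterion J_c: for each cluster j, add the
    squared Euclidean distance from every member sample to its mean m_j."""
    Jc = 0.0
    for j, m_j in enumerate(centers):
        members = X[labels == j]                          # samples x_k in cluster j
        Jc += np.sum(np.linalg.norm(members - m_j, axis=1) ** 2)
    return Jc

# Two clusters of two points each, every point at distance 1 from its mean:
X = np.array([[0., 0.], [0., 2.], [10., 0.], [10., 2.]])
labels = np.array([0, 0, 1, 1])
centers = np.array([[0., 1.], [10., 1.]])
print(sse_criterion(X, labels, centers))  # -> 4.0
```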
(1) C-means algorithm (Version 1)
1. Select $c$ initial cluster centers from the mixed sample set.
2. Compute the distance from each sample to every cluster center (Euclidean distance here: $D_{e}(x,y)=\sqrt{\sum_{i=1}^{d}|x_{i}-y_{i}|^{2}}$), and assign each sample to the cluster whose center is nearest (which is easy to understand).
3. Compute the $c$ new cluster centers (since new samples have entered the clusters); the update formula is simply the mean. (The centers can be updated after each individual sample is assigned, or once after all samples have been assigned; these are the sequential method and the batch method, respectively. The implementation of Version 1 here uses the batch method.)
4. If any cluster center differs from the centers after the previous assignment, return to step 2; otherwise the algorithm ends. (If the centers change after reassignment, the clustering is still improving, so continue; once they have converged and there is no further improvement, stop.)
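The four steps above can be sketched as follows — a minimal NumPy version of the batch method, written only to illustrate the algorithm (the function name `c_means_v1` and its parameters are invented for this sketch; it is not the MATLAB implementation given later):

```python
import numpy as np

def c_means_v1(X, c, max_iter=100, seed=0):
    """Batch C-means (Version 1): assign every sample to its nearest
    center, then recompute all centers as cluster means; stop when the
    centers no longer change."""
    rng = np.random.default_rng(seed)
    # Step 1: pick c distinct samples as the initial cluster centers.
    centers = X[rng.choice(len(X), size=c, replace=False)]
    for _ in range(max_iter):
        # Step 2: Euclidean distance from every sample to every center,
        # then nearest-center assignment.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 3: batch update -- each center becomes the mean of its
        # cluster (an empty cluster keeps its old center).
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(c)])
        # Step 4: stop once the centers have converged.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Two well-separated blobs of three points each:
X = np.array([[0., 0.], [1., 0.], [0., 1.],
              [20., 20.], [21., 20.], [20., 21.]])
labels, centers = c_means_v1(X, c=2)
```

Because the initial centers are drawn at random, the label ids can vary from run to run even when the partition itself is stable.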
(2) C-means algorithm (Version 2)
C-means algorithm (Version 2) is an improvement on Version 1.
1. Run steps 1, 2, and 3 of C-means algorithm (Version 1). (The implementation here is also batch.)
2. Compute the sum of squared errors (i.e., the criterion function), then compute:

$$\rho_{ii}=\frac{n_{i}}{n_{i}-1}||x_{k}^{(i)}-Z_{i}(I)||^{2},\quad i=1,2,\dots,c$$

$\rho_{ii}$ represents the part by which the criterion function decreases (if $x_{k}^{(i)}$ leaves cluster $i$).
$$\rho_{ij}=\frac{n_{j}}{n_{j}+1}||x_{k}^{(i)}-Z_{j}(I)||^{2},\quad j=1,2,\dots,c,\ j\ne i$$

$\rho_{ij}$ represents the part by which the criterion function increases (if $x_{k}^{(i)}$ joins cluster $j$).
Here $i$ denotes cluster $i$, $j$ denotes cluster $j$, $n$ denotes the number of samples in a cluster, $x_{k}^{(i)}$ denotes the $k$-th sample of cluster $i$, $Z$ denotes a cluster center, and $I$ denotes the iteration number.
Find the minimum value $\rho_{il}$ among the $\rho_{ij}$. If this minimum is smaller than $\rho_{ii}$, then an operation exists that decreases the clustering criterion function: move the sample $x_{k}^{(i)}$ into the cluster $w_{l}$.
3. After the move, update the cluster centers and the value of the criterion function $J_{c}$:
$$Z_{i}(I+1)=Z_{i}(I)+\frac{1}{n_{i}-1}\left[Z_{i}(I)-x_{k}^{(i)}\right]$$
$$Z_{l}(I+1)=Z_{l}(I)-\frac{1}{n_{l}+1}\left[Z_{l}(I)-x_{k}^{(i)}\right]$$
$$J_{c}(I+1)=J_{c}(I)-\left(\rho_{ii}-\rho_{il}\right)$$
4. If $J_{c}(I+1)<J_{c}(I)$, set $I=I+1$ and return to step 2; otherwise the algorithm ends.
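Steps 2 and 3 above can be sketched as one refinement pass — a minimal NumPy illustration, separate from the MATLAB implementation below; it keeps each cluster as a list of sample indices, omits the explicit $J_{c}$ bookkeeping, and the name `try_transfer` is invented here:

```python
import numpy as np

def try_transfer(X, clusters, centers):
    """One refinement pass of Version 2: for each sample x_k in cluster i,
    compute rho_ii (the decrease in J_c if x_k leaves cluster i) and rho_ij
    (the increase if x_k joins cluster j); move x_k to the cluster l with
    the smallest rho_il whenever rho_il < rho_ii, updating both centers
    with the incremental formulas above.  Returns True if any sample moved."""
    moved = False
    for i in range(len(centers)):
        k = 0
        while k < len(clusters[i]):
            n_i = len(clusters[i])
            if n_i <= 1:                     # never empty a cluster
                break
            x = X[clusters[i][k]]
            rho_ii = n_i / (n_i - 1) * np.sum((x - centers[i]) ** 2)
            # Smallest rho_il over all other clusters l != i.
            best_l, best_rho = None, np.inf
            for j in range(len(centers)):
                if j == i:
                    continue
                n_j = len(clusters[j])
                rho_ij = n_j / (n_j + 1) * np.sum((x - centers[j]) ** 2)
                if rho_ij < best_rho:
                    best_l, best_rho = j, rho_ij
            if best_rho < rho_ii:
                # Incremental center updates: Z_i(I+1) and Z_l(I+1).
                centers[i] = centers[i] + (centers[i] - x) / (n_i - 1)
                n_l = len(clusters[best_l])
                centers[best_l] = centers[best_l] - (centers[best_l] - x) / (n_l + 1)
                clusters[best_l].append(clusters[i].pop(k))
                moved = True
            else:
                k += 1
    return moved

X = np.array([[0., 0.], [0., 1.], [10., 0.], [10., 1.], [10., 2.]])
clusters = [[0, 1, 2], [3, 4]]               # sample 2 is mis-assigned
centers = [X[idx].mean(axis=0) for idx in clusters]
try_transfer(X, clusters, centers)           # moves sample 2 into cluster 1
```

Step 4 then amounts to repeating the pass until no sample moves, since every accepted transfer strictly decreases $J_{c}$ by $\rho_{ii}-\rho_{il}$.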
I put this aside for a long time and have forgotten many of the details, so let's go straight to the code:
The code classifies the IRIS dataset (MATLAB R2017a). The classification accuracy is 89.33%; the first C-means algorithm is more likely than the second to fall into a local optimum (accuracy 66.67%).
%% C-means classification of the IRIS dataset (Version 1)
%% author:mym NJUST
clear;
clc;
[number,txt,raw]=xlsread('iris.csv');
% shuffle the dataset
index=randperm(150);
rand_number=number(index,:);
kind=3;% number of clusters
prime_index=sort(randperm(150,3));
m=number(prime_index,2:5);% three randomly chosen initial cluster centers
kind1=0;% per-cluster sample counts
kind2=0;
kind3=0;
m1=zeros(3,4);
while ~isequal(m,m1)
m1=m;
for i=1:150
D1=norm(rand_number(i,2:5)-m(1,:));
D2=norm(rand_number(i,2:5)-m(2,:));
D3=norm(rand_number(i,2:5)-m(3,:));% Euclidean distance from the sample to each cluster center
vec=[D1 D2 D3];
[M,I]=min(vec);
if I==1
kind1=kind1+1;
Kone(kind1)=rand_number(i,1);
elseif I==2
kind2=kind2+1;
Ktwo(kind2)=rand_number(i,1);
else
kind3=kind3+1;
Kthree(kind3)=rand_number(i,1);% put the sample into the cluster
end
end
if kind1~=0
m(1,:)=sum(number(Kone,2:5),1)/kind1;
end
if kind2~=0
m(2,:)=sum(number(Ktwo,2:5),1)/kind2;
end
if kind3~=0
m(3,:)=sum(number(Kthree,2:5),1)/kind3;% update the cluster centers
end
kind1=0;
kind2=0;
kind3=0;
Kone_store=Kone;
Ktwo_store=Ktwo;
Kthree_store=Kthree;% classification result
clear Kone Ktwo Kthree;
end
JC=ERR(number,Kone_store,m(1,:))+ERR(number,Ktwo_store,m(2,:))+ERR(number,Kthree_store,m(3,:));
%% Compute accuracy
count=0;
count1=0;
count2=0;
count3=0;
[~,n]=size(Kone_store);
for i=1:n
if Kone_store(i)<=50
count1=count1+1;
elseif Kone_store(i)>100
count2=count2+1;
else
count3=count3+1;
end
end
flag=[count1 count2 count3];
[a,b]=max(flag);
count=count+a;
count1=0;
count2=0;
count3=0;
[~,n]=size(Ktwo_store);
for i=1:n
if Ktwo_store(i)<=50
count1=count1+1;
elseif Ktwo_store(i)>100
count2=count2+1;
else
count3=count3+1;
end
end
flag=[count1 count2 count3];
[a,b]=max(flag);
count=count+a;
count1=0;
count2=0;
count3=0;
[~,n]=size(Kthree_store);
for i=1:n
if Kthree_store(i)<=50
count1=count1+1;
elseif Kthree_store(i)>100
count2=count2+1;
else
count3=count3+1;
end
end
flag=[count1 count2 count3];
[a,b]=max(flag);
count=count+a;
acc=count/150;
%% C-means classification of the IRIS dataset (Version 2)
%% author:mym NJUST
clear;
clc;
[number,txt,raw]=xlsread('iris.csv');
% shuffle the dataset
index=randperm(150);
rand_number=number(index,:);
kind=3;% number of clusters
prime_index=randperm(150,3);
m=number(prime_index,2:5);% three randomly chosen initial cluster centers
kind1=0;% per-cluster sample counts
kind2=0;
kind3=0;
%% Step 1: initial partition into three clusters
for i=1:150
D1=norm(rand_number(i,2:5)-m(1,:));
D2=norm(rand_number(i,2:5)-m(2,:));
D3=norm(rand_number(i,2:5)-m(3,:));% Euclidean distance from the sample to each cluster center
vec=[D1 D2 D3];
[M,I]=min(vec);
if I==1
kind1=kind1+1;
Kone(kind1)=rand_number(i,1);
elseif I==2
kind2=kind2+1;
Ktwo(kind2)=rand_number(i,1);
else
kind3=kind3+1;
Kthree(kind3)=rand_number(i,1);% put the sample into the cluster
end
end
% recompute the cluster centers after assignment
if kind1~=0
m(1,:)=sum(number(Kone,2:5),1)/kind1;
end
if kind2~=0
m(2,:)=sum(number(Ktwo,2:5),1)/kind2;
end
if kind3~=0
m(3,:)=sum(number(Kthree,2:5),1)/kind3;% update the cluster centers
end
%% Refine the clusters
% compute the sum of squared errors
JC=zeros(1,150);
JC(1)=ERR(number,Kone,m(1,:))+ERR(number,Ktwo,m(2,:))+ERR(number,Kthree,m(3,:));
J_number=1;
J_last=0;
p=zeros(3,3);
while J_last~=JC(J_number)
J_last=JC(J_number);
[row1,col1]=size(Kone);
count1=1;
% refine the samples in cluster 1
while count1<=col1
p(1,1)=kind1/(kind1-1)*(norm(number(Kone(count1),2:5)-m(1,:))^2);
p(1,2)=kind2/(kind2+1)*(norm(number(Kone(count1),2:5)-m(2,:))^2);
p(1,3)=kind3/(kind3+1)*(norm(number(Kone(count1),2:5)-m(3,:))^2);
[min_value,min_index]=min(p(1,2:3));
if min_value<p(1,1)
m(1,:)=m(1,:)+1/(kind1-1)*(m(1,:)-number(Kone(count1),2:5));
if min_index==1
m(2,:)=m(2,:)-1/(kind2+1)*(m(2,:)-number(Kone(count1),2:5));
kind2=kind2+1;
Ktwo=Add(Ktwo,Kone(count1));
J_number=J_number+1;
JC(J_number)=JC(J_number-1)-(p(1,1)-p(1,2));
else
m(3,:)=m(3,:)-1/(kind3+1)*(m(3,:)-number(Kone(count1),2:5));
kind3=kind3+1;
Kthree=Add(Kthree,Kone(count1));
J_number=J_number+1;
JC(J_number)=JC(J_number-1)-(p(1,1)-p(1,3));
end
kind1=kind1-1;
Kone=Remove(Kone,count1);
col1=col1-1;
else
count1=count1+1;
J_number=J_number+1;
JC(J_number)=JC(J_number-1);
end
end
count2=1;
% refine the samples in cluster 2
[row2,col2]=size(Ktwo);
while count2<=col2
p(2,1)=kind1/(kind1+1)*(norm(number(Ktwo(count2),2:5)-m(1,:))^2);
p(2,2)=kind2/(kind2-1)*(norm(number(Ktwo(count2),2:5)-m(2,:))^2);
p(2,3)=kind3/(kind3+1)*(norm(number(Ktwo(count2),2:5)-m(3,:))^2);
[min_value,min_index]=min([p(2,1) p(2,3)]);
if min_value<p(2,2)
m(2,:)=m(2,:)+1/(kind2-1)*(m(2,:)-number(Ktwo(count2),2:5));
if min_index==1
m(1,:)=m(1,:)-1/(kind1+1)*(m(1,:)-number(Ktwo(count2),2:5));
kind1=kind1+1;
Kone=Add(Kone,Ktwo(count2));
J_number=J_number+1;
JC(J_number)=JC(J_number-1)-(p(2,2)-p(2,1));
else
m(3,:)=m(3,:)-1/(kind3+1)*(m(3,:)-number(Ktwo(count2),2:5));
kind3=kind3+1;
Kthree=Add(Kthree,Ktwo(count2));
J_number=J_number+1;
JC(J_number)=JC(J_number-1)-(p(2,2)-p(2,3));
end
kind2=kind2-1;
Ktwo=Remove(Ktwo,count2);
col2=col2-1;
else
count2=count2+1;
J_number=J_number+1;
JC(J_number)=JC(J_number-1);
end
end
% refine the samples in cluster 3
[row3,col3]=size(Kthree);
count3=1;
while count3<=col3
p(3,1)=kind1/(kind1+1)*(norm(number(Kthree(count3),2:5)-m(1,:))^2);
p(3,2)=kind2/(kind2+1)*(norm(number(Kthree(count3),2:5)-m(2,:))^2);
p(3,3)=kind3/(kind3-1)*(norm(number(Kthree(count3),2:5)-m(3,:))^2);
[min_value,min_index]=min([p(3,1) p(3,2)]);
if min_value<p(3,3)
m(3,:)=m(3,:)+1/(kind3-1)*(m(3,:)-number(Kthree(count3),2:5));
if min_index==1
m(1,:)=m(1,:)-1/(kind1+1)*(m(1,:)-number(Kthree(count3),2:5));
kind1=kind1+1;
Kone=Add(Kone,Kthree(count3));
J_number=J_number+1;
JC(J_number)=JC(J_number-1)-(p(3,3)-p(3,1));
else
m(2,:)=m(2,:)-1/(kind2+1)*(m(2,:)-number(Kthree(count3),2:5));
kind2=kind2+1;
Ktwo=Add(Ktwo,Kthree(count3));
J_number=J_number+1;
JC(J_number)=JC(J_number-1)-(p(3,3)-p(3,2));
end
kind3=kind3-1;
Kthree=Remove(Kthree,count3);
col3=col3-1;
else
count3=count3+1;
J_number=J_number+1;
JC(J_number)=JC(J_number-1);
end
end
end
%% Compute accuracy
count=0;
count1=0;
count2=0;
count3=0;
[~,n]=size(Kone);
for i=1:n
if Kone(i)<=50
count1=count1+1;
elseif Kone(i)>100
count2=count2+1;
else
count3=count3+1;
end
end
flag=[count1 count2 count3];
[a,b]=max(flag);
count=count+a;
count1=0;
count2=0;
count3=0;
[~,n]=size(Ktwo);
for i=1:n
if Ktwo(i)<=50
count1=count1+1;
elseif Ktwo(i)>100
count2=count2+1;
else
count3=count3+1;
end
end
flag=[count1 count2 count3];
[a,b]=max(flag);
count=count+a;
count1=0;
count2=0;
count3=0;
[~,n]=size(Kthree);
for i=1:n
if Kthree(i)<=50
count1=count1+1;
elseif Kthree(i)>100
count2=count2+1;
else
count3=count3+1;
end
end
flag=[count1 count2 count3];
[a,b]=max(flag);
count=count+a;
acc=count/150;
function [ Knumber_out ] = Add(Knumber,add_number)
% Inputs: the cluster vector and the number to append
% Output: the cluster vector after appending
Knumber_out=[Knumber add_number];
end
function [Knumber_out] = Remove(Knumber,Kindex)
% Inputs: the cluster vector and the index of the element to remove
% Output: the cluster vector after removal
[row,col]=size(Knumber);
Knumber_out=[Knumber(1:Kindex-1) Knumber(Kindex+1:col)];
end
function [J] = ERR(number,index,m)
% Inputs: the data samples, the indices of the cluster's samples, and the cluster center
% Output: the sum of squared errors
[row,n]=size(index);
J=0;
for i=1:n
J=J+norm(number(index(i),2:5)-m)^2;
end
end