Machine Learning: The K-Means Algorithm

Clustering is defined by some notion of distance or similarity between samples: similar (nearby) samples are grouped into the same cluster, and dissimilar (distant) samples are assigned to other clusters. The basic idea of K-means is to iteratively search for a partition into k clusters such that the total error incurred when each cluster is represented by its mean is minimized.

Notation: μ_{c^{(i)}} is the mean (centroid) of the cluster to which sample i is assigned, and x^{(i)} is the i-th sample.

c^{(i)}: the cluster assignment of sample i, i.e., the index of its nearest centroid.
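
Written out in the notation above, the quantity being minimized is the standard K-means distortion objective, and each iteration alternates between the two update rules below:

J(c, \mu) = \sum_{i=1}^{n} \left\lVert x^{(i)} - \mu_{c^{(i)}} \right\rVert^2

c^{(i)} := \arg\min_{j} \lVert x^{(i)} - \mu_j \rVert^2, \qquad \mu_j := \frac{1}{|\{\, i : c^{(i)} = j \,\}|} \sum_{i \,:\, c^{(i)} = j} x^{(i)}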

Algorithm steps:
1) Randomly select K centroids (as the basis for assignment)
2) Iterate the following two steps until convergence:
   a) For each sample i, compute the cluster j it should belong to
   b) For each cluster j, recompute that cluster's centroid

K-means pseudocode

Create k points as the initial centroids (chosen at random)
While any point's cluster assignment changes:
    For every data point in the dataset:
        For every centroid:
            Compute the distance between the centroid and the data point
        Assign the data point to the cluster with the nearest centroid
    For every cluster, compute the mean of all its points and use that mean as the new centroid

Step 1: Required libraries (numpy, random, matplotlib)

numpy is used for numerical computation, random for random sampling, and matplotlib for plotting:

import numpy as np
import matplotlib.pyplot as plt
import random
# import numpy, matplotlib, and random

Step 2: Data acquisition (load the dataset)

def DataSet():
    dataSet = []   # the dataset
    with open("set") as f:
        for line in f.readlines():
            item = line.strip().split('\t')   # one tab-separated 2-D point per line
            dataSet.append([float(item[0]), float(item[1])])
    return np.array(dataSet)   # return a NumPy array so .shape and row slicing work below
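
For reference, each line of the "set" file holds one tab-separated 2-D point, in the same format as the 80-sample dataset listed at the end of this post. A minimal sketch for producing such a file (points is a hypothetical list of (x, y) pairs):

with open("set", "w") as f:
    for x, y in points:   # points: hypothetical list of (x, y) pairs
        f.write(f"{x}\t{y}\n")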

Step 3: Randomly select K centroids from the dataset

def CreatePoint(dataSet, k):
    n, m = dataSet.shape    # number of samples and number of features (2-D data here)
    point = np.zeros((k, m))
    # draw k *distinct* row indices so that no two initial centroids coincide
    for i, index in enumerate(random.sample(range(n), k)):
        point[i, :] = dataSet[index, :]
    return point
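
A quick usage check (assuming DataSet() has already returned the 80×2 array):

dataSet = DataSet()
point = CreatePoint(dataSet, 3)
print(point.shape)   # (3, 2): three 2-D centroids drawn from the data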

Step 4: The K-means algorithm (the core)

def distence(a, b):
    # Euclidean distance between two points (this helper is called by kmean below)
    return np.sqrt(np.sum((a - b) ** 2))

def kmean(dataset, point):
    flag = True             # flag: did any assignment change during this pass?
    n = dataset.shape[0]    # number of samples
    k = point.shape[0]      # number of clusters
    cluster = np.full(n, -1)   # cluster assignment of each sample (-1 = unassigned)
    while flag:
        flag = False
        # assign each sample to its nearest centroid
        for i in range(n):
            mindis = float('inf')
            minindex = 0
            for j in range(k):
                jdis = distence(dataset[i, :], point[j, :])
                if mindis > jdis:
                    mindis = jdis
                    minindex = j
            # update the assignment if it changed
            if cluster[i] != minindex:
                cluster[i] = minindex
                flag = True
        # recompute each centroid as the mean of its members
        for j in range(k):
            index = np.where(cluster == j)[0]   # indices of the samples assigned to cluster j
            if index.shape[0] > 0:              # guard against empty clusters (avoids division by zero)
                point[j, :] = np.sum(dataset[index], axis=0) / index.shape[0]
    return cluster
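
The double loop over samples and centroids can also be collapsed into one broadcasted NumPy expression; here is a sketch of the assignment step only, assuming dataset (shape n×2) and point (shape k×2) are NumPy arrays:

# distance from every sample to every centroid in one shot, shape (n, k)
dists = np.linalg.norm(dataset[:, None, :] - point[None, :, :], axis=2)
cluster = np.argmin(dists, axis=1)   # index of the nearest centroid per sample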

Step 5: Plotting

def Draw(dataset, tag):
    # diamond markers in red/green/blue/black: one color per cluster (supports k <= 4)
    for i in range(dataset.shape[0]):
        flag = tag[i]
        if flag == 0:
            plt.plot(dataset[i][0], dataset[i][1], 'Dr')
        elif flag == 1:
            plt.plot(dataset[i][0], dataset[i][1], 'Dg')
        elif flag == 2:
            plt.plot(dataset[i][0], dataset[i][1], 'Db')
        elif flag == 3:
            plt.plot(dataset[i][0], dataset[i][1], 'Dk')
    plt.show()
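
Putting the five steps together, a minimal driver (assuming the data file "set" from Step 2 exists in the working directory):

if __name__ == "__main__":
    dataSet = DataSet()
    point = CreatePoint(dataSet, 4)   # k = 4, as in the last experiment below
    cluster = kmean(dataSet, point)
    Draw(dataSet, cluster)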

Final: Clustering results

[Figure: clustering result with k = 2]

[Figure: clustering result with k = 3]

[Figure: clustering result with k = 4]

The experiments above show that the algorithm clusters this dataset well.

Dataset (80 test samples)

1.658985	4.285136
-3.453687	3.424321
4.838138	-1.151539
-5.379713	-3.362104
0.972564	2.924086
-3.567919	1.531611
0.450614	-3.302219
-3.487105	-1.724432
2.668759	1.594842
-3.156485	3.191137
3.165506	-3.999838
-2.786837	-3.099354
4.208187	2.984927
-2.123337	2.943366
0.704199	-0.479481
-0.392370	-3.963704
2.831667	1.574018
-0.790153	3.343144
2.943496	-3.357075
-3.195883	-2.283926
2.336445	2.875106
-1.786345	2.554248
2.190101	-1.906020
-3.403367	-2.778288
1.778124	3.880832
-1.688346	2.230267
2.592976	-2.054368
-4.007257	-3.207066
2.257734	3.387564
-2.679011	0.785119
0.939512	-4.023563
-3.674424	-2.261084
2.046259	2.735279
-3.189470	1.780269
4.372646	-0.822248
-2.579316	-3.497576
1.889034	5.190400
-0.798747	2.185588
2.836520	-2.658556
-3.837877	-3.253815
2.096701	3.886007
-2.709034	2.923887
3.367037	-3.184789
-2.121479	-4.232586
2.329546	3.179764
-3.284816	3.273099
3.091414	-3.815232
-3.762093	-2.432191
3.542056	2.778832
-1.736822	4.241041
2.127073	-2.983680
-4.323818	-3.938116
3.792121	5.135768
-4.786473	3.358547
2.624081	-3.260715
-4.009299	-2.978115
2.493525	1.963710
-2.513661	2.642162
1.864375	-3.176309
-3.171184	-3.572452
2.894220	2.489128
-2.562539	2.884438
3.491078	-3.947487
-2.565729	-2.012114
3.332948	3.983102
-1.616805	3.573188
2.280615	-2.559444
-2.651229	-3.103198
2.321395	3.154987
-1.685703	2.939697
3.031012	-3.620252
-4.599622	-2.185829
4.196223	1.126677
-2.133863	3.093686
4.668892	-2.562705
-2.793241	-2.149706
2.884105	3.043438
-2.967647	2.848696
4.479332	-1.764772
-4.905566	-2.911070

Reflection (differences from the earlier KNN algorithm): KNN leans toward classification, K-means toward clustering.

KNN
1. KNN is a classification algorithm.
2. Supervised learning.
3. The dataset is labeled, and the labels are taken as ground truth.
4. There is no obvious training phase; KNN is memory-based learning.
5. Meaning of K: to classify a new sample x (i.e., to determine its y), find the K data points nearest to x in the dataset; the class c that appears most often among those K points becomes x's label.

K-Means
1. K-Means is a clustering algorithm.
2. Unsupervised learning.
3. The dataset is unlabeled and unordered; only after clustering does it gain structure (unordered first, ordered afterwards).
4. There is an obvious training phase.
5. Meaning of K: K is a number fixed by hand, under the assumption that the dataset can be partitioned into K clusters; since it is set manually, it requires some prior knowledge.

Similarity: both involve, given a point, finding the points nearest to it in the dataset; that is, both use NN (Nearest Neighbor) search, which is typically implemented with a KD-tree.
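
To make the contrast concrete, here is a minimal KNN classification sketch (knn_classify is a hypothetical helper for illustration, assuming NumPy arrays data of shape (n, 2) and labels of shape (n,)):

def knn_classify(x, data, labels, K):
    # hypothetical helper: classify x by majority vote of its K nearest neighbors
    dists = np.linalg.norm(data - x, axis=1)   # distance from x to every labeled point
    nearest = np.argsort(dists)[:K]            # indices of the K nearest neighbors
    values, counts = np.unique(labels[nearest], return_counts=True)
    return values[np.argmax(counts)]           # the most common class among the K neighbors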