Spark MLlib Test Case

Contents

I. Clustering Concepts and the Dataset

1. Clustering

2. The Dataset

II. Experiment Steps

1. Prepare the Data

(1) Create a file to hold the data

(2) Copy the data into the file

2. Launch spark-shell in local mode

3. Process the data with MLlib in spark-shell

(1) Import the required packages

(2) Read and parse the data

(3) Train a model: cluster the data into 2 clusters with 5 iterations

(4) Print the model's cluster centers (either method works)

(5) Use predict() to determine each sample's cluster

(6) Evaluate the model with the within-set sum of squared errors (a measure of clustering quality)

(7) Test the model on a single data point

The complete session


Note: This article is not original. Original post: "A hands-on iris data clustering project based on Spark MLlib".

I. Clustering Concepts and the Dataset

1. Clustering

Clustering (cluster analysis) is the task of partitioning a set of objects into clusters so that objects within the same cluster are as similar as possible and objects in different clusters are as dissimilar as possible. Clustering is an important part of machine learning (or, more accurately, data mining). Besides the simplest algorithm, K-Means, common approaches include hierarchical methods (CURE, CHAMELEON, etc.) and grid-based methods (STING, WaveCluster, etc.).

A more formal definition of the clustering problem: given a set D of elements, each with n observable attributes, use some algorithm to partition D into k subsets such that the dissimilarity between elements within each subset is as low as possible, while the dissimilarity between elements of different subsets is as high as possible. Each subset is called a cluster.

K-Means clustering is unsupervised learning. Regression, naive Bayes, SVM, and similar methods all work with labeled examples, that is, each training sample comes with a class label y. In clustering, the samples have only features x and no label y. Imagine, for example, that the stars in the universe are represented as a set of points in three-dimensional space. The goal of clustering is to discover each sample's latent class y and group samples of the same class together. For the stars, the result would be star clusters: points within a cluster are relatively close to each other, while stars in different clusters are far apart.

Clustering also differs from classification. Classification is example-based learning: the classes must be defined before classifying, and every element is asserted to map to exactly one class. Clustering is observation-based learning: before clustering, the classes, and even the number of classes, may be unknown, which makes it a form of unsupervised learning. Clustering is now widely applied in statistics, biology, database technology, marketing, and other fields, and a great many algorithms exist.

K-Means is an iterative reassignment clustering algorithm based on squared error. Its steps are:

1. Randomly choose K center points;
2. Compute the distance from every point to the K centers and assign each point to the cluster of its nearest center;
3. Recompute each of the K cluster centers as the simple arithmetic mean of its members;
4. Repeat steps 2 and 3 until the cluster assignments no longer change or the maximum number of iterations is reached;
5. Output the result.

The quality of a K-Means result depends on the choice of initial centers; the algorithm can easily get stuck in a local optimum, there is no principled rule for choosing K, it is sensitive to outliers, it only handles numeric attributes, and the resulting clusters may be unbalanced.


How KMeans works

KMeans is an iterative clustering algorithm of the partitioning type: it first creates K partitions and then iteratively moves samples between partitions to improve the quality of the final clustering. The process is roughly:

1. Given k, pick k sample points as the initial partition centers;
2. Compute the distance from every sample point to each partition center and assign every point to its nearest center;
3. Recompute each partition's center as the mean of its sample points;
Repeat steps 2 and 3 until the maximum number of iterations is reached or the centers move less than a predefined threshold.
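To make the loop above concrete, here is a minimal sketch of these steps in plain Scala (no Spark), assuming squared Euclidean distance; the names (KMeansSketch, Point, closest, step) and the toy data are purely illustrative, and empty clusters are simply dropped for brevity:

object KMeansSketch {
  type Point = Array[Double]

  // squared Euclidean distance between two points
  def distSq(a: Point, b: Point): Double =
    a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

  // index of the nearest center for a sample (step 2)
  def closest(p: Point, centers: Seq[Point]): Int =
    centers.indices.minBy(i => distSq(p, centers(i)))

  // one reassignment + re-averaging pass (steps 2-3)
  def step(points: Seq[Point], centers: Seq[Point]): Seq[Point] =
    points.groupBy(p => closest(p, centers)).values.toSeq.map { members =>
      members.head.indices.map(d => members.map(_(d)).sum / members.size).toArray
    }

  def main(args: Array[String]): Unit = {
    val data: Seq[Point] =
      Seq(Array(1.0, 1.0), Array(1.2, 0.8), Array(8.0, 8.0), Array(7.8, 8.2))
    var centers: Seq[Point] = Seq(data(0), data(2)) // step 1: pick k = 2 initial centers
    for (_ <- 1 to 5) centers = step(data, centers) // repeat steps 2-3
    centers.foreach(c => println(c.mkString("[", ",", "]")))
  }
}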

2. The Dataset

The experiment uses the iris dataset, whose features describe iris flowers. The dataset contains 150 records in 3 classes, 50 records per class, and each record has 4 attributes; it is a very common test and training set in data mining and classification. For this clustering experiment, only the 4 attribute values are kept and the class labels are discarded.

The dataset covers three iris species, Setosa, Versicolour, and Virginica, with 50 sample records collected for each species, 150 in total.

The dataset has 4 attributes: sepal length, sepal width, petal length, and petal width. Petals are familiar enough, but what is a sepal? The sepal is the green, leaf-like covering outside the corolla that protects the bud before the flower opens. All four attributes are numeric, measured in cm, and none has missing values. The fields are:

  • sepal length
  • sepal width
  • petal length
  • petal width
  • Species (class label): Setosa, Versicolour, or Virginica

II. Experiment Steps

1. Prepare the Data

(1) Create a file to hold the data

vi iris.txt

(2) Copy the data into the file

The data is as follows:

5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
4.6,3.4,1.4,0.3,Iris-setosa
5.0,3.4,1.5,0.2,Iris-setosa
4.4,2.9,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
5.4,3.7,1.5,0.2,Iris-setosa
4.8,3.4,1.6,0.2,Iris-setosa
4.8,3.0,1.4,0.1,Iris-setosa
4.3,3.0,1.1,0.1,Iris-setosa
5.8,4.0,1.2,0.2,Iris-setosa
5.7,4.4,1.5,0.4,Iris-setosa
5.4,3.9,1.3,0.4,Iris-setosa
5.1,3.5,1.4,0.3,Iris-setosa
5.7,3.8,1.7,0.3,Iris-setosa
5.1,3.8,1.5,0.3,Iris-setosa
5.4,3.4,1.7,0.2,Iris-setosa
5.1,3.7,1.5,0.4,Iris-setosa
4.6,3.6,1.0,0.2,Iris-setosa
5.1,3.3,1.7,0.5,Iris-setosa
4.8,3.4,1.9,0.2,Iris-setosa
5.0,3.0,1.6,0.2,Iris-setosa
5.0,3.4,1.6,0.4,Iris-setosa
5.2,3.5,1.5,0.2,Iris-setosa
5.2,3.4,1.4,0.2,Iris-setosa
4.7,3.2,1.6,0.2,Iris-setosa
4.8,3.1,1.6,0.2,Iris-setosa
5.4,3.4,1.5,0.4,Iris-setosa
5.2,4.1,1.5,0.1,Iris-setosa
5.5,4.2,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
5.0,3.2,1.2,0.2,Iris-setosa
5.5,3.5,1.3,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
4.4,3.0,1.3,0.2,Iris-setosa
5.1,3.4,1.5,0.2,Iris-setosa
5.0,3.5,1.3,0.3,Iris-setosa
4.5,2.3,1.3,0.3,Iris-setosa
4.4,3.2,1.3,0.2,Iris-setosa
5.0,3.5,1.6,0.6,Iris-setosa
5.1,3.8,1.9,0.4,Iris-setosa
4.8,3.0,1.4,0.3,Iris-setosa
5.1,3.8,1.6,0.2,Iris-setosa
4.6,3.2,1.4,0.2,Iris-setosa
5.3,3.7,1.5,0.2,Iris-setosa
5.0,3.3,1.4,0.2,Iris-setosa
7.0,3.2,4.7,1.4,Iris-versicolor
6.4,3.2,4.5,1.5,Iris-versicolor
6.9,3.1,4.9,1.5,Iris-versicolor
5.5,2.3,4.0,1.3,Iris-versicolor
6.5,2.8,4.6,1.5,Iris-versicolor
5.7,2.8,4.5,1.3,Iris-versicolor
6.3,3.3,4.7,1.6,Iris-versicolor
4.9,2.4,3.3,1.0,Iris-versicolor
6.6,2.9,4.6,1.3,Iris-versicolor
5.2,2.7,3.9,1.4,Iris-versicolor
5.0,2.0,3.5,1.0,Iris-versicolor
5.9,3.0,4.2,1.5,Iris-versicolor
6.0,2.2,4.0,1.0,Iris-versicolor
6.1,2.9,4.7,1.4,Iris-versicolor
5.6,2.9,3.6,1.3,Iris-versicolor
6.7,3.1,4.4,1.4,Iris-versicolor
5.6,3.0,4.5,1.5,Iris-versicolor
5.8,2.7,4.1,1.0,Iris-versicolor
6.2,2.2,4.5,1.5,Iris-versicolor
5.6,2.5,3.9,1.1,Iris-versicolor
5.9,3.2,4.8,1.8,Iris-versicolor
6.1,2.8,4.0,1.3,Iris-versicolor
6.3,2.5,4.9,1.5,Iris-versicolor
6.1,2.8,4.7,1.2,Iris-versicolor
6.4,2.9,4.3,1.3,Iris-versicolor
6.6,3.0,4.4,1.4,Iris-versicolor
6.8,2.8,4.8,1.4,Iris-versicolor
6.7,3.0,5.0,1.7,Iris-versicolor
6.0,2.9,4.5,1.5,Iris-versicolor
5.7,2.6,3.5,1.0,Iris-versicolor
5.5,2.4,3.8,1.1,Iris-versicolor
5.5,2.4,3.7,1.0,Iris-versicolor
5.8,2.7,3.9,1.2,Iris-versicolor
6.0,2.7,5.1,1.6,Iris-versicolor
5.4,3.0,4.5,1.5,Iris-versicolor
6.0,3.4,4.5,1.6,Iris-versicolor
6.7,3.1,4.7,1.5,Iris-versicolor
6.3,2.3,4.4,1.3,Iris-versicolor
5.6,3.0,4.1,1.3,Iris-versicolor
5.5,2.5,4.0,1.3,Iris-versicolor
5.5,2.6,4.4,1.2,Iris-versicolor
6.1,3.0,4.6,1.4,Iris-versicolor
5.8,2.6,4.0,1.2,Iris-versicolor
5.0,2.3,3.3,1.0,Iris-versicolor
5.6,2.7,4.2,1.3,Iris-versicolor
5.7,3.0,4.2,1.2,Iris-versicolor
5.7,2.9,4.2,1.3,Iris-versicolor
6.2,2.9,4.3,1.3,Iris-versicolor
5.1,2.5,3.0,1.1,Iris-versicolor
5.7,2.8,4.1,1.3,Iris-versicolor
6.3,3.3,6.0,2.5,Iris-virginica
5.8,2.7,5.1,1.9,Iris-virginica
7.1,3.0,5.9,2.1,Iris-virginica
6.3,2.9,5.6,1.8,Iris-virginica
6.5,3.0,5.8,2.2,Iris-virginica
7.6,3.0,6.6,2.1,Iris-virginica
4.9,2.5,4.5,1.7,Iris-virginica
7.3,2.9,6.3,1.8,Iris-virginica
6.7,2.5,5.8,1.8,Iris-virginica
7.2,3.6,6.1,2.5,Iris-virginica
6.5,3.2,5.1,2.0,Iris-virginica
6.4,2.7,5.3,1.9,Iris-virginica
6.8,3.0,5.5,2.1,Iris-virginica
5.7,2.5,5.0,2.0,Iris-virginica
5.8,2.8,5.1,2.4,Iris-virginica
6.4,3.2,5.3,2.3,Iris-virginica
6.5,3.0,5.5,1.8,Iris-virginica
7.7,3.8,6.7,2.2,Iris-virginica
7.7,2.6,6.9,2.3,Iris-virginica
6.0,2.2,5.0,1.5,Iris-virginica
6.9,3.2,5.7,2.3,Iris-virginica
5.6,2.8,4.9,2.0,Iris-virginica
7.7,2.8,6.7,2.0,Iris-virginica
6.3,2.7,4.9,1.8,Iris-virginica
6.7,3.3,5.7,2.1,Iris-virginica
7.2,3.2,6.0,1.8,Iris-virginica
6.2,2.8,4.8,1.8,Iris-virginica
6.1,3.0,4.9,1.8,Iris-virginica
6.4,2.8,5.6,2.1,Iris-virginica
7.2,3.0,5.8,1.6,Iris-virginica
7.4,2.8,6.1,1.9,Iris-virginica
7.9,3.8,6.4,2.0,Iris-virginica
6.4,2.8,5.6,2.2,Iris-virginica
6.3,2.8,5.1,1.5,Iris-virginica
6.1,2.6,5.6,1.4,Iris-virginica
7.7,3.0,6.1,2.3,Iris-virginica
6.3,3.4,5.6,2.4,Iris-virginica
6.4,3.1,5.5,1.8,Iris-virginica
6.0,3.0,4.8,1.8,Iris-virginica
6.9,3.1,5.4,2.1,Iris-virginica
6.7,3.1,5.6,2.4,Iris-virginica
6.9,3.1,5.1,2.3,Iris-virginica
5.8,2.7,5.1,1.9,Iris-virginica
6.8,3.2,5.9,2.3,Iris-virginica
6.7,3.3,5.7,2.5,Iris-virginica
6.7,3.0,5.2,2.3,Iris-virginica
6.3,2.5,5.0,1.9,Iris-virginica
6.5,3.0,5.2,2.0,Iris-virginica
6.2,3.4,5.4,2.3,Iris-virginica

2. Launch spark-shell in local mode

[WBQ@westgisB068 machinelearn]$ spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://westgisB068:4040
Spark context available as 'sc' (master = local[*], app id = local-1686624134091).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.3.2
      /_/

Using Scala version 2.12.15 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_271)
Type in expressions to have them evaluated.
Type :help for more information.

scala>

3. Process the data with MLlib in spark-shell

(1) Import the required packages

scala> import org.apache.spark.{SparkConf,SparkContext}
import org.apache.spark.{SparkConf, SparkContext}

scala> import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.Vectors

scala> import org.apache.spark.mllib.clustering.{KMeans,KMeansModel}
import org.apache.spark.mllib.clustering.{KMeans, KMeansModel}

(2) Read and parse the data

scala> val rawData = sc.textFile("file:///home/WBQ/code/machinelearn/iris.txt")
rawData: org.apache.spark.rdd.RDD[String] = file:///home/WBQ/code/machinelearn/iris.txt MapPartitionsRDD[1] at textFile at <console>:26

scala> val trainingData = rawData.map(line=>{Vectors.dense(line.split(",").filter(p =>p.matches("\\d*(\\.?)\\d*")).map(_.toDouble))}).cache()
trainingData: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector] = MapPartitionsRDD[2] at map at <console>:26
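
The regex \\d*(\\.?)\\d* keeps only purely numeric tokens, so the trailing species label is filtered out before the remaining four fields are packed into a dense vector. An equivalent formulation (a sketch, assuming the label is always the fifth comma-separated field) drops the label by position instead:

// alternative parsing: drop the label column by position rather than by regex
val trainingData = rawData
  .map(line => Vectors.dense(line.split(",").take(4).map(_.toDouble)))
  .cache()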

scala> rawData.collect()
res0: Array[String] = Array(5.1,3.5,1.4,0.2,Iris-setosa, 4.9,3.0,1.4,0.2,Iris-setosa, 4.7,3.2,1.3,0.2,Iris-setosa, 4.6,3.1,1.5,0.2,Iris-setosa, 5.0,3.6,1.4,0.2,Iris-setosa, 5.4,3.9,1.7,0.4,Iris-setosa, 4.6,3.4,1.4,0.3,Iris-setosa, 5.0,3.4,1.5,0.2,Iris-setosa, 4.4,2.9,1.4,0.2,Iris-setosa, 4.9,3.1,1.5,0.1,Iris-setosa, 5.4,3.7,1.5,0.2,Iris-setosa, 4.8,3.4,1.6,0.2,Iris-setosa, 4.8,3.0,1.4,0.1,Iris-setosa, 4.3,3.0,1.1,0.1,Iris-setosa, 5.8,4.0,1.2,0.2,Iris-setosa, 5.7,4.4,1.5,0.4,Iris-setosa, 5.4,3.9,1.3,0.4,Iris-setosa, 5.1,3.5,1.4,0.3,Iris-setosa, 5.7,3.8,1.7,0.3,Iris-setosa, 5.1,3.8,1.5,0.3,Iris-setosa, 5.4,3.4,1.7,0.2,Iris-setosa, 5.1,3.7,1.5,0.4,Iris-setosa, 4.6,3.6,1.0,0.2,Iris-setosa, 5.1,3.3,1.7,0.5,Iris-setosa, 4.8,3.4,1.9,0.2,Iris-setosa, 5.0,3.0,1.6,0.2,Ir...

scala> trainingData.collect().foreach{println}
[5.1,3.5,1.4,0.2]
[4.9,3.0,1.4,0.2]
[4.7,3.2,1.3,0.2]
[4.6,3.1,1.5,0.2]
[5.0,3.6,1.4,0.2]
[5.4,3.9,1.7,0.4]
[4.6,3.4,1.4,0.3]
[5.0,3.4,1.5,0.2]
[4.4,2.9,1.4,0.2]
[4.9,3.1,1.5,0.1]
[5.4,3.7,1.5,0.2]
[4.8,3.4,1.6,0.2]
[4.8,3.0,1.4,0.1]
[4.3,3.0,1.1,0.1]
[5.8,4.0,1.2,0.2]
[5.7,4.4,1.5,0.4]
[5.4,3.9,1.3,0.4]
[5.1,3.5,1.4,0.3]
[5.7,3.8,1.7,0.3]
[5.1,3.8,1.5,0.3]
[5.4,3.4,1.7,0.2]
[5.1,3.7,1.5,0.4]
[4.6,3.6,1.0,0.2]
[5.1,3.3,1.7,0.5]
[4.8,3.4,1.9,0.2]
[5.0,3.0,1.6,0.2]
[5.0,3.4,1.6,0.4]
[5.2,3.5,1.5,0.2]
[5.2,3.4,1.4,0.2]
[4.7,3.2,1.6,0.2]
[4.8,3.1,1.6,0.2]
[5.4,3.4,1.5,0.4]
[5.2,4.1,1.5,0.1]
[5.5,4.2,1.4,0.2]
[4.9,3.1,1.5,0.1]
[5.0,3.2,1.2,0.2]
[5.5,3.5,1.3,0.2]
[4.9,3.1,1.5,0.1]
[4.4,3.0,1.3,0.2]
[5.1,3.4,1.5,0.2]
[5.0,3.5,1.3,0.3]
[4.5,2.3,1.3,0.3]
[4.4,3.2,1.3,0.2]
[5.0,3.5,1.6,0.6]
[5.1,3.8,1.9,0.4]
[4.8,3.0,1.4,0.3]
[5.1,3.8,1.6,0.2]
[4.6,3.2,1.4,0.2]
[5.3,3.7,1.5,0.2]
[5.0,3.3,1.4,0.2]
[7.0,3.2,4.7,1.4]
[6.4,3.2,4.5,1.5]
[6.9,3.1,4.9,1.5]
[5.5,2.3,4.0,1.3]
[6.5,2.8,4.6,1.5]
[5.7,2.8,4.5,1.3]
[6.3,3.3,4.7,1.6]
[4.9,2.4,3.3,1.0]
[6.6,2.9,4.6,1.3]
[5.2,2.7,3.9,1.4]
[5.0,2.0,3.5,1.0]
[5.9,3.0,4.2,1.5]
[6.0,2.2,4.0,1.0]
[6.1,2.9,4.7,1.4]
[5.6,2.9,3.6,1.3]
[6.7,3.1,4.4,1.4]
[5.6,3.0,4.5,1.5]
[5.8,2.7,4.1,1.0]
[6.2,2.2,4.5,1.5]
[5.6,2.5,3.9,1.1]
[5.9,3.2,4.8,1.8]
[6.1,2.8,4.0,1.3]
[6.3,2.5,4.9,1.5]
[6.1,2.8,4.7,1.2]
[6.4,2.9,4.3,1.3]
[6.6,3.0,4.4,1.4]
[6.8,2.8,4.8,1.4]
[6.7,3.0,5.0,1.7]
[6.0,2.9,4.5,1.5]
[5.7,2.6,3.5,1.0]
[5.5,2.4,3.8,1.1]
[5.5,2.4,3.7,1.0]
[5.8,2.7,3.9,1.2]
[6.0,2.7,5.1,1.6]
[5.4,3.0,4.5,1.5]
[6.0,3.4,4.5,1.6]
[6.7,3.1,4.7,1.5]
[6.3,2.3,4.4,1.3]
[5.6,3.0,4.1,1.3]
[5.5,2.5,4.0,1.3]
[5.5,2.6,4.4,1.2]
[6.1,3.0,4.6,1.4]
[5.8,2.6,4.0,1.2]
[5.0,2.3,3.3,1.0]
[5.6,2.7,4.2,1.3]
[5.7,3.0,4.2,1.2]
[5.7,2.9,4.2,1.3]
[6.2,2.9,4.3,1.3]
[5.1,2.5,3.0,1.1]
[5.7,2.8,4.1,1.3]
[6.3,3.3,6.0,2.5]
[5.8,2.7,5.1,1.9]
[7.1,3.0,5.9,2.1]
[6.3,2.9,5.6,1.8]
[6.5,3.0,5.8,2.2]
[7.6,3.0,6.6,2.1]
[4.9,2.5,4.5,1.7]
[7.3,2.9,6.3,1.8]
[6.7,2.5,5.8,1.8]
[7.2,3.6,6.1,2.5]
[6.5,3.2,5.1,2.0]
[6.4,2.7,5.3,1.9]
[6.8,3.0,5.5,2.1]
[5.7,2.5,5.0,2.0]
[5.8,2.8,5.1,2.4]
[6.4,3.2,5.3,2.3]
[6.5,3.0,5.5,1.8]
[7.7,3.8,6.7,2.2]
[7.7,2.6,6.9,2.3]
[6.0,2.2,5.0,1.5]
[6.9,3.2,5.7,2.3]
[5.6,2.8,4.9,2.0]
[7.7,2.8,6.7,2.0]
[6.3,2.7,4.9,1.8]
[6.7,3.3,5.7,2.1]
[7.2,3.2,6.0,1.8]
[6.2,2.8,4.8,1.8]
[6.1,3.0,4.9,1.8]
[6.4,2.8,5.6,2.1]
[7.2,3.0,5.8,1.6]
[7.4,2.8,6.1,1.9]
[7.9,3.8,6.4,2.0]
[6.4,2.8,5.6,2.2]
[6.3,2.8,5.1,1.5]
[6.1,2.6,5.6,1.4]
[7.7,3.0,6.1,2.3]
[6.3,3.4,5.6,2.4]
[6.4,3.1,5.5,1.8]
[6.0,3.0,4.8,1.8]
[6.9,3.1,5.4,2.1]
[6.7,3.1,5.6,2.4]
[6.9,3.1,5.1,2.3]
[5.8,2.7,5.1,1.9]
[6.8,3.2,5.9,2.3]
[6.7,3.3,5.7,2.5]
[6.7,3.0,5.2,2.3]
[6.3,2.5,5.0,1.9]
[6.5,3.0,5.2,2.0]
[6.2,3.4,5.4,2.3]

scala>

(3) Train a model: cluster the data into 2 clusters with 5 iterations

scala> val numClusters =2
numClusters: Int = 2

scala> val numIterations = 5
numIterations: Int = 5

scala> val model:KMeansModel = KMeans.train(trainingData, numClusters, numIterations)
model: org.apache.spark.mllib.clustering.KMeansModel = org.apache.spark.mllib.clustering.KMeansModel@4d75dba3

scala> 
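
KMeans.train is a convenience wrapper. For reproducible runs, the same model can be configured through a KMeans instance directly; a minimal sketch (the seed value is arbitrary):

// builder-style configuration; a fixed seed makes the run reproducible
val seededModel = new KMeans()
  .setK(numClusters)
  .setMaxIterations(numIterations)
  .setSeed(42L)  // arbitrary seed, for illustration only
  .run(trainingData)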

(4) Print the model's cluster centers (either method works)

for (c <- model.clusterCenters) {
   println("  " + c.toString)
}

model.clusterCenters.foreach(
   center => {
     println("Clustering Center:" + center)
})

(5) Use predict() to determine each sample's cluster

trainingData.collect().foreach(
     sample => {
       val predictedCluster = model.predict(sample)
       println(sample.toString + " belongs to cluster " + predictedCluster)
})
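
predict also accepts an RDD[Vector], which avoids collecting the whole dataset to the driver. As a quick sanity check, the cluster sizes can be tallied like this (a sketch; the exact counts depend on the trained model):

// cluster sizes via the RDD overload of predict()
model.predict(trainingData)  // RDD[Int] of cluster indices
  .countByValue()            // Map[Int, Long]: cluster -> sample count
  .foreach { case (cluster, n) => println(s"cluster $cluster: $n samples") }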

(6) Evaluate the model with the within-set sum of squared errors (a measure of clustering quality)

scala> val wsse = model.computeCost(trainingData)
wsse: Double = 152.16210102201256

scala>
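
Note that WSSSE decreases monotonically as the number of clusters grows, so it is usually computed for a range of k values and the "elbow" of the curve is chosen. A sketch (the printed values will vary with the random initialization):

// train for several k and print WSSSE; look for the elbow in the curve
(2 to 6).foreach { k =>
  val m = KMeans.train(trainingData, k, numIterations)
  println(s"k = $k, WSSSE = ${m.computeCost(trainingData)}")
}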

(7) Test the model on a single data point

scala> print(model.predict(Vectors.dense("5.9,4.5,7.9,6.3".split(',').map(_.toDouble))))
0
scala>
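
A trained KMeansModel can also be persisted and reloaded later; a minimal sketch (the path is illustrative):

// save the model and load it back; the path is illustrative
model.save(sc, "file:///home/WBQ/code/machinelearn/irisKMeansModel")
val sameModel = KMeansModel.load(sc, "file:///home/WBQ/code/machinelearn/irisKMeansModel")
sameModel.clusterCenters.foreach(println)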

The complete session is as follows:

[WBQ@westgisB068 machinelearn]$ spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://westgisB068:4040
Spark context available as 'sc' (master = local[*], app id = local-1686624134091).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.3.2
      /_/

Using Scala version 2.12.15 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_271)
Type in expressions to have them evaluated.
Type :help for more information.

scala> import org.apache.spark.{SparkConf,SparkContext}
import org.apache.spark.{SparkConf, SparkContext}

scala> import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.Vectors

scala> import org.apache.spark.mllib.clustering.{KMeans,KMeansModel}
import org.apache.spark.mllib.clustering.{KMeans, KMeansModel}

scala> val rawData = sc.textFile("file:///home/WBQ/code/machinelearn/iris.txt")
rawData: org.apache.spark.rdd.RDD[String] = file:///home/WBQ/code/machinelearn/iris.txt MapPartitionsRDD[1] at textFile at <console>:26

scala> val trainingData = rawData.map(line=>{Vectors.dense(line.split(",").filter(p =>p.matches("\\d*(\\.?)\\d*")).map(_.toDouble))}).cache()
trainingData: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector] = MapPartitionsRDD[2] at map at <console>:26

scala> rawData.collect()
res0: Array[String] = Array(5.1,3.5,1.4,0.2,Iris-setosa, 4.9,3.0,1.4,0.2,Iris-setosa, 4.7,3.2,1.3,0.2,Iris-setosa, 4.6,3.1,1.5,0.2,Iris-setosa, 5.0,3.6,1.4,0.2,Iris-setosa, 5.4,3.9,1.7,0.4,Iris-setosa, 4.6,3.4,1.4,0.3,Iris-setosa, 5.0,3.4,1.5,0.2,Iris-setosa, 4.4,2.9,1.4,0.2,Iris-setosa, 4.9,3.1,1.5,0.1,Iris-setosa, 5.4,3.7,1.5,0.2,Iris-setosa, 4.8,3.4,1.6,0.2,Iris-setosa, 4.8,3.0,1.4,0.1,Iris-setosa, 4.3,3.0,1.1,0.1,Iris-setosa, 5.8,4.0,1.2,0.2,Iris-setosa, 5.7,4.4,1.5,0.4,Iris-setosa, 5.4,3.9,1.3,0.4,Iris-setosa, 5.1,3.5,1.4,0.3,Iris-setosa, 5.7,3.8,1.7,0.3,Iris-setosa, 5.1,3.8,1.5,0.3,Iris-setosa, 5.4,3.4,1.7,0.2,Iris-setosa, 5.1,3.7,1.5,0.4,Iris-setosa, 4.6,3.6,1.0,0.2,Iris-setosa, 5.1,3.3,1.7,0.5,Iris-setosa, 4.8,3.4,1.9,0.2,Iris-setosa, 5.0,3.0,1.6,0.2,Ir...

scala> trainingData.collect().foreach{println}
[5.1,3.5,1.4,0.2]
[4.9,3.0,1.4,0.2]
[4.7,3.2,1.3,0.2]
[4.6,3.1,1.5,0.2]
[5.0,3.6,1.4,0.2]
[5.4,3.9,1.7,0.4]
[4.6,3.4,1.4,0.3]
[5.0,3.4,1.5,0.2]
[4.4,2.9,1.4,0.2]
[4.9,3.1,1.5,0.1]
[5.4,3.7,1.5,0.2]
[4.8,3.4,1.6,0.2]
[4.8,3.0,1.4,0.1]
[4.3,3.0,1.1,0.1]
[5.8,4.0,1.2,0.2]
[5.7,4.4,1.5,0.4]
[5.4,3.9,1.3,0.4]
[5.1,3.5,1.4,0.3]
[5.7,3.8,1.7,0.3]
[5.1,3.8,1.5,0.3]
[5.4,3.4,1.7,0.2]
[5.1,3.7,1.5,0.4]
[4.6,3.6,1.0,0.2]
[5.1,3.3,1.7,0.5]
[4.8,3.4,1.9,0.2]
[5.0,3.0,1.6,0.2]
[5.0,3.4,1.6,0.4]
[5.2,3.5,1.5,0.2]
[5.2,3.4,1.4,0.2]
[4.7,3.2,1.6,0.2]
[4.8,3.1,1.6,0.2]
[5.4,3.4,1.5,0.4]
[5.2,4.1,1.5,0.1]
[5.5,4.2,1.4,0.2]
[4.9,3.1,1.5,0.1]
[5.0,3.2,1.2,0.2]
[5.5,3.5,1.3,0.2]
[4.9,3.1,1.5,0.1]
[4.4,3.0,1.3,0.2]
[5.1,3.4,1.5,0.2]
[5.0,3.5,1.3,0.3]
[4.5,2.3,1.3,0.3]
[4.4,3.2,1.3,0.2]
[5.0,3.5,1.6,0.6]
[5.1,3.8,1.9,0.4]
[4.8,3.0,1.4,0.3]
[5.1,3.8,1.6,0.2]
[4.6,3.2,1.4,0.2]
[5.3,3.7,1.5,0.2]
[5.0,3.3,1.4,0.2]
[7.0,3.2,4.7,1.4]
[6.4,3.2,4.5,1.5]
[6.9,3.1,4.9,1.5]
[5.5,2.3,4.0,1.3]
[6.5,2.8,4.6,1.5]
[5.7,2.8,4.5,1.3]
[6.3,3.3,4.7,1.6]
[4.9,2.4,3.3,1.0]
[6.6,2.9,4.6,1.3]
[5.2,2.7,3.9,1.4]
[5.0,2.0,3.5,1.0]
[5.9,3.0,4.2,1.5]
[6.0,2.2,4.0,1.0]
[6.1,2.9,4.7,1.4]
[5.6,2.9,3.6,1.3]
[6.7,3.1,4.4,1.4]
[5.6,3.0,4.5,1.5]
[5.8,2.7,4.1,1.0]
[6.2,2.2,4.5,1.5]
[5.6,2.5,3.9,1.1]
[5.9,3.2,4.8,1.8]
[6.1,2.8,4.0,1.3]
[6.3,2.5,4.9,1.5]
[6.1,2.8,4.7,1.2]
[6.4,2.9,4.3,1.3]
[6.6,3.0,4.4,1.4]
[6.8,2.8,4.8,1.4]
[6.7,3.0,5.0,1.7]
[6.0,2.9,4.5,1.5]
[5.7,2.6,3.5,1.0]
[5.5,2.4,3.8,1.1]
[5.5,2.4,3.7,1.0]
[5.8,2.7,3.9,1.2]
[6.0,2.7,5.1,1.6]
[5.4,3.0,4.5,1.5]
[6.0,3.4,4.5,1.6]
[6.7,3.1,4.7,1.5]
[6.3,2.3,4.4,1.3]
[5.6,3.0,4.1,1.3]
[5.5,2.5,4.0,1.3]
[5.5,2.6,4.4,1.2]
[6.1,3.0,4.6,1.4]
[5.8,2.6,4.0,1.2]
[5.0,2.3,3.3,1.0]
[5.6,2.7,4.2,1.3]
[5.7,3.0,4.2,1.2]
[5.7,2.9,4.2,1.3]
[6.2,2.9,4.3,1.3]
[5.1,2.5,3.0,1.1]
[5.7,2.8,4.1,1.3]
[6.3,3.3,6.0,2.5]
[5.8,2.7,5.1,1.9]
[7.1,3.0,5.9,2.1]
[6.3,2.9,5.6,1.8]
[6.5,3.0,5.8,2.2]
[7.6,3.0,6.6,2.1]
[4.9,2.5,4.5,1.7]
[7.3,2.9,6.3,1.8]
[6.7,2.5,5.8,1.8]
[7.2,3.6,6.1,2.5]
[6.5,3.2,5.1,2.0]
[6.4,2.7,5.3,1.9]
[6.8,3.0,5.5,2.1]
[5.7,2.5,5.0,2.0]
[5.8,2.8,5.1,2.4]
[6.4,3.2,5.3,2.3]
[6.5,3.0,5.5,1.8]
[7.7,3.8,6.7,2.2]
[7.7,2.6,6.9,2.3]
[6.0,2.2,5.0,1.5]
[6.9,3.2,5.7,2.3]
[5.6,2.8,4.9,2.0]
[7.7,2.8,6.7,2.0]
[6.3,2.7,4.9,1.8]
[6.7,3.3,5.7,2.1]
[7.2,3.2,6.0,1.8]
[6.2,2.8,4.8,1.8]
[6.1,3.0,4.9,1.8]
[6.4,2.8,5.6,2.1]
[7.2,3.0,5.8,1.6]
[7.4,2.8,6.1,1.9]
[7.9,3.8,6.4,2.0]
[6.4,2.8,5.6,2.2]
[6.3,2.8,5.1,1.5]
[6.1,2.6,5.6,1.4]
[7.7,3.0,6.1,2.3]
[6.3,3.4,5.6,2.4]
[6.4,3.1,5.5,1.8]
[6.0,3.0,4.8,1.8]
[6.9,3.1,5.4,2.1]
[6.7,3.1,5.6,2.4]
[6.9,3.1,5.1,2.3]
[5.8,2.7,5.1,1.9]
[6.8,3.2,5.9,2.3]
[6.7,3.3,5.7,2.5]
[6.7,3.0,5.2,2.3]
[6.3,2.5,5.0,1.9]
[6.5,3.0,5.2,2.0]
[6.2,3.4,5.4,2.3]

scala> val numClusters =2
numClusters: Int = 2

scala> val numIterations = 5
numIterations: Int = 5

scala> val model:KMeansModel = KMeans.train(trainingData, numClusters, numIterations)
model: org.apache.spark.mllib.clustering.KMeansModel = org.apache.spark.mllib.clustering.KMeansModel@4d75dba3

scala> for(c <- model.clusterCenters){
     |  println("  "+ c.toString)
     | }
  [6.305208333333331,2.885416666666667,4.957291666666667,1.6947916666666663]
  [5.005660377358491,3.3603773584905667,1.562264150943396,0.28867924528301875]

scala> model.clusterCenters.foreach(
     |   center =>{
     |      println("clustering center: "+ center)
     | })
clustering center: [6.305208333333331,2.885416666666667,4.957291666666667,1.6947916666666663]
clustering center: [5.005660377358491,3.3603773584905667,1.562264150943396,0.28867924528301875]

scala> trainingData.collect().foreach(
     |   sample => {
     |      val predictCluster = model.predict(sample)
     |      println(sample.toString + " belongs to cluster " + predictCluster)
     | })
[5.1,3.5,1.4,0.2] belongs to cluster 1
[4.9,3.0,1.4,0.2] belongs to cluster 1
[4.7,3.2,1.3,0.2] belongs to cluster 1
[4.6,3.1,1.5,0.2] belongs to cluster 1
[5.0,3.6,1.4,0.2] belongs to cluster 1
[5.4,3.9,1.7,0.4] belongs to cluster 1
[4.6,3.4,1.4,0.3] belongs to cluster 1
[5.0,3.4,1.5,0.2] belongs to cluster 1
[4.4,2.9,1.4,0.2] belongs to cluster 1
[4.9,3.1,1.5,0.1] belongs to cluster 1
[5.4,3.7,1.5,0.2] belongs to cluster 1
[4.8,3.4,1.6,0.2] belongs to cluster 1
[4.8,3.0,1.4,0.1] belongs to cluster 1
[4.3,3.0,1.1,0.1] belongs to cluster 1
[5.8,4.0,1.2,0.2] belongs to cluster 1
[5.7,4.4,1.5,0.4] belongs to cluster 1
[5.4,3.9,1.3,0.4] belongs to cluster 1
[5.1,3.5,1.4,0.3] belongs to cluster 1
[5.7,3.8,1.7,0.3] belongs to cluster 1
[5.1,3.8,1.5,0.3] belongs to cluster 1
[5.4,3.4,1.7,0.2] belongs to cluster 1
[5.1,3.7,1.5,0.4] belongs to cluster 1
[4.6,3.6,1.0,0.2] belongs to cluster 1
[5.1,3.3,1.7,0.5] belongs to cluster 1
[4.8,3.4,1.9,0.2] belongs to cluster 1
[5.0,3.0,1.6,0.2] belongs to cluster 1
[5.0,3.4,1.6,0.4] belongs to cluster 1
[5.2,3.5,1.5,0.2] belongs to cluster 1
[5.2,3.4,1.4,0.2] belongs to cluster 1
[4.7,3.2,1.6,0.2] belongs to cluster 1
[4.8,3.1,1.6,0.2] belongs to cluster 1
[5.4,3.4,1.5,0.4] belongs to cluster 1
[5.2,4.1,1.5,0.1] belongs to cluster 1
[5.5,4.2,1.4,0.2] belongs to cluster 1
[4.9,3.1,1.5,0.1] belongs to cluster 1
[5.0,3.2,1.2,0.2] belongs to cluster 1
[5.5,3.5,1.3,0.2] belongs to cluster 1
[4.9,3.1,1.5,0.1] belongs to cluster 1
[4.4,3.0,1.3,0.2] belongs to cluster 1
[5.1,3.4,1.5,0.2] belongs to cluster 1
[5.0,3.5,1.3,0.3] belongs to cluster 1
[4.5,2.3,1.3,0.3] belongs to cluster 1
[4.4,3.2,1.3,0.2] belongs to cluster 1
[5.0,3.5,1.6,0.6] belongs to cluster 1
[5.1,3.8,1.9,0.4] belongs to cluster 1
[4.8,3.0,1.4,0.3] belongs to cluster 1
[5.1,3.8,1.6,0.2] belongs to cluster 1
[4.6,3.2,1.4,0.2] belongs to cluster 1
[5.3,3.7,1.5,0.2] belongs to cluster 1
[5.0,3.3,1.4,0.2] belongs to cluster 1
[7.0,3.2,4.7,1.4] belongs to cluster 0
[6.4,3.2,4.5,1.5] belongs to cluster 0
[6.9,3.1,4.9,1.5] belongs to cluster 0
[5.5,2.3,4.0,1.3] belongs to cluster 0
[6.5,2.8,4.6,1.5] belongs to cluster 0
[5.7,2.8,4.5,1.3] belongs to cluster 0
[6.3,3.3,4.7,1.6] belongs to cluster 0
[4.9,2.4,3.3,1.0] belongs to cluster 1
[6.6,2.9,4.6,1.3] belongs to cluster 0
[5.2,2.7,3.9,1.4] belongs to cluster 0
[5.0,2.0,3.5,1.0] belongs to cluster 0
[5.9,3.0,4.2,1.5] belongs to cluster 0
[6.0,2.2,4.0,1.0] belongs to cluster 0
[6.1,2.9,4.7,1.4] belongs to cluster 0
[5.6,2.9,3.6,1.3] belongs to cluster 0
[6.7,3.1,4.4,1.4] belongs to cluster 0
[5.6,3.0,4.5,1.5] belongs to cluster 0
[5.8,2.7,4.1,1.0] belongs to cluster 0
[6.2,2.2,4.5,1.5] belongs to cluster 0
[5.6,2.5,3.9,1.1] belongs to cluster 0
[5.9,3.2,4.8,1.8] belongs to cluster 0
[6.1,2.8,4.0,1.3] belongs to cluster 0
[6.3,2.5,4.9,1.5] belongs to cluster 0
[6.1,2.8,4.7,1.2] belongs to cluster 0
[6.4,2.9,4.3,1.3] belongs to cluster 0
[6.6,3.0,4.4,1.4] belongs to cluster 0
[6.8,2.8,4.8,1.4] belongs to cluster 0
[6.7,3.0,5.0,1.7] belongs to cluster 0
[6.0,2.9,4.5,1.5] belongs to cluster 0
[5.7,2.6,3.5,1.0] belongs to cluster 0
[5.5,2.4,3.8,1.1] belongs to cluster 0
[5.5,2.4,3.7,1.0] belongs to cluster 0
[5.8,2.7,3.9,1.2] belongs to cluster 0
[6.0,2.7,5.1,1.6] belongs to cluster 0
[5.4,3.0,4.5,1.5] belongs to cluster 0
[6.0,3.4,4.5,1.6] belongs to cluster 0
[6.7,3.1,4.7,1.5] belongs to cluster 0
[6.3,2.3,4.4,1.3] belongs to cluster 0
[5.6,3.0,4.1,1.3] belongs to cluster 0
[5.5,2.5,4.0,1.3] belongs to cluster 0
[5.5,2.6,4.4,1.2] belongs to cluster 0
[6.1,3.0,4.6,1.4] belongs to cluster 0
[5.8,2.6,4.0,1.2] belongs to cluster 0
[5.0,2.3,3.3,1.0] belongs to cluster 1
[5.6,2.7,4.2,1.3] belongs to cluster 0
[5.7,3.0,4.2,1.2] belongs to cluster 0
[5.7,2.9,4.2,1.3] belongs to cluster 0
[6.2,2.9,4.3,1.3] belongs to cluster 0
[5.1,2.5,3.0,1.1] belongs to cluster 1
[5.7,2.8,4.1,1.3] belongs to cluster 0
[6.3,3.3,6.0,2.5] belongs to cluster 0
[5.8,2.7,5.1,1.9] belongs to cluster 0
[7.1,3.0,5.9,2.1] belongs to cluster 0
[6.3,2.9,5.6,1.8] belongs to cluster 0
[6.5,3.0,5.8,2.2] belongs to cluster 0
[7.6,3.0,6.6,2.1] belongs to cluster 0
[4.9,2.5,4.5,1.7] belongs to cluster 0
[7.3,2.9,6.3,1.8] belongs to cluster 0
[6.7,2.5,5.8,1.8] belongs to cluster 0
[7.2,3.6,6.1,2.5] belongs to cluster 0
[6.5,3.2,5.1,2.0] belongs to cluster 0
[6.4,2.7,5.3,1.9] belongs to cluster 0
[6.8,3.0,5.5,2.1] belongs to cluster 0
[5.7,2.5,5.0,2.0] belongs to cluster 0
[5.8,2.8,5.1,2.4] belongs to cluster 0
[6.4,3.2,5.3,2.3] belongs to cluster 0
[6.5,3.0,5.5,1.8] belongs to cluster 0
[7.7,3.8,6.7,2.2] belongs to cluster 0
[7.7,2.6,6.9,2.3] belongs to cluster 0
[6.0,2.2,5.0,1.5] belongs to cluster 0
[6.9,3.2,5.7,2.3] belongs to cluster 0
[5.6,2.8,4.9,2.0] belongs to cluster 0
[7.7,2.8,6.7,2.0] belongs to cluster 0
[6.3,2.7,4.9,1.8] belongs to cluster 0
[6.7,3.3,5.7,2.1] belongs to cluster 0
[7.2,3.2,6.0,1.8] belongs to cluster 0
[6.2,2.8,4.8,1.8] belongs to cluster 0
[6.1,3.0,4.9,1.8] belongs to cluster 0
[6.4,2.8,5.6,2.1] belongs to cluster 0
[7.2,3.0,5.8,1.6] belongs to cluster 0
[7.4,2.8,6.1,1.9] belongs to cluster 0
[7.9,3.8,6.4,2.0] belongs to cluster 0
[6.4,2.8,5.6,2.2] belongs to cluster 0
[6.3,2.8,5.1,1.5] belongs to cluster 0
[6.1,2.6,5.6,1.4] belongs to cluster 0
[7.7,3.0,6.1,2.3] belongs to cluster 0
[6.3,3.4,5.6,2.4] belongs to cluster 0
[6.4,3.1,5.5,1.8] belongs to cluster 0
[6.0,3.0,4.8,1.8] belongs to cluster 0
[6.9,3.1,5.4,2.1] belongs to cluster 0
[6.7,3.1,5.6,2.4] belongs to cluster 0
[6.9,3.1,5.1,2.3] belongs to cluster 0
[5.8,2.7,5.1,1.9] belongs to cluster 0
[6.8,3.2,5.9,2.3] belongs to cluster 0
[6.7,3.3,5.7,2.5] belongs to cluster 0
[6.7,3.0,5.2,2.3] belongs to cluster 0
[6.3,2.5,5.0,1.9] belongs to cluster 0
[6.5,3.0,5.2,2.0] belongs to cluster 0
[6.2,3.4,5.4,2.3] belongs to cluster 0

scala> val wsse = model.computeCost(trainingData)
wsse: Double = 152.16210102201256

scala> print(model.predict(Vectors.dense("5.9,4.5,7.9,6.3".split(',').map(_.toDouble))))
0
scala>
