Machine Learning on Spark - Part 1: Basic Data Structures

Contents of this section:

1. Local vectors and matrices (Local Vector / Local Matrix)

2. Labeled points (LabeledPoint)

3. Distributed matrices (Distributed Matrix)

1. Local Vectors and Matrices

A local vector (Local Vector) is stored on a single machine; its indices are 0-based integers and its values are of type Double. Spark MLlib supports two kinds of local vectors: dense vectors (DenseVector) and sparse vectors (SparseVector). A dense vector stores every value, including zeros, while a sparse vector stores only the indices and values of the non-zero entries. Sparse vectors really pay off when the data is large and mostly zero. Examples:

import org.apache.spark.mllib.linalg.{Vector, Vectors}
// note: this is the RDD-based mllib.linalg API; check your Spark version
// dense vector: zero values are stored as well
scala> val dv: Vector = Vectors.dense(1.0, 0.0, 3.0)
dv: org.apache.spark.mllib.linalg.Vector = [1.0,0.0,3.0]

// create a sparse vector by giving the size, the indices, and the non-zero values (array form)
scala> val sv1: Vector = Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0))
sv1: org.apache.spark.mllib.linalg.Vector = (3,[0,2],[1.0,3.0])

// create a sparse vector by giving the size and a sequence of (index, value) pairs
scala> val sv2: Vector = Vectors.sparse(3, Seq((0, 1.0), (2, 3.0)))
sv2: org.apache.spark.mllib.linalg.Vector = (3,[0,2],[1.0,3.0])
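
As a quick, hedged sketch (assuming a reasonably recent Spark where the RDD-based mllib.linalg Vector provides toDense/toSparse), the two representations can be inspected and converted, and Vectors.norm / Vectors.sqdist are convenient when comparing vectors:

// converting and inspecting local vectors (a sketch, not REPL output)
import org.apache.spark.mllib.linalg.Vectors

val dense  = Vectors.dense(1.0, 0.0, 3.0)
val sparse = Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0))

sparse.toDense                 // [1.0,0.0,3.0]
dense.toSparse                 // (3,[0,2],[1.0,3.0])
dense.size                     // 3
Vectors.norm(dense, 2.0)       // L2 norm of the vector
Vectors.sqdist(dense, sparse)  // squared Euclidean distance; 0.0 for this pair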


A local matrix (Local Matrix) is likewise a data structure stored on a single machine, addressed by integer row and column indices. Local matrices also come in two flavors: dense matrices (DenseMatrix) and sparse matrices (SparseMatrix). Usage:

// dense matrix storage
scala> import org.apache.spark.mllib.linalg.{Matrix, Matrices}
import org.apache.spark.mllib.linalg.{Matrix, Matrices}
// create a dense matrix; the values are given in column-major order
scala> val dm: Matrix = Matrices.dense(3, 2, Array(1.0, 3.0, 5.0, 2.0, 4.0, 6.0))
dm: org.apache.spark.mllib.linalg.Matrix = 
1.0  2.0  
3.0  4.0  
5.0  6.0

In Spark MLlib, sparse matrices are stored in the Compressed Sparse Column (CSC) format. For background on CSC, see, for example, "Understanding the Compressed Sparse Column Format (CSC)". For example:

// the matrix
    1.0 0.0 4.0
    0.0 3.0 5.0
    2.0 0.0 6.0
stored as a sparse matrix is described by three arrays:
values actually stored: `[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]`,
row index of each stored value: `rowIndices = [0, 2, 1, 0, 1, 2]`,
column start pointers: `colPointers = [0, 2, 3, 6]`.


scala> val sparseMatrix= Matrices.sparse(3, 3, Array(0, 2, 3, 6), Array(0, 2, 1, 0, 1, 2), Array(1.0, 2.0, 3.0, 4.0, 5.0, 6.0))
sparseMatrix: org.apache.spark.mllib.linalg.Matrix = 
3 x 3 CSCMatrix
(0,0) 1.0
(2,0) 2.0
(1,1) 3.0
(0,2) 4.0
(1,2) 5.0
(2,2) 6.0
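
To convince yourself that the three CSC arrays above describe the intended matrix, you can read entries back by (row, column) position; apply and transpose are available on the Matrix trait (a small sketch, not REPL output):

// reading entries back from the CSC-backed sparse matrix
sparseMatrix(0, 0)       // 1.0
sparseMatrix(2, 0)       // 2.0  (row 2 of column 0, as recorded in rowIndices)
sparseMatrix(1, 2)       // 5.0
sparseMatrix.transpose   // 3 x 3 matrix with rows and columns swapped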

2. Labeled Points (LabeledPoint)

LabeledPoint is one of the most important data structures in Spark MLlib and is used extensively in supervised learning algorithms. It is essentially a local vector paired with a class label: for binary classification the label is 0 or 1, and for multiclass classification the labels are 0, 1, 2, .... Like local vectors, it has both dense and sparse forms, for example:

scala> import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.regression.LabeledPoint

// the first argument of LabeledPoint is the class label, the second is the feature vector
// dense-vector form:
scala> val pos = LabeledPoint(1.0, Vectors.dense(1.0, 0.0, 3.0))
pos: org.apache.spark.mllib.regression.LabeledPoint = (1.0,[1.0,0.0,3.0])

// sparse-vector form of a LabeledPoint
scala> val neg = LabeledPoint(0.0, Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0)))
neg: org.apache.spark.mllib.regression.LabeledPoint = (0.0,(3,[0,2],[1.0,3.0]))

In practice the sparse form of LabeledPoint is by far the most common: a feature space can easily have thousands of dimensions, most of them zero and useless for training, so storing every zero would waste a great deal of space. Such sparse labeled data is usually stored and read in the LIBSVM format, label index1:value1 index2:value2 ...:

scala> import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.mllib.util.MLUtils

scala> import org.apache.spark.rdd.RDD
import org.apache.spark.rdd.RDD

scala> val examples: RDD[LabeledPoint] = MLUtils.loadLibSVMFile(sc, "/data/sample_data.txt")

examples: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint] = MapPartitionsRDD[6] at map at MLUtils.scala:98
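
Each line of a LIBSVM file has the form label index1:value1 index2:value2 ..., where the indices in the file are 1-based; loadLibSVMFile converts them to 0-based vector indices. The reverse direction is a one-liner; a minimal sketch (the output path is only illustrative):

// write the LabeledPoints back out in LIBSVM format
MLUtils.saveAsLibSVMFile(examples, "/data/sample_data_out")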

3. Distributed Matrices: RowMatrix and CoordinateMatrix

RowMatrix (row matrix)

A RowMatrix is a row-oriented distributed matrix backed by an RDD[Vector]; its rows carry no meaningful indices. The example below builds one from a DataFrame.

scala> import org.apache.spark.rdd.RDD
import org.apache.spark.rdd.RDD

scala> import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.Vectors


scala> import org.apache.spark.mllib.linalg.distributed.RowMatrix
import org.apache.spark.mllib.linalg.distributed.RowMatrix
scala> :paste
// Entering paste mode (ctrl-D to finish)

val df1 = Seq(
(1.0,2.0,3.0),
(1.1,2.1,3.1),
(1.2,2.2,3.2)).toDF("c1", "c2", "c3")

// Exiting paste mode, now interpreting.

df1: org.apache.spark.sql.DataFrame = [c1: double, c2: double ... 1 more field]
scala> df1.show
+---+---+---+
| c1| c2| c3|
+---+---+---+
|1.0|2.0|3.0|
|1.1|2.1|3.1|
|1.2|2.2|3.2|
+---+---+---+

// convert the DataFrame to an RDD[Vector]
scala> :paste
// Entering paste mode (ctrl-D to finish)

val rowsVector = df1.rdd.map{
x => Vectors.dense(x(0).toString().toDouble, x(1).toString().toDouble, x(2).toString().toDouble)}

// Exiting paste mode, now interpreting.

rowsVector: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector] = MapPartitionsRDD[4] at map at <console>:28

scala> rowsVector.take(1)
res1: Array[org.apache.spark.mllib.linalg.Vector] = Array([1.0,2.0,3.0])

scala> rowsVector.take(2)
res2: Array[org.apache.spark.mllib.linalg.Vector] = Array([1.0,2.0,3.0], [1.1,2.1,3.1])

scala> rowsVector.take(3)
res3: Array[org.apache.spark.mllib.linalg.Vector] = Array([1.0,2.0,3.0], [1.1,2.1,3.1], [1.2,2.2,3.2])

scala> rowsVector.take(4)
res4: Array[org.apache.spark.mllib.linalg.Vector] = Array([1.0,2.0,3.0], [1.1,2.1,3.1], [1.2,2.2,3.2])
scala> // create a rowMatrix from an rdd[vector]

scala> val mat1:RowMatrix = new RowMatrix(rowsVector)
mat1: org.apache.spark.mllib.linalg.distributed.RowMatrix = org.apache.spark.mllib.linalg.distributed.RowMatrix@407649b

scala> // get its size

scala> val m = mat1.numRows()
m: Long = 3

scala> val n = mat1.numCols()
n: Long = 3

scala> // convert the RowMatrix back to a DataFrame

scala> :paste
// Entering paste mode (ctrl-D to finish)

val resDF = mat1.rows.map{
x => (x(0).toDouble, x(1).toDouble, x(2).toDouble)}.toDF("c1","c2","c3")

// Exiting paste mode, now interpreting.

resDF: org.apache.spark.sql.DataFrame = [c1: double, c2: double ... 1 more field]

scala> resDF.show
+---+---+---+
| c1| c2| c3|
+---+---+---+
|1.0|2.0|3.0|
|1.1|2.1|3.1|
|1.2|2.2|3.2|
+---+---+---+
scala> mat1.rows.collect().take(10)
res7: Array[org.apache.spark.mllib.linalg.Vector] = Array([1.0,2.0,3.0], [1.1,2.1,3.1], [1.2,2.2,3.2])
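
Note that a RowMatrix does not carry row indices, so row identity is not preserved in any meaningful way. When the rows need to stay addressable, an IndexedRowMatrix can be built instead; a minimal sketch reusing rowsVector from above (variable names are illustrative):

// attach explicit row indices and build an IndexedRowMatrix
import org.apache.spark.mllib.linalg.distributed.{IndexedRow, IndexedRowMatrix}

val indexedRows = rowsVector.zipWithIndex.map { case (v, i) => IndexedRow(i, v) }
val indexedMat  = new IndexedRowMatrix(indexedRows)
indexedMat.numRows()      // 3
indexedMat.toRowMatrix()  // drop the indices to recover a plain RowMatrix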


CoordinateMatrix (coordinate matrix)

A CoordinateMatrix is a distributed matrix backed by an RDD[MatrixEntry], where each entry is a (rowIndex, colIndex, value) triple; it is intended for very large and very sparse matrices.

scala> // CoordinateMatrix example

scala> import org.apache.spark.rdd.RDD
import org.apache.spark.rdd.RDD

scala> import org.apache.spark.mllib.linalg.distributed.{CoordinateMatrix, MatrixEntry}
import org.apache.spark.mllib.linalg.distributed.{CoordinateMatrix, MatrixEntry}

scala> // first column: row index; second column: column index; third column: entry value

scala> :paste
// Entering paste mode (ctrl-D to finish)

val df = Seq(
(0,0,1.1), (0,1,1.2), (0,2,1.3),
(1,0,2.1), (1,1,2.2), (1,2,2.3),
(2,0,3.1), (2,1,3.2), (2,2,3.3),
(3,0,4.1), (3,1,4.2), (3,2,4.3)).toDF("row", "col", "value") 

// Exiting paste mode, now interpreting.

df: org.apache.spark.sql.DataFrame = [row: int, col: int ... 1 more field]

scala> df.show
+---+---+-----+
|row|col|value|
+---+---+-----+
|  0|  0|  1.1|
|  0|  1|  1.2|
|  0|  2|  1.3|
|  1|  0|  2.1|
|  1|  1|  2.2|
|  1|  2|  2.3|
|  2|  0|  3.1|
|  2|  1|  3.2|
|  2|  2|  3.3|
|  3|  0|  4.1|
|  3|  1|  4.2|
|  3|  2|  4.3|
+---+---+-----+
// build the matrix entries (RDD[MatrixEntry])
scala> :paste
// Entering paste mode (ctrl-D to finish)

val entr = df.rdd.map{ x=>
val a = x(0).toString().toLong
val b = x(1).toString().toLong
val c = x(2).toString().toDouble
MatrixEntry(a, b, c)
}

// Exiting paste mode, now interpreting.

entr: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.distributed.MatrixEntry] = MapPartitionsRDD[13] at map at <console>:30

scala> entr.take(5)
res9: Array[org.apache.spark.mllib.linalg.distributed.MatrixEntry] = Array(MatrixEntry(0,0,1.1), MatrixEntry(0,1,1.2), MatrixEntry(0,2,1.3), MatrixEntry(1,0,2.1), MatrixEntry(1,1,2.2))

scala> // build the coordinate matrix

scala> val mat:CoordinateMatrix = new CoordinateMatrix(entr)
mat: org.apache.spark.mllib.linalg.distributed.CoordinateMatrix = org.apache.spark.mllib.linalg.distributed.CoordinateMatrix@4b2769bf

scala> mat.numRows()
res10: Long = 4

scala> mat.numCols()
res11: Long = 3

scala> mat.entries.collect().take(10)
res12: Array[org.apache.spark.mllib.linalg.distributed.MatrixEntry] = Array(MatrixEntry(0,0,1.1), MatrixEntry(0,1,1.2), MatrixEntry(0,2,1.3), MatrixEntry(1,0,2.1), MatrixEntry(1,1,2.2), MatrixEntry(1,2,2.3), MatrixEntry(2,0,3.1), MatrixEntry(2,1,3.2), MatrixEntry(2,2,3.3), MatrixEntry(3,0,4.1))

scala> 

scala> :paste
// Entering paste mode (ctrl-D to finish)

val t = mat.toIndexedRowMatrix().rows.map{ x=>
val v = x.vector
(x.index, v(0).toDouble, v(1).toDouble, v(2).toDouble)
}

// Exiting paste mode, now interpreting.

t: org.apache.spark.rdd.RDD[(Long, Double, Double, Double)] = MapPartitionsRDD[18] at map at <console>:34

scala> 

scala> 

scala> t.toDF().show
+---+---+---+---+
| _1| _2| _3| _4|
+---+---+---+---+
|  0|1.1|1.2|1.3|
|  1|2.1|2.2|2.3|
|  2|3.1|3.2|3.3|
|  3|4.1|4.2|4.3|
+---+---+---+---+


scala> // convert the coordinate matrix to a DataFrame (via toRowMatrix)

scala> :paste
// Entering paste mode (ctrl-D to finish)

val t1 = mat.toRowMatrix().rows.map{ x=>
(x(0).toDouble, x(1).toDouble, x(2).toDouble)
}

// Exiting paste mode, now interpreting.

t1: org.apache.spark.rdd.RDD[(Double, Double, Double)] = MapPartitionsRDD[26] at map at <console>:34

scala> t1.toDF().show
+---+---+---+
| _1| _2| _3|
+---+---+---+
|1.1|1.2|1.3|
|2.1|2.2|2.3|
|3.1|3.2|3.3|
|4.1|4.2|4.3|
+---+---+---+
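
A CoordinateMatrix also supports transposition and direct conversion into the other distributed matrix types; a short sketch using the mat built above:

// other operations available on the coordinate matrix
mat.transpose()            // swaps the row and column index of every entry
mat.toIndexedRowMatrix()   // rows keyed by their row index
mat.toBlockMatrix()        // block-partitioned matrix (covered in the next part)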





The following program demonstrates RowMatrix, CoordinateMatrix, and their related core classes:

package cn.ml.datastruct

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.mllib.linalg.{Matrices, Matrix, SingularValueDecomposition, Vectors}
import org.apache.spark.mllib.linalg.distributed.RowMatrix
import org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
import org.apache.spark.mllib.stat.MultivariateStatisticalSummary


object RowMatrixDemo extends App {
  val sparkConf = new SparkConf().setAppName("RowMatrixDemo").setMaster("spark://sparkmaster:7077")
  val sc = new SparkContext(sparkConf)
  // create an RDD[Vector]
  val rdd1 = sc.parallelize(
      Array(
          Array(1.0, 2.0, 3.0, 4.0),
          Array(2.0, 3.0, 4.0, 5.0),
          Array(3.0, 4.0, 5.0, 6.0)
          )
      ).map(f => Vectors.dense(f))
   // create a RowMatrix
   val rowMatrix = new RowMatrix(rdd1)
   // compute similarities between columns; the result is a CoordinateMatrix whose
   // entries are stored as case class MatrixEntry(i: Long, j: Long, value: Double)
   var coordinateMatrix: CoordinateMatrix = rowMatrix.columnSimilarities()
   // number of rows and columns of the result
   println(coordinateMatrix.numCols())
   println(coordinateMatrix.numRows())
   // inspect the column-to-column similarities, e.g.
   // Array[org.apache.spark.mllib.linalg.distributed.MatrixEntry]
   // = Array(MatrixEntry(2,3,0.9992204753914715),
   //   MatrixEntry(0,1,0.9925833339709303),
   //   MatrixEntry(1,2,0.9979288897338914),
   //   MatrixEntry(0,3,0.9746318461970762),
   //   MatrixEntry(1,3,0.9946115458726394),
   //   MatrixEntry(0,2,0.9827076298239907))
   coordinateMatrix.entries.collect().foreach(println)

   // convert to a BlockMatrix (covered in detail in the next part)
   coordinateMatrix.toBlockMatrix()
   // convert to an IndexedRowMatrix (covered in detail in the next part)
   coordinateMatrix.toIndexedRowMatrix()
   // convert to a RowMatrix
   coordinateMatrix.toRowMatrix()

   // column summary statistics
   var mss: MultivariateStatisticalSummary = rowMatrix.computeColumnSummaryStatistics()
   // mean of each column: org.apache.spark.mllib.linalg.Vector = [2.0,3.0,4.0,5.0]
   mss.mean
   // maximum of each column: org.apache.spark.mllib.linalg.Vector = [3.0,4.0,5.0,6.0]
   mss.max
   // minimum of each column: org.apache.spark.mllib.linalg.Vector = [1.0,2.0,3.0,4.0]
   mss.min
   // number of non-zero entries per column: org.apache.spark.mllib.linalg.Vector = [3.0,3.0,3.0,3.0]
   mss.numNonzeros
   // L1 norm of each column, ||x||1 = sum(abs(xi));
   // org.apache.spark.mllib.linalg.Vector = [6.0,9.0,12.0,15.0]
   mss.normL1
   // L2 norm of each column, ||x||2 = sqrt(sum(xi^2));
   // org.apache.spark.mllib.linalg.Vector = [3.7416573867739413,5.385164807134504,7.0710678118654755,8.774964387392123]
   mss.normL2
   // variance of each column
   // org.apache.spark.mllib.linalg.Vector = [1.0,1.0,1.0,1.0]
   mss.variance
   // compute the covariance matrix
   // covariance: org.apache.spark.mllib.linalg.Matrix =
   // 1.0  1.0  1.0  1.0
   // 1.0  1.0  1.0  1.0
   // 1.0  1.0  1.0  1.0
   // 1.0  1.0  1.0  1.0
   var covariance: Matrix = rowMatrix.computeCovariance()
   // compute the Gramian matrix rowMatrix^T * rowMatrix (^T denotes transpose)
   // gramianMatrix: org.apache.spark.mllib.linalg.Matrix =
   // 14.0  20.0  26.0  32.0
   // 20.0  29.0  38.0  47.0
   // 26.0  38.0  50.0  62.0
   // 32.0  47.0  62.0  77.0
   var gramianMatrix: Matrix = rowMatrix.computeGramianMatrix()
   // principal component analysis; the argument is the number of principal
   // components (columns) to return. PCA is a classic dimensionality-reduction algorithm.
   // principalComponents: org.apache.spark.mllib.linalg.Matrix =
   // -0.5000000000000002  0.8660254037844388
   // -0.5000000000000002  -0.28867513459481275
   // -0.5000000000000002  -0.28867513459481287
   // -0.5000000000000002  -0.28867513459481287
   var principalComponents = rowMatrix.computePrincipalComponents(2)

  /**
   * Singular value decomposition of the matrix A (m x n): it computes three
   * matrices U, S and V such that A ~= U * S * V', where S holds the k requested
   * singular values and U, V hold the corresponding singular vectors.
   */
  // svd: org.apache.spark.mllib.linalg.SingularValueDecomposition[org.apache.spark.mllib.linalg.distributed.RowMatrix,org.apache.spark.mllib.linalg.Matrix] =
  // SingularValueDecomposition(org.apache.spark.mllib.linalg.distributed.RowMatrix@688884e,[13.011193721236575,0.8419251442105343,7.793650306633694E-8],-0.2830233037672786  -0.7873358937103356  -0.5230588083704528
  // -0.4132328277901395  -0.3594977469144485  0.5762839813994667
  // -0.5434423518130005  0.06834039988143598  0.4166084623124157
  // -0.6736518758358616  0.4961785466773299   -0.4698336353414313  )
   var svd: SingularValueDecomposition[RowMatrix, Matrix] = rowMatrix.computeSVD(3, computeU = true)


   // multiply the distributed matrix by a local matrix
   var multiplyMatrix: RowMatrix = rowMatrix.multiply(Matrices.dense(4, 1, Array(1.0, 2.0, 3.0, 4.0)))
}


