-------------------------------------------------------------------------------------
This post focuses on implementing the algorithm rather than on a lengthy discussion of
the theory; for the PCA derivation, see this blog post:
http://www.cnblogs.com/pinard/p/6239403.html
Note: the earlier version of this post was not very good, so I rewrote it from scratch
using the Wine data set from UCI.
For LDA dimensionality reduction, see my next post: https://blog.csdn.net/Java_Man_China/article/details/89504514
For KPCA dimensionality reduction, see:
https://blog.csdn.net/Java_Man_China/article/details/89677371
-------------------------------------------------------------------------------------
import breeze.linalg.{DenseMatrix, eig}
import org.apache.log4j.{Level, Logger}
import org.apache.spark.ml.feature.{LabeledPoint, StandardScaler, VectorAssembler}
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.types.{DoubleType, StructField, StructType}
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import scala.collection.mutable.ArrayBuffer
/** Dimensionality reduction based on PCA
 * Data Source: http://archive.ics.uci.edu/ml/datasets/Wine
 * @author XiaoTangBao
 * @date 2019/4/16 9:16
 * @version 1.0
 */
object PCA {
def main(args: Array[String]): Unit = {
//suppress Spark logging
Logger.getLogger("org.apache.spark").setLevel(Level.ERROR)
//initialize Spark
val spark = SparkSession.builder().master("local[4]").appName("PCA").getOrCreate()
//load the data set: http://archive.ics.uci.edu/ml/datasets/Wine
val data = spark.sparkContext.textFile("G:\\mldata\\wine.data").map(line => line.split(","))
.map(arr => arr.map(str => str.toDouble)).map(arr => Row.fromSeq(arr))
//feature names and schema, used below to build the DataFrame
val featuresArr = Array("Alcohol","Malic acid","Ash","Alcalinity of ash","Magnesium",
"Total phenols","Flavanoids","Nonflavanoid phenols","Proanthocyanins","Color intensity",
"Hue","OD280/OD315 of diluted wines","Proline")
val schema = StructType(StructField("label",DoubleType,true) +: featuresArr.map(name => StructField(name,DoubleType,true)))
val oridf = spark.createDataFrame(data,schema)
//assembler that packs the feature columns into a single vector column
val vectorAsb = new VectorAssembler().setInputCols(featuresArr).setOutputCol("features")
//after assembling, pass the data to run() to launch the PCA algorithm
val newdf = vectorAsb.transform(oridf).select("label","features")
//standardize the data; withMean = true also centers it, so no separate centering is needed later
val std = new StandardScaler().setInputCol("features").setOutputCol("Scaledfeatures")
.setWithMean(true).setWithStd(true).fit(newdf).transform(newdf)
.select("label","Scaledfeatures")
.withColumnRenamed("Scaledfeatures","features")
val result = run(std,2)
val arr = ArrayBuffer[(Double,Double)]()
for(i<-0 until result.cols) arr.append((result(0,i),result(1,i)))
arr.foreach(tp =>println(tp._1))
println()
arr.foreach(tp =>println(tp._2))
}
/**
* the method attempts to lower the dimensionality
* @param df the original high-dimensional data, containing label and features columns
* @param k the target number of dimensions
*/
def run(df:DataFrame,k:Int):DenseMatrix[Double] ={
val trainData = df.select("features").rdd
.map(row => row.getAs[org.apache.spark.ml.linalg.Vector](0).toArray)
.collect()
val labels = df.select("label").rdd.map(row => row.getDouble(0)).collect()
//number of feature columns
val tz = trainData(0).length
//rebuild the data as labeled points
val labArr = ArrayBuffer[LabeledPoint]()
for(i<-0 until trainData.length) labArr.append(LabeledPoint(labels(i),Vectors.dense(trainData(i))))
//one big matrix holding all samples, one column per sample (tz rows x n columns)
val allData = labArr.map(lab => lab.features).map(vec => vec.toArray).flatMap(x => x).toArray
val big_Matrx = new DenseMatrix[Double](tz,trainData.length,allData)
// //per-dimension means of the samples
// val big_mean = sum(big_Matrx,Axis._1).*= (1.0 / big_Matrx.cols)
// //center the samples (not needed here: StandardScaler already centered the data)
// for(i<-0 until big_Matrx.cols) big_Matrx(::,i) := big_Matrx(::,i) - big_mean
//covariance matrix of the samples (the data is already centered)
val covMatrix = (big_Matrx * big_Matrx.t) * (1.0 / (big_Matrx.cols-1))
//decompose once and reuse both the eigenvalues and the eigenvectors
val eigResult = eig(covMatrix)
val eigValues = eigResult.eigenvalues
val eigVectors = eigResult.eigenvectors
//pair each eigenvalue with its eigenvector (a column of eigVectors),
//then keep the eigenvectors of the k largest eigenvalues as the rows of rt
val pairs = (0 until eigValues.length).map(i => (eigValues(i),eigVectors(::,i)))
val topK = pairs.sortBy(-_._1).take(k)
val rt = DenseMatrix.zeros[Double](k,tz)
for(i<-0 until k) rt(i,::) := topK(i)._2.t
//the data set after dimensionality reduction (k rows x n columns, one column per sample)
val newData = rt * big_Matrx
newData
}
}
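The Spark/Breeze pipeline above boils down to standardization, a covariance matrix, an eigendecomposition, and a projection. As a sanity check on the linear algebra, the same computation can be sketched in a few lines of NumPy (with made-up 5-sample, 3-feature data standing in for the Wine matrix):

```python
import numpy as np

def pca(X, k):
    # Standardize: zero mean, unit (sample) variance per feature,
    # mirroring StandardScaler with withMean = true and withStd = true.
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # Covariance matrix of the standardized data (features x features).
    cov = Z.T @ Z / (len(X) - 1)
    # The covariance matrix is symmetric, so eigh applies;
    # it returns eigenvalues in ascending order.
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors as columns
    return Z @ top                             # n x k projected samples

# Made-up data on Wine-like scales (alcohol, malic acid, a Proline stand-in).
X = np.array([[12.8, 1.5, 1000.0],
              [13.2, 2.1, 1120.0],
              [12.4, 3.3,  980.0],
              [13.9, 1.8, 1200.0],
              [12.1, 2.9,  950.0]])
print(pca(X, 2).shape)  # (5, 2)
```

The only structural difference from the Scala code is that here samples are rows rather than columns, which is why the covariance is `Z.T @ Z` instead of `Z * Z.t`.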
The raw Wine data has 13 features; the first figure plots two of the original columns in 2-D.
Without standardization (centering only), reducing to two dimensions with PCA gives the following result:
At this point the reduction shows almost no benefit: the data is still mixed together (the classes cannot be separated).
The figure below is the result of PCA from the Python library, again without standardizing the data:
--------Python library--------from sklearn.decomposition import PCA------------------
Comparing the two figures above, the results after reduction look different. The reason is that the eigenvectors returned by the Python package differ from the ones computed here: an eigenvector is only determined up to a nonzero scalar multiple (in particular, up to sign), so for the same matrix and the same eigenvalue, different implementations can legitimately return different eigenvectors. To match the Python result, I multiplied one of the computed eigenvectors by -1, after which the plot looks like this:
Now it should be clear that the two results are identical.
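The sign ambiguity is inherent to eigendecomposition rather than a bug in either implementation: if v is an eigenvector of A with eigenvalue λ, then so is -v, and a library is free to return either one. A quick check on a toy symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # symmetric, like a covariance matrix
vals, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order: [1.0, 3.0]
lam, v = vals[1], vecs[:, 1]     # largest eigenvalue and its eigenvector

# Both v and -v satisfy the eigenvector equation for the same eigenvalue.
assert np.allclose(A @ v, lam * v)
assert np.allclose(A @ (-v), lam * (-v))
```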
After standardizing the data, PCA down to two dimensions gives the figure below (the result is multiplied by -1 for easier comparison with the Python library):
Now the data is finally separable. (Cheers with a glass of wine.)
The figure below is the Python library's result on the standardized data (identical, as expected):
The experiments show that this implementation is correct.
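The cross-check against the Python library can also be done without plots: scikit-learn's PCA is computed via SVD, and the right singular vectors of the standardized data matrix coincide with the covariance eigenvectors up to sign. A NumPy sketch comparing the two routes on made-up data (sklearn itself is not required):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(50, 5))
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize first

# Route 1: eigendecomposition of the covariance matrix (this post's approach).
vals, vecs = np.linalg.eigh(Z.T @ Z / (len(Z) - 1))
order = np.argsort(vals)[::-1][:2]
Y_eig = Z @ vecs[:, order]

# Route 2: SVD of the standardized data (what sklearn's PCA uses internally).
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
Y_svd = Z @ Vt[:2].T

# The two projections agree up to an arbitrary sign per component.
for j in range(2):
    assert (np.allclose(Y_eig[:, j], Y_svd[:, j])
            or np.allclose(Y_eig[:, j], -Y_svd[:, j]))
```

This mirrors what the figures show: the two methods produce the same components, with each component free to flip sign.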