Computing Correlation Coefficients with Spark (Pearson, Spearman, Chi-Square Test)



Pearson and Spearman correlation (pearson, spearman):

   import spark.implicits._
   import spark.sql
   import org.apache.spark.mllib.stat.Statistics

   // Load the source table (table name elided in the original).
   val df = sql("select * from xxxx")

   // Feature columns to correlate against the "label" column.
   val columns = List("xx", "xx", "xx")
   for (col <- columns) {
     // Keep only the label and the current feature, parsing both to Double.
     val rdd_real = df.select("label", col).rdd
       .map(x => (x(0).toString.toDouble, x(1).toString.toDouble))
     val label = rdd_real.map(_._1)
     val feature = rdd_real.map(_._2)

     // Statistics.corr takes two RDD[Double] plus the method name.
     val cor_pearson: Double = Statistics.corr(label, feature, "pearson")
     println(s"$col------$cor_pearson")

     val cor_spearman: Double = Statistics.corr(label, feature, "spearman")
     println(s"$col------$cor_spearman")
   }
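
On Spark 2.2+ the same correlations can also be computed through the DataFrame-based `org.apache.spark.ml.stat.Correlation` API, without dropping to RDDs. A minimal sketch, assuming the placeholder columns have already been cast to numeric types (VectorAssembler will not accept string columns):

   import org.apache.spark.ml.feature.VectorAssembler
   import org.apache.spark.ml.linalg.Matrix
   import org.apache.spark.ml.stat.Correlation
   import org.apache.spark.sql.Row

   // Assemble the label and feature columns into a single vector column.
   val assembler = new VectorAssembler()
     .setInputCols(Array("label", "xx", "xx", "xx"))
     .setOutputCol("features")
   val vecDf = assembler.transform(df)

   // Correlation.corr returns a one-row DataFrame whose single cell is
   // the full correlation matrix over the assembled columns.
   val Row(pearson: Matrix) = Correlation.corr(vecDf, "features", "pearson").head
   val Row(spearman: Matrix) = Correlation.corr(vecDf, "features", "spearman").head
   println(pearson)
   println(spearman)

Row and column 0 of each matrix correspond to `label`, so the first row reproduces the per-feature values printed by the loop above.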


Chi-square test (computing the chi-square statistic):

   import spark.implicits._
   import spark.sql
   import org.apache.spark.mllib.linalg.Vectors
   import org.apache.spark.mllib.regression.LabeledPoint
   import org.apache.spark.mllib.stat.Statistics

   // Load the source table (table name elided in the original).
   val df_real = sql("select * from xxxx")

   // Feature columns to test; the target column is named "label".
   val columns = List("xx", "xx", "xx", "xx")
   val featInd = columns.map(df_real.columns.indexOf(_))
   val targetInd = df_real.columns.indexOf("label")

   // Build an RDD[LabeledPoint]; the columns are stored as strings
   // in the source table, so parse each one to Double.
   val lp_data = df_real.rdd.map(r => LabeledPoint(
     r.getString(targetInd).toDouble,
     Vectors.dense(featInd.map(r.getString(_).toDouble).toArray)
   ))

   // chiSqTest on an RDD[LabeledPoint] runs Pearson's chi-squared
   // independence test of each feature against the label and
   // returns one ChiSqTestResult per feature.
   val vd = Statistics.chiSqTest(lp_data)

   // Print each feature name next to its chi-square statistic.
   columns.zip(vd).foreach { case (c, r) => println(s"$c------${r.statistic}") }
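
For the DataFrame API, Spark 2.2+ also ships `org.apache.spark.ml.stat.ChiSquareTest`, which tests every feature in a vector column against the label in one call. A minimal sketch under the same assumptions (placeholder column names, columns cast to numeric, features treated as categorical):

   import org.apache.spark.ml.feature.VectorAssembler
   import org.apache.spark.ml.linalg.Vector
   import org.apache.spark.ml.stat.ChiSquareTest

   // Assemble the (categorical) feature columns into one vector column.
   val assembler = new VectorAssembler()
     .setInputCols(Array("xx", "xx", "xx", "xx"))
     .setOutputCol("features")
   val vecDf = assembler.transform(df_real)

   // The result is a single row holding pValues, degreesOfFreedom and
   // statistics, one entry per feature, in the assembled column order.
   val chi = ChiSquareTest.test(vecDf, "features", "label").head
   println(s"pValues: ${chi.getAs[Vector](0)}")
   println(s"degreesOfFreedom: ${chi.getSeq[Int](1).mkString(",")}")
   println(s"statistics: ${chi.getAs[Vector](2)}")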


