Computing Correlation Coefficients in Spark (Pearson, Spearman, Chi-Square Test)

微言记 2022-07-27

Pearson and Spearman correlation (pearson, spearman). Pearson measures linear correlation between two variables, while Spearman measures rank (monotonic) correlation; both are computed here with mllib's Statistics.corr on two RDD[Double]s:

import spark.implicits._
import spark.sql
import org.apache.spark.mllib.stat.Statistics

val df = sql("select * from xxxx")

// Feature columns to correlate with the label (names are placeholders).
val columns = List("xx", "xx", "xx")

for (col <- columns) {
  // Select the label and the current feature, casting both to Double.
  val df_real = df.select("label", col)
  val rdd_real = df_real.rdd.map(x => (x(0).toString.toDouble, x(1).toString.toDouble))
  val label = rdd_real.map(_._1)
  val feature = rdd_real.map(_._2)

  // Statistics.corr takes two RDD[Double] plus the method name.
  val cor_pearson: Double = Statistics.corr(label, feature, "pearson")
  println(s"${col}------" + cor_pearson)

  val cor_spearman: Double = Statistics.corr(label, feature, "spearman")
  println(s"${col}------" + cor_spearman)
}
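
The loop above calls Statistics.corr once per feature. On Spark 2.2+ the DataFrame-based ml package can also compute the whole correlation matrix in one pass with Correlation.corr; the following is a minimal sketch, assuming the selected columns are already numeric (cast them first if, as in this table, they are stored as strings):

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Matrix
import org.apache.spark.ml.stat.Correlation
import org.apache.spark.sql.Row

// Pack the label plus all feature columns into a single vector column.
val assembler = new VectorAssembler()
  .setInputCols(("label" :: columns).toArray)
  .setOutputCol("features")
val vecDf = assembler.transform(df).select("features")

// method is "pearson" or "spearman"; the result is a one-row DataFrame
// holding the full correlation matrix.
val Row(m: Matrix) = Correlation.corr(vecDf, "features", "spearman").head
println(m)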

Chi-square test (computing the chi-square statistic). Statistics.chiSqTest on an RDD[LabeledPoint] runs an independence test between each feature and the label, returning one ChiSqTestResult per feature:

import spark.implicits._
import spark.sql
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.stat.Statistics

val df_real = sql("select * from xxxx")

// Feature columns to test (names are placeholders).
val columns = List("xx", "xx", "xx", "xx")

// Resolve column positions once, then build a LabeledPoint per row
// (the source columns are strings here, hence the toDouble casts).
val featInd = columns.map(df_real.columns.indexOf(_))
val targetInd = df_real.columns.indexOf("label")
val lp_data = df_real.rdd.map(r => LabeledPoint(
  r.getString(targetInd).toDouble,
  Vectors.dense(featInd.map(r.getString(_).toDouble).toArray)
))

// chiSqTest returns one ChiSqTestResult per feature, in column order,
// so zip the results with the column names when printing.
val vd = Statistics.chiSqTest(lp_data)
columns.zip(vd).foreach { case (c, res) => println(s"$c------${res.statistic}") }
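
As with correlation, Spark 2.2+ also offers a DataFrame-based variant, ml.stat.ChiSquareTest, which returns the per-feature statistics, p-values, and degrees of freedom in a single row. A minimal sketch under the same assumptions (string columns cast to double, placeholder table and column names):

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.ml.stat.ChiSquareTest
import org.apache.spark.sql.functions.col

// Cast the string columns to double, then assemble the features.
val numeric = df_real.select(("label" :: columns).map(c => col(c).cast("double")): _*)
val assembled = new VectorAssembler()
  .setInputCols(columns.toArray)
  .setOutputCol("features")
  .transform(numeric)

// One-row result: "statistics" and "pValues" are per-feature vectors,
// in the same order as `columns`.
val res = ChiSquareTest.test(assembled, "features", "label").head
println(res.getAs[Vector]("statistics"))
println(res.getAs[Vector]("pValues"))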
