Spark ML Data Normalization

import org.apache.log4j.{Level, Logger}
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.feature.Normalizer
import org.apache.spark.ml.feature.StandardScaler
import org.apache.spark.ml.feature.MinMaxScaler
import org.apache.spark.ml.feature.MaxAbsScaler
/**
  * @author XiaoTangBao
  * @date 2019/3/4 16:21
  * @version 1.0
  */
object Normalized {
  def main(args: Array[String]): Unit = {
    Logger.getLogger("org.apache.spark").setLevel(Level.ERROR)
    val sparkSession = SparkSession.builder().master("local[4]").appName("Normalize").getOrCreate()
    val df =  sparkSession.createDataFrame(Seq((1, Vectors.dense(1.0, 12.5, -108.0)),
      (2, Vectors.dense(2.5, 36.0, 198.0)),(3, Vectors.dense(6.8, 24.0, 459.0))))
      .toDF("id","features")
    //Normalizer operates on each row: it rescales every row vector so that its norm (here the L1 norm) becomes 1
    val normalizer1 = new Normalizer()
      .setInputCol("features")
      .setOutputCol("normalfeatures")
      .setP(1.0)
    val L1 = normalizer1.transform(df)
    L1.show(false)

	+---+-----------------+------------------------------------------------------------+
	|id |features         |normalfeatures                                              |
	+---+-----------------+------------------------------------------------------------+
	|1  |[1.0,12.5,-108.0]|[0.00823045267489712,0.102880658436214,-0.8888888888888888] |
	|2  |[2.5,36.0,198.0] |[0.010570824524312896,0.1522198731501057,0.8372093023255814]|
	|3  |[6.8,24.0,459.0] |[0.013883217639853,0.04899959167006941,0.9371171906900776]  |
	+---+-----------------+------------------------------------------------------------+
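    // A minimal sketch for comparison: the p-norm can also be overridden at transform
    // time via a ParamMap, e.g. Double.PositiveInfinity gives the L-infinity norm
    // (each row divided by its largest absolute component) without building a new Normalizer.
    val LInf = normalizer1.transform(df, normalizer1.p -> Double.PositiveInfinity)
    LInf.show(false)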

    //StandardScaler operates on each column, i.e. each feature dimension, standardizing it to unit standard deviation, to zero mean, or to both.
    val scaler_1 = new StandardScaler()
      .setInputCol("features")
      .setOutputCol("scaledFeatures")
      .setWithStd(true)
      .setWithMean(false)
    val scalerModel_1 = scaler_1.fit(df)
    val scaledData_1 = scalerModel_1.transform(df)
    scaledData_1.show(false)

	+---+-----------------+------------------------------------------------------------+
	|id |features         |scaledFeatures                                              |
	+---+-----------------+------------------------------------------------------------+
	|1  |[1.0,12.5,-108.0]|[0.3321666477362439,1.0637495315070804,-0.38055308480157485]|
	|2  |[2.5,36.0,198.0] |[0.8304166193406097,3.063598650740391,0.6976806554695538]   |
	|3  |[6.8,24.0,459.0] |[2.2587332046064583,2.0423991004935944,1.617350610406693]   |
	+---+-----------------+------------------------------------------------------------+
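    // A minimal sketch of the zero-mean, unit-variance variant: setWithMean(true)
    // subtracts the column mean before dividing by the sample standard deviation.
    // Note that centering produces dense output, so use it with care on sparse data.
    val scaler_1b = new StandardScaler()
      .setInputCol("features")
      .setOutputCol("scaledFeatures")
      .setWithStd(true)
      .setWithMean(true)
    scaler_1b.fit(df).transform(df).show(false)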

    //MinMaxScaler also operates on each column, i.e. each feature dimension, linearly mapping it into a given range, [0, 1] by default
    val scaler_2 = new MinMaxScaler()
      .setInputCol("features")
      .setOutputCol("scaledFeatures")
    val scalerModel_2 = scaler_2.fit(df)
    val scaledData_2 = scalerModel_2.transform(df)
    scaledData_2.show(false)

	+---+-----------------+--------------------------------------------+
	|id |features         |scaledFeatures                              |
	+---+-----------------+--------------------------------------------+
	|1  |[1.0,12.5,-108.0]|[0.0,0.0,0.0]                               |
	|2  |[2.5,36.0,198.0] |[0.25862068965517243,1.0,0.5396825396825397]|
	|3  |[6.8,24.0,459.0] |[1.0,0.48936170212765956,1.0]               |
	+---+-----------------+--------------------------------------------+
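    // A minimal sketch of mapping to a custom range instead of the default [0, 1]:
    // setMin/setMax control the target interval, here [-1, 1].
    val scaler_2b = new MinMaxScaler()
      .setInputCol("features")
      .setOutputCol("scaledFeatures")
      .setMin(-1.0)
      .setMax(1.0)
    scaler_2b.fit(df).transform(df).show(false)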

    //MaxAbsScaler maps each feature dimension into the closed interval [-1, 1] by dividing by the maximum absolute value of that dimension; it does not shift the distribution and does not destroy the sparsity of the original feature vectors.
    val scaler_3 = new MaxAbsScaler()
      .setInputCol("features")
      .setOutputCol("scaledFeatures")
    val scalerModel_3 = scaler_3.fit(df)
    val scaledData_3 = scalerModel_3.transform(df)
    scaledData_3.show(false)
	+---+-----------------+-------------------------------------------------------------+
	|id |features         |scaledFeatures                                               |
	+---+-----------------+-------------------------------------------------------------+
	|1  |[1.0,12.5,-108.0]|[0.14705882352941177,0.3472222222222222,-0.23529411764705882]|
	|2  |[2.5,36.0,198.0] |[0.36764705882352944,1.0,0.43137254901960786]                |
	|3  |[6.8,24.0,459.0] |[1.0,0.6666666666666666,1.0]                                 |
	+---+-----------------+-------------------------------------------------------------+
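    // A minimal sketch illustrating the sparsity claim above: with sparse input,
    // MaxAbsScaler only rescales the stored non-zero entries, so zeros stay zeros
    // and the output vectors remain sparse.
    val sparseDf = sparkSession.createDataFrame(Seq(
      (1, Vectors.sparse(3, Array(0), Array(-4.0))),
      (2, Vectors.sparse(3, Array(1, 2), Array(2.0, 10.0))))).toDF("id", "features")
    scaler_3.fit(sparseDf).transform(sparseDf).show(false)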

  }
}
In PySpark, normalization can be done with MinMaxScaler from the pyspark.ml library. MinMaxScaler is a common normalization method that rescales the data into a specified range. The steps are as follows:

1. Import the required modules:
```
from pyspark.ml.feature import MinMaxScaler
from pyspark.ml.linalg import Vectors
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, DoubleType
```
2. Prepare the data and create a DataFrame:
```
data = [(Vectors.dense([1.0, 2.0, 3.0]),),
        (Vectors.dense([4.0, 5.0, 6.0]),),
        (Vectors.dense([7.0, 8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])
```
3. Optionally, define a UDF that converts the vector column into a plain array column for easier inspection. Note that the return type must be ArrayType(DoubleType()), and that MinMaxScaler itself still requires a vector column, so the scaler below keeps using "features":
```
to_array = udf(lambda v: v.toArray().tolist(), ArrayType(DoubleType()))
df = df.withColumn("features_array", to_array(df.features))
```
4. Create a MinMaxScaler object and set the input and output column names:
```
scaler = MinMaxScaler(inputCol="features", outputCol="scaled_features")
```
5. Fit the model and transform the data:
```
scaler_model = scaler.fit(df)
scaled_df = scaler_model.transform(df)
```
6. Finally, inspect the normalized result:
```
scaled_df.select("scaled_features").show(truncate=False)
```
This is how normalization is done in PySpark. With MinMaxScaler, the values of each feature column are rescaled into the specified range, so that every feature is treated equally before a machine learning algorithm is applied.