Predictive Data Analytics with Apache Spark -- Feature Engineering

This is the third article in Boutros El-Gamil's series on predictive data analytics with Apache Spark; the original post is at http://www.data-automaton.com/2019/01/08/predictive-data-analytics-with-apache-spark-part-3-feature-engineering/

The first two parts are:

  1. Predictive Data Analytics with Apache Spark -- Introduction

  2. Predictive Data Analytics with Apache Spark -- Data Preparation

1. Removing Low-Variance Features

At the end of the previous article, we saw that some data features change little or not at all over time. Since such features are of no use when building a predictive model, we want to remove them to reduce the dimensionality of the data. The following function takes a Spark Dataframe as input, together with a variance threshold below which a feature is considered uninformative. It computes the variance of each feature and, if that variance is less than or equal to the threshold, drops the feature.

def spark_remove_low_var_features(spark_df, features, threshold, remove):
    '''
    This function removes low-variance features from feature columns in a Spark DF

    INPUTS:
    @spark_df: Spark Dataframe
    @features: list of data features in spark_df to be tested for low-variance removal
    @threshold: lowest accepted variance value of each feature
    @remove: boolean variable determining whether low-variance features should be removed

    OUTPUTS:
    @spark_df: updated Spark Dataframe
    @low_var_features: list of low variance features
    @low_var_values: list of low variance values
    '''

    # set list of low variance features
    low_var_features = []

    # set corresponding list of low-var values
    low_var_values = []

    # loop over data features
    for f in features:

        # compute standard deviation of column 'f'
        std = float(spark_df.describe(f).filter("summary = 'stddev'").select(f).collect()[0].asDict()[f])

        # compute variance
        var = std*std

        # check if column 'f' variance is less than or equal to threshold
        if var <= threshold:

            # append low-var feature name and value to the corresponding lists
            low_var_features.append(f)
            low_var_values.append(var)

            print(f + ': var: ' + str(var))

            # drop column 'f' if @remove is True
            if remove:
                spark_df = spark_df.drop(f)

    # return Spark Dataframe, low variance features, and low variance values
    return spark_df, low_var_features, low_var_values


# remove low-variance features from data
train_df, train_low_var_features, train_low_var_values = spark_remove_low_var_features(train_df, data_features, 0.05, False)
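The call above only identifies the low-variance columns (remove is set to False). One way to actually drop them, while keeping the training and test sets consistent, is sketched below; test_df is assumed to be loaded in the same way as train_df:

# drop the low-variance features found on the training data
# from both the training and the test Dataframes
for f in train_low_var_features:
    train_df = train_df.drop(f)
    test_df = test_df.drop(f)

# keep only the surviving features for the later preprocessing steps
data_features = [f for f in data_features if f not in train_low_var_features]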

After removing the low-variance features, we are left with eight features in both the training and test datasets. The figure below shows the test-data features of engine #15 after dropping the low-variance dimensions.

2. Noise Removal

The second data preprocessing step is to remove irregular data readings that may appear over time. To achieve this, we apply a moving average procedure to the data. The following function uses the Window module to partition the engines' Spark Dataframe by engine and computes a corresponding rolling mean feature for each numeric data feature in that Dataframe.

from pyspark.sql.window import Window
import pyspark.sql.functions as F

def add_rolling_avg_features(df, features, lags, idx, time):
    '''
    This function adds rolling average features for each asset in DF. The new features
    take names OLD_FEATURE+'_rollingmean_'

    INPUTS:
    @df: Spark Dataframe
    @features: list of data features in @df
    @lags: list of window sizes of rolling average method
    @idx: column name of asset ID
    @time: column name of operational time

    OUTPUTS:
    @df: Updated Spark Dataframe
    '''

    # loop over window sizes
    for lag_n in lags:

        # create Spark window
        w = Window.partitionBy(idx).orderBy(time).rowsBetween(1-lag_n, 0)

        # loop over data features
        for f in features:

            # add new column of rolling average of feature 'f'
            df = df.withColumn(f + '_rollingmean_'+str(lag_n), F.avg(F.col(f)).over(w))

    # return DF
    return df


# set lag window to 4 cycles
lags = [4]

# add rolling average features
train_df = add_rolling_avg_features(train_df, data_features, lags, "id", "cycle")
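The same transformation should also be applied to the test data, and the list of new column names comes in handy for the scaling steps below. A minimal sketch, assuming a test_df Dataframe with the same id and cycle columns (the roll_data_features name matches the one used in the standardization call further down):

# apply the same rolling average transformation to the test data
test_df = add_rolling_avg_features(test_df, data_features, lags, "id", "cycle")

# collect the names of the rolling average columns for normalization and standardization
roll_data_features = [f + '_rollingmean_' + str(lag_n)
                      for lag_n in lags
                      for f in data_features]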

After removing the noise from the sensor data, we obtain smoother trends over time. The figures below show the test dataset before and after noise removal.

Test data before noise removal (Engine #15)

Test data after noise removal (Engine #15)

3. Normalization

The third data preprocessing step is feature normalization. The goal of normalization is to rescale each feature into the [0,1] domain. Normalization puts all data features on the same scale, which helps ML algorithms build accurate models. The following function generates normalized features from a set of numeric data features.

def add_normalized_features(df, features):
    '''
    This function squashes columns in a Spark Dataframe into the [0,1] domain.

    INPUTS:
    @df: Spark Dataframe
    @features: list of data features in spark_df

    OUTPUTS:
    @df: Updated Spark Dataframe
    '''

    # create Spark window
    w = Window.partitionBy("id")

    for f in features:

        # compute normalized feature
        norm_feature = (F.col(f) - F.min(f).over(w)) / (F.max(f).over(w) - F.min(f).over(w))

        # add normalized feature to DF
        df = df.withColumn(f + '_norm', norm_feature)

    return df
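As noted below, normalization is applied to the rolling-average (de-noised) columns; a minimal usage sketch, assuming roll_data_features holds those column names as built in the sketch above:

# add normalized versions of the rolling average features
train_df = add_normalized_features(train_df, roll_data_features)
test_df = add_normalized_features(test_df, roll_data_features)

Note that if a feature is constant within an engine, the max minus min denominator is zero; depending on Spark's ANSI SQL setting, the division then yields null or raises an error, so the low-variance removal in step 1 also helps guard against this case.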

The figure below shows the test-data features after normalizing them into the [0,1] domain. Note that we normalize the rolling-average features (i.e., the de-noised data features).

4. Standardization

Another common feature-scaling procedure, similar to normalization, is standardization. Here we rescale the features so that each feature has zero mean and unit variance. In predictive modeling we may use the normalized features, the standardized features, or the union of both sets, whichever yields better performance. The following function standardizes a list of numeric features in a Spark Dataframe.

def add_standardized_features(df, features):
    '''
    This function adds standardized features with 0 mean and unit variance for each data feature in a Spark DF

    INPUTS:
    @df: Spark Dataframe
    @features: list of data features in spark_df

    OUTPUTS:
    @df: Updated Spark Dataframe
    '''

    # set window range
    w = Window.partitionBy("id")

    # loop over features
    for f in features:

        # compute scaled feature
        scaled_feature = (F.col(f) - F.mean(f).over(w)) / (F.stddev(f).over(w))

        # add standardized data features to DF
        df = df.withColumn(f + '_scaled', scaled_feature)

    return df


# add standardized features to df
train_df = add_standardized_features(train_df, roll_data_features)
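One way to sanity-check the result is to verify that the scaled columns have roughly zero mean and unit variance within each engine; a minimal sketch using the column naming produced by the function above:

# spot-check the scaling on the first rolling average feature:
# per-engine mean should be close to 0 and stddev close to 1
f = roll_data_features[0] + '_scaled'
train_df.groupBy("id").agg(F.mean(f).alias("mean"),
                           F.stddev(f).alias("stddev")).show(5)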

The figure below shows the standardized data features generated for the test dataset using the function above. Note that we standardize the rolling-average features (i.e., the de-noised data features).

5. Complete Code

The code for this tutorial can be found in my GitHub repository: https://github.com/boutrosrg/Predictive-Maintenance-In-PySpark
