Spark Online Ad Click Prediction

package Classification

import org.apache.spark.{SparkConf, SparkContext}

/** Online ad click prediction: the label is 1 if the ad on a page was clicked, 0 otherwise.
  *  The feature vector of each impression is built from variables describing the impression event
  *  (e.g. user, URL, page id, page content, ad, advertiser, device type, event, geo location and other related factors).
  *
  * Created by raini on 16-3-4.

  * spark-shell --master yarn \
                --driver-memory 4g \
                --executor-memory 2g \
                --executor-cores 1 \
                --jars /home/raini/spark/lib/mysql-connector-java-5.1.38-bin.jar,/home/raini/spark/lib/jblas-1.2.3.jar
  *
  * Background:
  *
  * Classification means assigning items to distinct categories. A classification model predicts the category from a set
  * of features that describe the properties of the object, event, or context.
  *
  * Classification is a form of supervised learning: the model is trained on samples that carry a label (class) output.
  *
  * Binary classification uses a positive class (1) and a negative class (-1 or 0). Multi-class labels usually start from 0.
  *
  * Types of classification models:
  *   1. Linear models: simple and relatively easy to scale to very large training sets.
  *      They apply a simple linear prediction function to the input variables (features, or independent variables):
  *      y = f(w^T x), where y is the target variable, w is the weight vector and x is the feature vector.
  *   2. Decision trees: a powerful non-linear technique; training is computationally heavier, but it performs well in many cases.
  *   3. Naive Bayes: a simple model that is easy to train, efficient and parallelizable (in practice training only needs
  *      a single pass over the data set); it is also a useful baseline for comparing other models.
  *
  * The models above are used here for binary classification; decision trees and Naive Bayes also support multi-class classification.
  *
  * Fitting: minimize the error between the model output and the actual values. Given the feature vectors and target values
  * of the training data, we look for the weight vector that minimizes the sum of the losses, computed by a loss function,
  * over the training samples.
  *
  * Loss functions for binary classification:
  *   1. Logistic loss = logistic regression model.
  *   2. Hinge loss = linear support vector machine (SVM).
  * A loss function takes the weight vector, the feature vector and the actual output of a training sample, and returns the loss.
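  *   Written out (the standard formulations; w is the weight vector, x the feature vector, y the label encoded as -1/+1):
  *     logistic loss: L(w; x, y) = log(1 + exp(-y * w^T x))
  *     hinge loss:    L(w; x, y) = max(0, 1 - y * w^T x)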
  *
  * Probabilistic models: logistic regression, Naive Bayes.
  * Non-probabilistic models: SVM (a maximum-margin classifier), decision trees (which can express complex non-linear
  *   patterns and feature interactions; splits are chosen by information gain, based on a node impurity measure such as Gini impurity or entropy).
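  *   The two impurity measures, for class proportions p_i at a node (standard definitions):
  *     Gini impurity: G = 1 - sum_i p_i^2
  *     entropy:       H = - sum_i p_i * log(p_i)
  *   Information gain of a split = parent impurity - weighted average of the child nodes' impurities.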
  *
  *
  * Factors that affect model performance, and the tuning process: feature extraction, feature selection, using the correct
  *   data format, the model's assumptions about the data distribution, using more training data, model parameter tuning, cross-validation.
  *
  */
object Classifier {

  def main (args: Array[String]) {
    val conf = new SparkConf()
      .setMaster("local[2]")
      .setSparkHome(System.getenv("SPARK_HOME"))
      .setAppName("Page Click Classification")
      .set("spark.executor.memory", "4g")

    val sc = new SparkContext(conf)

    val rawData = sc.textFile("file:///home/raini/data/train_noheader.tsv")

    val records = rawData.map(line => line.split("\t"))
    records.first()
    //res0: Array[String] = Array("http://www.bloomberg.com/news/2010-12-23/ibm-predicts-holographic-calls-air-breathing-batteries-by-2015.html", "4042", "{""title"":""IBM Sees Holographic Calls Air Breathing Batteries ibm sees holographic calls, air-breathing batteries"",""body"":""A sign stands outside the International Business Machines Corp IBM Almaden Research Center campus in San Jose California Photographer Tony Avelar Bloomberg Buildings stand at the International Business Machines Corp IBM Almaden Research Center campus in the Santa Teresa Hills of San Jose California Photographer Tony Avelar Bloomberg By 2015 your mobile phone will project a 3 D image of anyone who calls and your laptop will be powered by kinetic energy At least that s what International Business Machines Corp sees ...


    import org.apache.spark.mllib.regression.LabeledPoint
    import org.apache.spark.mllib.linalg.Vectors

    /** Data cleaning: replace missing values ("?") with 0 and strip the extra quotation marks. */
    val data = records.map { r =>
      val trimmed = r.map(_.replaceAll("\"",""))
      val label = trimmed(r.size - 1).toInt      // the last column is the label

      val features = trimmed.slice(4, r.size - 1).map(d => if (d == "?") 0.0 else d.toDouble)
      LabeledPoint(label, Vectors.dense(features)) // build a dense vector
    }
    /* // Create a sparse vector (first form)
          val sv1: Vector = Vectors.sparse(3, Array(0,2), Array(1.0,3.0))
       // Create a sparse vector (second form)
          val sv2: Vector = Vectors.sparse(3, Seq((0,1.0),(2,3.0)))

 For a dense vector it is straightforward: you pass in exactly the values you want, declared as Vectors.dense(values: Array[Double]).
 For a sparse vector in the first form, 3 is the length of the vector; the first Array(0,2) holds the indices and the second
 Array(1.0, 3.0) holds the corresponding values, i.e. position 0 has value 1.0 and position 2 has value 3.0.
 For a sparse vector in the second form, 3 is again the length, and each pair in the Seq has the form (index, value).
 */
//    val Array(training, test) = data.randomSplit(Array(0.6,0.4),seed = 11L)
    data.cache
    val numData = data.count

    // Naive Bayes requires non-negative feature values; negative values make training throw an error.
    val nbData = records.map { r =>
      val trimmed = r.map(_.replaceAll("\"",""))
      val label = trimmed(r.size - 1).toInt
      val features = trimmed.slice(4, r.size - 1).map(d => if (d == "?") 0.0 else d.toDouble).map(d => if (d < 0) 0.0 else d)
      LabeledPoint(label, Vectors.dense(features))
    }

    /** Train the classification models */

    import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
    import org.apache.spark.mllib.classification.SVMWithSGD
    import org.apache.spark.mllib.classification.NaiveBayes

    import org.apache.spark.mllib.tree.DecisionTree
    import org.apache.spark.mllib.tree.configuration.Algo
    import org.apache.spark.mllib.tree.impurity.Entropy
    val numIteration = 10
    val maxTreeDepth = 5

    // Train the models

    val lrModel = LogisticRegressionWithSGD.train(data, numIteration)

    val svmModel = SVMWithSGD.train(data, numIteration)

    val nbModel = NaiveBayes.train(nbData)

    val dtModel = DecisionTree.train(data, Algo.Classification, Entropy, maxTreeDepth)

    /** Use the classification models */

    val dataPoint = data.first
    val prediction = lrModel.predict(dataPoint.features)
    //prediction: Double = 1.0

    // Compare the prediction with the true label
    val trueLabel = dataPoint.label
    //trueLabel: Double = 0.0


    // Predict over the whole data set
    val predictions = lrModel.predict(data.map(lp => lp.features))
    predictions.take(5)
    //Array[Double] = Array(1.0, 1.0, 1.0, 1.0, 1.0)


    /** Compute the models' accuracy (fraction of correct predictions) */

    val lrTotalCorrect = data.map { point =>
      if (lrModel.predict(point.features) == point.label) 1 else 0
    }.sum
    val lrAccuracy = lrTotalCorrect / numData       // model accuracy
    // lrAccuracy: Double = 0.5146720757268425


    val svmTotalCorrect = data.map { point =>
      if (svmModel.predict(point.features) == point.label) 1 else 0
    }.sum
    val svmAccuracy = svmTotalCorrect / numData  // model accuracy
    // svmAccuracy: Double = 0.5146720757268425


    val nbTotalCorrect = nbData.map { point =>
      if (nbModel.predict(point.features) == point.label) 1 else 0
    }.sum
    val nbAccuracy = nbTotalCorrect / numData   // model accuracy
    // nbAccuracy: Double = 0.5803921568627451


    // The decision tree returns a score, so the prediction threshold has to be applied explicitly.
    val dtTotalCorrect = data.map { point =>
      val score = dtModel.predict(point.features)
      val predicted = if (score > 0.5) 1 else 0   // threshold 0.5
      if (predicted == point.label) 1 else 0
    }.sum
    val dtAccuracy = dtTotalCorrect / numData  // model accuracy
    // dtAccuracy: Double = 0.6482758620689655



    /** Compute precision-recall (PR curve) and the area under the ROC curve (AUC).
      *
      * Precision measures the quality of the results: it is the number of true positives divided by the total number of
      * true positives and false positives, where a true positive is a class-1 sample predicted as 1 and a false positive
      * is a sample wrongly predicted as 1.
      * Recall measures the completeness of the results: it is the number of true positives divided by the sum of true
      * positives and false negatives, where a false negative is a class-1 sample predicted as 0.
      * High precision usually comes with low recall.
      *
      * The ROC curve is similar in spirit to the PR curve: it plots the classifier's true positive rate against its false positive rate.
      * */
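    // As formulas (TP = true positives, FP = false positives, FN = false negatives, TN = true negatives):
    //   precision = TP / (TP + FP)
    //   recall (true positive rate) = TP / (TP + FN)
    //   false positive rate = FP / (FP + TN)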

    import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
    //
    val metrics = Seq(lrModel, svmModel).map{ model =>
      val scoreAndLabels = data.map{ point =>
        (model.predict(point.features), point.label)
      }
      val metrics = new BinaryClassificationMetrics(scoreAndLabels)
      (model.getClass.getSimpleName, metrics.areaUnderPR(), metrics.areaUnderROC())
    }
    //List((LogisticRegressionModel,0.7567586293858841,0.5014181143280931), (SVMModel,0.7567586293858841,0.5014181143280931))


    //
    val nbMetrics = Seq(nbModel).map{ model =>
      val scoreAndLabels = nbData.map {point =>
        val score = model.predict(point.features)
        (if (score > 0.5) 1.0 else 0.0, point.label)
      }
      val metrics = new BinaryClassificationMetrics(scoreAndLabels)
      (model.getClass.getSimpleName, metrics.areaUnderPR(), metrics.areaUnderROC())
    }
    //nbMetrics: Seq[(String, Double, Double)] = List((NaiveBayesModel,0.6808510815151734,0.5835585110136261))


    //
    val dtMetrics = Seq(dtModel).map{ model =>
      val scoreAndLabels = data.map{ point =>
        val score = model.predict(point.features)
        (if (score > 0.5) 1.0 else 0.0, point.label)
      }
      val metrics = new BinaryClassificationMetrics(scoreAndLabels)
      (model.getClass.getSimpleName, metrics.areaUnderPR(), metrics.areaUnderROC())
    }
    //dtMetrics: Seq[(String, Double, Double)] = List((DecisionTreeModel,0.7430805993331199,0.6488371887050935))


    val allMetrics = metrics ++ nbMetrics ++ dtMetrics
    allMetrics.foreach{ case (m, pr, roc) =>
      println(f"$m, Area under PR: ${pr * 100.0}%2.4f%%, Area under ROC: ${roc * 100.0}%2.4f%%")
    }



    /** Improving model performance and tuning parameters */

    // Feature standardization
    // Represent the feature vectors as an MLlib distributed matrix (the RowMatrix class).
    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    val vectors = data.map(lp => lp.features)
    val matrix = new RowMatrix(vectors)
    val matrixSummary = matrix.computeColumnSummaryStatistics() // compute summary statistics for each matrix column

    println(matrixSummary.mean)  // mean of each column
    println(matrixSummary.max)
    println(matrixSummary.variance) // variance of each column
    println(matrixSummary.numNonzeros) // number of non-zero entries per column
    println(matrixSummary.normL2)

    /** To better match the model's assumptions, standardize each feature to zero mean and unit standard deviation. */
    // Method: subtract the column mean from each feature value, then divide by the column standard deviation to scale it.
    import org.apache.spark.mllib.feature.StandardScaler

    val scaler = new StandardScaler(withMean = true, withStd = true).fit(vectors) // fit the scaler on the feature vectors
    val scaledData = data.map(lp => LabeledPoint(lp.label, scaler.transform(lp.features)))

    println(data.first.features)
    //[0.789131,2.055555556,0.676470588,0.205882353,0.047058824,...]
    println(scaledData.first.features)
    //[1.137647336497678,-0.08193557169294771,1.0251398128933331,-0.05586356442541689,...
    // To verify that the first feature was standardized, subtract its mean from the first feature value and divide by the standard deviation (the square root of the variance):
    println((data.first.features(0) - matrixSummary.mean(0)) / math.sqrt(matrixSummary.variance(0)))
    //1.137647336497678


    /** Retrain the logistic regression model on the standardized data (decision trees and Naive Bayes are not affected by feature standardization). */
    val lrModelScaled = LogisticRegressionWithSGD.train(scaledData,numIteration)
    //lrModelScaled: org.apache.spark.mllib.classification.LogisticRegressionModel = org.apache.spark.mllib
    // .classification.LogisticRegressionModel: intercept = 0.0, numFeatures = 22, numClasses = 2, threshold = 0.5
    val lrTotalCorrectScaled = scaledData.map{ point =>
      if (lrModelScaled.predict(point.features) == point.label) 1 else 0
    }.sum
    val lrAccuracyScaled = lrTotalCorrectScaled / numData
    //lrAccuracyScaled: Double = 0.6204192021636241

    val lrPredictionsVsTrue = scaledData.map{ point =>
      (lrModelScaled.predict(point.features), point.label)
    }
    val lrMetricsScaled = new BinaryClassificationMetrics(lrPredictionsVsTrue)
    val lrPr = lrMetricsScaled.areaUnderPR()   //lrPr: Double = 0.7272540762713375
    val lrRoc = lrMetricsScaled.areaUnderROC() //lrRoc: Double = 0.6196629669112512
    println(f"${lrModelScaled.getClass.getSimpleName}\nAccuracy:" +
      f"${lrAccuracyScaled * 100}%2.4f%%\nArea under PR: ${lrPr * 100}%2.4f%%\nArea under ROC: ${lrRoc * 100}%2.4f%%")


    /** Add more features to the model.
      * So far only some of the features were used; the category variable and the boilerplate column's text content were ignored. */
    // First, collect all categories and build a map from each category to an index, then apply 1-of-k encoding (see the encoding example below).
    val catgories = records.map(r => r(3)).distinct.collect.zipWithIndex.toMap
    //catgories: scala.collection.immutable.Map[String,Int] = Map("weather" -> 0, "sports" -> 6, ...)
    val numCategories = catgories.size //numCategories: Int = 14
//    println(catgories)
//    println(numCategories)
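    // 1-of-k encoding, worked through on this data set: with numCategories = 14, a page whose category maps to index 6
    // (e.g. "sports" in the mapping above) becomes the vector [0,0,0,0,0,0,1.0,0,0,0,0,0,0,0],
    // i.e. a 1.0 at position 6 and 0.0 everywhere else.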

    val dataCategories = records.map{ r =>
      val trimmed = r.map{ _.replaceAll("\"", "")}
      val label = trimmed(r.size - 1).toInt
      val categoryIdx = catgories(r(3))
      val categoryFeatures = Array.ofDim[Double](numCategories)
      categoryFeatures(categoryIdx) = 1.0
      val otherFeatures = trimmed.slice(4, r.size - 1).map(d => if (d == "?") 0.0 else d.toDouble)
      val features = categoryFeatures ++ otherFeatures
      LabeledPoint(label, Vectors.dense(features))
    }
    println(dataCategories.first)
    //dataCategories.saveAsTextFile("/home/raini/1")


    /** Standardize the data */
    val scalerCats = new StandardScaler(withMean = true, withStd = true).fit(dataCategories.map(lp => lp.features))
    val scaledDataCats = dataCategories.map(lp =>
      LabeledPoint(lp.label, scalerCats.transform(lp.features)))
    println(dataCategories.first.features)
    println(scaledDataCats.first.features)  // features after standardization

    /** Train a new logistic regression model */
    val lrModelScaledCats = LogisticRegressionWithSGD.train(scaledDataCats,numIteration)
    val lrTotalCorrectScaledCats = scaledDataCats.map { point =>
      if (lrModelScaledCats.predict(point.features) == point.label) 1 else 0
    }.sum
    val lrAccuracyScaledCats = lrTotalCorrectScaledCats / numData
    //lrAccuracyScaledCats: Double = 0.6657200811359026, an improvement of about 0.05
    val lrPredictionsVsTrueCats = scaledDataCats.map { point =>
      (lrModelScaledCats.predict(point.features), point.label)
    }
    val lrMetricsScaledCats = new BinaryClassificationMetrics(lrPredictionsVsTrueCats)
    val lrPrCats = lrMetricsScaledCats.areaUnderPR() //lrPrCats: Double = 0.7579640787676577
    val lrRocCats = lrMetricsScaledCats.areaUnderROC() //lrRocCats: Double = 0.6654826844243996

    println(f"${lrModelScaledCats.getClass.getSimpleName}\n" +
      f"Accuracy: ${lrAccuracyScaledCats * 100}%2.4f%%\n" +
      f"Area under PR: ${lrPrCats * 100}%2.4f%%\n" +
      f"Area under ROC: ${lrRocCats * 100}%2.4f%%")



    /** Use the correct data format: training Naive Bayes on the numeric feature vectors performs very poorly, as shown below. */
    //                   (This first version used all of the features; they must be non-negative or training throws an error.)
//    val dataNB = records.map { r =>
//      val trimmed = r.map( _.replaceAll("\"",""))
//      val label = trimmed(r.size - 1).toInt
//      val categoryIdx = catgories(r(3))
//      val categoryFeatures = Array.ofDim[Double](numCategories)
//      categoryFeatures(categoryIdx) = 1.0
//      val otherFeatures = trimmed.slice(4, r.size - 1).map(d => if (d == "?") 0.0 else d.toDouble)
//        .map(d => if (d<0) 0.0 else d)
//      val features = categoryFeatures ++ otherFeatures
//      LabeledPoint(label, Vectors.dense(features))   // uses all of the features (numeric + 1-of-k category)
//    }

    /** Here only the category feature is used, yet the result is much better than training on the full feature vectors above,
      * which suggests Naive Bayes is a better fit for 1-of-k encoded categorical features. */
    val dataNB = records.map { r =>
      val trimmed = r.map( _.replaceAll("\"",""))
      val label = trimmed(r.size - 1).toInt
      val categoryIdx = catgories(r(3))
      val categoryFeatures = Array.ofDim[Double](numCategories)
      categoryFeatures(categoryIdx) = 1.0
      LabeledPoint(label, Vectors.dense(categoryFeatures))   // only the categorical feature is used
    }
    // Train the Naive Bayes model
    val nbModelCats = NaiveBayes.train(dataNB)
    val nbTotalCorrectCats = dataNB.map { point =>
      if (nbModelCats.predict(point.features) == point.label) 1 else 0
    }.sum
    val nbAccuracyCats = nbTotalCorrectCats / numData // Accuracy: 60.9601%
    val nbPredictionVsTrueCats = dataNB.map{ point =>
      (nbModelCats.predict(point.features), point.label)
    }
    val nbMetricsCats = new BinaryClassificationMetrics(nbPredictionVsTrueCats)
    val nbPrCats = nbMetricsCats.areaUnderPR()   // Area under PR: 74.0522%
    val nbRocCats = nbMetricsCats.areaUnderROC() // Area under ROC: 60.5138%

    println(f"${nbModelCats.getClass.getSimpleName}\n" +
      f"Accuracy: ${nbAccuracyCats * 100}%2.4f%%\n" +
      f"Area under PR: ${nbPrCats * 100}%2.4f%%\n" +
      f"Area under ROC: ${nbRocCats * 100}%2.4f%%")



    /** Model parameter tuning.
      *        MLlib's linear models support two optimization techniques: SGD and L-BFGS (L-BFGS is only available for logistic regression, via LogisticRegressionWithLBFGS). */
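    // L-BFGS alternative (a minimal illustrative sketch, not one of the tuning runs below): the same standardized data
    // can be fitted with LogisticRegressionWithLBFGS, which usually converges in fewer iterations than SGD.
    import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
    val lbfgsModel = new LogisticRegressionWithLBFGS().setNumClasses(2).run(scaledDataCats)
    val lbfgsCorrect = scaledDataCats.map { point =>
      if (lbfgsModel.predict(point.features) == point.label) 1 else 0
    }.sum
    println(f"LBFGS accuracy: ${lbfgsCorrect / numData * 100}%2.4f%%")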

    import org.apache.spark.mllib.optimization.{Updater,SimpleUpdater,L1Updater,SquaredL2Updater}
    import org.apache.spark.mllib.classification.ClassificationModel
    import org.apache.spark.rdd.RDD
  // Linear models
    // Helper function that trains a logistic regression model for the given parameters
    // (input data, regularization parameter, number of iterations, regularization updater, step size).
    def trainWithParams(input: RDD[LabeledPoint], regParam: Double, numIterations: Int, updater: Updater, stepSize: Double) = {
      val lr = new LogisticRegressionWithSGD
      lr.optimizer
        .setNumIterations(numIterations)
        .setUpdater(updater)
        .setRegParam(regParam)
        .setStepSize(stepSize)
      lr.run(input)
    }
    // Second helper: compute the AUC for the given input data and classification model.
    def creatMetrics(label: String, data: RDD[LabeledPoint], model: ClassificationModel) = {
      val scoreAndLabels = data.map { point =>
        (model.predict(point.features),point.label)
      }
      val metrics = new BinaryClassificationMetrics(scoreAndLabels)
      (label, metrics.areaUnderROC())
    }
    // Cache the standardized data to speed up the repeated model training.
    scaledDataCats.cache()

    // 1. Number of iterations
    val iterResults = Seq(1, 5, 10, 50).map { param =>
      val model = trainWithParams(scaledDataCats, 0.0, param, new SimpleUpdater, 1.0)
      creatMetrics(s"$param iterations", scaledDataCats, model)
    }
    iterResults.foreach { case (param, auc) => println(f"$param, AUC = ${auc * 100}%2.2f%%") }
    // 1 iterations, AUC = 64.95%
    // 5 iterations, AUC = 66.62%
    // 10 iterations, AUC = 66.55%
    // 50 iterations, AUC = 66.81%


    // 2. Step size: a larger step size speeds up convergence, but one that is too large can make the optimization unstable and give a worse solution.
    val stepResults = Seq(0.001, 0.01, 0.1, 1.0, 10.0).map { param =>
      val model = trainWithParams(scaledDataCats, 0.0, numIteration, new SimpleUpdater, param)
      creatMetrics(s"$param step size", scaledDataCats, model)
    }
    stepResults.foreach { case (param, auc) => println(f"$param, AUC = ${auc * 100}%2.2f%%") }
    // 0.001 step size, AUC = 64.97%
    // 0.01 step size, AUC = 64.96%
    // 0.1 step size, AUC = 65.52%
    // 1.0 step size, AUC = 66.55%
    // 10.0 step size, AUC = 61.92%


    // 3. Regularization: SquaredL2Updater gives L2 regularization (alternatives: new L1Updater, or new SimpleUpdater for none).
    val regResults = Seq(0.001, 0.01, 0.1, 1.0, 10.0).map{ param =>
      val model = trainWithParams(scaledDataCats, param, numIteration, new SquaredL2Updater, 1.0)
      creatMetrics(s"${param} L2 regularization parameter", scaledDataCats, model)
    }
    regResults.foreach{ case (param, auc) => println(f"$param, AUC = ${auc * 100}%2.2f%%") }
    // 0.001 L2 regularization parameter, AUC = 66.55%
    // 0.01 L2 regularization parameter, AUC = 66.55%
    // 0.1 L2 regularization parameter, AUC = 66.63%
    // 1.0 L2 regularization parameter, AUC = 66.04%
    // 10.0 L2 regularization parameter, AUC = 35.33%


    /** 2. Decision tree tuning */
    //     For classification the decision tree needs one of two impurity measures: Gini or Entropy (for regression it is variance).
    // Tune tree depth and impurity.
    import org.apache.spark.mllib.tree.impurity.Impurity
    import org.apache.spark.mllib.tree.impurity.Entropy
    import org.apache.spark.mllib.tree.impurity.Gini
    // Helper function
    def trainDTWithParams(input: RDD[LabeledPoint], maxDepth: Int, impurity: Impurity) = {
      DecisionTree.train(input, Algo.Classification, impurity, maxDepth)
    }
    // Decision trees usually need neither feature standardization/normalization nor binary encoding of categorical features, so use data rather than scaledDataCats.
    // Varying the tree depth (entropy impurity):
    val dtResultsEntropy = Seq(1, 2, 3, 4, 5, 10, 20).map{ param =>
      val model = trainDTWithParams(data, param, Entropy)
      val scoreAndLabels = data.map{ point =>
        val score = model.predict(point.features)
        (if (score > 0.5) 1.0 else 0.0, point.label)
      }
      val metrics = new BinaryClassificationMetrics(scoreAndLabels)
      (s" $param tree depth", metrics.areaUnderROC())
    }
    dtResultsEntropy.foreach { case (param, auc) => println(f"$param, AUC = ${auc * 100}%2.2f%%") }
    // 1 tree depth, AUC = 59.33%
    // 2 tree depth, AUC = 61.68%
    // 3 tree depth, AUC = 62.61%
    // 4 tree depth, AUC = 63.63%
    // 5 tree depth, AUC = 64.88%
    // 10 tree depth, AUC = 76.26%
    // 20 tree depth, AUC = 98.45%


    // Varying the tree depth (Gini impurity):
    val dtResultsGini = Seq(1, 2, 3, 4, 5, 10, 20, 30).map{ param =>
      val model = trainDTWithParams(data, param, Gini)
      val scoreAndLabels = data.map{ point =>
        val score = model.predict(point.features)
        (if (score > 0.5) 1.0 else 0.0, point.label)
      }
      val metrics = new BinaryClassificationMetrics(scoreAndLabels)
      (s" $param tree depth", metrics.areaUnderROC())
    }
    dtResultsGini.foreach { case (param, auc) => println(f"$param, AUC = ${auc * 100}%2.2f%%") }
    // 1 tree depth, AUC = 59.33%
    // 2 tree depth, AUC = 61.68%
    // 3 tree depth, AUC = 62.61%
    // 4 tree depth, AUC = 63.63%
    // 5 tree depth, AUC = 64.89%
    // 10 tree depth, AUC = 78.37%
    // 20 tree depth, AUC = 98.87%
    // 30 tree depth, AUC = 99.95%


    /** 3. Naive Bayes parameter tuning */
    // The lambda parameter controls additive (Laplace) smoothing, which handles feature/class combinations that never occur together in the data.
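    // Roughly, for MLlib's multinomial Naive Bayes the smoothed estimate is
    //   P(feature j | class c) = (N_cj + lambda) / (N_c + lambda * numFeatures),
    // so lambda > 0 keeps unseen feature/class combinations from getting zero probability.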
    // Helper function:
    def trainNBWithParams(input: RDD[LabeledPoint], lambda: Double) = {
     val nb = new NaiveBayes()
     nb.setLambda(lambda)
     nb.run(input)
    }
    val nbResults = Seq(0.001, 0.01, 0.1, 1.0, 10.0).map { param =>
      val model = trainNBWithParams(dataNB, param)
      val scoreAndLabels = dataNB.map{ point =>
        (model.predict(point.features), point.label)
      }
      val metrics = new BinaryClassificationMetrics(scoreAndLabels)
      (s"$param lambda", metrics.areaUnderROC())
    }
    nbResults.foreach { case (param, auc) => println(f"$param, AUC = ${auc * 100}%2.2f%%") }
    // 0.001 lambda, AUC = 60.51%
    // 0.01 lambda, AUC = 60.51%
    // 0.1 lambda, AUC = 60.51%
    // 1.0 lambda, AUC = 60.51%
    // 10.0 lambda, AUC = 60.51%


  /** 4. Cross-validation */
    // Split the data into a 60% training set and a 40% test set.
    val trainTestSplit = scaledDataCats.randomSplit(Array(0.6, 0.4), 123)
    val train = trainTestSplit(0)
    val test = trainTestSplit(1)

    // 1. Performance on the test set
    val regResultsTest = Seq(0.0, 0.001, 0.0025, 0.005, 0.01).map { param =>
      val model = trainWithParams(train, param, numIteration, new SquaredL2Updater, 1.0)
      creatMetrics(s"$param L2 regularization parameter", test, model)
    }
    regResultsTest.foreach { case (param, auc) => println(f"$param, AUC = ${auc * 100}%2.6f%%") }
    // 0.0 L2 regularization parameter, AUC = 66.126842%
    // 0.001 L2 regularization parameter, AUC = 66.126842%
    // 0.0025 L2 regularization parameter, AUC = 66.126842%
    // 0.005 L2 regularization parameter, AUC = 66.126842%
    // 0.01 L2 regularization parameter, AUC = 66.093195%

    // 2. Performance on the training set
    val regResultsTrain = Seq(0.0, 0.001, 0.0025, 0.005, 0.01).map { param =>
      val model = trainWithParams(train, param, numIteration, new SquaredL2Updater, 1.0)
      creatMetrics(s"$param L2 regularization parameter", train, model)
    }
    regResultsTrain.foreach { case (param, auc) => println(f"$param, AUC = ${auc * 100}%2.6f%%") }
    // 0.0 L2 regularization parameter, AUC = 66.233459%
    // 0.001 L2 regularization parameter, AUC = 66.233459%
    // 0.0025 L2 regularization parameter, AUC = 66.233459%
    // 0.005 L2 regularization parameter, AUC = 66.257100%
    // 0.01 L2 regularization parameter, AUC = 66.278745%

    /** The results above show that when the model is evaluated on the same data it was trained on, a smaller regularization
      * parameter gives the best performance, whereas on held-out data a somewhat higher regularization parameter gives better test performance. */

  }
}