Fixing the NullPointerException when Spark MLlib saves a model locally on Windows

When training a machine learning model with Spark MLlib on Windows, saving the model to the local file system throws a NullPointerException. The root cause is a missing winutils.exe. The fix is to obtain (or build) a matching winutils.exe, place it in a directory such as c:\winutils\bin, and set the system property 'hadoop.home.dir' to that directory in the code.

1. Problem Description

When running a machine learning model with Spark MLlib on Windows, saving the model to the local disk raises a NullPointerException. The code is as follows:

import org.apache.spark.mllib.classification.{LogisticRegressionWithLBFGS, SVMWithSGD}
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.mllib.util.MLUtils

object SVM {
  def main(args: Array[String]) {
    val conf = new SparkConf().setMaster("local").setAppName("SVM")
    val sc = new SparkContext(conf)
    val data = MLUtils.loadLibSVMFile(sc, "D://spark/sample_libsvm_data.txt")

    val splits = data.randomSplit(Array(0.6,0.4), seed = 11L)
    val training = splits(0).cache()
    val test = splits(1)
    //training.foreach(println)

    val numIterations = 100
    val model = SVMWithSGD.train(training, numIterations)

    //model.clearThreshold();
    println("######## Threshold is : " + model.getThreshold)
    val scoreAndLabels = test.map { point =>
      val score = model.predict(point.features)
      (score, point.label)
    }

    // Get evaluation metrics.
    val metrics = new BinaryClassificationMetrics(scoreAndLabels)
    val auROC = metrics.areaUnderROC()

    println("Area under ROC = " + auROC)

    model.save(sc, "file:///D://spark/SVMTrainingModel")
    //val model = new LogisticRegressionWithLBFGS().setNumClasses(10).run(training)

  }
}
The call to model.save fails with the following exception:

16/08/26 18:34:58 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
	at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
	at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
	at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
	at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
	at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
	at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
	at org.apache.spark.rdd.HadoopRDD$$anonfu
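The message "Could not locate executable null\bin\winutils.exe" shows that Hadoop's shell utilities cannot find a home directory on Windows, so the path resolves to null. Following the fix described in the summary, a minimal sketch of the workaround is to set the 'hadoop.home.dir' system property before the SparkContext is created (the path c:\winutils is an example location; winutils.exe itself must sit in the bin subdirectory underneath it):

```scala
object WinutilsFix {
  def main(args: Array[String]): Unit = {
    // Point Hadoop at the directory that CONTAINS bin\winutils.exe,
    // i.e. the file must exist at c:\winutils\bin\winutils.exe.
    // This must be set BEFORE the SparkContext is constructed,
    // because Hadoop's Shell class reads it in a static initializer.
    System.setProperty("hadoop.home.dir", "c:\\winutils")

    println(System.getProperty("hadoop.home.dir"))
    // ... create SparkConf / SparkContext and call model.save as before
  }
}
```

Alternatively, the HADOOP_HOME environment variable can be set to the same directory, which avoids hard-coding the path in the program.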