The following example shows how to load data and parse it into an RDD (translator's note: RDD is Spark's Resilient Distributed Dataset), then build a linear model using linear regression with stochastic gradient descent (SGD) and make predictions with it. Finally, the model is evaluated by computing the Mean Squared Error (MSE) on the training set.
```scala
import org.apache.spark.mllib.regression.LinearRegressionWithSGD
import org.apache.spark.mllib.regression.LabeledPoint

// Load and parse the data
val data = sc.textFile("mllib/data/ridge-data/lpsa.data")
val parsedData = data.map { line =>
  val parts = line.split(',')
  LabeledPoint(parts(0).toDouble, parts(1).split(' ').map(x => x.toDouble).toArray)
}

// Building the model
val numIterations = 20
val model = LinearRegressionWithSGD.train(parsedData, numIterations)

// Evaluate model on training examples and compute training error
val valuesAndPreds = parsedData.map { point =>
  val prediction = model.predict(point.features)
  (point.label, prediction)
}
val MSE = valuesAndPreds.map { case (v, p) => math.pow(v - p, 2) }.reduce(_ + _) / valuesAndPreds.count
println("training Mean Squared Error = " + MSE)
```
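The parsing and evaluation logic above can be sketched without a Spark cluster: each input line is a label, a comma, and space-separated features, and MSE is the mean of squared residuals over all (label, prediction) pairs. The sample lines and the constant "predict the feature sum" stand-in for `model.predict` below are hypothetical illustrations, not the trained model:

```scala
// Standalone sketch (plain Scala collections, no Spark) of the parse-and-evaluate steps above.
object MseSketch {
  // Parse "label,f1 f2 f3" into (label, features), mirroring parsedData in the example.
  def parse(line: String): (Double, Array[Double]) = {
    val parts = line.split(',')
    (parts(0).toDouble, parts(1).split(' ').map(_.toDouble))
  }

  def main(args: Array[String]): Unit = {
    // Hypothetical sample lines in the same "label,features" format as lpsa.data.
    val lines = Seq("1.0,0.5 0.5", "2.0,1.0 1.0")
    val data = lines.map(parse)

    // Toy stand-in for model.predict: sum the features.
    def predict(features: Array[Double]): Double = features.sum

    val valuesAndPreds = data.map { case (label, features) => (label, predict(features)) }
    val mse = valuesAndPreds.map { case (v, p) => math.pow(v - p, 2) }.sum / valuesAndPreds.size
    println("training Mean Squared Error = " + mse)
  }
}
```

Swapping the `Seq` for an RDD and the toy `predict` for the trained model recovers the Spark version; the arithmetic is identical.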