V. Regression and Classification Modeling with Spark ML

Dataset
No,year,month,day,hour,pm,DEWP,TEMP,PRES,cbwd,Iws,Is,Ir
1,2010,1,1,0,NaN,-21.0,-11.0,1021.0,NW,1.79,0.0,0.0
2,2010,1,1,1,NaN,-21,-12,1020,NW,4.92,0,0
3,2010,1,1,2,NaN,-21,-11,1019,NW,6.71,0,0
4,2010,1,1,3,NaN,-21,-14,1019,NW,9.84,0,0
5,2010,1,1,4,NaN,-20,-12,1018,NW,12.97,0,0
6,2010,1,1,5,NaN,-19,-10,1017,NW,16.1,0,0
7,2010,1,1,6,NaN,-19,-9,1017,NW,19.23,0,0
8,2010,1,1,7,NaN,-19,-9,1017,NW,21.02,0,0
9,2010,1,1,8,NaN,-19,-9,1017,NW,24.15,0,0
10,2010,1,1,9,NaN,-20,-8,1017,NW,27.28,0,0
11,2010,1,1,10,NaN,-19,-7,1017,NW,31.3,0,0
12,2010,1,1,11,NaN,-18,-5,1017,NW,34.43,0,0
13,2010,1,1,12,NaN,-19,-5,1015,NW,37.56,0,0
14,2010,1,1,13,NaN,-18,-3,1015,NW,40.69,0,0
15,2010,1,1,14,NaN,-18,-2,1014,NW,43.82,0,0
16,2010,1,1,15,NaN,-18,-1,1014,cv,0.89,0,0
……
1. Read pm.csv, drop the rows that contain missing values (or fill them with the column mean), then split the dataset into two parts: 80% as the training set and 20% as the test set.
```scala
import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// One record per row of pm.csv
case class data(No: Int, year: Int, month: Int, day: Int, hour: Int,
                pm: Double, DEWP: Double, TEMP: Double, PRES: Double,
                cbwd: String, Iws: Double, Is: Double, Ir: Double)

def main(args: Array[String]): Unit = {
  val conf = new SparkConf().setMaster("local[*]").setAppName("foreast")
  val sc = new SparkContext(conf)
  val sqlContext = new SQLContext(sc)
  Logger.getLogger("org").setLevel(Level.ERROR)
  val root = MyRandomForeast.getClass.getResource("/")
  import sqlContext.implicits._
  // Drop the header row and every row containing "NaN", parse the fields,
  // then remove the No and year columns, which are not used as features.
  val df = sc.textFile(root + "pm.csv")
    .map(_.split(","))
    .filter(_(0) != "No")
    .filter(!_.contains("NaN"))
    .map(x => data(x(0).toInt, x(1).toInt, x(2).toInt, x(3).toInt, x(4).toInt,
      x(5).toDouble, x(6).toDouble, x(7).toDouble, x(8).toDouble, x(9),
      x(10).toDouble, x(11).toDouble, x(12).toDouble))
    .toDF.drop("No").drop("year")
```
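The task also allows imputing missing values with the mean instead of dropping those rows. A minimal sketch of that alternative, assuming `"NaN"` entries are parsed into `Double.NaN` rather than filtered out beforehand (the variable names here are illustrative):

```scala
import org.apache.spark.sql.functions.avg

// Compute the mean of pm over the rows where it is present, then fill the
// missing entries; DataFrameNaFunctions treat both null and NaN as missing.
val pmMean = df.na.drop(Seq("pm")).agg(avg("pm")).first().getDouble(0)
val imputed = df.na.fill(Map("pm" -> pmMean))
```

Whether dropping or imputing is better depends on how much data is missing; with long runs of missing pm readings, the mean fill flattens real variation.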
2. Use month, day, hour, DEWP, TEMP, PRES, cbwd, Iws, Is, and Ir as feature columns (excluding No, year, and pm) and pm as the label column. Train a random forest regression model on the training set, use it to predict the test set, and evaluate the result.
```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}
import org.apache.spark.ml.regression.{RandomForestRegressionModel, RandomForestRegressor}
import org.apache.spark.ml.evaluation.RegressionEvaluator

val splitdf = df.randomSplit(Array(0.8, 0.2))
val (train, test) = (splitdf(0), splitdf(1))
val traindf = train.withColumnRenamed("pm", "label")
// Encode the categorical wind-direction column as a numeric index
val indexer = new StringIndexer().setInputCol("cbwd").setOutputCol("cbwd_")
val assembler = new VectorAssembler()
  .setInputCols(Array("month", "day", "hour", "DEWP", "TEMP", "PRES", "cbwd_", "Iws", "Is", "Ir"))
  .setOutputCol("features")
val rf = new RandomForestRegressor().setLabelCol("label").setFeaturesCol("features")
// setMaxDepth goes up to 20 at most here; raising it greatly increases training
// time but effectively reduces the root mean squared error.
// setMaxBins seems related to the data size; increasing it alone backfires —
// increase it together with setNumTrees.
// With the current parameters, the evaluation result (RMSE) is 46.0002637676162.
val pipeline = new Pipeline().setStages(Array(indexer, assembler, rf))
val model = pipeline.fit(traindf)
val testdf = test.withColumnRenamed("pm", "label")
val labelsAndPredictions = model.transform(testdf)
labelsAndPredictions.select("prediction", "label", "features").show(false)
// RegressionEvaluator's default metric is RMSE
val eva = new RegressionEvaluator().setLabelCol("label").setPredictionCol("prediction")
val rmse = eva.evaluate(labelsAndPredictions)
println("Root mean squared error = " + rmse)
val treemodel = model.stages(2).asInstanceOf[RandomForestRegressionModel]
println("Learned regression forest model:\n" + treemodel.toDebugString)
```
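The tuning notes above (maxDepth, maxBins, and numTrees moving together) can be explored systematically with a parameter grid and cross-validation instead of hand-tuning. A minimal sketch, assuming the `pipeline`, `rf`, `traindf`, and `eva` defined above; the grid values are illustrative, not recommendations:

```scala
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

// Every combination of these values is trained and scored by k-fold CV
val paramGrid = new ParamGridBuilder()
  .addGrid(rf.maxDepth, Array(5, 10, 20))
  .addGrid(rf.maxBins, Array(32, 64))
  .addGrid(rf.numTrees, Array(20, 50))
  .build()

val cv = new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(eva)              // RMSE; the evaluator knows smaller is better
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(3)

val cvModel = cv.fit(traindf)     // keeps the best grid point's model
```

With 12 grid points and 3 folds this trains 36 forests, so expect it to be slow on a single machine.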
3. Bucket the pm column according to the standard below, putting the numeric result into a levelNum column and the string result into a levelStr column:

优 / excellent (0): up to 50
良 / good (1): 50–100
轻度污染 / light pollution (2): 100–150
中度污染 / moderate pollution (3): 150–200
重度污染 / heavy pollution (4): 200–300
严重污染 / severe pollution (5): above 300
```scala
import org.apache.spark.ml.feature.Bucketizer

// Use the Bucketizer feature transformer to split pm into the six intervals
val splits = Array(Double.NegativeInfinity, 50, 100, 150, 200, 300, Double.PositiveInfinity)
val bucketizer = new Bucketizer().setInputCol("pm").setOutputCol("levelNum").setSplits(splits)
val bucketizerdf = bucketizer.transform(df)
// Attach the level names (levelStr) by joining against a small lookup table
val tempdf = sqlContext.createDataFrame(
  Seq((0.0, "优"), (1.0, "良"), (2.0, "轻度污染"), (3.0, "中度污染"), (4.0, "重度污染"), (5.0, "严重污染"))
).toDF("levelNum", "levelStr")
val df2 = bucketizerdf.join(tempdf, "levelNum").drop("pm")
df2.show()
```
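The one subtle point here is the interval semantics: Bucketizer buckets are half-open on the right, so a pm of exactly 100 lands in bucket 2, not bucket 1. A plain-Scala sketch of the same mapping (the function name `level` is just for illustration):

```scala
// Same half-open intervals as the Bucketizer splits: [lower, upper)
def level(pm: Double): Double = pm match {
  case p if p < 50  => 0.0
  case p if p < 100 => 1.0
  case p if p < 150 => 2.0
  case p if p < 200 => 3.0
  case p if p < 300 => 4.0
  case _            => 5.0
}
```

For example, `level(99.9)` returns 1.0 while `level(100.0)` returns 2.0, matching what the Bucketizer produces for those pm values.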
4. Use month, day, hour, DEWP, TEMP, PRES, cbwd, Iws, Is, and Ir as feature columns (excluding No, year, and pm) and levelNum as the label column. Train a random forest classifier on the training set and use it to predict the test set. Post-process the prediction DataFrame: from the prediction column generate a predictionStr column (mapping 0–5 back to 优–严重污染), then evaluate the result.
```scala
import org.apache.spark.ml.classification.{RandomForestClassificationModel, RandomForestClassifier}
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

val splitdf2 = df2.randomSplit(Array(0.8, 0.2))
val (train2, test2) = (splitdf2(0), splitdf2(1))
val traindf2 = train2.withColumnRenamed("levelNum", "label")
val indexer2 = new StringIndexer().setInputCol("cbwd").setOutputCol("cbwd_")
val assembler2 = new VectorAssembler()
  .setInputCols(Array("month", "day", "hour", "DEWP", "TEMP", "PRES", "cbwd_", "Iws", "Is", "Ir"))
  .setOutputCol("features")
val rf2 = new RandomForestClassifier().setLabelCol("label").setFeaturesCol("features")
val pipeline2 = new Pipeline().setStages(Array(indexer2, assembler2, rf2))
val model2 = pipeline2.fit(traindf2)
val testdf2 = test2.withColumnRenamed("levelNum", "label")
val labelsAndPredictions2 = model2.transform(testdf2)
// Map the numeric predictions back to level names (predictionStr) by reusing
// the levelNum-to-levelStr lookup table from step 3
val predictionStrDf = labelsAndPredictions2.join(
  tempdf.withColumnRenamed("levelNum", "prediction").withColumnRenamed("levelStr", "predictionStr"),
  "prediction")
predictionStrDf.select("label", "prediction", "predictionStr", "features").show
// "precision" is the metric name in Spark 1.x; Spark 2.x+ renamed it "accuracy"
val eva2 = new MulticlassClassificationEvaluator()
  .setLabelCol("label").setPredictionCol("prediction").setMetricName("precision")
val accuracy2 = eva2.evaluate(labelsAndPredictions2)
println("Test Error = " + (1.0 - accuracy2))
val treeModel2 = model2.stages(2).asInstanceOf[RandomForestClassificationModel]
println("Learned classification forest model:\n" + treeModel2.toDebugString)
}
```
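Beyond a single precision number, a quick confusion matrix makes it easy to see which pollution levels the classifier mixes up with each other. A minimal sketch, assuming the `labelsAndPredictions2` DataFrame from above:

```scala
// Cross-tabulate true labels against predictions; each count cell shows how
// many test rows with a given true level received a given predicted level.
labelsAndPredictions2
  .groupBy("label", "prediction")
  .count()
  .orderBy("label", "prediction")
  .show()
```

Large off-diagonal counts between adjacent levels (e.g. 良 vs 轻度污染) are expected, since the buckets cut a continuous pm scale at sharp thresholds.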