References:
spark中文官方网址:http://spark.apachecn.org/#/
https://www.iteblog.com/archives/1674.html
I. Key points:
1. Adding a new column to a DataFrame: https://www.cnblogs.com/itboys/p/9762808.html
Methods 1-3 add a column computed from a condition; methods 4 and 5 add a column of unique IDs.
Method 1: use createDataFrame; the new column is produced while building the RDD and the schema.
Method 2: use withColumn; the new column's values are computed by a UDF.
Method 3: write the new column directly in SQL.
Method 4: to add a column of unique sequence numbers instead of a conditional flag, use monotonically_increasing_id.
Method 5: use zipWithUniqueId to generate the IDs and rebuild the DataFrame.
// Method 1: add the column via createDataFrame; the flag is built while constructing the RDD and schema
val trdd = input.select(targetColumns).rdd.map(x => {
  if (x.get(0).toString().toDouble > critValueR || x.get(0).toString().toDouble < critValueL)
    Row(x.get(0).toString().toDouble, "F")
  else
    Row(x.get(0).toString().toDouble, "T")
})
val schema = input.select(targetColumns).schema.add("flag", StringType, true)
val sample3 = ss.createDataFrame(trdd, schema).distinct().withColumnRenamed(targetColumns, "idx")

// Method 2: add the column via withColumn; the flag is computed by a UDF
val code: (Int => String) = (arg: Int) => { if (arg > critValueR || arg < critValueL) "F" else "T" }
val addCol = udf(code)
val sample3 = input.select(targetColumns).withColumn("flag", addCol(input(targetColumns)))
  .withColumnRenamed(targetColumns, "idx")

// Method 3: add the column directly in SQL
input.select(targetColumns).createOrReplaceTempView("tmp")
val sample3 = ss.sqlContext.sql("select distinct " + targetColname + " as idx, case when " +
  targetColname + ">" + critValueR + " then 'F'" +
  " when " + targetColname + "<" + critValueL + " then 'F' else 'T' end as flag from tmp")

// Method 4: add a sequence-number column
import org.apache.spark.sql.functions.monotonically_increasing_id
val inputnew = input.withColumn("idx", monotonically_increasing_id)
// The generated id is unique, but it does not start from zero and is not consecutive;
// think of it as a quasi-random identifier attached to each row.
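The fragments above refer to names defined elsewhere in the cited post (input, targetColumns, critValueL, critValueR, ss), so they do not run on their own. Below is a minimal self-contained sketch of method 2 on a toy DataFrame; the column name score, the thresholds, and the session name spark are made up for illustration.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

val spark = SparkSession.builder().appName("flag-demo").master("local[*]").getOrCreate()
import spark.implicits._

// hypothetical thresholds and data, only for illustration
val critValueL = -2.5
val critValueR = 2.5
val input = Seq(1.0, 3.0, -4.0).toDF("score")

// mark values outside [critValueL, critValueR] as "F", everything else as "T"
val flagUdf = udf((v: Double) => if (v > critValueR || v < critValueL) "F" else "T")
val flagged = input.withColumn("flag", flagUdf($"score"))
flagged.show()   // expected rows: (1.0, T), (3.0, F), (-4.0, F)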
// Method 5: use zipWithUniqueId to get the ids and rebuild the DataFrame.
import spark.implicits._
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, LongType}
val df = Seq(("a", -1.0), ("b", -2.0), ("c", -3.0)).toDF("foo", "bar")
// keep the original schema of df
val s = df.schema
// convert the DataFrame to an RDD,
// zip each row with a unique id,
// then rebuild each row as a Seq so it can be converted back into a DataFrame
val rows = df.rdd.zipWithUniqueId.map { case (r: Row, id: Long) => Row.fromSeq(id +: r.toSeq) }
// rebuild the DataFrame from the rows, prepending an id field to the original schema
val dfWithPK = spark.createDataFrame(
  rows,
  StructType(StructField("id", LongType, false) +: s.fields))
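zipWithUniqueId gives unique but not consecutive ids. If consecutive ids starting from 0 are needed, rdd.zipWithIndex can be used in exactly the same way; the sketch below reuses the df and s defined above (note that zipWithIndex triggers an extra Spark job when the RDD has more than one partition).

// consecutive ids 0, 1, 2, ... via zipWithIndex; reuses df and s from the example above
val indexedRows = df.rdd.zipWithIndex.map { case (r: Row, idx: Long) => Row.fromSeq(idx +: r.toSeq) }
val dfWithIdx = spark.createDataFrame(
  indexedRows,
  StructType(StructField("id", LongType, false) +: s.fields))
dfWithIdx.show()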
2. Adding an ID column: https://blog.csdn.net/liaodaoluyun/article/details/86232639
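The linked post is not reproduced here. As one more option (not necessarily the one the post uses), a consecutive 1-based id can be produced with row_number over a window; a window with no partitioning pulls all rows into a single partition, so this sketch, reusing the toy df above, only suits small DataFrames.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{row_number, monotonically_increasing_id}

// order by a monotonically increasing id so the window has a deterministic ordering;
// the empty window collapses everything into one partition, so use on small data only
val w = Window.orderBy(monotonically_increasing_id())
val dfWithSeq = df.withColumn("id", row_number().over(w))
dfWithSeq.show()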
II. WordCount
package com.qihoo.spark.examles

import com.qihoo.spark.app.SparkAppJob
import org.apache.spark.SparkContext
import org.kohsuke.args4j.{Option => ArgOption}
import org.apache.spark.sql.functions.monotonically_increasing_id

class WordCount extends SparkAppJob {
  // input
  @ArgOption(name = "-i", required = true, aliases = Array("--input"), usage = "input")
  var input: String = _

  // output
  @ArgOption(name = "-o", required = true, aliases = Array("--output"), usage = "output")
  var output: String = _

  override protected def run(sc: SparkContext): Unit = {
    import sparkSession.implicits._
    val showDasouSegment = sparkSession.read.text(input).as[String].filter(_.trim.length() != 0)
    showDasouSegment.show()
    val words = showDasouSegment
      .map(line => line.split("\t"))
      .flatMap(line => line(1).split(" "))
      .groupByKey(value => value).keys
    // words.count() is the action that would actually carry out the word count;
    // the code below instead adds an ID column for each word.
    val res = words.withColumn("ID", monotonically_increasing_id)
    res.show()
    // res.write.text(output)
  }
}
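The job above only attaches an id to each distinct word; calling count() on words would return the number of distinct words, not per-word frequencies. Below is a sketch of an actual per-word count with the Dataset API, assuming the same sparkSession, the same input path variable, and the tab-separated "key<TAB>text" layout read by the job above.

// per-word frequency with the Dataset API
import sparkSession.implicits._

val lines = sparkSession.read.text(input).as[String].filter(_.trim.nonEmpty)
val wordCounts = lines
  .flatMap(line => line.split("\t")(1).split(" "))  // keep only the text field, split into words
  .groupByKey(identity)
  .count()                                          // Dataset[(String, Long)] of (word, frequency)
wordCounts.show()
// write.text needs a single string column, so format each pair as one line before writing:
// wordCounts.map { case (w, c) => s"$w\t$c" }.write.text(output)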