【1】Reposted section
Reposted from: http://blog.csdn.net/sparkexpert/article/details/52837269
First, create two DataFrames; their contents are as follows:
val df = sc.parallelize(Array(
  ("one", "A", 1), ("one", "B", 2), ("two", "A", 3), ("two", "B", 4)
)).toDF("key1", "key2", "value")
df.show()
+----+----+-----+
|key1|key2|value|
+----+----+-----+
| one|   A|    1|
| one|   B|    2|
| two|   A|    3|
| two|   B|    4|
+----+----+-----+
val df2 = sc.parallelize(Array(
  ("one", "A", 5), ("two", "A", 6)
)).toDF("key1", "key2", "value2")
df2.show()
+----+----+------+
|key1|key2|value2|
+----+----+------+
| one|   A|     5|
| two|   A|     6|
+----+----+------+
After JOINing them on both keys, the result contains duplicate key1 and key2 columns:
val joined = df.join(df2, df("key1") === df2("key1") && df("key2") === df2("key2"), "left_outer")
joined.show()
+----+----+-----+----+----+------+
|key1|key2|value|key1|key2|value2|
+----+----+-----+----+----+------+
| two|   A|    3| two|   A|     6|
| two|   B|    4|null|null|  null|
| one|   A|    1| one|   A|     5|
| one|   B|    2|null|null|  null|
+----+----+-----+----+----+------+
With both duplicate columns present, any unqualified reference to one of them fails with: org.apache.spark.sql.AnalysisException: Reference 'key2' is ambiguous
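For instance, selecting key2 from the joined DataFrame reproduces the error (an illustrative line, not from the original post):
joined.select("key2").show()
// throws org.apache.spark.sql.AnalysisException: Reference 'key2' is ambiguous,
// because both join inputs contribute a column named key2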
Many posts online therefore discuss how to drop the duplicate columns after the JOIN (that approach is sketched below for comparison). On closer inspection, though, the problem can be avoided entirely, and very simply, by changing the join expression itself: pass the join keys as a Seq of column names.
df.join(df2, Seq("key1", "key2"), "left_outer").show()
+----+----+-----+------+
|key1|key2|value|value2|
+----+----+-----+------+
| two|   A|    3|     6|
| two|   B|    4|  null|
| one|   A|    1|     5|
| one|   B|    2|  null|
+----+----+-----+------+
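For comparison, the drop-the-duplicates-after-join approach mentioned above can be written as an explicit select on qualified columns; a minimal sketch against the joined DataFrame from before (illustrative, not from the original post):
joined.select(df("key1"), df("key2"), df("value"), df2("value2")).show()
// keeps a single copy of each key column, giving the same columns as the Seq-based join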
df.join(df2, Seq("key1"), "left_outer").show()
// df
// +----+----+-----+
// |key1|key2|value|
// +----+----+-----+
// | one|   A|    1|
// | one|   B|    2|
// | two|   A|    3|
// | two|   B|    4|
// +----+----+-----+
// df2
// +----+----+------+
// |key1|key2|value2|
// +----+----+------+
// | one|   A|     5|
// | two|   A|     6|
// +----+----+------+
// result: key2 appears twice
// +----+----+-----+----+------+
// |key1|key2|value|key2|value2|
// +----+----+-----+----+------+
// | two|   A|    3|   A|     6|
// | two|   B|    4|   A|     6|
// | one|   A|    1|   A|     5|
// | one|   B|    2|   A|     5|
// +----+----+-----+----+------+
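Another common way to disambiguate, not covered in the original post, is to alias each side and qualify the column references; a sketch assuming the standard DataFrame alias API and that sqlContext.implicits._ is in scope (as in the full program below):
val a = df.as("a")
val b = df2.as("b")
a.join(b, $"a.key1" === $"b.key1" && $"a.key2" === $"b.key2", "left_outer")
  .select($"a.key1", $"a.key2", $"a.value", $"b.value2")
  .show()
// qualified references such as $"a.key2" are never ambiguous, even though both inputs have a key2 column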
【2】Self-tested alternative approach
Use withColumnRenamed to rename the join columns of one of the tables before joining.
package com.dt.spark.main.DataFrameLearn
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.{SparkConf, SparkContext}
/**
 * spark-DataFrame study notes [2]: resolving the duplicate columns produced by a DataFrame JOIN (the "Reference '***' is ambiguous" problem)
 */
object DataFrameSQL_2 {
  def main(args: Array[String]) {
    val conf = new SparkConf()
    conf.setAppName("test")
    conf.setMaster("local")
    val sc = new SparkContext(conf)
    // Set the log level
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.apache.spark.sql").setLevel(Level.WARN)
    val sqlContext = new HiveContext(sc)
    import sqlContext.implicits._
    val df = sc.parallelize(Array(
      ("one", "A", 1), ("one", "B", 2), ("two", "A", 3), ("two", "B", 4)
    )).toDF("key1", "key2", "value")
    df.show()
    // +----+----+-----+
    // |key1|key2|value|
    // +----+----+-----+
    // | one|   A|    1|
    // | one|   B|    2|
    // | two|   A|    3|
    // | two|   B|    4|
    // +----+----+-----+
    val df2 = sc.parallelize(Array(
      ("one", "A", 5), ("two", "A", 6)
    )).toDF("key1", "key2", "value2")
    df2.show()
    // +----+----+------+
    // |key1|key2|value2|
    // +----+----+------+
    // | one|   A|     5|
    // | two|   A|     6|
    // +----+----+------+
    val joined = df.join(df2, df("key1") === df2("key1") && df("key2") === df2("key2"), "left_outer")
    joined.show()
    // +----+----+-----+----+----+------+
    // |key1|key2|value|key1|key2|value2|
    // +----+----+-----+----+----+------+
    // | two|   A|    3| two|   A|     6|
    // | two|   B|    4|null|null|  null|
    // | one|   A|    1| one|   A|     5|
    // | one|   B|    2|null|null|  null|
    // +----+----+-----+----+----+------+
    df.join(df2, Seq("key1", "key2"), "left_outer").show()
    // +----+----+-----+------+
    // |key1|key2|value|value2|
    // +----+----+-----+------+
    // | two|   A|    3|     6|
    // | two|   B|    4|  null|
    // | one|   A|    1|     5|
    // | one|   B|    2|  null|
    // +----+----+-----+------+
    // Rename the join columns on one side so the joined result has no duplicate column names
    val df22 = df2.withColumnRenamed("key1", "k1").withColumnRenamed("key2", "k2")
    df.join(df22, df("key1") === df22("k1") && df("key2") === df22("k2"), "left_outer").show()
    // +----+----+-----+----+----+------+
    // |key1|key2|value|  k1|  k2|value2|
    // +----+----+-----+----+----+------+
    // | two|   A|    3| two|   A|     6|
    // | two|   B|    4|null|null|  null|
    // | one|   A|    1| one|   A|     5|
    // | one|   B|    2|null|null|  null|
    // +----+----+-----+----+----+------+
    sc.stop()
  }
}
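As a follow-up to the renamed-column join above: once the join has run, the k1/k2 columns are redundant and could be dropped with DataFrame.drop (available since Spark 1.4). A hypothetical snippet that could be added before sc.stop():
    df.join(df22, df("key1") === df22("k1") && df("key2") === df22("k2"), "left_outer")
      .drop("k1")
      .drop("k2")
      .show()
    // leaves |key1|key2|value|value2|, matching the Seq-based join output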