Spark DataFrame learning notes [2]: resolving duplicate columns produced by a DataFrame JOIN (fixing the "Reference '***' is ambiguous" error)


【1】Reposted section

Reposted from: http://blog.csdn.net/sparkexpert/article/details/52837269

For example, create two DataFrames; their contents are as follows:
val df = sc.parallelize(Array(
    ("one", "A", 1), ("one", "B", 2), ("two", "A", 3), ("two", "B", 4)
)).toDF("key1", "key2", "value")
df.show()

+----+----+-----+
|key1|key2|value|
+----+----+-----+
| one|   A|    1|
| one|   B|    2|
| two|   A|    3|
| two|   B|    4|
+----+----+-----+
val df2 = sc.parallelize(Array(
    ("one", "A", 5), ("two", "A", 6)
)).toDF("key1", "key2", "value2")
df2.show()

+----+----+------+
|key1|key2|value2|
+----+----+------+
| one|   A|     5|
| two|   A|     6|
+----+----+------+
A JOIN on explicit column equality produces a result with two extra fields: a second key1 and a second key2.
val joined = df.join(df2, df("key1") === df2("key1") && df("key2") === df2("key2"), "left_outer")
joined.show()

+----+----+-----+----+----+------+
|key1|key2|value|key1|key2|value2|
+----+----+-----+----+----+------+
| two|   A|    3| two|   A|     6|
| two|   B|    4|null|null|  null|
| one|   A|    1| one|   A|     5|
| one|   B|    2|null|null|  null|
+----+----+-----+----+----+------+
Since both copies of these fields now exist in the result, referencing either of them by name raises an error: org.apache.spark.sql.AnalysisException: Reference 'key2' is ambiguous
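The original post does not show the failing statement; as a minimal sketch, selecting the duplicated column by its bare name on the joined DataFrame above triggers the exception, while qualifying the column through its source DataFrame still works:

// Both inputs contributed a "key2" column, so the bare name cannot be resolved
joined.select("key2").show()   // throws AnalysisException: Reference 'key2' is ambiguous

// Qualifying through the original DataFrame disambiguates the reference
joined.select(df("key2")).show()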
There are plenty of posts online about dropping the duplicate columns after the JOIN, but after some digging it turns out the problem can be avoided entirely, and very simply, by changing the join expression itself: pass the join keys as a Seq.
df.join(df2, Seq("key1", "key2"), "left_outer").show()

+----+----+-----+------+
|key1|key2|value|value2|
+----+----+-----+------+
| two|   A|    3|     6|
| two|   B|    4|  null|
| one|   A|    1|     5|
| one|   B|    2|  null|
+----+----+-----+------+
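For comparison, the drop-the-duplicates-after-the-JOIN workaround mentioned above can be sketched by selecting the surviving columns explicitly through their source DataFrames (newer Spark releases also offer drop(col: Column), but an explicit select works on older versions as well):

// Keep df's key columns plus both value columns; df2's duplicate keys are simply not selected
joined.select(df("key1"), df("key2"), df("value"), df2("value2")).show()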

 df.join(df2, Seq("key1"), "left_outer").show()


+----+----+-----+----+------+
|key1|key2|value|key2|value2|
+----+----+-----+----+------+
| two|   A|    3|   A|     6|
| two|   B|    4|   A|     6|
| one|   A|    1|   A|     5|
| one|   B|    2|   A|     5|
+----+----+-----+----+------+
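Because this result contains two key2 columns, selecting key2 by its bare name would hit the same ambiguity error. A minimal sketch of how to disambiguate through the source DataFrames (the output names key2_left/key2_right are just illustrative):

val joinedOnKey1 = df.join(df2, Seq("key1"), "left_outer")
// "key2" alone is ambiguous here; qualify each copy through the DataFrame it came from
joinedOnKey1.select(joinedOnKey1("key1"),
  df("key2").as("key2_left"), df("value"),
  df2("key2").as("key2_right"), df2("value2")).show()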



【2】Other approaches I tested myself

Use withColumnRenamed to rename the join-key columns of one of the DataFrames:

package com.dt.spark.main.DataFrameLearn

import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Spark DataFrame learning notes [2]: resolving duplicate columns produced by a DataFrame JOIN (fixing the "Reference '***' is ambiguous" error)
  */
object DataFrameSQL_2 {
  def main(args: Array[String]) {

    val conf = new SparkConf()
    conf.setAppName("test")
    conf.setMaster("local")

    val sc = new SparkContext(conf)

    // Set the log level
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.apache.spark.sql").setLevel(Level.WARN)

    val sqlContext = new HiveContext(sc)
    import sqlContext.implicits._

    val df = sc.parallelize(Array(
      ("one", "A", 1), ("one", "B", 2), ("two", "A", 3), ("two", "B", 4)
    )).toDF("key1", "key2", "value")
    df.show()
    //    +----+----+-----+
    //    |key1|key2|value|
    //    +----+----+-----+
    //    | one|   A|    1|
    //    | one|   B|    2|
    //    | two|   A|    3|
    //    | two|   B|    4|
    //    +----+----+-----+

    val df2 = sc.parallelize(Array(
      ("one", "A", 5), ("two", "A", 6)
    )).toDF("key1", "key2", "value2")
    df2.show()

    //    +----+----+------+
    //    |key1|key2|value2|
    //    +----+----+------+
    //    | one|   A|     5|
    //    | two|   A|     6|
    //    +----+----+------+

    val joined = df.join(df2, df("key1") === df2("key1") && df("key2") === df2("key2"), "left_outer")
    joined.show()

    //    +----+----+-----+----+----+------+
    //    |key1|key2|value|key1|key2|value2|
    //    +----+----+-----+----+----+------+
    //    | two|   A|    3| two|   A|     6|
    //    | two|   B|    4|null|null|  null|
    //    | one|   A|    1| one|   A|     5|
    //    | one|   B|    2|null|null|  null|
    //    +----+----+-----+----+----+------+

    df.join(df2, Seq("key1", "key2"), "left_outer").show()

    //    +----+----+-----+------+
    //    |key1|key2|value|value2|
    //    +----+----+-----+------+
    //    | two|   A|    3|     6|
    //    | two|   B|    4|  null|
    //    | one|   A|    1|     5|
    //    | one|   B|    2|  null|
    //    +----+----+-----+------+

    // Rename df2's join-key columns so the joined result carries no duplicate column names
    val df22 = df2.withColumnRenamed("key1", "k1").withColumnRenamed("key2", "k2")

    // Join on the renamed keys: every column name in the result is now unique
    df.join(df22, df("key1") === df22("k1") && df("key2") === df22("k2"), "left_outer").show()
    //    +----+----+-----+----+----+------+
    //    |key1|key2|value|  k1|  k2|value2|
    //    +----+----+-----+----+----+------+
    //    | two|   A|    3| two|   A|     6|
    //    | two|   B|    4|null|null|  null|
    //    | one|   A|    1| one|   A|     5|
    //    | one|   B|    2|null|null|  null|
    //    +----+----+-----+----+----+------+
    
    sc.stop()

  }

}
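Not part of the original post, but a related sketch for completeness: aliasing each DataFrame and qualifying columns through the alias also avoids the ambiguous reference (this assumes a Spark version where DataFrame.as(alias) and org.apache.spark.sql.functions.col are available):

import org.apache.spark.sql.functions.col

// Give each side an alias, join on the qualified keys, and project only the desired columns,
// so the output contains no duplicate names at all
df.as("a").join(df2.as("b"),
    col("a.key1") === col("b.key1") && col("a.key2") === col("b.key2"),
    "left_outer")
  .select(col("a.key1"), col("a.key2"), col("a.value"), col("b.value2"))
  .show()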


