Using collect_list and collect_set in Spark SQL


According to the docs, the collect_set and collect_list functions should be available in Spark SQL. However, I cannot get them to work. I'm running Spark 1.6.0 using a Docker image.

I'm trying to do this in Scala:

import org.apache.spark.sql.functions._

df.groupBy("column1")
  .agg(collect_set("column2"))
  .show()

I receive the following error at runtime:

Exception in thread "main" org.apache.spark.sql.AnalysisException: undefined function collect_set;

I also tried it using PySpark, but it fails as well. The docs state these functions are aliases of Hive UDAFs, but I can't figure out how to enable them.

How can I fix this? Thanks!

Solution

Spark 2.0+:

You have to enable Hive support for a given SparkSession:

In Scala:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .master("local")
  .appName("testing")
  .enableHiveSupport() // required so the Hive UDAFs behind collect_set / collect_list resolve
  .getOrCreate()

In Python:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .enableHiveSupport()
    .getOrCreate())
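
To see the fix end to end on Spark 2.0+, here is a minimal sketch (the toy DataFrame, its column names, and the app name are invented for illustration; it assumes a Spark build that ships the Hive classes):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder
  .master("local")
  .appName("collect-set-demo")
  .enableHiveSupport()
  .getOrCreate()

import spark.implicits._

// Toy data: "column1"/"column2" stand in for the real columns
val df = Seq(("a", 1), ("a", 1), ("a", 2), ("b", 3)).toDF("column1", "column2")

// collect_set drops duplicates within each group; collect_list keeps them
df.groupBy("column1")
  .agg(collect_set("column2"), collect_list("column2"))
  .show()

The two aggregates differ only when a group contains repeated values: collect_set returns the distinct values, collect_list every value.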

Spark < 2.0:

To be able to use Hive UDFs you have to use a Spark build with Hive support (this is already the case with the pre-built binaries, which seems to be your setup) and wrap your SparkContext in a HiveContext.

In Scala:

import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.SQLContext

val sqlContext: SQLContext = new HiveContext(sc)

In Python:

from pyspark.sql import HiveContext

sqlContext = HiveContext(sc)
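
And for Spark 1.6, a sketch of how the original query resolves once the HiveContext is in place (the sample rows are invented; sc is assumed to be the SparkContext the shell or your application already created):

import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.functions._

// sc is the SparkContext the shell or the application already provides
val sqlContext = new HiveContext(sc)
import sqlContext.implicits._

val df = sc.parallelize(Seq(("a", 1), ("a", 1), ("b", 2)))
  .toDF("column1", "column2")

// With the HiveContext in place, collect_set resolves to the Hive UDAF
// and the query from the question no longer throws AnalysisException
df.groupBy("column1")
  .agg(collect_set("column2"))
  .show()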
