Handling the exception java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)

Following up on the previous blog post: http://blog.csdn.net/lzlchangqi/article/details/50631341

With the environment configured, I started writing queries. Below is a function that selects two columns from Hive and returns them as a key-value map, but running it threw the exception in the title.

import scala.collection.mutable.HashMap

// Query two Hive columns for the given day and return them
// as a convert_type -> clicks map.
def get_repage_clicks(data_day_str: String): HashMap[String, Long] = {
    val sql_repage_clicks =
        s"select convert_type, clicks from app.app_cps_repageall_total_clicks where dt = '$data_day_str'"
    val ret = new HashMap[String, Long]()
    hiveContext.sql(sql_repage_clicks).collect().foreach(convertclick =>
        ret += (convertclick.getString(0) -> convertclick.getLong(1)))
    ret
}
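For context on why this line can fail at runtime: the `->` in `key -> value` is syntactic sugar that routes through `Predef.ArrowAssoc`. My understanding (worth double-checking against the Scala changelog) is that Scala 2.11 made `ArrowAssoc` a value class in `Predef`, whose bytecode signature erases to `(Ljava/lang/Object;)Ljava/lang/Object;`, while Scala 2.10 exposed the older `any2ArrowAssoc` conversion instead, so code compiled against 2.11 cannot find the method on a 2.10 runtime. A minimal sketch of the desugaring:

```scala
object ArrowDemo {
  def main(args: Array[String]): Unit = {
    // What the compiler effectively emits for "clicks" -> 42L:
    // an explicit call through Predef.ArrowAssoc and its -> method.
    val pair: (String, Long) = Predef.ArrowAssoc("clicks").->(42L)
    assert(pair == ("clicks", 42L))
    println(pair)
  }
}
```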

The full stack trace:

16/02/19 17:03:57 INFO DAGScheduler: ResultStage 0 (collect at SparkPlan.scala:94) finished in 9.795 s
16/02/19 17:03:57 INFO YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/02/19 17:03:57 INFO DAGScheduler: Job 0 finished: collect at SparkPlan.scala:94, took 10.060897 s
Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)Ljava/lang/Object;
        at com.jd.jd_ad.report.auto.week.cost.CPSCal$$anonfun$main$1.apply(CPSCal.scala:24)
        at com.jd.jd_ad.report.auto.week.cost.CPSCal$$anonfun$main$1.apply(CPSCal.scala:24)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at com.jd.jd_ad.report.auto.week.cost.CPSCal$.main(CPSCal.scala:24)
        at com.jd.jd_ad.report.auto.week.cost.CPSCal.main(CPSCal.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:619)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
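When debugging an error like this, it can help to confirm which Scala version is actually on the runtime classpath before touching the build. A minimal sketch using the standard library's `scala.util.Properties`:

```scala
object VersionCheck {
  def main(args: Array[String]): Unit = {
    // Prints the Scala library version loaded at runtime,
    // e.g. "version 2.10.5" — compare it with scalaVersion in build.sbt.
    println(scala.util.Properties.versionString)
  }
}
```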
The exception is in fact triggered by the line `val ret = new HashMap[String, Long]()`.
Searching https://issues.apache.org/jira/browse/SPARK-5483 revealed the cause: the Scala and Spark versions were inconsistent. The runtime environment is spark-1.4, which should be used with scala-2.10, so I edited build.sbt and changed every 2.11 to 2.10, which fixed the problem:
import AssemblyKeys._
assemblySettings

name := "scalaProjectTest"

version := "1.0"

scalaVersion := "2.10.5"

EclipseKeys.createSrc := EclipseCreateSrc.Default + EclipseCreateSrc.Resource

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.10" % "1.6.0" % "provided",
  "org.apache.spark" % "spark-sql_2.10" % "1.6.0" % "provided",
  "org.apache.spark" % "spark-hive_2.10" % "1.6.0" % "provided"
)
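As a side note, hard-coding the `_2.10` suffix in each artifact name is easy to get out of sync with `scalaVersion`. sbt's `%%` operator appends the Scala binary version automatically, so a single `scalaVersion` setting controls all of them. An equivalent, arguably safer form of the dependency list:

```scala
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.0" % "provided",
  "org.apache.spark" %% "spark-sql"  % "1.6.0" % "provided",
  "org.apache.spark" %% "spark-hive" % "1.6.0" % "provided"
)
```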


