Installing Zeppelin 0.8.0 on a CDH 5.15.0 Cluster

Preface
I tried Zeppelin a few years ago and used it for a while, but set it aside because Hue was a better fit for our workload at the time.
Recently a colleague mentioned that the newer Zeppelin releases have some interesting features, so I decided to install one alongside our freshly deployed CDH 5.15 cluster and see how it performs.
Installation steps
Installation is straightforward. Download the latest release from the official site:
http://zeppelin.apache.org/download.html
Direct download link: http://ftp.riken.jp/net/apache/zeppelin/zeppelin-0.8.0/zeppelin-0.8.0-bin-all.tgz
Upload the tarball to the server and extract it:
tar -zxvf zeppelin-0.8.0-bin-all.tgz
Copy the configuration templates (in the conf directory):

cp zeppelin-env.sh.template zeppelin-env.sh
cp zeppelin-site.xml.template zeppelin-site.xml

Add the following settings to zeppelin-env.sh:

export MASTER=yarn-client
export HADOOP_CONF_DIR=[your_hadoop_conf_path]
export SPARK_HOME=[your_spark_home_path]
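For reference, on a parcel-based CDH install the two paths usually point at the client Hadoop config and the bundled Spark. The concrete values below are assumptions, not taken from the post itself; verify them on your own nodes before using them.

```shell
# Hypothetical values for a parcel-based CDH install -- verify on your nodes.
export MASTER=yarn-client
export HADOOP_CONF_DIR=/etc/hadoop/conf
export SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark
```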

Start the service directly: ./zeppelin-daemon.sh start
If the startup log contains lines like the following, the server came up successfully:

INFO [2018-09-07 11:40:18,663] ({main} AbstractConnector.java[doStart]:266) - Started ServerConnector@6c0905f6{HTTP/1.1}{0.0.0.0:8080}
INFO [2018-09-07 11:40:18,664] ({main} Server.java[doStart]:379) - Started @16611ms
INFO [2018-09-07 11:40:18,664] ({main} ZeppelinServer.java[main]:223) - Done, zeppelin server started
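If you script the deployment, the same success marker can be checked automatically. A minimal sketch (the log file location under ZEPPELIN_HOME/logs is an assumption; the filename varies with host and user):

```shell
# Sketch: return success iff the given Zeppelin log file contains the
# "Done, zeppelin server started" marker shown above.
zeppelin_started() {
  grep -q 'Done, zeppelin server started' "$1"
}

# Example (hypothetical path):
# zeppelin_started "$ZEPPELIN_HOME/logs/zeppelin-$(whoami)-$(hostname).log"
```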

At this point you can create a notebook and start querying.
However, after logging in to the web UI, running a Spark SQL query failed with:

java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at org.apache.zeppelin.spark.SparkZeppelinContext.showData(SparkZeppelinContext.java:112)
	at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:135)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:103)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:633)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
	at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.zeppelin.spark.SparkZeppelinContext.showData(SparkZeppelinContext.java:108)
	... 12 more
Caused by: java.lang.ExceptionInInitializerError
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
	at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:228)
	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:311)
	at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2861)
	at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2150)
	at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2150)
	at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2842)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2841)
	at org.apache.spark.sql.Dataset.head(Dataset.scala:2150)
	at org.apache.spark.sql.Dataset.take(Dataset.scala:2363)
	... 17 more
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Incompatible Jackson version: 2.8.11-1
	at com.fasterxml.jackson.module.scala.JacksonModule$class.setupModule(JacksonModule.scala:64)
	at com.fasterxml.jackson.module.scala.DefaultScalaModule.setupModule(DefaultScalaModule.scala:19)
	at com.fasterxml.jackson.databind.ObjectMapper.registerModule(ObjectMapper.java:747)
	at org.apache.spark.rdd.RDDOperationScope$.<init>(RDDOperationScope.scala:82)
	at org.apache.spark.rdd.RDDOperationScope$.<clinit>(RDDOperationScope.scala)
	... 17 more

Subsequent queries then fail with java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.rdd.RDDOperationScope$, because the static initializer above already failed.

The fix is to make Zeppelin use the same Jackson jars as Spark: delete the Jackson jars that ship in Zeppelin's lib directory and copy in the ones from Spark's jars directory.

# run these inside Zeppelin's lib directory
rm -f jackson-annotations-2.8.0.jar jackson-core-2.8.10.jar jackson-databind-2.8.11.1.jar
cp ~/spark-2.2.1-bin-hadoop2.6/jars/jackson-annotations-2.6.5.jar ./
cp ~/spark-2.2.1-bin-hadoop2.6/jars/jackson-core-2.6.5.jar ./
cp ~/spark-2.2.1-bin-hadoop2.6/jars/jackson-databind-2.6.5.jar ./
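Before deleting anything it helps to confirm which Jackson versions each side actually ships. A small sketch, assuming the usual `name-<version>.jar` filename convention:

```shell
# Sketch: pull the version out of a jar filename such as
# jackson-databind-2.6.5.jar, so the copies under $SPARK_HOME/jars and
# Zeppelin's lib directory can be compared before swapping them.
jar_version() {
  basename "$1" .jar | sed 's/^.*-\([0-9][0-9.]*\)$/\1/'
}

# Example: list Jackson versions on both sides (paths are assumptions):
# for j in "$SPARK_HOME"/jars/jackson-*.jar "$ZEPPELIN_HOME"/lib/jackson-*.jar; do
#   printf '%s -> %s\n' "$j" "$(jar_version "$j")"
# done
```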

I also hit the following error when the Spark interpreter was opened:

ERROR [2018-09-07 11:36:18,453] ({pool-2-thread-2} NewSparkInterpreter.java[open]:124) - Fail to open SparkInterpreter
INFO [2018-09-07 11:36:18,453] ({pool-1-thread-3} NewSparkInterpreter.java[open]:83) - Using Scala Version: 2.11
ERROR [2018-09-07 11:36:18,453] ({pool-2-thread-2} Job.java[run]:190) - Job failed
org.apache.zeppelin.interpreter.InterpreterException: Fail to open SparkInterpreter
	at org.apache.zeppelin.spark.NewSparkInterpreter.open(NewSparkInterpreter.java:125)
	at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:62)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
	at org.apache.zeppelin.spark.SparkSqlInterpreter.getSparkInterpreter(SparkSqlInterpreter.java:76)
	at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:92)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:103)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:633)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
	at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.zeppelin.spark.BaseSparkScalaInterpreter.spark2CreateContext(BaseSparkScalaInterpreter.scala:189)
	at org.apache.zeppelin.spark.BaseSparkScalaInterpreter.createSparkContext(BaseSparkScalaInterpreter.scala:124)
	at org.apache.zeppelin.spark.SparkScala211Interpreter.open(SparkScala211Interpreter.scala:87)
	at org.apache.zeppelin.spark.NewSparkInterpreter.open(NewSparkInterpreter.java:102)
	... 15 more
Caused by: java.lang.NoSuchMethodError: io.netty.buffer.PooledByteBufAllocator.metric()Lio/netty/buffer/PooledByteBufAllocatorMetric;
	at org.apache.spark.network.util.NettyMemoryMetrics.registerMetrics(NettyMemoryMetrics.java:80)
	at org.apache.spark.network.util.NettyMemoryMetrics.<init>(NettyMemoryMetrics.java:76)
	at org.apache.spark.network.client.TransportClientFactory.<init>(TransportClientFactory.java:109)
	at org.apache.spark.network.TransportContext.createClientFactory(TransportContext.java:99)
	at org.apache.spark.rpc.netty.NettyRpcEnv.<init>(NettyRpcEnv.scala:71)
	at org.apache.spark.rpc.netty.NettyRpcEnvFactory.create(NettyRpcEnv.scala:461)
	at org.apache.spark.rpc.RpcEnv$.create(RpcEnv.scala:57)
	at org.apache.spark.SparkEnv$.create(SparkEnv.scala:249)
	at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:175)
	at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:256)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:423)
	at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:933)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:924)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:924)
	... 23 more

A NoSuchMethodError almost always means a jar conflict on the classpath; here the conflict is in netty.
The fix, as with Jackson above: replace the netty jar in Zeppelin's lib directory with the netty-all-4.0.43.Final.jar from Spark's jars directory.
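To track down which jar is actually supplying a conflicting class, you can scan a directory of jars for the class's entry name. A rough sketch: zip archives store entry names uncompressed, so a plain grep over the jar bytes is enough, with no JDK tooling required (the directory paths in the example are assumptions).

```shell
# Sketch: print every jar in a directory that contains the given class,
# e.g. which jar supplies io/netty/buffer/PooledByteBufAllocator.class.
find_class_jar() {
  dir=$1
  entry="$(printf '%s' "$2" | tr '.' '/').class"
  for jar in "$dir"/*.jar; do
    # grep -F searches the raw bytes for the literal entry name
    if grep -qF "$entry" "$jar" 2>/dev/null; then
      echo "$jar"
    fi
  done
}

# Example (ZEPPELIN_HOME is an assumption):
# find_class_jar "$ZEPPELIN_HOME/lib" io.netty.buffer.PooledByteBufAllocator
```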
