Spark 1.0.1 worker fails to start: at java.lang.ClassLoader.loadClass(libgcj.so.10)

Symptom: on a single-machine installation of Spark 1.0.1, the master starts normally, but the worker fails to start with the errors shown in the logs below.

Console output:

master: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/spark-1.0.1-bin-hadoop2/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-master.out
master: failed to launch org.apache.spark.deploy.worker.Worker:
master:      at java.lang.ClassLoader.loadClass(libgcj.so.10)
master:      at gnu.java.lang.MainThread.run(libgcj.so.10)
master: full log in /home/hadoop/spark-1.0.1-bin-hadoop2/sbin/../logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-master.out

Error in the log file:

[hadoop@master spark-1.0.1-bin-hadoop2]$ cat logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-master.out
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Spark Command: java -cp ::/home/hadoop/spark-1.0.1-bin-hadoop2/conf:/home/hadoop/spark-1.0.1-bin-hadoop2/lib/spark-assembly-1.0.1-hadoop2.2.0.jar:/home/hadoop/spark-1.0.1-bin-hadoop2/lib/datanucleus-rdbms-3.2.1.jar:/home/hadoop/spark-1.0.1-bin-hadoop2/lib/datanucleus-core-3.2.2.jar:/home/hadoop/spark-1.0.1-bin-hadoop2/lib/datanucleus-api-jdo-3.2.1.jar:/home/hadoop/hadoop/etc/hadoop -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m org.apache.spark.deploy.worker.Worker spark://master:7077 --webui-port 8081
========================================

Exception in thread "main" java.lang.NoClassDefFoundError: org.apache.spark.deploy.worker.Worker
   at gnu.java.lang.MainThread.run(libgcj.so.10)
Caused by: java.lang.ClassNotFoundException: akka.actor.Actor not found in gnu.gcj.runtime.SystemClassLoader{urls=[file:./,file:/home/hadoop/spark-1.0.1-bin-hadoop2/conf/,file:/home/hadoop/spark-1.0.1-bin-hadoop2/lib/spark-assembly-1.0.1-hadoop2.2.0.jar,file:/home/hadoop/spark-1.0.1-bin-hadoop2/lib/datanucleus-rdbms-3.2.1.jar,file:/home/hadoop/spark-1.0.1-bin-hadoop2/lib/datanucleus-core-3.2.2.jar,file:/home/hadoop/spark-1.0.1-bin-hadoop2/lib/datanucleus-api-jdo-3.2.1.jar,file:/home/hadoop/hadoop/etc/hadoop/], parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
   at java.net.URLClassLoader.findClass(libgcj.so.10)
   at java.lang.ClassLoader.loadClass(libgcj.so.10)
   at java.lang.ClassLoader.loadClass(libgcj.so.10)
   at java.lang.VMClassLoader.defineClass(libgcj.so.10)
   at java.lang.ClassLoader.defineClass(libgcj.so.10)
   at java.security.SecureClassLoader.defineClass(libgcj.so.10)
   at java.net.URLClassLoader.findClass(libgcj.so.10)
   at java.lang.ClassLoader.loadClass(libgcj.so.10)
   at java.lang.ClassLoader.loadClass(libgcj.so.10)
   at gnu.java.lang.MainThread.run(libgcj.so.10)

At first I assumed libgcj itself was at fault, but the error persisted after updating libgcj.

The real cause was that the distribution's bundled GCJ-based JDK had never been fully removed. Every frame in the stack trace resolves into libgcj.so.10, and the classloader is gnu.gcj.runtime.SystemClassLoader, which means the worker was launched by GCJ (a Java 1.5-era runtime) that cannot load the Akka/Scala classes inside the Spark assembly, hence the ClassNotFoundException for akka.actor.Actor. After uninstalling the leftover packages, the worker started normally.
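Before removing anything, you can confirm which runtime the bare java command actually resolves to (a quick check on RHEL/CentOS; exact output varies by system):

which java
java -version                            # reports "gij (GNU libgcj)" when GCJ is being picked up
/usr/sbin/alternatives --display java    # shows which JVM the /usr/bin/java symlink points at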

[root@master yum.repos.d]# echo $JAVA_HOME
/usr/java/jdk
[root@master yum.repos.d]# echo $PATH
/usr/lib64/qt-3.3/bin:/usr/java/jdk/bin:/usr/java/jdk/jre/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
[root@master yum.repos.d]# rpm -qa | grep java
gcc-java-4.4.7-4.el6.x86_64
java_cup-0.10k-5.el6.x86_64
java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64
tzdata-java-2013g-1.el6.noarch
[root@master yum.repos.d]# rpm -qa | grep jdk
jdk-1.7.0_65-fcs.x86_64
[root@master yum.repos.d]# rpm -e --nodeps java_cup-0.10k-5.el6.x86_64
[root@master yum.repos.d]# rpm -e --nodeps java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64
[root@master yum.repos.d]# 
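Note that the interactive shell's PATH above already puts the JDK first; the worker still picked up GCJ most likely because sbin/start-slaves.sh launches workers over ssh, and a non-interactive shell may not source the profile that sets that PATH, so the bare java can still resolve to GCJ's /usr/bin/java. With the GCJ packages removed, restart the daemons from the Spark install directory and the worker comes up cleanly:

sbin/stop-all.sh
sbin/start-all.sh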
Alternatively, explicitly set JAVA_HOME in conf/spark-env.sh so Spark's launch scripts never depend on whatever java happens to be on the PATH.
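For example (a minimal sketch; /usr/java/jdk is the install path from this machine, adjust to yours):

# conf/spark-env.sh — bin/spark-class uses $JAVA_HOME/bin/java when JAVA_HOME is set
export JAVA_HOME=/usr/java/jdk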
