PySpark reading HBase fails with ClassNotFoundException: org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter

ERROR python.Converter: Failed to load converter: org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/var/lib/spark/cspark/python/pyspark/context.py", line 678, in newAPIHadoopRDD
    jconf, batchSize)
  File "/var/lib/spark/cspark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/var/lib/spark/cspark/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/var/lib/spark/cspark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
: java.lang.ClassNotFoundException: org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.spark.util.Utils$.classForName(Utils.scala:229)
	at org.apache.spark.api.python.Converter$$anonfun$getInstance$1$$anonfun$1.apply(PythonHadoopUtil.scala:46)
	at org.apache.spark.api.python.Converter$$anonfun$getInstance$1$$anonfun$1.apply(PythonHadoopUtil.scala:45)
	at scala.util.Try$.apply(Try.scala:192)
	at org.apache.spark.api.python.Converter$$anonfun$getInstance$1.apply(PythonHadoopUtil.scala:45)
	at org.apache.spark.api.python.Converter$$anonfun$getInstance$1.apply(PythonHadoopUtil.scala:44)
	at scala.Option.map(Option.scala:146)
	at org.apache.spark.api.python.Converter$.getInstance(PythonHadoopUtil.scala:44)
	at org.apache.spark.api.python.PythonRDD$.getKeyValueConverters(PythonRDD.scala:743)
	at org.apache.spark.api.python.PythonRDD$.convertRDD(PythonRDD.scala:756)
	at org.apache.spark.api.python.PythonRDD$.newAPIHadoopRDD(PythonRDD.scala:580)
	at org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD(PythonRDD.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:280)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:214)
	at java.lang.Thread.run(Thread.java:745)


Solution:

Spark 2.0 no longer bundles the jar containing the converters that turn HBase data into types Python can read, so it has to be downloaded separately.

Download spark-examples_2.11-1.6.0-typesafe-001.jar (https://mvnrepository.com/artifact/org.apache.spark/spark-examples_2.11/1.6.0-typesafe-001), then create a directory under your Spark installation to hold the jar.
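
For reference, the call that needs these converter classes looks roughly like the following. This is a minimal sketch modeled on the hbase_inputformat.py example shipped with Spark; the ZooKeeper quorum ("zk-host") and table name ("my_table") are placeholders, and sc is the SparkContext provided by the pyspark shell.

# Read an HBase table as an RDD of (rowkey, result) strings.
conf = {
    "hbase.zookeeper.quorum": "zk-host",        # placeholder: your ZooKeeper quorum
    "hbase.mapreduce.inputtable": "my_table",   # placeholder: your HBase table name
}
keyConv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"
valueConv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"
hbase_rdd = sc.newAPIHadoopRDD(
    "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
    "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "org.apache.hadoop.hbase.client.Result",
    keyConverter=keyConv,
    valueConverter=valueConv,
    conf=conf)
print(hbase_rdd.take(1))

Without the spark-examples jar on the classpath, this is exactly the call that fails with the ClassNotFoundException shown above.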

Run the following steps:

1: mkdir /var/lib/spark/jars/hbase/

2: Upload the jar you just downloaded into that directory, for example with rz or scp (at the time of writing the version is spark-examples_2.11-1.6.0-typesafe-001.jar).

3: In Spark's conf directory, edit spark-env.sh and add:

export SPARK_DIST_CLASSPATH=$(/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/bin/hadoop classpath):$(/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/bin/hbase classpath):/var/lib/spark/jars/hbase/*

Replace /opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/bin/hadoop with the hadoop binary of your own installation, and /opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/bin/hbase with your own hbase binary.

Finally, restart HBase and relaunch PySpark so the new spark-env.sh is picked up, and the error is gone.
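
After relaunching, you can confirm the jar really is on the driver's classpath before going back to your HBase code. This is just a convenience check through py4j (it uses the internal sc._jvm handle, so treat it as a quick sanity test from the interactive pyspark shell rather than production code):

# Ask the driver JVM to load the converter class by name.
# If this returns a Java Class object instead of raising a Py4JJavaError
# wrapping ClassNotFoundException, the classpath change worked.
sc._jvm.java.lang.Class.forName(
    "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter")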


