I wrote a pyspark script that reads two json files, coGroups them, and sends the result to an elasticsearch cluster. When I run it locally, everything works as expected (mostly): I downloaded the elasticsearch-hadoop jar file for the org.elasticsearch.hadoop.mr.EsOutputFormat and org.elasticsearch.hadoop.mr.LinkedMapWritable classes, run my pyspark job with the --jars argument, and I can see the documents appearing in the elasticsearch cluster.
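For reference, the write step looks roughly like this (the node address, the "my_index/my_type" resource, and the sample RDD below are placeholders, not the actual values from my script):

```python
from pyspark import SparkContext

sc = SparkContext(appName="es_write_example")

# Placeholder configuration; my real script uses its own hosts and index.
es_write_conf = {
    "es.nodes": "localhost",
    "es.port": "9200",
    "es.resource": "my_index/my_type",
}

# Stand-in for the (doc_id, dict) pairs produced by the cogroup step.
rdd = sc.parallelize([("1", {"field": "value"})])

# This is the call that fails on the cluster; it needs the elasticsearch-hadoop
# classes (EsOutputFormat, LinkedMapWritable) on the executor classpath.
rdd.saveAsNewAPIHadoopFile(
    path="-",
    outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf=es_write_conf,
)
```

Locally I submit it along the lines of spark-submit --jars /path/to/elasticsearch-hadoop.jar spark_test.py (the jar path here is a placeholder for whichever version I downloaded).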
However, when I try to run it on a spark cluster, I get this error:
Traceback (most recent call last):
  File "/root/spark/spark_test.py", line 141, in
    conf=es_write_conf
  File "/root/spark/python/pyspark/rdd.py", line 1302, in saveAsNewAPIHadoopFile
    keyConverter, valueConverter, jconf)
  File "/root/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/root/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile.
: java.lang.ClassN