As the title says: when connecting to MongoDB from PySpark I kept hitting all kinds of errors. I swapped out the Scala, Spark, and Hadoop versions, changed the system paths repeatedly, searched Baidu, and asked ChatGPT, all with no luck. As soon as I called spark.read.format('mongo').load() I got errors like "An error occurred while calling o35.load". Finally I read through the logs and realized the problem was the mongo-spark-connector version. None of the versions suggested in answers online worked for me. On the official site I found I should use the latest release, 10.1.1: my MongoDB version is 6.0.4, Spark is 3.2, and Scala is also 3.2. The official docs say 10.1.1 is compatible with the earlier 3.x and 2.x connector setups and works with MongoDB 4.0 and above. I tried it and it actually worked. The lesson here: reading the logs to analyze the problem really matters; don't just rely on Baidu and random answers from the internet.
Here is my code:
import os
from pyspark.sql import SparkSession

def get_MongoDB_Data(database, collection):
    # Point Spark at the local installs (these paths are specific to my machine)
    os.environ['SPARK_HOME'] = 'D:/spark/spark-3.2.3-bin-hadoop3.2'
    os.environ['SCALA_HOME'] = 'D:/scala/scala3-3.2.2(1)/scala3-3.2.2'
    os.environ['HADOOP_HOME'] = 'D:/hadoop/hadoop-3.2.2'
    # Same URI is used for both the read and write configs
    uri = 'mongodb://127.0.0.1/' + database + '.' + collection
    my_spark = SparkSession \
        .builder \
        .appName("myApp") \
        .config("spark.mongodb.read.connection.uri", uri) \
        .config("spark.mongodb.write.connection.uri", uri) \
        .config("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector_2.12:10.1.1") \
        .getOrCreate()
    # With connector 10.x the data source short name is "mongodb", not "mongo"
    df = my_spark.read.format("mongodb").load()
    return df
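The URI concatenation inside the function can be pulled out into a small pure helper, which also makes it easy to sanity-check the string before handing it to Spark. This is a minimal sketch; the name build_mongo_uri and the host parameter are my own additions, not part of the original code:

```python
def build_mongo_uri(database: str, collection: str, host: str = "127.0.0.1") -> str:
    """Build the MongoDB connection URI passed to both
    spark.mongodb.read.connection.uri and spark.mongodb.write.connection.uri."""
    return f"mongodb://{host}/{database}.{collection}"
```

For example, build_mongo_uri("mydb", "recipes") produces "mongodb://127.0.0.1/mydb.recipes", the same shape as the hand-concatenated string above.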
报错示例:
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Traceback (most recent call last):
File "D:\PyCharmProjects\Recipe_Detection\main.py", line 193, in <module>
df = my_spark.read.format("mongodb").load()
File "D:\Anaconda\lib\site-packages\pyspark\sql\readwriter.py", line 164, in load
return self._df(self._jreader.load())
File "D:\Anaconda\lib\site-packages\py4j\java_gateway.py", line 1321, in __call__
return_value = get_return_value(
File "D:\Anaconda\lib\site-packages\pyspark\sql\utils.py", line 111, in deco
return f(*a, **kw)
File "D:\Anaconda\lib\site-packages\py4j\protocol.py", line 326, in get_return_value
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o35.load.
: java.lang.ClassNotFoundException:
Failed to find data source: mongodb. Please find packages at
http://spark.apache.org/third-party-projects.html
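The "Failed to find data source" error above is exactly what you get when the format name and the connector version don't match: the 10.x connector registers the source as "mongodb", while the legacy 3.x/2.x connector used "mongo". A small sketch of that mapping (the helper name format_for_connector is hypothetical, just to illustrate the rule):

```python
LEGACY_FORMAT = "mongo"      # mongo-spark-connector 3.x / 2.x
CURRENT_FORMAT = "mongodb"   # mongo-spark-connector 10.x

def format_for_connector(version: str) -> str:
    """Pick the Spark data source name matching a connector version string.
    Passing the wrong name triggers the ClassNotFoundException shown above."""
    major = int(version.split(".")[0])
    return CURRENT_FORMAT if major >= 10 else LEGACY_FORMAT
```

So with "org.mongodb.spark:mongo-spark-connector_2.12:10.1.1" on the classpath, only spark.read.format("mongodb").load() resolves.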