I am trying to write to a MySQL table using Spark's write.jdbc() function inside a partition task that is invoked via foreachPartition(test). However, I am receiving a pickling error.
I am not sure if the issue is that I am already inside a task and Spark runs write.jdbc() as a task itself; from my understanding, nesting tasks like this isn't allowed. I could return the list "row" from my test() function and call write.jdbc() inside main(), but I would rather not have to collect the data structures back to the driver. Code and error below:
CODE:
def test(partition_iter):
    row = []
    # note: original had 'col2' twice; second key corrected to 'col3'
    row.append({'col1': 26, 'col2': 12, 'col3': 153.49353894392, 'col4': 1})
    df_row = SPARK.createDataFrame(row)
    df_row.write.jdbc(url="jdbc:mysql://rds-url/db_name",
                      table="db_name",
                      properties={"driver": "com.mysql.jdbc.Driver",
                                  "user": "user",
                                  "password": "password"},
                      mode="append")

def main():
    SPARK.sparkContext.parallelize([1, 2, 3, 4]).foreachPartition(test)

main()
ERROR:
Traceback (most recent call last):
File "/usr/lib/spark/python/pyspark/cloudpickle.py", line 107, in dump
return Pickler.dump(self, obj)
File "/usr/lib64