Can't write to MySQL using Spark API - pickle.PicklingError: could not serialize object

I am trying to write to a MySQL table using Spark's jdbc() function inside a partition task invoked via foreachPartition(test). However, I am receiving a pickling error.

I am not sure if the issue is that Spark is already inside a task and write.jdbc() would launch a task of its own; from my understanding that isn't allowed. I could return the list "row" from my test() function and call write.jdbc() inside main(), but I would rather not collect the data structures back to the master. Code and error:

CODE:

def test(partition_iter):
    # Build a one-row DataFrame and try to write it to MySQL from inside
    # the executor task; SPARK is the driver's SparkSession.
    row = []
    row.append({'col1': 26, 'col2': 12, 'col3': 153.49353894392, 'col4': 1})
    df_row = SPARK.createDataFrame(row)
    df_row.write.jdbc(url="jdbc:mysql://rds-url/db_name", table="db_name",
                      properties={"driver": "com.mysql.jdbc.Driver",
                                  "user": "user", "password": "password"},
                      mode="append")

def main():
    SPARK.sparkContext.parallelize([1, 2, 3, 4]).foreachPartition(test)

main()

ERROR:

Traceback (most recent call last):
  File "/usr/lib/spark/python/pyspark/cloudpickle.py", line 107, in dump
    return Pickler.dump(self, obj)
  File "/usr/lib64
