PySpark: Using Objects in an RDD (Python pickle)

I am currently learning Python and want to use it with Spark.

I have this very simple (and useless) script:

import sys

from pyspark import SparkContext

class MyClass:

    def __init__(self, value):
        self.v = str(value)

    def addValue(self, value):
        self.v += str(value)

    def getValue(self):
        return self.v

if __name__ == "__main__":
    if len(sys.argv) != 1:
        print("Usage CC")
        exit(-1)

    data = [1, 2, 3, 4, 5, 2, 5, 3, 2, 3, 7, 3, 4, 1, 4]
    sc = SparkContext(appName="WordCount")
    d = sc.parallelize(data)
    inClass = d.map(lambda input: (input, MyClass(input)))
    reduzed = inClass.reduceByKey(lambda a, b: a.addValue(b.getValue))
    print(reduzed.collect())

When executing it with

spark-submit CustomClass.py

the following error is thrown (output shortened):

Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
    process()
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 133, in dump_stream
    for obj in iterator:
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1728, in add_shuffle_key
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 415, in dumps
    return pickle.dumps(obj, protocol)
PicklingError: Can't pickle __main__.MyClass: attribute lookup __main__.MyClass failed

at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)...

To me the statement

PicklingError: Can't pickle __main__.MyClass: attribute lookup __main__.MyClass failed

seems to be important. It means that the class instances can't be serialized, right?

Do you know how to solve this issue?

Thanks and regards

Solution

There are a number of issues:

If you put MyClass in a separate file, it can be pickled. This is a common problem with pickling in Python: pickle serializes classes by reference, so a class defined in the driver's __main__ cannot be looked up again on the workers. It is simple to solve by moving MyClass into its own module and then using from myclass import MyClass. Normally dill can fix these issues (as in import dill as pickle), but it didn't work for me here.
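To see why, here is a minimal sketch, independent of Spark, showing that pickle stores classes by reference (module plus class name) rather than by value:

import pickle

class Local:
    pass

payload = pickle.dumps(Local())
# The payload contains only a reference to "__main__.Local", not the
# class definition itself:
print(b"__main__" in payload, b"Local" in payload)  # True True

Whoever unpickles such a payload must be able to import the referenced class, and pickle refuses to serialize a class that cannot be looked up by that reference. On a Spark worker, __main__ is PySpark's own worker script, so the lookup of __main__.MyClass fails there -- exactly the PicklingError shown above, raised while the shuffle serializes the values.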

Once this is solved, your reduce still doesn't work, since calling addValue returns None (there is no return statement), not an instance of MyClass. You need to change addValue to return self.

Finally, the lambda needs to call getValue, so it should read a.addValue(b.getValue()). The original a.addValue(b.getValue) passes the bound method object itself, as shown below.
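Both problems can be reproduced with plain Python, assuming the original MyClass (without the return self fix):

a = MyClass("1")
b = MyClass("2")

# Bug 1: b.getValue without parentheses is a bound method object, so
# its str() representation is appended instead of "2":
a.addValue(b.getValue)
print(a.getValue())  # 1<bound method MyClass.getValue of ...>

# Bug 2: addValue has no return statement, so it returns None; the
# next reduceByKey step would then call None.addValue(...) and raise
# an AttributeError.
print(a.addValue("3"))  # None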

Together:

myclass.py

class MyClass:

    def __init__(self, value):
        self.v = str(value)

    def addValue(self, value):
        self.v += str(value)
        return self

    def getValue(self):
        return self.v

main.py

import sys

from pyspark import SparkContext
from myclass import MyClass

if __name__ == "__main__":
    if len(sys.argv) != 1:
        print("Usage CC")
        exit(-1)

    data = [1, 2, 3, 4, 5, 2, 5, 3, 2, 3, 7, 3, 4, 1, 4]
    sc = SparkContext(appName="WordCount")
    d = sc.parallelize(data)
    inClass = d.map(lambda input: (input, MyClass(input)))
    reduzed = inClass.reduceByKey(lambda a, b: a.addValue(b.getValue()))
    print(reduzed.collect())
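Note that myclass.py must also be importable by the worker processes, not just the driver. When submitting to a cluster, it can be shipped alongside the job:

spark-submit --py-files myclass.py main.py

or, equivalently, by calling sc.addPyFile("myclass.py") on the SparkContext before the class is used.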
