python - Spark Cartesian product

I have to compare coordinates to compute distances, so I load the data with sc.textFile() and build a Cartesian product of it with itself. There are about 2,000,000 lines in the text file, so 2,000,000 x 2,000,000 coordinate pairs have to be compared.
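For scale, here is a quick back-of-the-envelope check of my own (not from the original post; the ~100 bytes per serialized pair is a loose assumption, not a measured value). cartesian() emits n^2 pairs, so these input sizes blow up very quickly:

# Rough output sizes for data.cartesian(data) at the row counts discussed here.
for n in (2_000, 15_000, 2_000_000):
    pairs = n * n  # cartesian() output grows quadratically in the input
    print(f"{n:>9,} rows -> {pairs:>16,} pairs "
          f"(~{pairs * 100 / 1e9:,.1f} GB at an assumed ~100 B/pair)")

At 2,000,000 rows that is 4 * 10^12 pairs, i.e. hundreds of terabytes of intermediate data under this assumption.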

I tested the code with about 2,000 coordinates and it ran fine within a few seconds. With the big file, however, it seems to stop at some point and I don't know why. The code looks like this:

from math import asin, cos, sqrt

import numpy as np


def concat(x, y):
    # Merge values during reduceByKey: accumulate (id, distance) tuples
    # into one flat list, whatever mix of list/tuple the two sides are.
    if isinstance(x, list) and isinstance(y, list):
        return x + y
    if isinstance(x, list) and isinstance(y, tuple):
        return x + [y]
    if isinstance(x, tuple) and isinstance(y, list):
        return [x] + y
    return [x, y]


def haversian_dist(pair):
    # pair is (row_a, row_b) from the Cartesian product; each row is the
    # list of string fields from one semicolon-split input line.
    lat1 = float(pair[0][0])
    lat2 = float(pair[1][0])
    lon1 = float(pair[0][2])
    lon2 = float(pair[1][2])
    p = 0.017453292519943295  # pi / 180, degrees -> radians
    a = (0.5 - cos((lat2 - lat1) * p) / 2
         + cos(lat1 * p) * cos(lat2 * p) * (1 - cos((lon2 - lon1) * p)) / 2)
    print(pair[0][1])  # debug print of the first point's id
    # Key by the first point's id; value is (other id, distance in km).
    # 12742 is the Earth's diameter in km.
    return (int(float(pair[0][1])),
            (int(float(pair[1][1])), 12742 * asin(sqrt(a))))


def sort_val(kv):
    # Sort one key's accumulated (globalid, distance) pairs by distance.
    dtype = [("globalid", int), ("distance", float)]
    a = np.array(kv[1], dtype=dtype)
    sorted_mins = np.sort(a, order="distance", kind="mergesort")
    return (kv[0], sorted_mins)


def calc_matrix(sc, path, rangeval, savepath, name):
    data = sc.textFile(path)
    data = data.map(lambda x: x.split(";"))
    data = data.repartition(100).cache()
    data.collect()  # materializes the whole RDD on the driver; result discarded
    matrix = data.cartesian(data)
    values = matrix.map(haversian_dist)
    values = values.reduceByKey(concat)
    values = values.map(sort_val)
    # Skip index 0 (the point itself) and keep the next nearest entries.
    values = values.map(lambda x: (x[0], x[1][1:int(rangeval)].tolist()))
    values = values.map(lambda x: (x[0], [y[0] for y in x[1]]))
    dicti = values.collectAsMap()
    hp.save_pickle(dicti, savepath, name)  # hp is a helper module of the project
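For context, this is roughly how the function would be driven; a minimal sketch under assumptions: the paths and names are placeholders, and the input rows are assumed to be semicolon-separated as "lat;id;lon", matching the field indices used in haversian_dist.

from pyspark import SparkContext

sc = SparkContext(appName="distance-matrix")
# Placeholder arguments, not from the original post: a text file of
# "lat;id;lon" rows, the number of nearest neighbours to keep, and
# where hp.save_pickle (the poster's helper) should write the result.
calc_matrix(sc, "hdfs:///data/coords.txt", rangeval=10,
            savepath="/tmp/out", name="neighbours")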

Even a file with about 15,000 entries doesn't work. I know the Cartesian product causes O(n^2) runtime. But shouldn't Spark be able to handle that? Or is something else wrong? The only starting point I have is an error message, but I don't know whether it relates to the actual problem:

16/08/06 22:21:12 WARN TaskSetManager: Lost task 15.0 in stage 1.0 (TID 16, hlb0004): java.net.SocketException: Datenübergabe unterbrochen (broken pipe)
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
    at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:440)
    at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:452)
    at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:452)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:452)
    at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:280)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1741)
    at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:239)
16/08/06 22:21:12 INFO TaskSetManager: Starting task 15.1 in stage 1.0 (TID 17, hlb0004, partition 15,PROCESS_LOCAL, 2408 bytes)
16/08/06 22:21:12 WARN TaskSetManager: Lost task 7.0 in stage 1.0 (TID 8, hlb0004): java.net.SocketException: Connection reset
    at java.net.SocketInputStream.read(SocketInputStream.java:209)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:139)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:342)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
