Broadcast Variables in Java Spark

1. How to use a broadcast variable in Java Spark

In Java Spark, a broadcast variable is created with the JavaSparkContext.broadcast() method.
The code looks like this:


    SparkSession sparkSession = SparkSession.builder().config(sparkConf).getOrCreate();
    JavaSparkContext javaSparkContext = JavaSparkContext.fromSparkContext(sparkSession.sparkContext());

    Dataset<Row> labelDimensionTable = sparkSession.read().parquet(labelDimPath);
    Map<String, Long> labelNameToId = getNameToId(labelDimensionTable);
    Broadcast<Map<String, Long>> labelNameIdBroadcast = javaSparkContext.broadcast(labelNameToId);

    Map<String, Long> getNameToId(Dataset<Row> labelDimTable) {
        return labelDimTable.javaRDD().mapToPair(
                new PairFunction<Row, String, Long>() {
                    @Override
                    public Tuple2<String, Long> call(Row curRow) throws Exception {
                        Long labelId = curRow.getAs("label_id");
                        String labelTitle = curRow.getAs("label_title");

                        return Tuple2.apply(labelTitle, labelId);
                    }
                }
        ).collectAsMap();
    }

2. The Spark job fails at runtime with the following error


20/09/09 18:23:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 5.0 (TID 4008, node-hadoop67.com, executor 3, partition 0, RACK_LOCAL, 8608 bytes)
20/09/09 18:23:00 INFO storage.BlockManagerInfo: Added broadcast_9_piece0 in memory on node-hadoop67.com:23191 (size: 41.1 KB, free: 2.5 GB)
20/09/09 18:23:01 INFO storage.BlockManagerInfo: Added broadcast_8_piece0 in memory on node-hadoop67.com:23191 (size: 33.5 KB, free: 2.5 GB)
20/09/09 18:23:02 INFO storage.BlockManagerInfo: Added broadcast_5_piece1 in memory on node-hadoop67.com:23191 (size: 698.1 KB, free: 2.5 GB)
20/09/09 18:23:02 INFO storage.BlockManagerInfo: Added broadcast_5_piece0 in memory on node-hadoop67.com:23191 (size: 4.0 MB, free: 2.5 GB)
20/09/09 18:23:02 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 5.0 (TID 4008, node-hadoop67.com, executor 3): java.io.IOException: java.lang.UnsupportedOperationException
	at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1367)
	at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:207)
	at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:66)
	at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:66)
	at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:96)
	at com.kk.search.user_profile.task.user_profile.UserLabelProfile$1.call(UserLabelProfile.java:157)
	at org.apache.spark.sql.Dataset$$anonfun$44.apply(Dataset.scala:2605)
	at org.apache.spark.sql.Dataset$$anonfun$44.apply(Dataset.scala:2605)
	at org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$5.apply(objects.scala:188)
	at org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$5.apply(objects.scala:185)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:836)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:836)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:109)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:381)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.UnsupportedOperationException
	at java.util.AbstractMap.put(AbstractMap.java:209)
	at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:162)
	at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39)
	at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790)
	at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:278)
	at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$8.apply(TorrentBroadcast.scala:308)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1394)
	at org.apache.spark.broadcast.TorrentBroadcast$.unBlockifyObject(TorrentBroadcast.scala:309)
	at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1$$anonfun$apply$2.apply(TorrentBroadcast.scala:235)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1.apply(TorrentBroadcast.scala:211)
	at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1360)
	... 29 more

20/09/09 18:23:02 INFO scheduler.TaskSetManager: Starting task 0.1 in stage 5.0 (TID 4009, node-hadoop64.com, executor 7, partition 0, RACK_LOCAL, 8608 bytes)


Note the specific cause:

Caused by: java.lang.UnsupportedOperationException
	at java.util.AbstractMap.put(AbstractMap.java:209)

3. Cause

It turns out to be a serialization problem. When using the Java API, if the broadcast variable is produced directly by JavaPairRDD.collectAsMap(), the value that gets broadcast is a Scala map wrapper typed only as Map. Kryo does not know the real concrete type, so when it deserializes the value on the executor it goes through the put() method inherited from java.util.AbstractMap, which unconditionally throws UnsupportedOperationException.
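The failure can be reproduced with plain JDK classes, without Spark or Kryo: any Map subclass that extends java.util.AbstractMap without overriding put() throws exactly this exception when something (such as Kryo's MapSerializer during deserialization) calls put on it. A minimal sketch, where ReadOnlyMap is a hypothetical stand-in for the Scala wrapper returned by collectAsMap():

```java
import java.util.AbstractMap;
import java.util.Collections;
import java.util.Map;
import java.util.Set;

// A Map that, like the Scala wrapper, never overrides put():
// AbstractMap.put() is a stub that always throws.
class ReadOnlyMap<K, V> extends AbstractMap<K, V> {
    @Override
    public Set<Entry<K, V>> entrySet() {
        return Collections.emptySet();
    }
}

public class AbstractMapDemo {
    public static void main(String[] args) {
        Map<String, Long> m = new ReadOnlyMap<>();
        try {
            m.put("label", 1L); // the same call Kryo's MapSerializer makes
        } catch (UnsupportedOperationException e) {
            System.out.println("UnsupportedOperationException"); // → printed
        }
    }
}
```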

4. Solution

Create a new map (a plain HashMap), copy the result of collectAsMap() into it, and broadcast that instead.

The original code was:

    Map<String, Long> getNameToId(Dataset<Row> labelDimTable) {
        return labelDimTable.javaRDD().mapToPair(
                new PairFunction<Row, String, Long>() {
                    @Override
                    public Tuple2<String, Long> call(Row curRow) throws Exception {
                        Long labelId = curRow.getAs("label_id");
                        String labelTitle = curRow.getAs("label_title");

                        return Tuple2.apply(labelTitle, labelId);
                    }
                }
        ).collectAsMap();
    }

Change it to:


    Map<String, Long> getNameToId(Dataset<Row> labelDimTable) {

        Map<String, Long> res = new HashMap<>();
        Map<String, Long> apiMap = labelDimTable.javaRDD().mapToPair(
                new PairFunction<Row, String, Long>() {
                    @Override
                    public Tuple2<String, Long> call(Row curRow) throws Exception {
                        Long labelId = curRow.getAs("label_id");
                        String labelTitle = curRow.getAs("label_title");

                        return Tuple2.apply(labelTitle, labelId);
                    }
                }
        ).collectAsMap();
        // Copy into a concrete HashMap so Kryo can deserialize it on executors.
        res.putAll(apiMap);
        return res;
    }
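The heart of the fix is the defensive copy: java.util.HashMap is a concrete type that Kryo knows how to instantiate and mutate during deserialization. The same pattern in a self-contained sketch, with no Spark involved (toBroadcastSafe is a hypothetical helper name, and the unmodifiable map mimics the read-only wrapper returned by collectAsMap()):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class CopyDemo {
    // Defensive copy into a concrete HashMap, the same trick as
    // res.putAll(apiMap) above: HashMap supports put() during
    // deserialization, so it round-trips through Kryo.
    static <K, V> Map<K, V> toBroadcastSafe(Map<K, V> source) {
        return new HashMap<>(source);
    }

    public static void main(String[] args) {
        Map<String, Long> backing = new HashMap<>();
        backing.put("label", 1L);
        // Mimic the read-only map returned by collectAsMap().
        Map<String, Long> readOnly = Collections.unmodifiableMap(backing);

        Map<String, Long> safe = toBroadcastSafe(readOnly);
        safe.put("another", 2L); // the copy is mutable
        System.out.println(safe.size()); // → 2
    }
}
```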



References
https://www.jianshu.com/p/f478376bdbb9
https://stackoverflow.com/questions/43023961/spark-kryo-serializers-and-broadcastmapobject-iterablegowalladatalocation
