spark foreach java, Spark: How to speed up foreachRDD?

We have a Spark Streaming application that ingests data at about 10,000 records/sec. We use the foreachRDD operation on the DStream (because Spark will not execute anything unless it finds an output operation on the DStream).

So we have to use the foreachRDD output operation like the snippet below, and it takes about 3 hours to write a single batch of data (10,000 records), which is far too slow.

CodeSnippet 1:

```scala
requestsWithState.foreachRDD { rdd =>
  rdd.foreach {
    case (topicsTableName, hashKeyTemp, attributeValueUpdate) =>
      // A new DynamoDB client is created for every record
      val client  = new AmazonDynamoDBClient()
      val request = new UpdateItemRequest(topicsTableName, hashKeyTemp, attributeValueUpdate)
      try client.updateItem(request)
      catch {
        case se: Exception => println("Error executing updateItem!\nTable ", se)
      }
    case null =>
  }
}
```
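
For reference, a commonly suggested restructuring of the snippet above (shown here only as a sketch, not the code we are actually running) is to use foreachPartition so that one AmazonDynamoDBClient is created per partition instead of one per record:

```scala
requestsWithState.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    // One client per partition instead of one per record
    val client = new AmazonDynamoDBClient()
    partition.foreach {
      case (topicsTableName, hashKeyTemp, attributeValueUpdate) =>
        val request = new UpdateItemRequest(topicsTableName, hashKeyTemp, attributeValueUpdate)
        try client.updateItem(request)
        catch {
          case se: Exception => println("Error executing updateItem!\nTable ", se)
        }
      case null =>
    }
  }
}
```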

So I thought the code inside foreachRDD might be the problem, and I took it out to see how much time the loop takes by itself. To my surprise, even with no code inside the foreachRDD it still runs for 3 hours.

CodeSnippet 2:

```scala
requestsWithState.foreachRDD { rdd =>
  rdd.foreach { _ =>
    // No code here, and it still takes a lot of time
    // (there used to be code, but it was removed to see if it is any faster without it)
  }
}
```

Please let us know if we are missing anything, or if there is another way to perform this operation, since I understand that a Spark Streaming application will not run without an output operation on the DStream, and at this point I cannot use any other output operation.

Note: To isolate the problem and make sure the DynamoDB code is not the cause, I ran with an empty loop. It looks like foreachRDD is slow on its own when iterating over a huge record set coming in at 10,000/sec, and not the DynamoDB code, because the empty foreachRDD and the version with the DynamoDB code took the same time.
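
For what it is worth, here is a minimal sketch of how a single batch could be timed explicitly (an illustration only, not code from the job above; count() is used just to force the same upstream computation that the empty foreach triggers):

```scala
requestsWithState.foreachRDD { rdd =>
  val start = System.nanoTime()
  // count() materializes the RDD, including every upstream transformation
  // that produced requestsWithState for this batch
  val n = rdd.count()
  val elapsedSec = (System.nanoTime() - start) / 1e9
  println(s"Batch of $n records materialized in $elapsedSec s")
}
```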

The screenshots below show all the stages and the time taken by the foreachRDD execution, even though it is just an empty loop with no code inside.

[Screenshot: Time taken by the foreachRDD empty loop]

[Screenshot: Task distribution of the long-running tasks across the 9 worker nodes for the foreachRDD empty loop]

Using Spark Streaming together with Kafka enables real-time stream processing. Below is a simple Java example:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

public class KafkaSparkStreamingExample {
    public static void main(String[] args) throws InterruptedException {
        String brokers = "localhost:9092";
        String groupId = "group1";
        String topics = "topic1";

        // Create context with a 2-second batch interval
        SparkConf sparkConf = new SparkConf().setAppName("KafkaSparkStreamingExample");
        JavaStreamingContext streamingContext = new JavaStreamingContext(sparkConf, new Duration(2000));

        // Create Kafka parameters map
        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", brokers);
        kafkaParams.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaParams.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaParams.put("group.id", groupId);
        kafkaParams.put("auto.offset.reset", "latest");
        kafkaParams.put("enable.auto.commit", false);

        Collection<String> topicsSet = Arrays.asList(topics.split(","));

        // Create direct Kafka stream and keep only the message values
        JavaDStream<String> messages = KafkaUtils.createDirectStream(
                streamingContext,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Subscribe(topicsSet, kafkaParams)
        ).map(record -> record.value());

        // Process each message in the stream
        messages.foreachRDD(rdd -> {
            rdd.foreach(message -> System.out.println(message));
        });

        // Start the computation
        streamingContext.start();
        streamingContext.awaitTermination();
    }
}
```

In this example, we first define the Kafka broker address, the consumer group ID, and the topic to consume. We then create a Spark Streaming JavaStreamingContext with a batch interval of 2 seconds.

Next, we define the Kafka parameter map and the set of topics to consume, and use KafkaUtils.createDirectStream() to create the input stream.

Finally, we process every message of each batch by printing it to the console, start the streaming context, and wait for it to terminate.

This is just a simple example; you can modify and extend it according to your own needs.
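
Because enable.auto.commit is set to false above, the consumed offsets are not committed anywhere automatically. One option the spark-streaming-kafka-0-10 integration provides is committing the offsets back to Kafka yourself after each batch has been processed. A minimal Scala sketch of that pattern (stream here stands for the direct stream returned by KafkaUtils.createDirectStream, before any map is applied):

```scala
import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges}

stream.foreachRDD { rdd =>
  // Grab the Kafka offset ranges for this batch before any transformation loses them
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

  // ... process the batch here ...

  // Commit the offsets back to Kafka once the batch has been handled
  stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}
```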