We have a Spark Streaming application that ingests data at 10,000 records/sec. We use the foreachRDD operation on the DStream (since Spark will not execute a streaming job unless it finds an output operation on the DStream).
So we have to use the foreachRDD output operation like this, and it takes 3 hours to write a single batch of data (10,000 records), which is far too slow.
CodeSnippet 1:
requestsWithState.foreachRDD { rdd =>
  rdd.foreach {
    case (topicsTableName, hashKeyTemp, attributeValueUpdate) =>
      // a new DynamoDB client is created for every single record
      val client = new AmazonDynamoDBClient()
      val request = new UpdateItemRequest(topicsTableName, hashKeyTemp, attributeValueUpdate)
      try client.updateItem(request)
      catch {
        case se: Exception => println(s"Error executing updateItem on table $topicsTableName: $se")
      }
    case null =>
  }
}
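For reference, here is the variant we are considering next (a sketch only, not yet benchmarked on our cluster): creating the DynamoDB client once per partition via foreachPartition instead of once per record, so the client setup cost is amortized. The table/key names are the same as in the snippet above.

```scala
requestsWithState.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    // one client per partition, not one per record
    val client = new AmazonDynamoDBClient()
    partition.foreach {
      case (topicsTableName, hashKeyTemp, attributeValueUpdate) =>
        val request = new UpdateItemRequest(topicsTableName, hashKeyTemp, attributeValueUpdate)
        try client.updateItem(request)
        catch {
          case se: Exception => println(s"Error executing updateItem on table $topicsTableName: $se")
        }
      case null =>
    }
  }
}
```

Even with this change, though, the empty-loop timings below suggest the bottleneck may not be the DynamoDB calls at all.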
So I thought the code inside foreachRDD might be the problem, and timed it to see how long it takes. To my surprise, even with no code inside the foreachRDD, it still runs for 3 hours.
CodeSnippet 2:
requestsWithState.foreachRDD { rdd =>
  rdd.foreach { record =>
    // No code here; still takes a lot of time (there used to be code, but we removed it to see if it's any faster without it)
  }
}
Please tell us if we are missing anything, or if there is another way to do this, since I understand that a Spark Streaming application will not run without an output operation on the DStream. Using a different output operation is not an option for me at this point.
Note: To isolate the problem and make sure that the DynamoDB code is not the issue, I ran with an empty loop. It looks like foreachRDD is slow on its own when iterating over a huge record set coming in at 10,000/sec, and not the DynamoDB code, since the empty foreachRDD and the version with the DynamoDB code took the same time.
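One thing worth noting about this experiment: because Spark evaluates lazily, foreachRDD is the action that forces the entire upstream lineage that produces requestsWithState, so even an empty loop still pays the full cost of the upstream transformations. A minimal plain-Scala illustration of the same principle (no Spark involved; the names here are made up for the example):

```scala
object LazyForcingDemo {
  // Returns how many elements of a lazy pipeline were actually computed
  // after an "empty" foreach, mirroring how an empty foreachRDD still
  // forces the full upstream computation in Spark.
  def forcedCount(n: Int): Int = {
    var upstreamWorkDone = 0
    // Lazy pipeline: the map body runs only when something consumes it,
    // just as Spark transformations run only when an action is invoked.
    val pipeline = (1 to n).iterator.map { x =>
      upstreamWorkDone += 1 // stands in for the expensive upstream work
      x
    }
    pipeline.foreach { _ => } // empty body, but still consumes every element
    upstreamWorkDone
  }

  def main(args: Array[String]): Unit =
    println(forcedCount(10000)) // full upstream cost is paid anyway
}
```

So the empty-loop timing is really measuring whatever computation produces requestsWithState, not the loop body itself.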
Screenshot showing all the stages and the time taken by the foreachRDD execution, even though it is just an empty loop with no code inside:
Time taken by the foreachRDD empty loop
Task distribution for the long-running task across the 9 worker nodes for the foreachRDD empty loop: