For example:
bin/run-example org.apache.spark.examples.streaming.NetworkWordCount localhost 9999
In this case --master defaults to local.
This looks perfectly harmless, but when I tested this example on a host with 1 core and 1 GB of RAM, it never succeeded. This commenter has already pointed out the reason:
Note
I experienced exactly the same problems when using SparkContext with “local[1]” master specification, because in that case one thread is used for receiving data, the others for processing. As there is only one thread running, no processing will take place. Once you shut down the connection, the receiver thread will be used for processing.
In other words, Spark Streaming needs two threads when it starts: a receiving thread and a data-processing thread.
In local mode, Spark Streaming only gets as many threads as there are CPU cores (in my example, just one), so the receiver occupies the only thread and the test can never succeed no matter what; on my other development machines (4-core hosts) the problem never appears. The fix is to give the local master at least two threads explicitly, for example:
MASTER="local[2]" bin/run-example org.apache.spark.examples.streaming.NetworkWordCount localhost 9999
The Spark Streaming programming guide makes the same point: when running locally, use local[n] with n greater than the number of receivers.
This touches on Spark's threading model, for which I have not yet found more in-depth documentation. If you have a deeper understanding, discussion is welcome.
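The starvation described above is not Spark-specific. The following sketch (plain Python with a hypothetical helper `starved_result`, not Spark's actual implementation) uses a fixed-size thread pool in place of local[n]: a long-running "receiver" task occupies one worker thread, and with only one worker the "processing" task never gets to run.

```python
# An analogy (hypothetical names, not Spark code) for the thread starvation
# described above: a pool of size 1 plays the role of local[1].
import threading
from concurrent.futures import ThreadPoolExecutor

def starved_result(pool_size: int) -> bool:
    """Return True if the processing task managed to run within the timeout."""
    stop = threading.Event()
    processed = threading.Event()

    def receiver():
        # Blocks until told to stop, like Spark's receiver occupying one core.
        stop.wait()

    def process():
        processed.set()

    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        pool.submit(receiver)
        pool.submit(process)
        ran = processed.wait(timeout=0.5)  # did processing happen in time?
        stop.set()  # release the receiver so the pool can shut down
    return ran

print(starved_result(1))  # one thread: processing starves -> False
print(starved_result(2))  # two threads: processing runs   -> True
```

With `pool_size=1` the receiver holds the only worker, the processing task sits in the queue, and it only runs once the receiver is shut down, exactly matching the behavior quoted in the note above.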