Run:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://192.168.0.63:7077 --executor-memory 10G --total-executor-cores 100 examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar 1000
The job fails with the errors below; the fixes follow the logs:
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
2014-10-13 09:52:35,142 ERROR akka.remote.EndpointWriter: AssociationError [akka.tcp://sparkWorker@namenode1:7078] -> [akka.tcp://sparkExecutor@namenode1:37398]: Error [Association failed with [akka.tcp://sparkExecutor@namenode1:37398]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@namenode1:37398]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: namenode1/192.168.0.60:37398
]
2014-10-13 09:52:35,150 ERROR akka.remote.EndpointWriter: AssociationError [akka.tcp://sparkWorker@namenode1:7078] -> [akka.tcp://sparkExecutor@namenode1:37398]: Error [Association failed with [akka.tcp://sparkExecutor@namenode1:37398]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@namenode1:37398]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: namenode1/192.168.0.60:37398
org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.
Fix 1: --executor-memory must not be larger than worker_max_heapsize (the maximum heap each Spark worker offers). Here --executor-memory 10G exceeds what the workers can provide, so no executor is ever launched and the scheduler keeps warning that the job "has not accepted any resources".
Fix 2: change the master URL from spark://192.168.0.63:7077 to spark://datanode3:7077, i.e. use the hostname the master registered with rather than its raw IP. With the IP form, the driver cannot associate with the master and the job aborts with "All masters are unresponsive".
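With both fixes applied, the resubmission might look like the sketch below. The 2G value is only an illustration, not taken from the original cluster; pick any value no larger than your workers' worker_max_heapsize:

```shell
# Resubmit using the master's registered hostname instead of its IP,
# and an executor memory that fits within each worker's heap
# (2G is an assumed example value).
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://datanode3:7077 \
  --executor-memory 2G \
  --total-executor-cores 100 \
  examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar 1000
```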