When I was first learning Spark and running small programs in spark-shell, every action (such as count, collect, or println) would keep producing this warning:
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
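For context, here is a minimal sketch of the kind of code that hits this. Any action forces Spark to actually schedule tasks, and with no cores registered the job simply waits while the warning repeats. (The 1 to 100 range is just an illustration, not from the original post.)

// Run inside spark-shell; sc is the SparkContext the shell provides.
val nums = sc.parallelize(1 to 100)   // transformation: lazy, launches no tasks
nums.count()                          // action: submits a job; with 0 cores it hangs on the warning above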
Meanwhile, if you open the Spark UI (our cluster's default is ip:18080), you will see that the spark-shell application has 0 cores:
The cause is that spark-shell was started without being allocated any resources, so we should launch it like this:
/home/mr/spark/bin/spark-shell --executor-memory 4G \
--total-executor-cores 10
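Here --executor-memory 4G gives each executor 4 GB of heap, and --total-executor-cores 10 caps the total number of cores the application may claim across the cluster (this flag applies to standalone and Mesos deployments; on YARN you would size the shell with --num-executors and --executor-cores instead). Once the shell starts with cores actually assigned, the same action completes, for example:

// Same action as before; with cores registered it now returns instead of hanging.
sc.parallelize(1 to 100).count()   // res0: Long = 100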