Problem: after launching PySpark with Jupyter against the standalone master, the job hangs and keeps logging "WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources".

Launch command:

PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook" MASTER=spark://master:7077 pyspark --num-executors 1 --total-executor-cores 2 --executor-memory 512m

[I 15:51:51.365 NotebookApp] JupyterLab beta preview extension loaded from /home/hduser/anaconda2/lib/python2.7/site-packages/jupyterlab
[I 15:51:51.365 NotebookApp] JupyterLab application directory is /home/hduser/anaconda2/share/jupyter/lab
[I 15:51:51.411 NotebookApp] Serving notebooks from local directory: /home/hduser
[I 15:51:51.411 NotebookApp] 0 active kernels
[I 15:51:51.411 NotebookApp] The Jupyter Notebook is running at:
[I 15:51:51.411 NotebookApp] http://localhost:8888/?token=2c6536cdc0240809b532c6fd44978440b34c452a0410c2a2
[I 15:51:51.411 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 15:51:51.412 NotebookApp] 

    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://localhost:8888/?token=2c6536cdc0240809b532c6fd44978440b34c452a0410c2a2
[I 15:52:02.855 NotebookApp] Accepting one-time-token-authenticated connection from 127.0.0.1
[I 15:52:20.273 NotebookApp] Kernel started: ba91865b-8fcf-41f9-9a9c-14c427927f6b
18/06/26 15:52:27 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[W 15:52:30.648 NotebookApp] Timeout waiting for kernel_info reply from ba91865b-8fcf-41f9-9a9c-14c427927f6b
[I 15:52:34.369 NotebookApp] Adapting to protocol v5.1 for kernel ba91865b-8fcf-41f9-9a9c-14c427927f6b
[Stage 0:>                                                          (0 + 0) / 2]18/06/26 15:53:06 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
18/06/26 15:53:20 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
18/06/26 15:53:35 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
18/06/26 15:53:50 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
[Stage 0:>                                                          (0 + 0) / 2]18/06/26 15:54:05 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
[I 15:54:20.298 NotebookApp] Saving file at /pythonwork/ipynotebook/ch09.ipynb
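A toy model (illustrative only, not Spark's actual scheduler code) of why the standalone master never granted an executor: a worker can host an executor only if its advertised memory covers --executor-memory, and the executor then contributes cores toward --total-executor-cores. Assuming the worker originally advertised less than the 512 MB requested per executor, zero executors register and the driver logs the warning above indefinitely. The worker name and memory figures below are hypothetical.

```python
def executors_launched(workers, executor_mem_mb, total_cores):
    """Toy model of Spark standalone resource offers (illustrative only).

    Each worker can host one executor if its free memory covers the
    requested --executor-memory; the executor then takes as many of the
    worker's cores as are still needed toward --total-executor-cores.
    """
    launched = []
    cores_left = total_cores
    for w in workers:
        if cores_left <= 0:
            break
        if w["mem_mb"] >= executor_mem_mb and w["cores"] >= 1:
            take = min(w["cores"], cores_left)
            launched.append({"worker": w["name"], "cores": take})
            cores_left -= take
    return launched

# A worker advertising only 256 MB cannot host a 512 MB executor,
# so no resources are ever accepted:
small = [{"name": "data4", "mem_mb": 256, "cores": 1}]
print(executors_launched(small, 512, 2))   # []

# After setting SPARK_WORKER_MEMORY=512m and requesting 1 core total,
# one executor can launch:
fixed = [{"name": "data4", "mem_mb": 512, "cores": 1}]
print(executors_launched(fixed, 512, 1))
```

In this model the fix works for two reasons: the worker's memory now matches the per-executor request, and the total core ask no longer exceeds what the single worker can offer.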

Solution:

hduser@data4:~$ vim /usr/local/spark/conf/spark-env.sh

Add the setting export SPARK_CONF_DIR=/usr/local/spark/conf; changing SPARK_WORKER_MEMORY to 512m was also part of the trial-and-error. The resulting spark-env.sh:

export SPARK_MASTER_HOST=master
export SPARK_MASTER_PORT=7077
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=512m
export SPARK_WORKER_PORT=7078
export SPARK_WORKER_WEBUI_PORT=8081
export SPARK_WORKER_INSTANCES=1
export SPARK_CONF_DIR=/usr/local/spark/conf
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export SCALA_HOME=/usr/local/scala
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
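spark-env.sh is only read when the daemons start, so the standalone master and workers need a restart before these values take effect. A sketch of the restart, assuming the /usr/local/spark install path used above and Spark's standard sbin scripts:

```shell
# Restart the standalone cluster so the new spark-env.sh takes effect.
# Run on the master node; path assumes the install layout shown above.
/usr/local/spark/sbin/stop-all.sh
/usr/local/spark/sbin/start-all.sh

# Then check the master web UI (port 8080, per SPARK_MASTER_WEBUI_PORT)
# and confirm the worker registers with 1 core and 512 MB of memory.
```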

Then run the command:

PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook" MASTER=spark://master:7077 pyspark --num-executors 1 --total-executor-cores 1 --executor-memory 512m

With the core count reduced to 1, the job now executes successfully. (A screenshot of the result followed here in the original post.)
