Spark master fails to start because port 8080 is in use: SelectChannelConnector@0.0.0.0:8080: java.net.BindException

Original post · 2014-07-10 14:52:16

When using spark-shell, the following error appeared:

14/07/10 15:48:14 WARN AbstractLifeCycle: FAILED SelectChannelConnector@0.0.0.0:8080: java.net.BindException: Address already in use
java.net.BindException: Address already in use
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
	at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
	at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.server.Server.doStart(Server.java:293)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply$mcV$sp(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
	at scala.util.Try$.apply(Try.scala:161)
	at org.apache.spark.ui.JettyUtils$.connect$1(JettyUtils.scala:191)
	at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:205)
	at org.apache.spark.ui.WebUI.bind(WebUI.scala:99)
	at org.apache.spark.deploy.master.Master.preStart(Master.scala:124)
	at akka.actor.ActorCell.create(ActorCell.scala:562)
	at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425)
	at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
	at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
	at akka.dispatch.Mailbox.run(Mailbox.scala:218)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/07/10 15:48:14 WARN AbstractLifeCycle: FAILED org.eclipse.jetty.server.Server@1a33bbf0: java.net.BindException: Address already in use
java.net.BindException: Address already in use
	(stack trace identical to the one above)
14/07/10 15:48:14 INFO JettyUtils: Failed to create UI at port, 8080. Trying again.
14/07/10 15:48:14 INFO JettyUtils: Error was: Failure(java.net.BindException: Address already in use)
14/07/10 15:48:24 WARN AbstractLifeCycle: FAILED SelectChannelConnector@0.0.0.0:8081: java.net.BindException: Address already in use
java.net.BindException: Address already in use
	(stack trace identical to the one above)
14/07/10 15:48:24 WARN AbstractLifeCycle: FAILED org.eclipse.jetty.server.Server@506d41d8: java.net.BindException: Address already in use
java.net.BindException: Address already in use
	(stack trace identical to the one above)
14/07/10 15:48:24 INFO JettyUtils: Failed to create UI at port, 8081. Trying again.
14/07/10 15:48:24 INFO JettyUtils: Error was: Failure(java.net.BindException: Address already in use)
Exception in thread "main" java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
	at scala.concurrent.Await$.result(package.scala:107)
	at org.apache.spark.deploy.master.Master$.startSystemAndActor(Master.scala:791)
	at org.apache.spark.deploy.master.Master$.main(Master.scala:765)
	at org.apache.spark.deploy.master.Master.main(Master.scala)



As root, run `netstat -apn | grep 8080` to see which program is using the port:

[root@hadoop186 hadoop]# netstat  -apn | grep 8080
tcp        0      0 :::8080                     :::*                        LISTEN      3985/java  
This shows that a Java process with PID 3985 is listening on the port.
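To go from the netstat line to the owning process, note that the last column of `netstat -apn` output has the form `PID/program`, so the PID can be split off and passed to `ps`. The helper below is a sketch: its name is made up for illustration, and the exact column layout can vary between netstat versions.

```shell
# Pull the PID out of a `netstat -apn` LISTEN line; the last field is
# "PID/program" (e.g. "3985/java"). Helper name is illustrative only.
pid_of_netstat_line() {
  echo "$1" | awk '{print $NF}' | cut -d/ -f1
}

line='tcp        0      0 :::8080                     :::*                        LISTEN      3985/java'
pid_of_netstat_line "$line"   # prints 3985

# With the PID in hand, ps shows the full command line of the process:
#   ps -fp 3985
```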

So I opened the port directly in a browser:

The page that came up made it clear that Hadoop was occupying the port.

There are obviously two ways to fix this:

The first is to find the setting in Hadoop's configuration that uses 8080 and change that port there. That is the more troublesome option, so we take the second.

The second is to change the 8080 setting on the Spark side so that Spark's web UI uses a different port. The setting lives in sbin/start-master.sh:

[hadoop@hadoop186 sbin]$ cat start-master.sh 
#!/usr/bin/env bash

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# Starts the master on the machine this script is executed on.

sbin=`dirname "$0"`
sbin=`cd "$sbin"; pwd`

START_TACHYON=false

while (( "$#" )); do
case $1 in
    --with-tachyon)
      if [ ! -e "$sbin"/../tachyon/bin/tachyon ]; then
        echo "Error: --with-tachyon specified, but tachyon not found."
        exit -1
      fi
      START_TACHYON=true
      ;;
  esac
shift
done

. "$sbin/spark-config.sh"

. "$SPARK_PREFIX/bin/load-spark-env.sh"

if [ "$SPARK_MASTER_PORT" = "" ]; then
  SPARK_MASTER_PORT=7077
fi

if [ "$SPARK_MASTER_IP" = "" ]; then
  SPARK_MASTER_IP=`hostname`
fi

if [ "$SPARK_MASTER_WEBUI_PORT" = "" ]; then
  SPARK_MASTER_WEBUI_PORT=8080
fi

"$sbin"/spark-daemon.sh start org.apache.spark.deploy.master.Master 1 --ip $SPARK_MASTER_IP --port $SPARK_MASTER_PORT --webui-port $SPARK_MASTER_WEBUI_PORT

if [ "$START_TACHYON" == "true" ]; then
  "$sbin"/../tachyon/bin/tachyon bootstrap-conf $SPARK_MASTER_IP
  "$sbin"/../tachyon/bin/tachyon format -s
  "$sbin"/../tachyon/bin/tachyon-start.sh master
fi
Just change this port to any other port that is not already in use and the master will start normally.
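As a sketch of a less invasive route: start-master.sh only falls back to 8080 when SPARK_MASTER_WEBUI_PORT is empty, so exporting the variable beforehand (or setting it in conf/spark-env.sh) avoids editing the script at all. Port 8090 below is just an example assumed to be free.

```shell
# start-master.sh assigns 8080 only when SPARK_MASTER_WEBUI_PORT is
# empty, so an exported value takes precedence over the default.
export SPARK_MASTER_WEBUI_PORT=8090   # assumption: 8090 is free

# Then launch the master as usual:
#   sbin/start-master.sh
```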

Startup log:

[hadoop@hadoop186 spark]$ cd logs/
[hadoop@hadoop186 logs]$ ls
spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop186.out  spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop186.out
[hadoop@hadoop186 logs]$ cat spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop186.out 
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Spark Command: /usr/java/jdk1.7.0_45/bin/java -cp ::/home/hadoop/spark-1.0.0-bin-cdh4/conf:/home/hadoop/spark-1.0.0-bin-cdh4/lib/spark-assembly-1.0.0-hadoop2.0.0-mr1-cdh4.2.0.jar:/home/hadoop/spark-1.0.0-bin-cdh4/lib/datanucleus-core-3.2.2.jar:/home/hadoop/spark-1.0.0-bin-cdh4/lib/datanucleus-rdbms-3.2.1.jar:/home/hadoop/spark-1.0.0-bin-cdh4/lib/datanucleus-api-jdo-3.2.1.jar:/home/hadoop/hadoop/etc/hadoop:/home/hadoop/hadoop/etc/hadoop -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m org.apache.spark.deploy.master.Master --ip hadoop186 --port 7077 --webui-port 8080
========================================

14/07/10 15:23:40 INFO SecurityManager: Changing view acls to: hadoop
14/07/10 15:23:40 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop)
14/07/10 15:23:42 INFO Slf4jLogger: Slf4jLogger started
14/07/10 15:23:43 INFO Remoting: Starting remoting
14/07/10 15:23:43 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@hadoop186:7077]
14/07/10 15:23:44 INFO Master: Starting Spark master at spark://hadoop186:7077
14/07/10 15:23:55 INFO MasterWebUI: Started MasterWebUI at http://hadoop186:8080
14/07/10 15:23:55 INFO Master: I have been elected leader! New state: ALIVE
14/07/10 15:23:56 INFO Master: Registering worker hadoop186:47966 with 1 cores, 846.0 MB RAM
[hadoop@hadoop186 logs]$ 
Check the processes with jps:

[hadoop@hadoop186 logs]$ jps
2675 QuorumPeerMain
32031 Worker
2764 JournalNode
32558 Jps
3163 DFSZKFailoverController
3985 NodeManager
2847 NameNode
31900 Master
3872 ResourceManager
2927 DataNode
[hadoop@hadoop186 logs]$ 


Both the Master and Worker processes are now visible.


