Spark master fails to start: port 8080 already in use — SelectChannelConnector@0.0.0.0:8080: java.net.BindException

Original post, 2014-07-10 14:52:16

When starting the Spark master, the console prints the following errors (the JVM locale on this machine is Japanese, so アドレスは既に使用中です is simply the localized "Address already in use" message):

14/07/10 15:48:14 WARN AbstractLifeCycle: FAILED SelectChannelConnector@0.0.0.0:8080: java.net.BindException: アドレスは既に使用中です
java.net.BindException: アドレスは既に使用中です
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
	at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
	at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.server.Server.doStart(Server.java:293)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply$mcV$sp(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
	at scala.util.Try$.apply(Try.scala:161)
	at org.apache.spark.ui.JettyUtils$.connect$1(JettyUtils.scala:191)
	at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:205)
	at org.apache.spark.ui.WebUI.bind(WebUI.scala:99)
	at org.apache.spark.deploy.master.Master.preStart(Master.scala:124)
	at akka.actor.ActorCell.create(ActorCell.scala:562)
	at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425)
	at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
	at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
	at akka.dispatch.Mailbox.run(Mailbox.scala:218)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/07/10 15:48:14 WARN AbstractLifeCycle: FAILED org.eclipse.jetty.server.Server@1a33bbf0: java.net.BindException: アドレスは既に使用中です
java.net.BindException: アドレスは既に使用中です
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
	at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
	at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.server.Server.doStart(Server.java:293)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply$mcV$sp(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
	at scala.util.Try$.apply(Try.scala:161)
	at org.apache.spark.ui.JettyUtils$.connect$1(JettyUtils.scala:191)
	at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:205)
	at org.apache.spark.ui.WebUI.bind(WebUI.scala:99)
	at org.apache.spark.deploy.master.Master.preStart(Master.scala:124)
	at akka.actor.ActorCell.create(ActorCell.scala:562)
	at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425)
	at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
	at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
	at akka.dispatch.Mailbox.run(Mailbox.scala:218)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/07/10 15:48:14 INFO JettyUtils: Failed to create UI at port, 8080. Trying again.
14/07/10 15:48:14 INFO JettyUtils: Error was: Failure(java.net.BindException: アドレスは既に使用中です)
14/07/10 15:48:24 WARN AbstractLifeCycle: FAILED SelectChannelConnector@0.0.0.0:8081: java.net.BindException: アドレスは既に使用中です
java.net.BindException: アドレスは既に使用中です
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
	at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
	at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.server.Server.doStart(Server.java:293)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply$mcV$sp(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
	at scala.util.Try$.apply(Try.scala:161)
	at org.apache.spark.ui.JettyUtils$.connect$1(JettyUtils.scala:191)
	at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:205)
	at org.apache.spark.ui.WebUI.bind(WebUI.scala:99)
	at org.apache.spark.deploy.master.Master.preStart(Master.scala:124)
	at akka.actor.ActorCell.create(ActorCell.scala:562)
	at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425)
	at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
	at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
	at akka.dispatch.Mailbox.run(Mailbox.scala:218)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/07/10 15:48:24 WARN AbstractLifeCycle: FAILED org.eclipse.jetty.server.Server@506d41d8: java.net.BindException: アドレスは既に使用中です
java.net.BindException: アドレスは既に使用中です
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
	at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
	at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.server.Server.doStart(Server.java:293)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply$mcV$sp(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
	at org.apache.spark.ui.JettyUtils$$anonfun$1.apply(JettyUtils.scala:192)
	at scala.util.Try$.apply(Try.scala:161)
	at org.apache.spark.ui.JettyUtils$.connect$1(JettyUtils.scala:191)
	at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:205)
	at org.apache.spark.ui.WebUI.bind(WebUI.scala:99)
	at org.apache.spark.deploy.master.Master.preStart(Master.scala:124)
	at akka.actor.ActorCell.create(ActorCell.scala:562)
	at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425)
	at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
	at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
	at akka.dispatch.Mailbox.run(Mailbox.scala:218)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/07/10 15:48:24 INFO JettyUtils: Failed to create UI at port, 8081. Trying again.
14/07/10 15:48:24 INFO JettyUtils: Error was: Failure(java.net.BindException: アドレスは既に使用中です)
Exception in thread "main" java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
	at scala.concurrent.Await$.result(package.scala:107)
	at org.apache.spark.deploy.master.Master$.startSystemAndActor(Master.scala:791)
	at org.apache.spark.deploy.master.Master$.main(Master.scala:765)
	at org.apache.spark.deploy.master.Master.main(Master.scala)



As root, run netstat -apn | grep 8080 to see which program is using the port:

[root@hadoop186 hadoop]# netstat  -apn | grep 8080
tcp        0      0 :::8080                     :::*                        LISTEN      3985/java  
The output shows that the port is held by a Java process with PID 3985.

So I pointed a browser directly at the port:


The page that comes up makes it clear that Hadoop is the one occupying the port (and indeed, PID 3985 matches the NodeManager in the jps output further down).
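
Instead of guessing from a browser page, the PID reported by netstat can also be resolved on the command line. A minimal sketch, assuming the PID is 3985 as above and that lsof is installed:

# Show which command and user own PID 3985
ps -fp 3985

# Or list the process listening on 8080 in one step
lsof -iTCP:8080 -sTCP:LISTEN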

Obviously there are two possible fixes:

The first is to find the setting in the Hadoop configuration that uses port 8080 and change that port, which is relatively troublesome.

The second is to change Spark's own configuration so that the master web UI listens on a different port; the default is set in sbin/start-master.sh. Since the first option is more work, we go with the second:

[hadoop@hadoop186 sbin]$ cat start-master.sh 
#!/usr/bin/env bash

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# Starts the master on the machine this script is executed on.

sbin=`dirname "$0"`
sbin=`cd "$sbin"; pwd`

START_TACHYON=false

while (( "$#" )); do
case $1 in
    --with-tachyon)
      if [ ! -e "$sbin"/../tachyon/bin/tachyon ]; then
        echo "Error: --with-tachyon specified, but tachyon not found."
        exit -1
      fi
      START_TACHYON=true
      ;;
  esac
shift
done

. "$sbin/spark-config.sh"

. "$SPARK_PREFIX/bin/load-spark-env.sh"

if [ "$SPARK_MASTER_PORT" = "" ]; then
  SPARK_MASTER_PORT=7077
fi

if [ "$SPARK_MASTER_IP" = "" ]; then
  SPARK_MASTER_IP=`hostname`
fi

if [ "$SPARK_MASTER_WEBUI_PORT" = "" ]; then
  SPARK_MASTER_WEBUI_PORT=8080
fi

"$sbin"/spark-daemon.sh start org.apache.spark.deploy.master.Master 1 --ip $SPARK_MASTER_IP --port $SPARK_MASTER_PORT --webui-port $SPARK_MASTER_WEBUI_PORT

if [ "$START_TACHYON" == "true" ]; then
  "$sbin"/../tachyon/bin/tachyon bootstrap-conf $SPARK_MASTER_IP
  "$sbin"/../tachyon/bin/tachyon format -s
  "$sbin"/../tachyon/bin/tachyon-start.sh master
fi
Just change this default to some other port that is not already in use and restart the master, and the problem goes away.
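
Two equivalent ways to do that, as a sketch run from the Spark installation directory (8090 below is only an example of a free port):

# Option 1: set the variable in conf/spark-env.sh, which start-master.sh
# sources via load-spark-env.sh, so the script's built-in default of 8080
# is never reached.
echo 'export SPARK_MASTER_WEBUI_PORT=8090' >> conf/spark-env.sh

# Option 2: edit sbin/start-master.sh itself and change the default line to
#   SPARK_MASTER_WEBUI_PORT=8090

# Then restart the master:
sbin/start-master.sh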

Startup log:

[hadoop@hadoop186 spark]$ cd logs/
[hadoop@hadoop186 logs]$ ls
spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop186.out  spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop186.out
[hadoop@hadoop186 logs]$ cat spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop186.out 
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Spark Command: /usr/java/jdk1.7.0_45/bin/java -cp ::/home/hadoop/spark-1.0.0-bin-cdh4/conf:/home/hadoop/spark-1.0.0-bin-cdh4/lib/spark-assembly-1.0.0-hadoop2.0.0-mr1-cdh4.2.0.jar:/home/hadoop/spark-1.0.0-bin-cdh4/lib/datanucleus-core-3.2.2.jar:/home/hadoop/spark-1.0.0-bin-cdh4/lib/datanucleus-rdbms-3.2.1.jar:/home/hadoop/spark-1.0.0-bin-cdh4/lib/datanucleus-api-jdo-3.2.1.jar:/home/hadoop/hadoop/etc/hadoop:/home/hadoop/hadoop/etc/hadoop -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m org.apache.spark.deploy.master.Master --ip hadoop186 --port 7077 --webui-port 8080
========================================

14/07/10 15:23:40 INFO SecurityManager: Changing view acls to: hadoop
14/07/10 15:23:40 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop)
14/07/10 15:23:42 INFO Slf4jLogger: Slf4jLogger started
14/07/10 15:23:43 INFO Remoting: Starting remoting
14/07/10 15:23:43 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@hadoop186:7077]
14/07/10 15:23:44 INFO Master: Starting Spark master at spark://hadoop186:7077
14/07/10 15:23:55 INFO MasterWebUI: Started MasterWebUI at http://hadoop186:8080
14/07/10 15:23:55 INFO Master: I have been elected leader! New state: ALIVE
14/07/10 15:23:56 INFO Master: Registering worker hadoop186:47966 with 1 cores, 846.0 MB RAM
[hadoop@hadoop186 logs]$ 
Check the running processes with jps:

[hadoop@hadoop186 logs]$ jps
2675 QuorumPeerMain
32031 Worker
2764 JournalNode
32558 Jps
3163 DFSZKFailoverController
3985 NodeManager
2847 NameNode
31900 Master
3872 ResourceManager
2927 DataNode
[hadoop@hadoop186 logs]$ 


Both the Master and Worker processes are now visible.
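
As a final sanity check, you can confirm the master web UI is answering on its port (hadoop186 is this cluster's hostname; replace 8090 with whatever port you actually configured):

# A HEAD request; any HTTP response means the UI bound successfully
curl -I http://hadoop186:8090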



