【Solved】Spark cluster: start-all.sh on the master fails to start the slaves

After I first set up the Spark cluster, I had always started it by running the master's script and each slave's script separately:

Start the master:

./sbin/start-master.sh --host 192.168.3.207

Start a slave:

./sbin/start-slave.sh spark://192.168.3.207:7077

And each command was run on its own machine.

Today a colleague told me that normally you should be able to start all the nodes at once just by running start-all on the master node.

When I tried starting everything from the master, I got these errors:

$ ../sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /home/ubutnu/spark_2_2_1/logs/spark-ubutnu-org.apache.spark.deploy.master.Master-1-ubutnu-Super-Server.out
192.168.3.104: bash: line 0: cd: /home/ubutnu/spark_2_2_1: No such file or directory
192.168.3.104: bash: /home/ubutnu/spark_2_2_1/sbin/start-slave.sh: No such file or directory
192.168.3.102: bash: line 0: cd: /home/ubutnu/spark_2_2_1: No such file or directory
192.168.3.102: bash: /home/ubutnu/spark_2_2_1/sbin/start-slave.sh: No such file or directory

Key point: the "No such file or directory" errors come from 192.168.3.102 and 192.168.3.104. The master is only configured with the slaves' IPs and knows nothing about their Spark install paths, so my guess is that it invokes the remote scripts using its own local path. Unfortunately, Spark lives in a different directory on those two machines than on the master, so the path doesn't exist there.
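To illustrate the guess above, here is a simplified sketch (hypothetical, not Spark's actual sbin scripts) of what the start-all flow amounts to: the master expands its own SPARK_HOME and ships the resulting command to each slave over ssh, so the slaves must have Spark at the same path.

```shell
# Hypothetical sketch of the slave-startup loop, using the IPs and
# path from this post. SPARK_HOME is expanded on the MASTER, then the
# literal command is run on each slave.
SPARK_HOME=/home/ubutnu/spark_2_2_1     # the master's install path
MASTER_URL=spark://192.168.3.207:7077
for slave in 192.168.3.102 192.168.3.104; do
  # This remote command assumes the SAME directory exists on the slave:
  remote_cmd="cd $SPARK_HOME && ./sbin/start-slave.sh $MASTER_URL"
  echo "ssh $slave \"$remote_cmd\""     # the real script would run ssh here
done
```

If `/home/ubutnu/spark_2_2_1` doesn't exist on a slave, the remote `cd` fails exactly as in the log above.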

Solution: change the Spark install path on every slave machine to match the master's, and the errors go away.
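For reference, the master learns the slave IPs from conf/slaves; a file matching this post's cluster would presumably look like this (one host per line):

```shell
# conf/slaves on the master: one worker host per line.
# start-all.sh will ssh into each of these and run start-slave.sh
# under the same path as the master's SPARK_HOME.
192.168.3.102
192.168.3.104
```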

$ ./start-all.sh 
starting org.apache.spark.deploy.master.Master, logging to /home/ubutnu/spark_2_2_1/logs/spark-ubutnu-org.apache.spark.deploy.master.Master-1-ubutnu-Super-Server.out
192.168.3.104: starting org.apache.spark.deploy.worker.Worker, logging to /home/ubutnu/spark_2_2_1/logs/spark-he-org.apache.spark.deploy.worker.Worker-1-he-V660.out
192.168.3.102: starting org.apache.spark.deploy.worker.Worker, logging to /home/ubutnu/spark_2_2_1/logs/spark-he-org.apache.spark.deploy.worker.Worker-1-he-200.out

But on the web UI the URL is still hostname-based: URL: spark://ubutnu-Super-Server:7077

And the worker machines report errors:

18/04/17 17:03:33 INFO Worker: Spark home: /home/ubutnu/spark_2_2_1
18/04/17 17:03:33 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
18/04/17 17:03:33 INFO WorkerWebUI: Bound WorkerWebUI to 192.168.3.102, and started at http://192.168.3.102:8081
18/04/17 17:03:33 INFO Worker: Connecting to master ubutnu-Super-Server:7077...
18/04/17 17:03:38 WARN TransportClientFactory: DNS resolution for ubutnu-Super-Server:7077 took 5028 ms
18/04/17 17:03:38 WARN Worker: Failed to connect to master ubutnu-Super-Server:7077
Caused by: java.io.IOException: Failed to connect to ubutnu-Super-Server:7077

Solution: add the following to conf/spark-env.sh on the master and on every slave:
export SPARK_MASTER_HOST=192.168.3.207
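Spelled out as commands (a sketch, assuming the stock conf/spark-env.sh.template ships with the install; an alternative I didn't test would be mapping ubutnu-Super-Server to 192.168.3.207 in each slave's /etc/hosts):

```shell
# On the master AND every slave: bind the master to a fixed IP so the
# workers don't have to resolve the hostname ubutnu-Super-Server.
cp -n conf/spark-env.sh.template conf/spark-env.sh   # create if missing
echo 'export SPARK_MASTER_HOST=192.168.3.207' >> conf/spark-env.sh

# Then restart the whole cluster from the master:
./sbin/stop-all.sh
./sbin/start-all.sh
```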

Summary:

Start all nodes from the master: ./sbin/start-all.sh

Stop all nodes from the master: ./sbin/stop-all.sh

