Spark 2.3.1 cluster mode setup

 

First, thanks to the author of this article: https://blog.csdn.net/Vitamin__C/article/details/80670029

1. Virtual machine and Linux distribution used

Virtual machine: VMware Workstation Pro v12.5.9

Linux: Ubuntu 16.04.4

2. Prepare three virtual machines

Set their hostnames to master, slave1, and slave2, and give each one a fixed IP address (slave1 and slave2 can be created quickly later by cloning).

# Edit the hosts file
sudo vim /etc/hosts

# Add the following entries
<master's IP address (e.g. 192.168.2.100)>    master
<slave1's IP address (e.g. 192.168.2.101)>    slave1
<slave2's IP address (e.g. 192.168.2.102)>    slave2
# Edit the hostname file

sudo vim /etc/hostname

## Set each machine's hostname to master, slave1, or slave2 accordingly

# It is best to reboot after the change
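A quick way to confirm the hostname/IP mapping works (a minimal sketch; run it on each machine):

# Each host should resolve and respond
ping -c 2 master
ping -c 2 slave1
ping -c 2 slave2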

3. Update the apt-get package index

sudo apt-get update

4. Install vim, java, ssh, and openssl with apt-get

sudo apt-get install vim

sudo apt-get install default-jdk

sudo apt-get install ssh

sudo apt-get install openssl

5. Check whether Java installed successfully and where it lives

java -version
update-alternatives --display java

6. Configure passwordless SSH login

# Generate a key pair
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

# List the key files
ls ~/.ssh/

# Append the public key to the authorized_keys file
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

# SSH to master to check that it works
ssh master

###################
If the password keeps being rejected here, try resetting the password for the corresponding user.
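If slave1 and slave2 are built independently rather than cloned from master (cloning is what this guide does later), the public key also has to be copied to them; a minimal sketch, assuming the same username exists on every node:

# Copy master's public key to each slave (asks for that slave's password once)
ssh-copy-id slave1
ssh-copy-id slave2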

7. Install Hadoop 2.8.4, Scala 2.12.6, and Spark 2.3.1
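If the tarballs are not already on the machine, they can be fetched first; a minimal sketch, assuming the Apache archive and Lightbend download URLs still host these releases:

# Download the release tarballs (adjust the URLs to a current mirror if needed)
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.8.4/hadoop-2.8.4.tar.gz
wget https://downloads.lightbend.com/scala/2.12.6/scala-2.12.6.tgz
wget https://archive.apache.org/dist/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz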

# Extract into /usr/local/
sudo tar -xzvf hadoop-2.8.4.tar.gz -C /usr/local
sudo tar -xzvf scala-2.12.6.tgz -C /usr/local
sudo tar -xzvf spark-2.3.1-bin-hadoop2.7.tgz -C /usr/local

# Rename the extracted directories to hadoop, scala, and spark
cd /usr/local
sudo mv hadoop-2.8.4 hadoop
sudo mv scala-2.12.6 scala
sudo mv spark-2.3.1-bin-hadoop2.7 spark
# Give ownership of /usr/local to your user (here, the ubuntu user)

sudo chown -R ubuntu:ubuntu /usr/local

8. Edit the environment variables and configuration files

vim ~/.bashrc

Add the following:

#Hadoop set
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export CLASSPATH=$CLASSPATH:/usr/local/hadoop/lib/*:.
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
#Hadoop set

#SCALA set
export PATH=${JAVA_HOME}/bin:${PATH}
export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin
export PATH=$PATH:$SCALA_HOME/sbin
#SCALA set

#SPARK set
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
export PYTHONPATH=$SPARK_HOME/python/:$SPARK_HOME/python/lib/py4j-0.10.6-src.zip:$PYTHONPATH
#SPARK set
# Reload the user environment

source ~/.bashrc
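A quick sanity check that the new variables resolve correctly (a minimal sketch):

# Each command should print the expected version
hadoop version
scala -version
spark-submit --version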

Edit the configuration files under /usr/local/hadoop/etc/hadoop

# Edit hadoop-env.sh
# Change: export JAVA_HOME=${JAVA_HOME}
# To:     export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
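The same change can be applied non-interactively; a minimal sketch (the sed pattern assumes the stock export JAVA_HOME=${JAVA_HOME} line):

# Point hadoop-env.sh at the installed JDK
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64|' /usr/local/hadoop/etc/hadoop/hadoop-env.sh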
# Edit core-site.xml
<configuration>

<!-- NameNode address -->
 <property>
   <name>fs.default.name</name>
   <value>hdfs://master:9000</value>
 </property>

<!-- Directory for files generated while Hadoop is running -->
 <property>
   <name>hadoop.tmp.dir</name>
   <value>file:/usr/local/hadoop/hadoop_data/hdfs/tmp</value>
 </property> 
      
</configuration>
# Edit hdfs-site.xml

# NameNode (master) settings
<configuration>

<!-- HDFS replication factor -->
  <property>
       <name>dfs.replication</name>
        <value>1</value>
  </property>

<!-- HDFS permission checking -->
  <property>
        <name>dfs.permissions</name>
         <value>false</value>
  </property>

<!-- Secondary NameNode web UI address -->
  <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:50090</value>
  </property>

<!-- NameNode web UI address -->
  <property>
   <name>dfs.namenode.http-address</name>
    <value>master:50070</value>
  </property>

<!-- Where the NameNode stores its metadata -->
  <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
  </property>

<!-- Where edit files are stored -->
  <property>
   <name>dfs.namenode.edits.dir</name>
   <value>file:/usr/local/hadoop/hadoop_data/hdfs/edits</value>
  </property>

<!-- Directory where the Secondary NameNode stores checkpoint files -->
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
   <value>file:/usr/local/hadoop/hadoop_data/hdfs/checkpoints</value>
  </property>

<!-- Directory where the Secondary NameNode stores edits files -->
  <property>
   <name>dfs.namenode.checkpoint.edits.dir</name>
   <value>file:/usr/local/hadoop/hadoop_data/hdfs/checkpoints/edits</value>
  </property>

</configuration>


######################################################################################

# DataNode (slave1, slave2) settings

<configuration>

<!-- HDFS replication factor -->
 <property>
       <name>dfs.replication</name>
       <value>1</value>
 </property>

<!-- HDFS permission checking -->
 <property>
        <name>dfs.permissions</name>
        <value>false</value>
 </property>

<!-- Secondary NameNode web UI address -->
 <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:50090</value>
 </property>

<!-- NameNode web UI address -->
 <property>
   <name>dfs.namenode.http-address</name>
   <value>master:50070</value>
 </property>

<!-- Where the DataNode stores its data blocks -->
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/usr/local/hadoop/hadoop_data/hdfs/datanode</value>
 </property>

<!-- Where edit files are stored -->
 <property>
   <name>dfs.datanode.edits.dir</name>
   <value>file:/usr/local/hadoop/hadoop_data/hdfs/edits</value>
 </property>

 </configuration>
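The local directories referenced in these files do not exist yet; a minimal sketch of creating them (run the NameNode ones on master and the DataNode ones on the slaves):

# On master
mkdir -p /usr/local/hadoop/hadoop_data/hdfs/{tmp,namenode,edits,checkpoints/edits}
# On slave1 and slave2
mkdir -p /usr/local/hadoop/hadoop_data/hdfs/{tmp,datanode,edits}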
# Edit mapred-site.xml (Hadoop 2.x ships only mapred-site.xml.template; copy it to mapred-site.xml first)

<configuration>

<!-- Run MapReduce on YARN -->
 <property>
       <name>mapreduce.framework.name</name>
       <value>yarn</value>
 </property>

<property>
       <name>mapred.job.tracker</name>
       <value>master:54311</value>
</property>

<!-- JobHistory server web UI address -->
 <property>
   <name>mapreduce.jobhistory.webapp.address</name>
   <value>master:19888</value>
 </property>

<!-- JobHistory server RPC address -->
 <property>
   <name>mapreduce.jobhistory.address</name>
   <value>master:10020</value>
 </property>

<!-- Uber task mode -->
 <property>
   <name>mapreduce.job.ubertask.enable</name>
   <value>false</value>
 </property>

 

<!-- Temporary staging directory used while jobs are running -->
   <property>
       <name>yarn.app.mapreduce.am.staging-dir</name>
       <value>hdfs://master:9000/tmp/hadoop-yarn/staging</value>
       <description>The staging dir used while submitting jobs.</description>
   </property>

   <property>
       <name>mapreduce.jobhistory.intermediate-done-dir</name>
       <value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value>
   </property>

<!-- Where logs managed by the MR JobHistory Server are stored -->
   <property>
       <name>mapreduce.jobhistory.done-dir</name>
       <value>${yarn.app.mapreduce.am.staging-dir}/history/done</value>
   </property>

<property>
       <name>mapreduce.map.memory.mb</name>
       <value>512</value>
<description>Physical memory limit for each Map task</description>
</property>

<property>
       <name>mapreduce.reduce.memory.mb</name>
       <value>1024</value>
<description>Physical memory limit for each Reduce task</description>
</property>

<property>
       <name>yarn.app.mapreduce.am.resource.mb</name>
       <value>1024</value>
<description>Memory used by the MR ApplicationMaster</description>
</property>

</configuration>
# Configure the slaves file
Remove localhost and add the following (see the sketch below):
slave1
slave2
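A minimal sketch of writing the slaves file directly (assuming the default file location):

# Overwrite the slaves file with the two worker hostnames
printf "slave1\nslave2\n" > /usr/local/hadoop/etc/hadoop/slaves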
# Edit yarn-site.xml

<configuration>

<!-- Site-specific YARN configuration properties -->
<!-- Auxiliary service the NodeManager runs for MapReduce shuffle -->
<property>
       <name>yarn.nodemanager.aux-services</name>
       <value>mapreduce_shuffle</value>
</property>

<property>
       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<!-- Host that runs the ResourceManager -->
 <property>
   <name>yarn.resourcemanager.hostname</name>
   <value>master</value>
 </property>

 <!-- ResourceManager web UI address -->
 <property>
   <name>yarn.resourcemanager.webapp.address</name>
   <value>master:8088</value>
 </property>

<property> 
   <name>yarn.resourcemanager.address</name> 
   <value>master:8050</value> 
 </property> 

 <property> 
   <name>yarn.resourcemanager.scheduler.address</name> 
   <value>master:8030</value> 
 </property> 

 <property> 
   <name>yarn.resourcemanager.resource-tracker.address</name> 
   <value>master:8025</value> 
 </property>

<!-- Enable log aggregation -->
 <property>
   <name>yarn.log-aggregation-enable</name>
   <value>true</value>
 </property>



<!-- How long aggregated logs are kept on HDFS, in seconds -->
 <property>
   <name>yarn.log-aggregation.retain-seconds</name>
   <value>86400</value>
 </property>    

<property>
<description>Minimum memory (MB) a single container can request; the default is 1024</description>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>64</value>
</property>

<property>
<description>Maximum memory (MB) a single container can request; the default is 8192</description>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>1920</value>
</property>

<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<description>Memory (MB) available on each node</description>
<value>2048</value>
</property>

<property>
<description>Ratio between virtual memory to physical memory when
setting memory limits for containers. Container allocations are
expressed in terms of physical memory, and virtual memory usage
is allowed to exceed this allocation by this ratio.
</description>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>4</value>
</property>

</configuration>

Edit the files under /usr/local/spark/conf
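spark-env.sh is shipped only as a template; a minimal sketch of creating an editable copy first (assuming the default template name):

# Create spark-env.sh from the bundled template
cd /usr/local/spark/conf
cp spark-env.sh.template spark-env.sh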

# Configure spark-env.sh
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export SCALA_HOME=/usr/local/scala
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_HOME=/usr/local/spark
export SPARK_MASTER_IP=192.168.99.100   # note: use your master's IP address
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=512m
export SPARK_WORKER_INSTANCES=2

9. Clone slave1 and slave2

Change the hostnames on slave1 and slave2 accordingly.

On each node, change hdfs-site.xml under /usr/local/hadoop/etc/hadoop to the configuration matching that node's role (the NameNode and DataNode settings shown above).

10. Format the NameNode

# Run on master:

hdfs namenode -format

11. Start Hadoop

# Run on master

start-all.sh
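To confirm that the daemons actually started, jps can be run on each node; a minimal sketch of what to expect:

# On master: NameNode, SecondaryNameNode and ResourceManager should be listed
jps
# On the slaves: DataNode and NodeManager should be listed
ssh slave1 jps
ssh slave2 jps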
# Upload the jars from the Spark jars directory to HDFS and point the config file at them
hdfs dfs -mkdir -p /user/spark_conf/spark_jars/
hdfs dfs -put /usr/local/spark/jars/* /user/spark_conf/spark_jars/

# Add to /usr/local/spark/conf/spark-defaults.conf (copy spark-defaults.conf.template to spark-defaults.conf first if it does not exist yet)
spark.yarn.archive=hdfs:///user/spark_conf/spark_jars/

# To run the Spark history server, also add the following to conf/spark-defaults.conf (the HDFS directory /tmp/spark/events must be created in advance; see the sketch after these settings)
spark.yarn.historyServer.address=master:18080
spark.history.ui.port=18080
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs://master:9000/tmp/spark/events
spark.history.fs.logDirectory=hdfs://master:9000/tmp/spark/events
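A minimal sketch of creating the event-log directory and starting the history server (the script path assumes the default Spark layout):

# Create the HDFS directory for Spark event logs, then start the history server
hdfs dfs -mkdir -p /tmp/spark/events
/usr/local/spark/sbin/start-history-server.sh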

12. Start Spark

spark-shell --master yarn
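Besides the interactive shell, a small job can be submitted to YARN as a smoke test; a minimal sketch (the examples jar name assumes the stock Spark 2.3.1 build for Hadoop 2.7):

# Compute Pi on the cluster; the result is printed near the end of the driver output
spark-submit --master yarn --deploy-mode client \
  --class org.apache.spark.examples.SparkPi \
  /usr/local/spark/examples/jars/spark-examples_2.11-2.3.1.jar 10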

Tips:

Some problems you may run into, and how to resolve them

ubuntu@master:~$ spark-shell --master yarn
2018-08-04 16:06:55 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2018-08-04 16:07:01 WARN  Client:87 - Failed to cleanup staging dir hdfs://master:9000/user/ubuntu/.sparkStaging/application_1533370000063_0001
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot delete /user/ubuntu/.sparkStaging/application_1533370000063_0001. Name node is in safe mode.
The reported blocks 231 has reached the threshold 0.9990 of total blocks 231. The number of live datanodes 2 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 21 seconds. NamenodeHostName:master
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1407)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1395)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2851)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1048)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:641)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489)

	at org.apache.hadoop.ipc.Client.call(Client.java:1475)
	at org.apache.hadoop.ipc.Client.call(Client.java:1412)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy14.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy15.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2044)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:707)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:703)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.spark.deploy.yarn.Client.org$apache$spark$deploy$yarn$Client$$cleanupStagingDirInternal$1(Client.scala:200)
	at org.apache.spark.deploy.yarn.Client.cleanupStagingDir(Client.scala:217)
	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:182)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
	at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:933)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:924)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:924)
	at org.apache.spark.repl.Main$.createSparkSession(Main.scala:103)
	at $line3.$read$$iw$$iw.<init>(<console>:15)
	at $line3.$read$$iw.<init>(<console>:43)
	at $line3.$read.<init>(<console>:45)
	at $line3.$read$.<init>(<console>:49)
	at $line3.$read$.<clinit>(<console>)
	at $line3.$eval$.$print$lzycompute(<console>:7)
	at $line3.$eval$.$print(<console>:6)
	at $line3.$eval.$print(<console>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
	at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
	at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
	at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
	at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
	at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
	at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(SparkILoop.scala:79)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(SparkILoop.scala:79)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SparkILoop.scala:79)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:79)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:79)
	at scala.tools.nsc.interpreter.ILoop.savingReplayStack(ILoop.scala:91)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:78)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:78)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:78)
	at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214)
	at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:77)
	at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:110)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
	at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
	at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
	at org.apache.spark.repl.Main$.doMain(Main.scala:76)
	at org.apache.spark.repl.Main$.main(Main.scala:56)
	at org.apache.spark.repl.Main.main(Main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2018-08-04 16:07:01 ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /user/ubuntu/.sparkStaging/application_1533370000063_0001. Name node is in safe mode.
The reported blocks 231 has reached the threshold 0.9990 of total blocks 231. The number of live datanodes 2 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 21 seconds. NamenodeHostName:master
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1407)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1395)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3038)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1079)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489)

	at org.apache.hadoop.ipc.Client.call(Client.java:1475)
	at org.apache.hadoop.ipc.Client.call(Client.java:1412)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
	at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
	at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1881)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:600)
	at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:429)
	at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:869)
	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
	at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:933)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:924)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:924)
	at org.apache.spark.repl.Main$.createSparkSession(Main.scala:103)
	at $line3.$read$$iw$$iw.<init>(<console>:15)
	at $line3.$read$$iw.<init>(<console>:43)
	at $line3.$read.<init>(<console>:45)
	at $line3.$read$.<init>(<console>:49)
	at $line3.$read$.<clinit>(<console>)
	at $line3.$eval$.$print$lzycompute(<console>:7)
	at $line3.$eval$.$print(<console>:6)
	at $line3.$eval.$print(<console>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
	at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
	at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
	at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
	at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
	at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
	at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(SparkILoop.scala:79)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(SparkILoop.scala:79)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SparkILoop.scala:79)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:79)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:79)
	at scala.tools.nsc.interpreter.ILoop.savingReplayStack(ILoop.scala:91)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:78)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:78)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:78)
	at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214)
	at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:77)
	at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:110)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
	at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
	at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
	at org.apache.spark.repl.Main$.doMain(Main.scala:76)
	at org.apache.spark.repl.Main$.main(Main.scala:56)
	at org.apache.spark.repl.Main.main(Main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2018-08-04 16:07:01 WARN  YarnSchedulerBackend$YarnSchedulerEndpoint:66 - Attempted to request executors before the AM has registered!
2018-08-04 16:07:01 WARN  MetricsSystem:66 - Stopping a MetricsSystem that is not running
org.apache.hadoop.ipc.RemoteException: Cannot create directory /user/ubuntu/.sparkStaging/application_1533370000063_0001. Name node is in safe mode.
The reported blocks 231 has reached the threshold 0.9990 of total blocks 231. The number of live datanodes 2 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 21 seconds. NamenodeHostName:master
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1407)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1395)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3038)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1079)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489)

  at org.apache.hadoop.ipc.Client.call(Client.java:1475)
  at org.apache.hadoop.ipc.Client.call(Client.java:1412)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
  at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
  at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)
  at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
  at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
  at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
  at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
  at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
  at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1881)
  at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:600)
  at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:429)
  at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:869)
  at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169)
  at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
  at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
  at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:933)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:924)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:924)
  at org.apache.spark.repl.Main$.createSparkSession(Main.scala:103)
  ... 55 elided
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.1
      /_/
         
Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.

scala>

The key error here is: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot delete /user/ubuntu/.sparkStaging/application_1533370000063_0001. (An earlier Spark application was not shut down cleanly. Kill the zombie application with the command below and the problem goes away; all applications can be inspected on port 8088.)

yarn application -kill application_1533370000063_0001
***************************************************************************
If you hit "Name node is in safe mode", you can force HDFS out of safe mode by running the following from the Hadoop directory:
                            bin/hadoop dfsadmin -safemode leave

The reason is that HDFS enters safe mode while it is starting up; you can also simply wait a short while after startup for it to exit on its own.
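To check whether the NameNode is still in safe mode, a minimal sketch:

hdfs dfsadmin -safemode get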

 
