Besides the solution described earlier, here is another one:
On every machine in the cluster, delete the datas and logs folders.
Commands:
cd /export/servers/hadoop-3.1.2    # the two folders live under the Hadoop home directory
rm -rf datas
rm -rf logs
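Since the deletion has to happen on every node, a small ssh loop saves logging into each machine by hand. This is just a sketch: hadoop121 is my assumed hostname for the first machine (hadoop122 and hadoop123 are the other two used in the scp step below), and it relies on the passwordless ssh already set up for the cluster:

# Clean up datas and logs on every node in one go.
# hadoop121 is an assumed hostname for the first machine; adjust to your cluster.
for host in hadoop121 hadoop122 hadoop123; do
    ssh "$host" "cd /export/servers/hadoop-3.1.2 && rm -rf datas logs"
done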
Then recreate the temporary folders (the paths should match the ones in your Hadoop configuration files):
mkdir -p /export/servers/hadoop-3.1.2/datas/tmp
mkdir -p /export/servers/hadoop-3.1.2/datas/dfs/nn/snn/edits
mkdir -p /export/servers/hadoop-3.1.2/datas/namenode/namenodedatas
mkdir -p /export/servers/hadoop-3.1.2/datas/datanode/datanodeDatas
mkdir -p /export/servers/hadoop-3.1.2/datas/dfs/nn/edits
mkdir -p /export/servers/hadoop-3.1.2/datas/dfs/snn/name
mkdir -p /export/servers/hadoop-3.1.2/datas/jobhsitory/intermediateDoneDatas
mkdir -p /export/servers/hadoop-3.1.2/datas/jobhsitory/DoneDatas
mkdir -p /export/servers/hadoop-3.1.2/datas/nodemanager/nodemanagerDatas
mkdir -p /export/servers/hadoop-3.1.2/datas/nodemanager/nodemanagerLogs
mkdir -p /export/servers/hadoop-3.1.2/datas/remoteAppLog/remoteAppLogs
mkdir -p /export/servers/hadoop-3.1.2/logs
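To double-check that every directory was created, listing the tree is enough; a quick sketch:

# Print every directory under datas so it can be compared against the config paths.
find /export/servers/hadoop-3.1.2/datas -type d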
Copy the fully configured Hadoop installation from the first machine to the other machines.
Go to the directory where Hadoop is installed:
cd /export/servers
Use scp to copy it to the other machines (my setup is three virtual machines, so only two copies are needed):
scp -r hadoop-3.1.2/ hadoop122:$PWD
scp -r hadoop-3.1.2/ hadoop123:$PWD
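Once the copies finish, a quick remote listing confirms the installation landed where expected (again assuming the cluster's passwordless ssh):

# Spot-check the copied installation on each target node.
ssh hadoop122 "ls /export/servers/hadoop-3.1.2"
ssh hadoop123 "ls /export/servers/hadoop-3.1.2"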
Then go to the Hadoop home directory and run the following command:
cd /export/servers/hadoop-3.1.2/ # enter this directory first, then run the format
Format the NameNode:
bin/hdfs namenode -format
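On success, the output should include a "successfully formatted" message for the storage directory. To verify the fix, I would then start HDFS and check the daemons (start-dfs.sh ships with Hadoop under sbin, and jps comes with the JDK):

# Start HDFS and confirm the expected daemons are up.
sbin/start-dfs.sh
jps    # the first machine should show NameNode; worker nodes should show DataNode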
Problem solved~