hdfs namenode -format fails: "Failed to start namenode" / "Encountered exception during format"

hdfs namenode -format

21/02/03 03:58:54 ERROR namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: URI has an authority component
Searching for this error turned up the following suggested fix:
https://blog.csdn.net/weixin_34241036/article/details/92397679
The change is to one property in core-site.xml:

<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/opt/hadoop/tmp</value>
</property>

Change it to:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>
</property>
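The likely mechanism: hadoop.tmp.dir expects a plain local path; with a file: scheme the value is parsed as a URI, and java.io.File rejects any URI that carries an authority component (which is what happens when file:// is followed directly by a path), hence the IllegalArgumentException above. After editing, a quick grep confirms what the config now holds (config path assumed from this install's layout):

[hadoop@Node1 ~]$ grep -A 1 'hadoop.tmp.dir' /opt/hadoop/etc/hadoop/core-site.xml
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop/tmp</value>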

Then re-ran the format, which failed with a new error:
21/02/03 04:15:43 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /opt/hadoop/tmp/dfs/name/current

Reference: https://blog.csdn.net/emilsinclair4391/article/details/51520524

[root@Node1 ~]# sudo chmod -R a+w /opt/hadoop
[root@Node2 ~]# sudo chmod -R a+w /opt/hadoop
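chmod -R a+w works, but it opens /opt/hadoop to writes from every user on the machine. A tighter alternative, sketched under the assumption that the daemons run as the hadoop user seen in the prompts here, is to hand the tree to that user instead:

[root@Node1 ~]# chown -R hadoop:hadoop /opt/hadoop
[root@Node2 ~]# chown -R hadoop:hadoop /opt/hadoop

Either way the effect is the same: the hadoop user gains the right to create /opt/hadoop/tmp/dfs/name/current, which is what the format was failing on.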
Then run the format again as the hadoop user:
[hadoop@Node1 ~]$ hdfs namenode -format
It completes without errors this time. The line that marks success:
21/02/03 06:21:41 INFO common.Storage: Storage directory /opt/hadoop/tmp/dfs/name has been successfully formatted.
Inspecting the files inside the generated tmp directory shows exactly what we want:
[hadoop@Node1 hadoop]$ ls
bin etc include lib libexec LICENSE.txt NOTICE.txt README.txt sbin share tmp
[hadoop@Node1 hadoop]$ cd tmp/dfs/name/current
[hadoop@Node1 current]$ ls
fsimage_0000000000000000000 fsimage_0000000000000000000.md5 seen_txid VERSION
[hadoop@Node1 current]$ ll
total 16
-rw-rw-r-- 1 hadoop hadoop 353 Feb 3 06:21 fsimage_0000000000000000000
-rw-rw-r-- 1 hadoop hadoop 62 Feb 3 06:21 fsimage_0000000000000000000.md5
-rw-rw-r-- 1 hadoop hadoop 2 Feb 3 06:21 seen_txid
-rw-rw-r-- 1 hadoop hadoop 202 Feb 3 06:21 VERSION
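As a sanity check on those files: seen_txid holds the id of the last transaction the NameNode has recorded, and its 2-byte size in the listing above is consistent with a fresh format (a single 0 plus a newline). VERSION records identifiers such as namespaceID and clusterID, which every DataNode must end up sharing. Both are plain text:

[hadoop@Node1 current]$ cat seen_txid
0
[hadoop@Node1 current]$ cat VERSION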
[hadoop@Node1 current]$ cd ..

Copy the tmp directory to the other nodes:
[hadoop@Node1 hadoop]$ scp -r /opt/hadoop/tmp/ Node2:/opt/hadoop/
[hadoop@Node1 hadoop]$ scp -r /opt/hadoop/tmp/ Node3:/opt/hadoop/
[hadoop@Node1 hadoop]$ scp -r /opt/hadoop/tmp/ Node4:/opt/hadoop/
[hadoop@Node1 hadoop]$ scp -r /opt/hadoop/tmp/ Node5:/opt/hadoop/
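The same copy as a loop, for convenience (hostnames taken from the sessions above):

[hadoop@Node1 hadoop]$ for n in Node2 Node3 Node4 Node5; do scp -r /opt/hadoop/tmp/ "$n":/opt/hadoop/; done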

Next, start HDFS:
[root@Node1 ~]# start-dfs.sh
This reports errors:
Node3: [Fatal Error] hdfs-site.xml:30:2: The content of elements must consist of well-formed character data or markup.
Node4: [Fatal Error] hdfs-site.xml:30:2: The content of elements must consist of well-formed character data or markup.
Node5: [Fatal Error] hdfs-site.xml:30:2: The content of elements must consist of well-formed character data or markup.
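Those [Fatal Error] lines are the XML parser refusing to read hdfs-site.xml on Node3, Node4, and Node5. A quick way to pinpoint broken markup like this, assuming xmllint (from libxml2) is installed on the node:

[root@Node3 ~]# xmllint --noout /opt/hadoop/etc/hadoop/hdfs-site.xml

xmllint prints nothing when the file is well-formed and otherwise reports the line and column of the problem, matching the hdfs-site.xml:30:2 position above.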

Ignoring that for the moment, change into the sbin directory and re-run the startup scripts:
[root@Node1 sbin]# ./start-all.sh
Then run jps to check the processes:
[root@Node1 sbin]# jps
2529 NameNode
4916 ResourceManager
9447 Jps
2718 SecondaryNameNode
Check the other nodes (note that no DataNode process appears on Node3, Node4, or Node5 yet; the addendum below explains why):
[root@Node2 sbin]# ./start-all.sh
[root@Node2 sbin]# jps
1984 SecondaryNameNode
2256 Jps

[root@Node3 tmp]# jps
5073 Jps
4625 NodeManager
1811 QuorumPeerMain

[root@Node4 bin]# jps
5224 Jps
1817 QuorumPeerMain
4777 NodeManager

[root@Node5 current]# jps
4819 NodeManager
5284 Jps
1821 QuorumPeerMain

######################## Addendum: completing the fix
While working through this I found that hdfs-site.xml on Node3, Node4, and Node5 had never been fully replaced, which is what caused the start-dfs.sh errors above:
[root@Node4 hadoop]# cat hdfs-site.xml

So push the config out to those nodes again:
[root@Node1 sbin]# scp -r hadoop/ Node3:/opt/hadoop/etc/
[root@Node1 sbin]# scp -r hadoop/ Node4:/opt/hadoop/etc/
[root@Node1 sbin]# scp -r hadoop/ Node5:/opt/hadoop/etc/
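To confirm the replacement actually took this time, comparing checksums between the master and each node is a cheap check (paths and hostnames as above):

[root@Node1 sbin]# md5sum /opt/hadoop/etc/hadoop/hdfs-site.xml
[root@Node1 sbin]# ssh Node3 md5sum /opt/hadoop/etc/hadoop/hdfs-site.xml

The two hashes should match; repeat for Node4 and Node5.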
Then restart from the master node:
[root@Node1 sbin]# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [Node1]
Node1: namenode running as process 2529. Stop it first.
Node3: datanode running as process 6112. Stop it first.
Node4: datanode running as process 6462. Stop it first.
Node5: datanode running as process 6235. Stop it first.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: secondarynamenode running as process 2718. Stop it first.
starting yarn daemons
resourcemanager running as process 4916. Stop it first.
Node3: nodemanager running as process 6210. Stop it first.
Node5: nodemanager running as process 6333. Stop it first.
Node4: nodemanager running as process 6561. Stop it first.
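All of the "Stop it first" lines simply mean those daemons are already up; the script refuses to start a second copy rather than restarting anything. For a genuinely clean restart, run the matching stop script first (same deprecated wrapper used here; stop-dfs.sh and stop-yarn.sh are the non-deprecated pair):

[root@Node1 sbin]# ./stop-all.sh
[root@Node1 sbin]# ./start-all.sh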
And on the standby master:
[root@Node2 sbin]# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [Node1]
Node1: namenode running as process 2529. Stop it first.
Node3: datanode running as process 6112. Stop it first.
Node5: datanode running as process 6235. Stop it first.
Node4: datanode running as process 6462. Stop it first.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: secondarynamenode running as process 1984. Stop it first.
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/logs/yarn-root-resourcemanager-Node2.out
Node5: nodemanager running as process 6333. Stop it first.
Node4: nodemanager running as process 6561. Stop it first.
Node3: nodemanager running as process 6210. Stop it first.
[root@Node2 sbin]#
[root@Node2 sbin]#
[root@Node2 sbin]# jps
1984 SecondaryNameNode
4347 Jps

Check the processes:
[root@Node1 sbin]# jps
2529 NameNode
4916 ResourceManager
15941 Jps
2718 SecondaryNameNode
[root@Node1 sbin]#

[root@Node3 etc]# jps
6112 DataNode
6210 NodeManager
1811 QuorumPeerMain
6478 Jps

[root@Node4 hadoop]# jps
6561 NodeManager
1817 QuorumPeerMain
6828 Jps
6462 DataNode

[root@Node5 current]# jps
6600 Jps
6235 DataNode
6333 NodeManager
1821 QuorumPeerMain
[root@Node5 current]#
With that, everything is fully started.
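To collect the same jps view across the whole cluster in one pass, a small loop over the hostnames works (assumes passwordless ssh, which the scp steps above already rely on, and that jps is on the PATH for non-interactive shells):

[root@Node1 ~]# for n in Node1 Node2 Node3 Node4 Node5; do echo "== $n =="; ssh "$n" jps; done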

Running hadoop dfsadmin -report now shows Live datanodes (3):

[root@Node1 ~]# hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 55510843392 (51.70 GB)
Present Capacity: 38132862976 (35.51 GB)
DFS Remaining: 38132789248 (35.51 GB)
DFS Used: 73728 (72 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0


Live datanodes (3):

Name: 10.10.10.6:50010 (Node4)
Hostname: Node4
Decommission Status : Normal
Configured Capacity: 18503614464 (17.23 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 5791039488 (5.39 GB)
DFS Remaining: 12712550400 (11.84 GB)
DFS Used%: 0.00%
DFS Remaining%: 68.70%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Feb 04 03:58:57 CST 2021

Name: 10.10.10.7:50010 (Node5)
Hostname: Node5
Decommission Status : Normal
Configured Capacity: 18503614464 (17.23 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 5790994432 (5.39 GB)
DFS Remaining: 12712595456 (11.84 GB)
DFS Used%: 0.00%
DFS Remaining%: 68.70%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Feb 04 03:58:57 CST 2021

Name: 10.10.10.5:50010 (Node3)
Hostname: Node3
Decommission Status : Normal
Configured Capacity: 18503614464 (17.23 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 5795946496 (5.40 GB)
DFS Remaining: 12707643392 (11.83 GB)
DFS Used%: 0.00%
DFS Remaining%: 68.68%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Feb 04 03:58:57 CST 2021
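As the DEPRECATED banner at the top of the report points out, the same output is available through the hdfs front end, which is the preferred form:

[root@Node1 ~]# hdfs dfsadmin -report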
