After editing core-site.xml and hdfs-site.xml and running start-dfs.sh to bring up HDFS, only the NameNode on one of the three machines came up; the other two failed to start, with errors like:
Retrying connect to server: hadoop102/192.168.10.102:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
Retrying connect to server: hadoop104/192.168.10.104:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2022-01-03 16:50:46,417 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop103/192.168.10.103:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
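Port 8485 is the JournalNode RPC port, so a quick first check is whether that port is reachable from the failing NameNode host at all. A minimal sketch (hostnames taken from the log above; the `check_port` helper is my own, not part of Hadoop):

```shell
#!/usr/bin/env bash
# Probe TCP connectivity to each JournalNode's 8485 port.
# Uses bash's /dev/tcp pseudo-device; `timeout` keeps dead hosts from hanging.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} unreachable"
  fi
}

# Hostnames from the error messages; adjust for your cluster.
for h in hadoop102 hadoop103 hadoop104; do
  check_port "$h" 8485
done
```

On the JournalNode hosts themselves, `jps` should list a JournalNode process and `ss -lnt | grep 8485` should show it listening; if the daemon simply is not running, this retry error appears even when the firewall, hosts file, and network are all fine.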
I fought with this for two or three days. Along the way I checked the firewall, the XML configuration, whether HDFS had been formatted, the hosts file, network connectivity, and whether the ports were open. In the end, out of options, I deleted the high-availability Hadoop files on all three machines and reinstalled and reconfigured Hadoop from scratch. That fixed the problem and the cluster started normally.
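For context on where 8485 comes from: in a typical HA setup it is the default JournalNode port listed in the qjournal URI in hdfs-site.xml, and the failing connections in the log are the NameNode trying to reach exactly these endpoints. A sketch of what such a property usually looks like (the nameservice name `mycluster` is a placeholder, not taken from my config):

```xml
<!-- hdfs-site.xml: shared edits directory for HA NameNodes.
     The NameNode must be able to reach every host:port listed here. -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoop102:8485;hadoop103:8485;hadoop104:8485/mycluster</value>
</property>
```

A full reinstall worked for me, but a commonly reported lighter fix for this exact symptom is an ordering issue: start the JournalNodes first with `hdfs --daemon start journalnode` on each of the three hosts, and only then format the NameNode and run start-dfs.sh.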