Starting the HA services
Step 1: Initialize ZooKeeper
On hadoop01, initialize the ZooKeeper state for HA; this essentially creates the corresponding znode in ZooKeeper.
cd /export/servers/hadoop-2.6.0-cdh5.14.0
bin/hdfs zkfc -formatZK
[root@hadoop01 ~]# cd /export/servers/hadoop-2.6.0-cdh5.14.0/
[root@hadoop01 hadoop-2.6.0-cdh5.14.0]# bin/hdfs zkfc -formatZK
19/12/09 22:32:12 INFO tools.DFSZKFailoverController: registered UNIX signal handlers for [TERM, HUP, INT]
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:130)
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:188)
19/12/09 22:32:13 INFO tools.DFSZKFailoverController: SHUTDOWN_MSG:
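The `HA is not enabled for this namenode` error above usually means the HA nameservice properties are missing from (or not being picked up in) hdfs-site.xml. A minimal sketch of the properties `zkfc -formatZK` depends on, assuming a nameservice called `ns1` with NameNode IDs `nn1`/`nn2` on hadoop01/hadoop02 — these names are illustrative, not taken from this cluster's actual config:

```xml
<!-- hdfs-site.xml: minimal HA nameservice sketch (ns1/nn1/nn2 are assumed names) -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.ha.namenodes.ns1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn1</name>
  <value>hadoop01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn2</name>
  <value>hadoop02:8020</value>
</property>
<!-- zkfc also needs the ZooKeeper quorum; this is usually set in core-site.xml -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
</property>
```

The hostnames and ports must of course match the cluster's real addresses; the point is that without `dfs.nameservices` and `dfs.ha.namenodes.*`, the failover controller refuses to start.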
Step 2: Start the JournalNodes
Run the following on all three machines to start the JournalNode process, which hosts the shared edit log used for NameNode metadata:
cd /export/servers/hadoop-2.6.0-cdh5.14.0
sbin/hadoop-daemon.sh start journalnode
[root@hadoop01 hadoop-2.6.0-cdh5.14.0]# sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/hadoop-root-journalnode-hadoop01.out
Step 3: Initialize the JournalNodes
On hadoop01, initialize the shared edits directory:
cd /export/servers/hadoop-2.6.0-cdh5.14.0
bin/hdfs namenode -initializeSharedEdits -force
[root@hadoop01 hadoop-2.6.0-cdh5.14.0]# bin/hdfs namenode -initializeSharedEdits -force
19/12/09 22:36:05 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
19/12/09 22:36:05 INFO namenode.NameNode: createNameNode [-initializeSharedEdits, -force]
19/12/09 22:36:05 ERROR namenode.NameNode: No shared edits directory configured for namespace null namenode null
19/12/09 22:36:05 INFO util.ExitUtil: Exiting with status 0
19/12/09 22:36:05 INFO namenode.NameNode: SHUTDOWN_MSG:
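The `No shared edits directory configured` error points at a missing `dfs.namenode.shared.edits.dir` property. With a quorum-journal setup on the three JournalNodes started above, it would look roughly like this — the nameservice name `ns1` and the local edits path are assumptions, not values from this cluster:

```xml
<!-- hdfs-site.xml: shared edits over the three JournalNodes (ns1 and the path are assumed) -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/ns1</value>
</property>
<!-- where each JournalNode keeps its copy of the edit log on local disk -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/export/servers/hadoop-2.6.0-cdh5.14.0/journalData</value>
</property>
```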
Step 4: Start the NameNodes
On hadoop01, start the NameNode:
cd /export/servers/hadoop-2.6.0-cdh5.14.0
sbin/hadoop-daemon.sh start namenode
[root@hadoop01 hadoop-2.6.0-cdh5.14.0]# sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/hadoop-root-namenode-hadoop01.out
On hadoop02, bootstrap the standby NameNode, then start it:
cd /export/servers/hadoop-2.6.0-cdh5.14.0
bin/hdfs namenode -bootstrapStandby
sbin/hadoop-daemon.sh start namenode
[root@hadoop02 hadoop-2.6.0-cdh5.14.0]# bin/hdfs namenode -bootstrapStandby
19/12/09 22:37:37 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
19/12/09 22:37:37 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
19/12/09 22:37:37 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:428)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1513)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
Caused by: org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.parseConfAndFindOtherNN(BootstrapStandby.java:380)
at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:104)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:423)
... 2 more
19/12/09 22:37:37 INFO util.ExitUtil: Exiting with status 1
19/12/09 22:37:37 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop02.Hadoop.com/192.168.100.202
************************************************************/
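This bootstrapStandby failure has the same root cause as the zkfc one: `parseConfAndFindOtherNN` cannot find a second NameNode because HA is not enabled in the configuration it reads. Beyond the nameservice properties themselves, a working HA setup with automatic failover typically also carries these two settings (again using the hypothetical `ns1` nameservice):

```xml
<!-- hdfs-site.xml: client failover and automatic failover (ns1 is an assumed name) -->
<property>
  <name>dfs.client.failover.proxy.provider.ns1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```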
[root@hadoop02 hadoop-2.6.0-cdh5.14.0]# sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/hadoop-root-namenode-hadoop02.out
Step 5: Start the DataNode process on all nodes
From hadoop01, start the DataNode process on every node:
cd /export/servers/hadoop-2.6.0-cdh5.14.0
sbin/hadoop-daemons.sh start datanode
Step 6: Start zkfc
On hadoop01, start the ZKFC (DFSZKFailoverController) process:
cd /export/servers/hadoop-2.6.0-cdh5.14.0
sbin/hadoop-daemon.sh start zkfc
[root@hadoop01 hadoop-2.6.0-cdh5.14.0]# sbin/hadoop-daemons.sh start zkfc
hadoop01: starting zkfc, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/hadoop-root-zkfc-hadoop01.out
hadoop02: starting zkfc, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/hadoop-root-zkfc-hadoop02.out
hadoop03: starting zkfc, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/hadoop-root-zkfc-hadoop03.out
hadoop01: Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
hadoop01: at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:130)
hadoop01: at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:188)
On hadoop02, start the ZKFC process:
cd /export/servers/hadoop-2.6.0-cdh5.14.0
sbin/hadoop-daemon.sh start zkfc
[root@hadoop02 hadoop-2.6.0-cdh5.14.0]# sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/hadoop-root-zkfc-hadoop02.out
Step 7: Start the JobHistory server
On hadoop01, start the JobHistoryServer:
cd /export/servers/hadoop-2.6.0-cdh5.14.0
sbin/mr-jobhistory-daemon.sh start historyserver
[root@hadoop01 hadoop-2.6.0-cdh5.14.0]# sbin/mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /export/servers/hadoop-2.6.0-cdh5.14.0/logs/mapred-root-historyserver-hadoop01.out
Step 8: Web UI access
HDFS status on hadoop01:
http://192.168.100.201:50070/dfshealth.html#tab-overview
HDFS status on hadoop02:
http://192.168.100.202:50070/dfshealth.html#tab-overview
YARN cluster UI:
http://192.168.100.201:8088/cluster
Job history UI:
http://192.168.100.201:19888/jobhistory
Summary
Since quite a few errors came up while configuring HA, I'm not sure these steps are entirely correct; if I find mistakes later, I'll come back and fix them.