1. Start the NN, DN, NM, and RM (NameNode, DataNode, NodeManager, ResourceManager)
$>start-dfs.sh
$>start-yarn.sh
Or start each daemon individually:
//start the NameNode
$>hadoop-daemon.sh start namenode
//start the ResourceManager
$>yarn-daemon.sh start resourcemanager
//start all DataNodes (iterates over the hosts listed in the slaves file)
$>hadoop-daemons.sh start datanode
//start all NodeManagers (iterates over the hosts listed in the slaves file)
$>yarn-daemons.sh start nodemanager
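After starting the daemons, it is common to verify each host with `jps`. A minimal dry-run sketch that only prints the verification commands (the host names s101-s103 and s106 are the ones used later in this guide; substitute your own, and drop the echo to actually run the checks):

```shell
# Print a per-host jps check to confirm the daemons came up.
# echo keeps this a dry run; remove it to execute over ssh for real.
for host in s101 s102 s103 s106; do
  echo ssh $host jps
done
```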
2. Start the JNs (JournalNodes)
$>ssh s101 hadoop-daemon.sh start journalnode
$>ssh s102 hadoop-daemon.sh start journalnode
$>ssh s103 hadoop-daemon.sh start journalnode
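The three ssh calls above can be collapsed into a loop. A minimal dry-run sketch that prints the commands instead of executing them (remove the echo to actually start the JournalNodes):

```shell
# Start a JournalNode on each quorum host (s101-s103, from the steps above).
# echo makes this a dry run; drop it to run the commands over ssh.
for host in s101 s102 s103; do
  echo ssh $host hadoop-daemon.sh start journalnode
done
```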
3. Copy the NameNode metadata to the other NameNode
$>scp -r ~/hadoop/dfs hadoop@s106:/home/hadoop/hadoop/
//run this on the standby NameNode (s106)
$>hdfs namenode -bootstrapStandby
4. Initialize the NameNode edit log onto the JNs
$>hdfs namenode -initializeSharedEdits
NOTE: this command asks whether to format the NameNode. Be sure to answer no (do not format); otherwise the metadata you just copied is wiped out and the copy was wasted.
5. Start the standby NN
$>hadoop-daemon.sh start namenode
6. Manually switch HA states
$>hdfs haadmin -transitionToActive nn1
$>hdfs haadmin -transitionToStandby nn1
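After a transition, you can confirm each NameNode's HA state with `hdfs haadmin -getServiceState` (expect one active and one standby). A dry-run sketch that only prints the queries; nn1/nn2 are the logical NameNode IDs used in this guide:

```shell
# Print the state query for each logical NameNode ID.
# echo keeps this a dry run; remove it to query the cluster for real.
for nn in nn1 nn2; do
  echo hdfs haadmin -getServiceState $nn
done
```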
7. Failover (disaster-recovery switchover)
$>hdfs haadmin -failover nn1 nn2