HADOOP docker (3): HDFS High Availability Experiment

 

Preface

The previous section covered how HDFS HA works; in this section we set it up and test it.

1. Machine environment

Hostname    IP             Roles
hadoop1     172.18.0.11    NN1 ZK RM
hadoop2     172.18.0.12    NN2 ZK RM JOBHISTORY
hadoop3     172.18.0.13    DN ZK ND
hadoop4     172.18.0.14    DN QJM1 ND
hadoop5     172.18.0.15    DN QJM2 ND
hadoop6     172.18.0.16    DN QJM3 ND

HDFS, YARN and ZooKeeper are already installed on these nodes.
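As a quick sanity check before changing anything (a minimal sketch — the ZooKeeper path /opt/zookeeper is an assumption, adjust it to your install), you can list the running daemons and confirm the ZooKeeper quorum:

# on every node: list the running Java daemons (NameNode, DataNode, QuorumPeerMain, ...)
jps

# on the ZooKeeper nodes hadoop1-3: each server should report mode "leader" or "follower"
/opt/zookeeper/bin/zkServer.sh status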

2. Configuring HA

2.1 Modify hdfs-site.xml

<property>
  <name>dfs.nameservices</name>
  <value>dockercluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.dockercluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.dockercluster.nn1</name>
  <value>hadoop1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.dockercluster.nn2</name>
  <value>hadoop2:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.dockercluster.nn1</name>
  <value>hadoop1:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.dockercluster.nn2</name>
  <value>hadoop2:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoop4:8485;hadoop5:8485;hadoop6:8485/dockercluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.dockercluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/hdfs/.ssh/id_rsa</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>30000</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/hadoop/hadoop-2.7.3/JNSdatadir</value>
</property>

2.2 Configure core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://dockercluster</value>
</property>

Distribute the configuration files above to every node, for example as in the sketch below.
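A minimal way to push them out, assuming passwordless SSH for the hdfs user and that the configuration directory is /opt/hadoop/hadoop-2.7.3/etc/hadoop (an assumption — adjust to your layout):

# run on hadoop1; host names follow the table in section 1
for host in hadoop2 hadoop3 hadoop4 hadoop5 hadoop6; do
  scp /opt/hadoop/hadoop-2.7.3/etc/hadoop/hdfs-site.xml \
      /opt/hadoop/hadoop-2.7.3/etc/hadoop/core-site.xml \
      ${host}:/opt/hadoop/hadoop-2.7.3/etc/hadoop/
done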

3. Configuring manual HA

3.1 Stop YARN and HDFS

Stop YARN:

On the ResourceManager:
$HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
On each NodeManager:
$HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager

Stop HDFS:

On each NameNode:
$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode
On each DataNode:
$HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode

If passwordless SSH login is configured, you can simply run:

stop-yarn.sh
stop-dfs.sh

3.2 Start HDFS HA

1. Start the JournalNodes
   On hadoop4, hadoop5 and hadoop6, run:

   hadoop-daemon.sh start journalnode

2. Initialize the standby NameNode

[hdfs@hadoop2 hadoop-2.7.3]$ hdfs namenode -bootstrapStandby
.........
17/04/19 18:24:17 INFO ipc.Client: Retrying connect to server: hadoop1/172.18.0.11:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/04/19 18:24:17 FATAL ha.BootstrapStandby: Unable to fetch namespace information from active NN at hadoop1/172.18.0.11:8020: Call From hadoop2/172.18.0.12 to hadoop1:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
17/04/19 18:24:17 INFO util.ExitUtil: Exiting with status 2
17/04/19 18:24:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop2/172.18.0.12

This command copies the metadata directory from the original NameNode to the corresponding directory on this node.
Here it failed because it could not connect to hadoop1:8020, so the order given in the official docs is off: the original NameNode has to be running before this step. Let's keep going for now and come back to it.

3. Initialize the edit log
   On the original NameNode, run:

   [hdfs@hadoop1 namenodedir]$ hdfs namenode -initializeSharedEdits
   .....
   17/04/19 18:28:24 INFO namenode.EditLogInputStream: Fast-forwarding stream '/opt/hadoop/hadoop-2.7.3/namenodedir/current/edits_0000000000000000001-0000000000000000256' to transaction ID 1
   17/04/19 18:28:24 INFO namenode.FSEditLog: Starting log segment at 1
   17/04/19 18:28:24 INFO namenode.FSEditLog: Ending log segment 1
   17/04/19 18:28:24 INFO namenode.FSEditLog: Number of transactions: 256 Total time for transactions(ms): 63 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 11
   17/04/19 18:28:24 INFO util.ExitUtil: Exiting with status 0
   17/04/19 18:28:24 INFO namenode.NameNode: SHUTDOWN_MSG:
   /************************************************************
   SHUTDOWN_MSG: Shutting down NameNode at hadoop1/172.18.0.11
   ************************************************************

This command copies the local edit log to the JournalNodes.

4. Start the original NameNode

   $HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
5. Bootstrap the standby NameNode again
   The earlier attempt failed, so run hdfs namenode -bootstrapStandby on hadoop2 once more:

   =====================================================
   Re-format filesystem in Storage Directory /opt/hadoop/hadoop-2.7.3/namenodedir ? (Y or N) y
   17/04/19 20:30:38 INFO common.Storage: Storage directory /opt/hadoop/hadoop-2.7.3/namenodedir has been successfully formatted.
   17/04/19 20:30:38 WARN common.Util: Path /opt/hadoop/hadoop-2.7.3/namenodedir should be specified as a URI in configuration files. Please update hdfs configuration.
   17/04/19 20:30:38 WARN common.Util: Path /opt/hadoop/hadoop-2.7.3/namenodedir should be specified as a URI in configuration files. Please update hdfs configuration.
   17/04/19 20:30:39 INFO namenode.TransferFsImage: Opening connection to http://hadoop1:50070/imagetransfer?getimage=1&txid=256&storageInfo=-63:51947955:0:CID-3adccc69-45b5-4b44-81b6-70ab593cc1ed
   17/04/19 20:30:39 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
   17/04/19 20:30:39 INFO namenode.TransferFsImage: Transfer took 0.00s at 500.00 KB/s
   17/04/19 20:30:39 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000256 size 2780 bytes.
   17/04/19 20:30:39 INFO util.ExitUtil: Exiting with status 0
   17/04/19 20:30:39 INFO namenode.NameNode: SHUTDOWN_MSG:
   /************************************************************
   SHUTDOWN_MSG: Shutting down NameNode at hadoop2/172.18.0.12
   ************************************************************/

This time it works!
Remember to answer Y to the prompt "Re-format filesystem in Storage Directory /opt/hadoop/hadoop-2.7.3/namenodedir ? (Y or N)".

6. Start the standby NameNode

$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode

This time the startup succeeds.

So the correct order is: start the JournalNodes -> initialize the edit log -> start the original NameNode -> bootstrap the standby -> start the standby. The full sequence is sketched below.
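Putting the commands from the steps above together (run each one on the node named in the comment):

# 1. on hadoop4, hadoop5, hadoop6: start the JournalNodes
hadoop-daemon.sh start journalnode

# 2. on the original NameNode (hadoop1): copy the local edit log to the JournalNodes
hdfs namenode -initializeSharedEdits

# 3. on hadoop1: start the original NameNode
$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode

# 4. on the standby (hadoop2): copy the namespace from the running NameNode
hdfs namenode -bootstrapStandby

# 5. on hadoop2: start the standby NameNode
$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode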

7. Check the HA web pages:

Both NameNodes show up as standby!
So let's switch manually and make one of them active; here we fail over from nn1 to nn2:

[hdfs@hadoop2 hadoop]$ hdfs haadmin -failover --forceactive nn1 nn2
17/04/19 20:51:59 WARN ha.FailoverController: Service is not ready to become active, but forcing: The NameNode is in safe mode. The reported blocks 0 needs additional 16 blocks to reach the threshold 0.9990 of total blocks 16.
The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
Failover from nn1 to nn2 successful

This makes nn2 the active NameNode.

To verify that sshfence works correctly, you can fail over back and forth a few times, as in the sketch below.
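For example, query which NameNode is currently active with hdfs haadmin and then fail back (a sketch; in manual-HA mode each failover goes through the configured sshfence method):

# check the current state of both NameNodes
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# fail back from nn2 to nn1
hdfs haadmin -failover nn2 nn1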

4. Configuring automatic HA

4.1 Stop the cluster

On both NameNodes:

$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode

On all DataNodes:

$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode

4.2 Modify the configuration files

1. In hdfs-site.xml:

<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>

2. In core-site.xml:

<property>
  <name>ha.zookeeper.quorum</name>
  <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
</property>

Distribute these files to every node.
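Before formatting the ZKFC znode in the next step, it can help to confirm that the ZooKeeper quorum is reachable from the NameNodes. A small sketch using the ruok four-letter command (assumes nc is installed; a healthy server answers imok):

for host in hadoop1 hadoop2 hadoop3; do
  echo -n "$host: "
  echo ruok | nc $host 2181
  echo
done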

4.3 Start HA

1. Format the ZKFC znode in ZooKeeper
On either NameNode, run:

[hdfs@hadoop1 hadoop]$ hdfs zkfc -formatZK
...........
17/04/19 21:09:09 INFO zookeeper.ClientCnxn: Socket connection established to hadoop2/172.18.0.12:2181, initiating session
17/04/19 21:09:09 INFO zookeeper.ClientCnxn: Session establishment complete on server hadoop2/172.18.0.12:2181, sessionid = 0x25b6d24ae630000, negotiated timeout = 5000
17/04/19 21:09:09 INFO ha.ActiveStandbyElector: Session connected.
17/04/19 21:09:10 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/dockercluster in ZK.
17/04/19 21:09:10 INFO zookeeper.ZooKeeper: Session: 0x25b6d24ae630000 closed
17/04/19 21:09:10 INFO zookeeper.ClientCnxn: EventThread shut down

2. Start the ZKFC
On each NameNode, run:

$HADOOP_HOME/sbin/hadoop-daemon.sh --script $HADOOP_HOME/bin/hdfs start zkfc

Note: if passwordless SSH login is configured, you can simply start the whole cluster with start-dfs.sh.

3. Start the NameNodes

$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode

4. Check the status in the web UI


4.4 Test automatic failover

On hadoop1, simulate a failure by killing a process directly (in the listing below the DFSZKFailoverController for nn1 is killed):

[hdfs@hadoop1 hadoop]$ jps
5588 NameNode
5501 DFSZKFailoverController
5695 Jps
[hdfs@hadoop1 hadoop]$ kill 5501

Check the web UI again:

Automatic failover works as expected! A command-line check is sketched below.
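To double-check from the command line (a sketch using the same hdfs haadmin queries as before; nn1 may report standby or be unreachable depending on how it was fenced) and bring the killed failover controller back:

# nn2 should now report "active"
hdfs haadmin -getServiceState nn2
hdfs haadmin -getServiceState nn1

# on hadoop1: restart the killed DFSZKFailoverController
$HADOOP_HOME/sbin/hadoop-daemon.sh --script $HADOOP_HOME/bin/hdfs start zkfc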





Reposted from: https://www.cnblogs.com/skyrim/p/6735990.html
