HDFS HA Cluster Setup (QJM)

Version: hadoop-2.7.7

# On all Hadoop nodes

vi hdfs-site.xml

<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>

<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>machine1.example.com:8020</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>machine2.example.com:8020</value>
</property>

<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>machine1.example.com:50070</value>
</property>

<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>machine2.example.com:50070</value>
</property>

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://node1.example.com:8485;node2.example.com:8485;node3.example.com:8485/mycluster</value>
</property>

<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/path/to/journal/node/local/data</value>
</property>

<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>

<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/exampleuser/.ssh/id_rsa</value>
</property>


vi core-site.xml 

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop/</value>
</property>
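
With fs.defaultFS pointing at the logical nameservice, clients never address a specific host; the ConfiguredFailoverProxyProvider configured above resolves mycluster to whichever NameNode is currently active. A small usage sketch, to run once the cluster is up:

bin/hdfs dfs -ls /                    # implicit, via fs.defaultFS
bin/hdfs dfs -ls hdfs://mycluster/    # explicit logical URI, same result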

# Passwordless SSH between the NameNodes (required by the sshfence method)

## A => B

On A:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

scp ~/.ssh/id_rsa.pub root@B:/opt

On B:

cat /opt/id_rsa.pub >> ~/.ssh/authorized_keys

## B => A

Same as above, with the roles of A and B swapped.
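
To verify the keys work (assuming the same root account used above), each NameNode should reach the other without a password prompt:

ssh root@B hostname    # run on A; should print B's hostname with no prompt
ssh root@A hostname    # run on B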

# Start the JournalNode daemon on every JournalNode host

sbin/hadoop-daemon.sh start journalnode
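
Before formatting anything, it is worth confirming each JournalNode is up and listening on the RPC port from the qjournal URI above (8485). A quick check using jps from the JDK (substitute ss -ltn if netstat is unavailable):

jps | grep JournalNode
netstat -lnt | grep 8485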

#1. If you are setting up a fresh HDFS cluster, you should first run the format command (hdfs namenode -format) on one of the NameNodes.

#2. If you have already formatted the NameNode, or are converting a non-HA-enabled cluster to be HA-enabled, you should now copy over the contents of your NameNode metadata directories to the other, unformatted NameNode by running the command "hdfs namenode -bootstrapStandby" on the unformatted NameNode. Running this command will also ensure that the JournalNodes (as configured by dfs.namenode.shared.edits.dir) contain sufficient edits transactions to be able to start both NameNodes.

#3. If you are converting a non-HA NameNode to be HA, you should run the command "hdfs namenode -initializeSharedEdits", which will initialize the JournalNodes with the edits data from the local NameNode edits directories.
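
For case 3 above (converting an existing non-HA NameNode), the initialization step looks like this, run on the existing NameNode after the JournalNodes are up; it is not needed for the fresh-cluster path followed below:

bin/hdfs namenode -initializeSharedEdits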

# On nn1

bin/hdfs namenode -format

sbin/hadoop-daemon.sh start namenode

# On nn2

bin/hdfs namenode -bootstrapStandby

sbin/hadoop-daemon.sh start namenode

# Manually transition nn1 to active

bin/hdfs haadmin -transitionToActive -forcemanual nn1
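
transitionToActive performs no fencing, so confirm the resulting states before continuing:

bin/hdfs haadmin -getServiceState nn1    # expect: active
bin/hdfs haadmin -getServiceState nn2    # expect: standby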

# On all DataNodes

sbin/hadoop-daemon.sh start datanode
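
An optional end-to-end smoke test; the source file path here is an assumption relative to the Hadoop install directory, so adjust to your layout:

bin/hdfs dfs -mkdir -p /tmp/smoke
bin/hdfs dfs -put etc/hadoop/core-site.xml /tmp/smoke/
bin/hdfs dfs -cat /tmp/smoke/core-site.xml
bin/hdfs dfsadmin -report    # all DataNodes should be listed as live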


# Configuring automatic failover (optional)

vi hdfs-site.xml

<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>

vi core-site.xml

<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>

[hdfs]$ $HADOOP_PREFIX/bin/hdfs zkfc -formatZK

sbin/start-dfs.sh

# With automatic failover enabled, start-dfs.sh starts a ZKFC on each NameNode host automatically; if you manage daemons by hand, start it yourself:

[hdfs]$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --script $HADOOP_PREFIX/bin/hdfs start zkfc
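
To exercise automatic failover, the Apache HA guide suggests killing the active NameNode and watching the standby take over within a few seconds (detection time is governed by the ZooKeeper session timeout). A sketch, with <namenode-pid> as a placeholder:

jps | grep NameNode                      # on the active host, note the pid
kill -9 <namenode-pid>
bin/hdfs haadmin -getServiceState nn2    # shortly afterwards: active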
