1. Install the required RPM packages
yum install -y hadoop-hdfs-namenode hadoop-hdfs-zkfc.x86_64
2. Modify the configuration
The files that need changes are core-site.xml and hdfs-site.xml.
2.1 hdfs-site.xml
In the value below, CloudTestNameNode3 is the newly added nameservice; the existing entries stay unchanged:
<property>
<name>dfs.nameservices</name>
<value>CloudTestNameNode,CloudTestNameNode2,CloudTestNameNode3</value>
</property>
All of the following properties are newly added:
<property>
<name>dfs.ha.namenodes.CloudTestNameNode3</name>
<value>nn5,nn6</value>
</property>
<property>
<name>dfs.namenode.rpc-address.CloudTestNameNode3.nn5</name>
<value>rsync.testnamenode0301.clouddev.user.nop.sogou-op.org:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.CloudTestNameNode3.nn6</name>
<value>rsync.testnamenode0302.clouddev.user.nop.sogou-op.org:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.CloudTestNameNode3.nn5</name>
<value>rsync.testnamenode0301.clouddev.user.nop.sogou-op.org:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.CloudTestNameNode3.nn6</name>
<value>rsync.testnamenode0302.clouddev.user.nop.sogou-op.org:50070</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.CloudTestNameNode3</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
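The per-nameservice keys above follow a fixed naming pattern: `dfs.ha.namenodes.<nameservice>` lists the NameNode IDs, and each ID needs a matching rpc-address and http-address key. A minimal sketch (plain Python, with a hypothetical `missing_ha_keys` helper; not part of Hadoop) that checks such a property map for completeness:

```python
# Property values mirror the hdfs-site.xml snippet above.
props = {
    "dfs.ha.namenodes.CloudTestNameNode3": "nn5,nn6",
    "dfs.namenode.rpc-address.CloudTestNameNode3.nn5":
        "rsync.testnamenode0301.clouddev.user.nop.sogou-op.org:8020",
    "dfs.namenode.rpc-address.CloudTestNameNode3.nn6":
        "rsync.testnamenode0302.clouddev.user.nop.sogou-op.org:8020",
    "dfs.namenode.http-address.CloudTestNameNode3.nn5":
        "rsync.testnamenode0301.clouddev.user.nop.sogou-op.org:50070",
    "dfs.namenode.http-address.CloudTestNameNode3.nn6":
        "rsync.testnamenode0302.clouddev.user.nop.sogou-op.org:50070",
}

def missing_ha_keys(props, nameservice):
    """Return the rpc/http address keys absent for any NameNode
    listed under dfs.ha.namenodes.<nameservice>."""
    nns = props[f"dfs.ha.namenodes.{nameservice}"].split(",")
    wanted = [
        f"dfs.namenode.{kind}-address.{nameservice}.{nn.strip()}"
        for nn in nns
        for kind in ("rpc", "http")
    ]
    return [k for k in wanted if k not in props]

print(missing_ha_keys(props, "CloudTestNameNode3"))  # []
```

An empty list means every NameNode ID declared for the nameservice has both addresses configured.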
2.2 core-site.xml
Add a mount-table link that maps a directory onto the new nameservice:
<property>
<name>fs.viewfs.mounttable.nsX.link./third</name>
<value>hdfs://CloudTestNameNode3/third</value>
</property>
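With the link above, a ViewFs client rewrites any path under the mount point `/third` to the target `hdfs://CloudTestNameNode3/third`. A small sketch of that resolution (illustrative only; the real work is done by Hadoop's ViewFileSystem, and `resolve` is a hypothetical helper):

```python
# Mount table mirroring the core-site.xml link above.
MOUNT_TABLE = {"/third": "hdfs://CloudTestNameNode3/third"}

def resolve(path):
    """Map a viewfs path to its backing hdfs:// URI via the
    longest matching mount point."""
    for mount, target in sorted(MOUNT_TABLE.items(),
                                key=lambda kv: len(kv[0]),
                                reverse=True):
        if path == mount or path.startswith(mount + "/"):
            return target + path[len(mount):]
    raise KeyError(f"no mount covers {path}")

print(resolve("/third/data"))  # hdfs://CloudTestNameNode3/third/data
```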
3. Format HDFS and create the directory
Format HDFS on the newly added HA pair. (Typically only the first NameNode is formatted; the standby is then initialized with hdfs namenode -bootstrapStandby.)
3.1 Format HDFS. Note: the clusterId passed below must be the clusterId of the existing cluster:
hdfs namenode -format -clusterId CID-31da1e8e-2d02-4493-bf31-df55447f9e67
3.2 Create the directory
The directory configured in step 2.2 must be created manually by an administrator:
hdfs dfs -mkdir hdfs://CloudTestNameNode3/third
4. Start the services
service hadoop-hdfs-namenode start
service hadoop-hdfs-zkfc start