Deployment layout
The NameNode keeps its metadata in memory, so ideally it gets a node of its own.
node1: NameNode, DataNode
node2: DataNode
node3: DataNode
node4: SecondaryNameNode (SNN)
node2, node3, and node4 each need the JDK installed, and their HADOOP_HOME environment variable must match node1's.
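A minimal sketch of that environment setup, to run on each node (the JDK path is a placeholder; point JAVA_HOME at wherever your JDK actually lives):

```shell
# Run on node2, node3, and node4. JAVA_HOME below is a hypothetical path;
# HADOOP_HOME matches the scp target used later in these notes.
cat >> ~/.bashrc <<'EOF'
export JAVA_HOME=/usr/local/soft/jdk          # hypothetical JDK path
export HADOOP_HOME=/usr/local/soft/hadoop-2.6.5
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
. ~/.bashrc
```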
Modify the configuration files
Edit the following files on node1:
vim hdfs-site.xml
<configuration>
<!-- Set the replication factor to 2 -->
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<!-- Move the NameNode persistence directory off the default under /tmp, where files may be cleaned up -->
<property>
<name>dfs.namenode.name.dir</name>
<value>/data/DataOrLogs/hadoop/full/hadoop/local/dfs/name</value>
</property>
<!-- Move the DataNode persistence directory off the default under /tmp, where files may be cleaned up -->
<property>
<name>dfs.datanode.data.dir</name>
<value>/data/DataOrLogs/hadoop/full/hadoop/local/dfs/data</value>
</property>
<!-- Place the SecondaryNameNode on node4 -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node4:50090</value>
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>/data/DataOrLogs/hadoop/full/hadoop/local/dfs/secondary</value>
</property>
</configuration>
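Hadoop will create these directories itself on format/startup if permissions allow, but creating them up front surfaces permission problems early; a sketch using the paths from hdfs-site.xml:

```shell
# Create the persistence directories referenced in hdfs-site.xml.
# name/ is used on node1, data/ on every DataNode, secondary/ on node4;
# creating all three on each node is harmless.
BASE=/data/DataOrLogs/hadoop/full/hadoop/local/dfs
mkdir -p "$BASE/name" "$BASE/data" "$BASE/secondary"
```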
vim slaves
node1
node2
node3
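For scripted setups, the slaves file can equally be written without an editor (the conf path here assumes the standard 2.x layout under the install directory):

```shell
# Write the DataNode list directly. HADOOP_CONF is an assumption based on
# the standard Hadoop 2.x layout ($HADOOP_HOME/etc/hadoop).
HADOOP_CONF=/usr/local/soft/hadoop-2.6.5/etc/hadoop
mkdir -p "$HADOOP_CONF"        # no-op on a real install
printf '%s\n' node1 node2 node3 > "$HADOOP_CONF/slaves"
```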
Copy the installation to node2, node3, and node4 (target the parent directory, so a pre-existing hadoop-2.6.5 on the remote side doesn't produce a nested hadoop-2.6.5/hadoop-2.6.5):
scp -r hadoop-2.6.5/ root@node2:/usr/local/soft/
scp -r hadoop-2.6.5/ root@node3:/usr/local/soft/
scp -r hadoop-2.6.5/ root@node4:/usr/local/soft/
Start the cluster
Format the NameNode (run once, on node1 only; reformatting wipes the filesystem metadata):
hdfs namenode -format
Start HDFS:
start-dfs.sh
After startup, jps should show NameNode and DataNode on node1, DataNode on node2 and node3, and SecondaryNameNode on node4.