NN:  node01
SNN: node02
DN:  node02, node03, node04
node01 was already configured during the pseudo-distributed setup, so first install the JDK on node02, node03, and node04.
Then adjust the pseudo-distributed configuration on node01 and copy it out to node02, node03, and node04.
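The JDK installation itself is not shown above; a minimal sketch, assuming an RPM package is used (the package file name and JAVA_HOME path below are illustrative — adjust them to whatever was installed on node01):

```shell
# On each of node02, node03, node04 — package name and JAVA_HOME are illustrative
rpm -ivh /opt/jdk-8u181-linux-x64.rpm
echo 'export JAVA_HOME=/usr/java/default' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
source /etc/profile
java -version   # confirm the install before continuing
```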
1. Edit etc/hadoop/core-site.xml (under the Hadoop install directory):
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hx/hadoop/full</value>
  </property>
</configuration>
2. Edit etc/hadoop/slaves (one DataNode hostname per line):
node02
node03
node04
3. Edit etc/hadoop/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node02:50090</value>
  </property>
</configuration>
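With both files edited, the effective values can be sanity-checked locally (assuming the hdfs command is on the PATH); no daemons need to be running for this:

```shell
# Print the resolved configuration values; they should match the XML above
hdfs getconf -confKey fs.defaultFS        # hdfs://node01:9000
hdfs getconf -confKey dfs.replication     # 3
hdfs getconf -secondaryNameNodes          # node02
```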
4. Distribute the Hadoop package to the other nodes:
cd /opt/
scp -r ./hx/ root@192.168.220.12:/opt/
scp -r ./hx/ root@192.168.220.13:/opt/
scp -r ./hx/ root@192.168.220.14:/opt/
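A quick hedge against a partial copy is to list the distributed directory on each node and compare it with node01:

```shell
# Compare the top-level contents of /opt/hx on every node
for h in 192.168.220.12 192.168.220.13 192.168.220.14; do
  echo "== $h =="
  ssh root@$h "ls /opt/hx"
done
```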
5. Set up passwordless SSH:
First SSH from node01 to node02, node03, and node04 once each, so that the ~/.ssh directory is created on every node (answer "yes" at the host-key prompt, then exit):
ssh node02
ssh node03
ssh node04
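The next steps assume a DSA key pair already exists on node01 from the pseudo-distributed setup; if it does not, one can be generated as below (note that recent OpenSSH releases have dropped DSA support, in which case an RSA key works the same way):

```shell
# Generate the key pair on node01 if ~/.ssh/id_dsa does not exist yet
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
# node01 also needs passwordless SSH to itself, since start-dfs.sh uses ssh
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
```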
On node01, from the ~/.ssh directory, copy the public key to each node:
scp ./id_dsa.pub root@192.168.220.12:`pwd`/node01.pub
scp ./id_dsa.pub root@192.168.220.13:`pwd`/node01.pub
scp ./id_dsa.pub root@192.168.220.14:`pwd`/node01.pub
Then, in ~/.ssh on each of node02, node03, and node04, append the key to the authorized list:
cat node01.pub >> authorized_keys
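Before continuing, it is worth verifying from node01 that login is now passwordless; each command should print the remote hostname without prompting:

```shell
ssh node02 hostname
ssh node03 hostname
ssh node04 hostname
```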
6. Format the NameNode (on node01):
hdfs namenode -format
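If the format succeeded, the NameNode metadata directory will have been created under the hadoop.tmp.dir path configured above, and its VERSION file records the new clusterID:

```shell
# Inspect the freshly formatted NameNode metadata (path follows hadoop.tmp.dir)
cat /var/hx/hadoop/full/dfs/name/current/VERSION
```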
7. Start the NameNode, SecondaryNameNode, and DataNodes (on node01):
start-dfs.sh
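A simple way to confirm all three roles came up is to run jps on each node, plus an overall report from the NameNode; the expected processes follow the role layout at the top (NN on node01, SNN on node02, DN on node02-04):

```shell
jps                      # on node01: expect NameNode
ssh node02 jps           # expect SecondaryNameNode and DataNode
ssh node03 jps           # expect DataNode
hdfs dfsadmin -report    # live DataNode count should be 3
```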