Use NAT networking for the VMs and create three of them.
Then set each VM's hostname:
hadoop000  192.168.200.100  NN RM DN NM
hadoop001  192.168.200.101  DN NM
hadoop002  192.168.200.102  DN NM
(NN = NameNode, RM = ResourceManager, DN = DataNode, NM = NodeManager)
Map IPs to hostnames in /etc/hosts on every node:
192.168.200.100 hadoop000
192.168.200.101 hadoop001
192.168.200.102 hadoop002
192.168.200.100 localhost
Prerequisite: install SSH, then set up passwordless login.
Generate a key pair on every node: ssh-keygen -t rsa
On hadoop000, copy its public key to all three machines:
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop000
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop001
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop002
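A quick check that passwordless login actually works from hadoop000 (a sketch, assuming the same hadoop user exists on all three nodes):

```shell
# Each command should print the remote hostname without a password prompt.
for host in hadoop000 hadoop001 hadoop002; do
  ssh "$host" hostname
done
```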
JDK installation
1) Install the JDK on hadoop000 first
2) Add the JDK bin directory to the system environment variables
3) Copy the JDK to the other nodes (working from hadoop000)
vim .bash_profile
export JAVA_HOME=/home/hadoop/app/jdk1.8.0_221
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.15.1
export PATH=$HADOOP_HOME/bin:$PATH
source .bash_profile
scp -r jdk1.8.0_221 hadoop@hadoop001:~/app/
scp -r jdk1.8.0_221 hadoop@hadoop002:~/app/
(directory name corrected to match the JAVA_HOME set above)
scp ~/.bash_profile hadoop@hadoop001:~/
scp ~/.bash_profile hadoop@hadoop002:~/
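The copied .bash_profile only takes effect once it is sourced on each machine; a hedged sketch of verifying the JDK on every node (assumes the layout above):

```shell
# Force-load the profile and print the Java version on each host.
for host in hadoop000 hadoop001 hadoop002; do
  ssh "$host" 'source ~/.bash_profile && java -version'
done
```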
Hadoop deployment
cd app/hadoop-2.6.0-cdh5.15.1/etc/hadoop
vim hadoop-env.sh
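The note doesn't show what changes in hadoop-env.sh; the usual edit is to hard-code JAVA_HOME, because daemons launched over ssh don't always inherit the login environment. A minimal fragment, matching the path from .bash_profile above:

```shell
# etc/hadoop/hadoop-env.sh
export JAVA_HOME=/home/hadoop/app/jdk1.8.0_221
```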
vim core-site.xml (this and the following properties all go inside the <configuration> element)
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop000:8020</value>
</property>
vim hdfs-site.xml
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/app/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/app/tmp/dfs/data</value>
</property>
vim yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop000</value>
</property>
vim mapred-site.xml (if only mapred-site.xml.template exists, copy it to mapred-site.xml first)
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
vim slaves
List the DN/NM hosts one per line; per the role plan above, hadoop000 also runs DN/NM, so all three hostnames go in.
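The slaves file content, assuming the role plan above (where hadoop000 also runs DN/NM):

```
hadoop000
hadoop001
hadoop002
```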
Distribute Hadoop to the other machines:
scp -r hadoop-2.6.0-cdh5.15.1 hadoop@hadoop001:~/app/
scp -r hadoop-2.6.0-cdh5.15.1 hadoop@hadoop002:~/app/
scp ~/.bash_profile hadoop@hadoop001:~/
scp ~/.bash_profile hadoop@hadoop002:~/
Format HDFS on hadoop000 (only once, before the first start): hadoop namenode -format
Start the cluster from $HADOOP_HOME/sbin on hadoop000:
./start-dfs.sh
./start-yarn.sh
Run jps on each node to check the processes.
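Roughly what jps should report per node if startup succeeded (based on the role plan above; a SecondaryNameNode typically also starts on hadoop000 under the default configuration):

```shell
# on hadoop000: NameNode, DataNode, ResourceManager, NodeManager,
#               SecondaryNameNode, Jps
# on hadoop001 / hadoop002: DataNode, NodeManager, Jps
jps
```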
Open http://192.168.200.100:50070 to check the HDFS web UI (the YARN ResourceManager UI is on port 8088).
Success!