Set up a Hadoop distributed cluster on virtual machines, with one VM as the master and two VMs as slaves, and bring up the HDFS file system and YARN.
Tools
Linux image (ubuntu-12.04-desktop-amd64.iso), virtual machine software (VMware Workstation)
JDK package (jdk-8u121-linux-x64.tar.gz), Hadoop package (hadoop-2.7.3.tar.gz)
Steps
1. Install a 64-bit Linux virtual machine.
2. Install the JDK
2.1 Extract jdk-8u121-linux-x64.tar.gz into /opt/:
tar zxvf jdk-8u121-linux-x64.tar.gz -C /opt/
2.2 Configure the JDK environment variables
Add the following to /etc/profile, then run source /etc/profile to apply the changes.
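A minimal sketch of the additions, assuming the archive extracts to /opt/jdk1.8.0_121 (check the actual directory name under /opt/ on your system):
export JAVA_HOME=/opt/jdk1.8.0_121
export PATH=$JAVA_HOME/bin:$PATH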
2.3 Verify that the configuration succeeded:
java -version
3. Clone the Linux VM twice: the original VM acts as master and the two clones act as slaves.
4. Change the hostnames of the three VMs
4.1 Edit /etc/hostname on slave1, change its content to slave1, then reboot for the change to take effect.
4.2 Edit /etc/hostname on slave2, change its content to slave2, then reboot for the change to take effect.
4.3 Edit /etc/hostname on master, change its content to master, then reboot for the change to take effect.
5. Add the IP addresses and hostnames of all three VMs to the /etc/hosts file on each machine, as sketched below.
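A sketch of the entries; master's address (192.168.232.139) comes from step 10, while the slave addresses here are placeholders, so substitute the IPs your VMs actually received (check with ifconfig):
192.168.232.139 master
192.168.232.140 slave1
192.168.232.141 slave2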
6. Set up passwordless SSH login. On all three VMs, run the following:
sudo apt-get install openssh-server
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cd .ssh/
cat id_dsa.pub >> authorized_keys
ssh localhost
exit
Then, on slave1 and slave2 (still inside ~/.ssh/), run:
scp hadoop@master:~/.ssh/id_dsa.pub ./master_dsa.pub
cat master_dsa.pub >> authorized_keys
Once this is done, master can log in to slave1 and slave2 over SSH without a password.
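To confirm, run the following on master; both logins should succeed without a password prompt (the very first connection may still ask you to accept the host key):
ssh slave1
exit
ssh slave2
exit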
7. Install Hadoop on master: extract the archive into /home/hadoop/ and rename the extracted directory to hadoop.
tar zxvf hadoop-2.7.3.tar.gz -C /home/hadoop/
cd /home/hadoop && mv hadoop-2.7.3 hadoop
8. Go to the Hadoop configuration directory, /home/hadoop/hadoop/etc/hadoop, and edit the following configuration files.
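All files in this step live in that directory, so change into it first:
cd /home/hadoop/hadoop/etc/hadoop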
8.1 Edit hadoop-env.sh
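hadoop-env.sh needs JAVA_HOME set explicitly, since the value from /etc/profile is not inherited when Hadoop starts daemons over SSH. Assuming the JDK path from step 2, change its export line to:
export JAVA_HOME=/opt/jdk1.8.0_121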
8.2 Edit yarn-env.sh
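yarn-env.sh usually ships with a commented-out JAVA_HOME line; uncomment it and point it at the same JDK path as above:
export JAVA_HOME=/opt/jdk1.8.0_121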
8.3 Edit core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
    <final>true</final>
  </property>
</configuration>
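Since hadoop.tmp.dir points at /home/hadoop/hadoop/tmp, creating the directory up front on each node avoids any doubt about whether Hadoop can create it later:
mkdir -p /home/hadoop/hadoop/tmp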
8.4 Edit hdfs-site.xml
<configuration>
  <property>
    <name>dfs.http.address</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
8.5 Edit mapred-site.xml
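In Hadoop 2.7.3 this file ships only as a template; if mapred-site.xml does not exist yet, create it from the template first:
cp mapred-site.xml.template mapred-site.xml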
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.map.tasks</name>
    <value>20</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>4</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
8.6 Edit yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
8.7 Edit slaves
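The slaves file lists the worker hostnames, one per line; with the hostnames set in step 4 it should contain:
slave1
slave2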
9. Copy the configured Hadoop directory to slave1 and slave2:
scp -r ~/hadoop hadoop@slave1:~/
scp -r ~/hadoop hadoop@slave2:~/
10. Start and verify
On master, format the NameNode (the commands in this step are run from the Hadoop home directory, /home/hadoop/hadoop):
./bin/hdfs namenode -format
Start HDFS: ./sbin/start-dfs.sh
At this point, the processes running on master are: NameNode, SecondaryNameNode
and on slave1 and slave2: DataNode
Start YARN: ./sbin/start-yarn.sh
Now the processes running on master are: NameNode, SecondaryNameNode, ResourceManager
and on slave1 and slave2: DataNode, NodeManager
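You can confirm this on each machine with the JDK's jps tool, which lists the running Java processes by name; the output should match the process lists above:
jps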
Browse the HDFS web UI: http://192.168.232.139:50070 (master's IP address). The YARN web UI is likewise at http://192.168.232.139:8088, per yarn.resourcemanager.webapp.address in step 8.6.
Stop Hadoop: ./sbin/stop-all.sh (the script is deprecated in Hadoop 2.x; running ./sbin/stop-dfs.sh followed by ./sbin/stop-yarn.sh does the same job).