There are six CentOS machines available, with IPs 192.168.1.121, 192.168.1.125, 192.168.1.160, 192.168.1.157, 192.168.1.158, and 192.168.1.150. 192.168.1.160 is planned as the NameNode and the other five as DataNodes. The cluster is built with the following steps:
1. First, make sure the JDK is installed and JAVA_HOME is configured on every machine (installation steps omitted here)
2. Edit the hosts file on all six machines: vim /etc/hosts
192.168.1.160 hadoop1
192.168.1.158 hadoop2
192.168.1.157 hadoop3
192.168.1.150 hadoop4
192.168.1.121 hadoop5
192.168.1.125 hadoop6
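The same six entries must exist on every machine. A small helper script can write them once and push them out; this is a sketch, and the loop over the other five IPs is an assumption about how you distribute files (any method works):

```shell
# Write the six entries once (names/IPs exactly as in the table above).
cat > /tmp/hadoop-hosts <<'EOF'
192.168.1.160 hadoop1
192.168.1.158 hadoop2
192.168.1.157 hadoop3
192.168.1.150 hadoop4
192.168.1.121 hadoop5
192.168.1.125 hadoop6
EOF
# Append locally (needs root), then push the same block to the other nodes:
#   cat /tmp/hadoop-hosts >> /etc/hosts
#   for ip in 192.168.1.158 192.168.1.157 192.168.1.150 192.168.1.121 192.168.1.125; do
#       ssh root@$ip "cat >> /etc/hosts" < /tmp/hadoop-hosts
#   done
```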
3. Configure passwordless SSH login
vim /etc/ssh/sshd_config
Find the following lines and remove the "#" comment character in front of them:
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
If the configuration file was changed, restart the sshd service (requires root):
service sshd restart
As an example, set up passwordless login between 192.168.1.160 and 192.168.1.150:
1) On 160, run ssh-keygen -t rsa; this generates two files, id_rsa and id_rsa.pub, in /root/.ssh
2) cd /root/.ssh/
3) cat id_rsa.pub >> authorized_keys
4) On 150, run ssh-keygen -t rsa, which likewise generates id_rsa and id_rsa.pub in /root/.ssh on 150; then repeat steps 2) and 3) there
5) On 160, run scp id_rsa.pub root@192.168.1.150:/root/.ssh/h160.pub
6) On 150, run cat h160.pub >> authorized_keys
7) Now ssh 192.168.1.150 from 160 no longer asks for a password
8) Repeat steps 5)-7) in the other direction (from 150 to 160) so the passwordless login works both ways
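As an alternative to steps 1)-8), ssh-copy-id (shipped with OpenSSH on CentOS) performs the same generate-and-append dance in fewer commands; a sketch:

```shell
# Generate a key pair if one does not exist yet (empty passphrase for brevity).
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q
# Append the local public key to the remote authorized_keys (asks for the
# password once); repeat for each of the other five machines.
ssh-copy-id root@192.168.1.150
# Should now log in without prompting for a password:
ssh root@192.168.1.150 hostname
```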
4. Create the directory /home/hadoopCluster on all six machines and upload the Hadoop tarball to /home/hadoopCluster, then:
1) tar -zxvf hadoop-2.5.2.tar.gz
2) cd /home/hadoopCluster/hadoop-2.5.2/etc/hadoop
3) vim core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>4096</value>
</property>
</configuration>
Note: there must be no space before hdfs: in the fs.defaultFS value, otherwise startup may fail
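A leading space before hdfs:// is easy to miss by eye; a quick grep-based sanity check (a hypothetical helper, not part of Hadoop) catches it before you distribute the file:

```shell
# Flag any <value> whose content starts with whitespace before hdfs://;
# older Hadoop versions do not trim this value, so it breaks startup.
if grep -nE '<value>[[:space:]]+hdfs://' core-site.xml; then
    echo "WARNING: leading whitespace before hdfs:// in core-site.xml" >&2
fi
```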
4) vim yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop1:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop1:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop1:8088</value>
</property>
</configuration>
5) vim mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>hadoop1:50030</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop1:19888</value>
</property>
</configuration>
6) vim hdfs-site.xml
<configuration>
<property>
<name>dfs.nameservices</name>
<value>hadoop-cluster1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop1:50090</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///home/hadoopCluster/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///home/hadoopCluster/dfs/data</value>
</property>
<property>
<name>dfs.namenode.rpc-address</name>
<value>hadoop1:9000</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
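One file the steps above do not mention: in Hadoop 2.x, start-dfs.sh and start-yarn.sh read etc/hadoop/slaves to decide which machines should run the DataNode and NodeManager daemons. Based on the plan in this guide, that file on hadoop1 would hold the five worker hostnames, one per line:

```
hadoop2
hadoop3
hadoop4
hadoop5
hadoop6
```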
7) On 160, run scp -r /home/hadoopCluster/* root@192.168.1.150:/home/hadoopCluster, and copy to the other four machines the same way
8) Enter the hadoop-2.5.2 directory on 160 and run ./bin/hdfs namenode -format; this initializes the HDFS metadata and only needs to be run once, on the NameNode
9) Start HDFS (from the hadoop-2.5.2 directory on 160): sbin/start-dfs.sh
10) Start YARN: sbin/start-yarn.sh
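If both scripts report success, each daemon shows up in jps (ships with the JDK); the expected process lists below assume the role split described at the top of this guide:

```shell
# On hadoop1 (NameNode) expect: NameNode, SecondaryNameNode, ResourceManager
# On each DataNode expect:      DataNode, NodeManager
jps
# Cluster-wide view of live DataNodes, run from the NameNode:
./bin/hdfs dfsadmin -report
```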
11) Visit http://192.168.1.160:8088/ (the YARN ResourceManager web UI)
12) Visit http://192.168.1.160:50070/dfshealth.html#tab-datanode (the HDFS web UI; all five DataNodes should be listed as live)
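Because hdfs-site.xml above sets dfs.webhdfs.enabled to true, the NameNode also answers REST calls on the same 50070 port; a quick smoke test, assuming the cluster is up:

```shell
# List the HDFS root directory over WebHDFS; returns a JSON FileStatuses object.
curl "http://192.168.1.160:50070/webhdfs/v1/?op=LISTSTATUS"
```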
This completes the Hadoop cluster setup.