1. Download hadoop-0.20.2
wget http://mirror.bjtu.edu.cn/apache/hadoop/core/stable/hadoop-0.20.2.tar.gz
2. Edit /etc/hosts and add an IP-to-hostname mapping for every node
192.168.221.174 h1
192.168.221.175 h2
192.168.221.176 h3
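The mappings above can be scripted; a minimal sketch (idempotent: each entry is appended only if the hostname is not already present; HOSTS_FILE defaults to a scratch file for a safe dry run, so set it to /etc/hosts on the real nodes):

```shell
#!/bin/sh
# Append a node's IP-hostname mapping unless the hostname is already in the
# file. HOSTS_FILE defaults to a scratch copy for a dry run; set
# HOSTS_FILE=/etc/hosts when running on the cluster nodes.
HOSTS_FILE="${HOSTS_FILE:-./hosts.test}"
touch "$HOSTS_FILE"

add_mapping() {
    # $1 = IP address, $2 = hostname
    grep -q "[[:space:]]$2\$" "$HOSTS_FILE" || echo "$1 $2" >> "$HOSTS_FILE"
}

add_mapping 192.168.221.174 h1
add_mapping 192.168.221.175 h2
add_mapping 192.168.221.176 h3
```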
3. Edit the configuration files under conf/ in the Hadoop installation directory
(1) In hadoop-env.sh, set JAVA_HOME and name the Hadoop instance:
export JAVA_HOME=/usr/java/jdk1.6.0_10
export HADOOP_IDENT_STRING=myhadoop
(2) In masters, add the master node:
h1
(3) In slaves, add the data nodes:
h1
h2
h3
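Equivalently, the two files can be written from the shell (a sketch; CONF_DIR defaults to a scratch directory here so it can be dry-run anywhere, and should point at the real conf/ directory on the master):

```shell
#!/bin/sh
# Write the masters and slaves files. CONF_DIR defaults to a scratch
# directory for a dry run; set CONF_DIR=/home/hadoop/hadoop-0.20.2/conf
# on the real master node.
CONF_DIR="${CONF_DIR:-./conf.test}"
mkdir -p "$CONF_DIR"

echo h1 > "$CONF_DIR/masters"                # master node, as in (2)
printf '%s\n' h1 h2 h3 > "$CONF_DIR/slaves"  # data nodes, as in (3)
```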
(4) In core-site.xml, add the following properties (fs.trash.interval is in minutes, fs.checkpoint.period in seconds):
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data0/hadoop</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://h1:9000</value>
</property>
<property>
  <name>fs.trash.interval</name>
  <value>20</value>
</property>
<property>
  <name>fs.checkpoint.period</name>
  <value>300</value>
  <description>The number of seconds between two periodic checkpoints.</description>
</property>
(5) In hdfs-site.xml, add the following:
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
(6) In mapred-site.xml, add the following:
<property>
  <name>mapred.job.tracker</name>
  <value>h1:9001</value>
</property>
<property>
  <name>mapred.map.tasks</name>
  <value>40</value>
</property>
<property>
  <name>mapred.reduce.tasks</name>
  <value>10</value>
</property>
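After editing the three files, a quick well-formedness check helps catch a dropped closing tag (a sketch using grep only, run from the conf directory; a real XML validator such as xmllint is stricter):

```shell
#!/bin/sh
# Report whether <property> and </property> counts match in a conf file.
check_conf() {
    # $1 = path to a *-site.xml file
    open=$(grep -c '<property>' "$1")
    close=$(grep -c '</property>' "$1")
    [ "$open" -eq "$close" ]
}

for f in core-site.xml hdfs-site.xml mapred-site.xml; do
    [ -f "$f" ] || continue
    if check_conf "$f"; then
        echo "$f: OK ($open properties)"
    else
        echo "$f: <property> tag mismatch"
    fi
done
```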
4. Set up passwordless SSH login
On one of the machines, generate a key pair as root, pressing Enter at every prompt:
/usr/bin/ssh-keygen
Two files are created in /root/.ssh/:
id_rsa (private key) and id_rsa.pub (public key)
Build the authorization file: cat id_rsa.pub >> authorized_keys
chmod 600 id_rsa id_rsa.pub authorized_keys
Distribute these three files to all of the Hadoop machines.
Note: the .ssh directory must be mode 700 and id_rsa mode 600.
In /etc/ssh/ssh_config, change
# StrictHostKeyChecking ask
to
StrictHostKeyChecking no
Distribute this file to all of the Hadoop machines as well; the SSH server does not need to be restarted.
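The key setup above as a script (a sketch: SSH_DIR is overridable so it can be tried without touching /root/.ssh, and the distribution loop only prints the scp commands; drop the echo to actually copy):

```shell
#!/bin/sh
# Generate an RSA key pair, authorize it locally, and fix permissions.
# SSH_DIR defaults to a scratch directory; use /root/.ssh on the cluster.
SSH_DIR="${SSH_DIR:-./ssh.test}"
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"

# -N "" gives an empty passphrase (the "press Enter at every prompt" above).
[ -f "$SSH_DIR/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$SSH_DIR/id_rsa"
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/id_rsa" "$SSH_DIR/id_rsa.pub" "$SSH_DIR/authorized_keys"

# Dry run of the distribution step; remove "echo" to copy for real.
for host in h2 h3; do
    echo scp "$SSH_DIR/id_rsa" "$SSH_DIR/id_rsa.pub" "$SSH_DIR/authorized_keys" "root@$host:/root/.ssh/"
done
```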
5. Copy the configured Hadoop installation directory to the same path on every node
for loop in 4 5 6; do rsync -av --delete /home/hadoop/* root@192.168.221.17$loop:/home/hadoop/ ; done
6. Add environment variables to /etc/profile (the $HIVE_HOME and $DERBY_HOME entries in PATH below only matter if Hive/Derby are also installed):
export JAVA_HOME="/usr/java/jdk1.6.0_10"
export HADOOP_HOME="/home/hadoop/hadoop-0.20.2"
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$DERBY_HOME/bin
7. Set each host's hostname; MapReduce uses the hostname to identify hosts for data transfer
vi /etc/sysconfig/network
Run the hostname command (e.g. hostname h1) to apply the setting immediately
8. Copy the fair scheduler jar into lib/, then format the NameNode
cd /home/hadoop/hadoop-0.20.2; cp contrib/fairscheduler/hadoop-0.20.2-fairscheduler.jar lib/
bin/hadoop namenode -format
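After formatting, the cluster can be brought up and checked; a sketch that prints the remaining commands as a dry run (start-all.sh and jps are standard; which daemons appear on h1 depends on the masters/slaves files above):

```shell
#!/bin/sh
# Print the remaining bring-up commands for review (a dry run); pipe the
# output to sh on h1 to execute them for real. start-all.sh launches the
# NameNode/JobTracker locally and the DataNodes/TaskTrackers on the hosts
# listed in slaves; jps (from the JDK) lists the running Java daemons.
bringup_commands() {
    cat <<'EOF'
cd /home/hadoop/hadoop-0.20.2
bin/start-all.sh
jps
EOF
}
bringup_commands
```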