I installed Hadoop 2.2 on 64-bit CentOS 6.5; the installation steps are as follows.
1. Prerequisites
* A JDK, Java 6.0 or later;
* The required development libraries; install them by running:
yum -y install lzo-devel zlib-devel gcc autoconf automake libtool cmake openssl-devel
#---------------------------------
lzo-devel i686 2.03-3.1.el6 base 31 k
openssl-devel i686 1.0.1e-16.el6_5.7 updates 1.2 M
zlib-devel i686 1.2.3-29.el6 base 44 k
Installing for dependencies:
keyutils-libs-devel i686 1.4-4.el6 base 28 k
krb5-devel i686 1.10.3-15.el6_5.1 updates 493 k
libcom_err-devel i686 1.41.12-18.el6 base 32 k
libselinux-devel i686 2.0.94-5.3.el6_4.1 base 136 k
libsepol-devel i686 2.0.41-4.el6 base 64 k
lzo-minilzo
* Maven 3.0 or later
* Findbugs 1.3.9 (optional; the official docs list it, but I don't think it is actually used)
* ProtocolBuffer 2.5.0 (a quick install sketch follows this list)
* CMake 2.6 or newer (already installed by the yum command in the second item)
* Internet access on the machine
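A few quick version checks, plus a ProtocolBuffer build sketch. The sketch assumes protobuf-2.5.0.tar.gz has already been downloaded from wherever you normally get it; it is a standard autotools build, not anything Hadoop-specific:
# verify the toolchain
java -version      # expect 1.6 or later
mvn -version       # expect 3.0 or later
cmake --version    # expect 2.6 or later
# build and install ProtocolBuffer 2.5.0 from source
tar -xzf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure
make && sudo make install
sudo ldconfig
protoc --version   # should print "libprotoc 2.5.0"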
2. Download the Hadoop 2.2 source and build it. On a 32-bit system this step can be skipped, because the official site already provides pre-built binaries that can be downloaded and used directly.
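A hedged download-and-unpack sketch; the archive.apache.org path is my assumption about the mirror layout, so substitute whatever mirror you prefer:
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.2.0/hadoop-2.2.0-src.tar.gz
tar -xzf hadoop-2.2.0-src.tar.gz
cd hadoop-2.2.0-src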
mvn clean package -Pdist,native -DskipTests -Dtar
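With -Dtar, the build should leave a native 64-bit tarball under hadoop-dist/target/ (hadoop-2.2.0.tar.gz in my case); that is the archive unpacked in step 5.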
3. Configure the machines in the cluster
Edit the hosts file on every machine:
sudo vim /etc/hosts
# the first entry is the master; the others are slaves
192.168.177.172 hadoop-master hbase-master
192.168.177.158 machine-0
192.168.177.167 machine-1
192.168.177.168 machine-2
4. Set up passwordless SSH login
4.1 On every machine, run:
ssh-keygen -t rsa
Then keep pressing Enter and do not set a passphrase; otherwise passwordless login will not work.
4.2 Go into the .ssh directory and run:
cp id_rsa.pub authorized_keys
4.3 Copy the local public key to the other machines:
# on the master machine
ssh-copy-id -i ~/.ssh/id_rsa.pub machine-0
ssh-copy-id -i ~/.ssh/id_rsa.pub machine-1
ssh-copy-id -i ~/.ssh/id_rsa.pub machine-2
# on the slave machines
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop-master
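A quick way to confirm passwordless login works before moving on (hostnames as configured in step 3):
ssh machine-0 hostname   # should print machine-0 without asking for a password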
5. Configure the Hadoop files (extract the tarball built earlier first)
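A minimal extraction sketch, assuming the tarball produced in step 2 (or the official binary, on 32-bit) sits in the current directory; adjust the target path so it matches the HADOOP_INSTALL set in 5.1:
sudo tar -xzf hadoop-2.2.0.tar.gz -C /usr/local/
sudo mv /usr/local/hadoop-2.2.0 /usr/local/hadoop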
5.1 Configure the environment
$cd ~
$vi .bashrc
Paste the following at the end of the file:
#Hadoop variables
export JAVA_HOME=/usr/lib/jvm/jdk/jdk1.6.0_43
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
###end of paste
$source ~/.bashrc
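A quick sanity check that the new variables took effect:
hadoop version   # should report Hadoop 2.2.0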
5.2 Configure hadoop-env.sh (this file and the ones in 5.3-5.6 live under $HADOOP_INSTALL/etc/hadoop/)
export JAVA_HOME=/usr/lib/jvm/jdk/jdk1.6.0_43
5.3 Configure core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop-master:9000/</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>
  </property>
</configuration>
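hadoop.tmp.dir is a local path on each node; creating it up front avoids permission surprises (a precaution, not strictly required on every setup):
mkdir -p /opt/hadoop/tmp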
5.4 Configure hdfs-site.xml
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/opt/hadoop/dfs/name/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/opt/hadoop/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
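Likewise for the name and data directories above, which are local paths on each node (created here exactly as configured, including the nested data directory):
mkdir -p /opt/hadoop/dfs/name /opt/hadoop/dfs/name/data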
5.5 Configure mapred-site.xml
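Hadoop 2.2 ships only a template for this file, so it may need to be created first:
cd $HADOOP_INSTALL/etc/hadoop
cp mapred-site.xml.template mapred-site.xml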
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
5.6 Configure yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop-master:8025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop-master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop-master:8040</value>
  </property>
</configuration>
5.7 Format the NameNode
hdfs namenode -format
5.8 Copy the configured Hadoop directory to every machine, for example:
scp -r /opt/hadoop machine-0:/opt/
5.9 On the master, add the slave hostnames to the slaves file ($HADOOP_INSTALL/etc/hadoop/slaves):
machine-0
machine-1
machine-2
5.10 Start the services (run these on the master):
start-dfs.sh
..........
start-yarn.sh
...........
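Once both scripts finish, jps (shipped with the JDK) is a quick check that the daemons are up; roughly, the master should list NameNode, SecondaryNameNode and ResourceManager, and each slave should list DataNode and NodeManager:
jps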
6. Test Hadoop
Upload a file to HDFS by putting abc.txt into the /input directory:
hdfs dfs -put abc.txt /input
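If /input does not exist yet, create it first; otherwise -put will write a plain file named /input:
hdfs dfs -mkdir /input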
Run one of the bundled example jobs:
hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5
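As a final check, the NameNode and ResourceManager web UIs should be reachable at http://hadoop-master:50070 and http://hadoop-master:8088 (the default ports for this release).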
Please credit the original source when reposting. Thanks!