Setting Up a Development Environment with Hadoop 0.20.0 + RHEL 5 + Eclipse Plugin + Makefile

The Hadoop framework encapsulates the mechanics of running MapReduce jobs, so in day-to-day development programmers can focus on their own business logic rather than spending time and effort on how jobs are scheduled and executed; the MapReduce framework already takes care of that. This lowers the difficulty of developing MapReduce programs and makes them pleasant to write.

This article walks through setting up a Hadoop + Eclipse plugin development environment on Linux and putting it to practical use. The detailed configuration and hands-on steps follow.

 

RHEL 5 Environment Configuration

(1) Passwordless SSH configuration

Run the following commands:

[shirdrn@localhost .ssh]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[shirdrn@localhost .ssh]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[shirdrn@localhost .ssh]$ ssh localhost
If you can ssh to localhost without being prompted for a password, the configuration succeeded.

Otherwise, if you are prompted for a password, check the permissions on your .ssh directory, including whether authorized_keys and known_hosts under .ssh are readable; if that is the problem, fixing the permissions is enough.
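If the permission bits are the culprit, a typical fix is shown below (sshd also rejects keys when ~/.ssh or authorized_keys is writable by group or others, so tightening the modes is the usual cure):

[shirdrn@localhost ~]$ chmod 700 ~/.ssh
[shirdrn@localhost ~]$ chmod 600 ~/.ssh/authorized_keys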

(2) Environment variable configuration

Edit the .bashrc file to configure the environment variables:

[shirdrn@localhost ~]$ vi .bashrc

# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
# User specific aliases and functions
export JAVA_HOME=/usr/java/jdk1.6.0_16
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/*.jar:$JAVA_HOME/jre/lib/*.jar
export HADOOP_HOME=/home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0
export PATH=$PATH:$HADOOP_HOME/bin
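
After saving the file, reload it so the variables take effect in the current shell, and optionally verify the setup (hadoop version and java -version just print version information):

[shirdrn@localhost ~]$ source ~/.bashrc
[shirdrn@localhost ~]$ java -version
[shirdrn@localhost ~]$ hadoop version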
 

Preparing Hadoop

1. Download the hadoop-0.20.0.tar.gz archive and extract it under the /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/ directory:

[shirdrn@localhost hadoop]$ tar -xzvf hadoop-0.20.0.tar.gz

 

2. Configure Hadoop

(1) Edit hadoop-0.20.0/conf/hadoop-env.sh and add the following three lines to it:

  export JAVA_HOME=/usr/java/jdk1.6.0_16
  export HADOOP_HOME=/home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0
  export PATH=$PATH:$HADOOP_HOME/bin

(2) Edit hadoop-0.20.0/conf/core-site.xml. A minimal single-node sketch follows; the hdfs://localhost:9000 address is a conventional choice (an assumption here), while the hadoop.tmp.dir value is inferred from the /tmp/hadoop/hadoop-shirdrn storage directory that shows up in the format log later:
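<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- URI of the NameNode; port 9000 is a common convention (assumption) -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <!-- Base for Hadoop's working directories; matches the path in the format log -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop/hadoop-${user.name}</value>
  </property>
</configuration>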

(3) Edit hadoop-0.20.0/conf/hdfs-site.xml. A minimal sketch; on a single node the block replication factor is normally set to 1:
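<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- Only one DataNode is running, so keep a single replica per block -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>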

(4) Edit hadoop-0.20.0/conf/mapred-site.xml. A minimal sketch; localhost:9001 for the JobTracker is a conventional choice (an assumption here):
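<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- Host and port of the JobTracker; port 9001 is a common convention (assumption) -->
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>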

 

 

Eclipse Plugin Configuration

This part is fairly easy:

Extract eclipse-SDK-3.5.2-linux-gtk.tar.gz under the /home/shirdrn/eclipse/ directory, then copy the hadoop-0.20.0/contrib/eclipse-plugin/hadoop-0.20.0-eclipse-plugin.jar plugin into the /home/shirdrn/eclipse/eclipse-3.5.2/eclipse/plugins/ directory:

 [shirdrn@localhost ~]$ cp /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/contrib/eclipse-plugin/hadoop-0.20.0-eclipse-plugin.jar /home/shirdrn/eclipse/eclipse-3.5.2/eclipse/plugins/

After that, you can start the Eclipse IDE on RHEL 5.

 

Testing in Practice

1. Start Hadoop from the shell command line

(1) Format HDFS

[shirdrn@localhost hadoop-0.20.0]$ bin/hadoop namenode -format

The formatting output:

10/10/08 08:21:28 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009
************************************************************/
10/10/08 08:21:28 INFO namenode.FSNamesystem: fsOwner=shirdrn,shirdrn
10/10/08 08:21:28 INFO namenode.FSNamesystem: supergroup=supergroup
10/10/08 08:21:28 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/10/08 08:21:28 INFO common.Storage: Image file of size 97 saved in 0 seconds.
10/10/08 08:21:28 INFO common.Storage: Storage directory /tmp/hadoop/hadoop-shirdrn/dfs/name has been successfully formatted.
10/10/08 08:21:28 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/

(2) Start the Hadoop daemons

 [shirdrn@localhost hadoop-0.20.0]$ bin/start-all.sh

The output:

starting namenode, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-namenode-localhost.out
localhost: starting datanode, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-datanode-localhost.out
localhost: starting secondarynamenode, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-secondarynamenode-localhost.out
starting jobtracker, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-jobtracker-localhost.out
localhost: starting tasktracker, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-tasktracker-localhost.out

(3) Check that all the processes have started

[shirdrn@localhost hadoop-0.20.0]$ jps
8100 DataNode
8398 TaskTracker
8230 SecondaryNameNode
7994 NameNode
8301 JobTracker 
8459 Jps

As the output shows, everything started normally.

2. Prepare test data

Upload the test data by running:

[shirdrn@localhost hadoop-0.20.0]$ bin/hadoop fs -put conf/ input

If no error is reported, the upload succeeded.

You can verify it with the following command:

[shirdrn@localhost hadoop-0.20.0]$ bin/hadoop fs -ls /user/shirdrn/input
Found 13 items
-rw-r--r--   1 shirdrn supergroup       6275 2010-10-08 08:24 /user/shirdrn/input/capacity-scheduler.xml
-rw-r--r--   1 shirdrn supergroup        535 2010-10-08 08:24 /user/shirdrn/input/configuration.xsl
-rw-r--r--   1 shirdrn supergroup        388 2010-10-08 08:24 /user/shirdrn/input/core-site.xml
-rw-r--r--   1 shirdrn supergroup       2396 2010-10-08 08:24 /user/shirdrn/input/hadoop-env.sh
-rw-r--r--   1 shirdrn supergroup       1245 2010-10-08 08:24 /user/shirdrn/input/hadoop-metrics.properties
-rw-r--r--   1 shirdrn supergroup       4190 2010-10-08 08:24 /user/shirdrn/input/hadoop-policy.xml
-rw-r--r--   1 shirdrn supergroup        259 2010-10-08 08:24 /user/shirdrn/input/hdfs-site.xml
-rw-r--r--   1 shirdrn supergroup       2815 2010-10-08 08:24 /user/shirdrn/input/log4j.properties
-rw-r--r--   1 shirdrn supergroup        275 2010-10-08 08:24 /user/shirdrn/input/mapred-site.xml
-rw-r--r--   1 shirdrn supergroup         10 2010-10-08 08:24 /user/shirdrn/input/masters
-rw-r--r--   1 shirdrn supergroup         10 2010-10-08 08:24 /user/shirdrn/input/slaves
-rw-r--r--   1 shirdrn supergroup       1243 2010-10-08 08:24 /user/shirdrn/input/ssl-client.xml.example
-rw-r--r--   1 shirdrn supergroup       1195 2010-10-08 08:24 /user/shirdrn/input/ssl-server.xml.example

3. Develop in Eclipse

(1) Start Eclipse 3.5.2 and set the workspace directory to /home/shirdrn/eclipse/eclipse-3.5.2/workspace.

Now open Open Perspective and switch to the Map/Reduce perspective; DFS Locations then appears in the Project Explorer on the left side of the Eclipse IDE. Later, once we create a Map/Reduce project, DFS Locations will display the relevant resource directories on HDFS.

(2) Create and configure a Map/Reduce project

Create a Map/Reduce project named hadoop. On the wizard page, open the "Configure Hadoop install directory..." link and set it to the $HADOOP_HOME directory we specified earlier, i.e. /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0.

Click Next until Finish. Expanding the project in the Project Explorer on the left side of the Eclipse IDE shows, besides the src source folder, many Hadoop-related jar files.

Select the hadoop project and, in the package org.shirdrn.hadoop, create the source files of the WordCount example that ships with the Hadoop distribution, split here into separate classes, as shown below.

The Mapper class is TokenizerMapper.java, as shown below.
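A sketch matching the TokenizerMapper from the stock Hadoop 0.20 WordCount example, with the package adapted to org.shirdrn.hadoop:

package org.shirdrn.hadoop;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    // Tokenize each input line and emit (word, 1) for every token.
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}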

The Reducer class is IntSumReducer.java, as shown below.
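A sketch matching the stock IntSumReducer, which sums the per-word counts emitted by the Mapper:

package org.shirdrn.hadoop;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    // Add up all the 1s (or partial sums from the combiner) for each word.
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}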

The MapReduce driver class is WordCount.java, as shown below.
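A sketch of the driver, again following the stock example; it also registers IntSumReducer as the combiner, which is consistent with the Combine counters in the job log further down:

package org.shirdrn.hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        // The reducer doubles as a combiner, pre-aggregating map output locally.
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}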

4. Package and run with a Makefile

Because this version of the Eclipse plugin cannot run jobs on Hadoop directly (Run On Hadoop does not work), the workaround is to write our own Makefile and drive the packaging and execution steps ourselves.

For the example above, write the following Makefile. Note that make requires the recipe lines under each target to be indented with a tab character:

JarFile="WordCount-V0.1.jar"
MainFunc="org.shirdrn.hadoop.WordCount"
LocalOutDir="/tmp/output"

all: help

jar:
	jar -cvf ${JarFile} -C bin/ .

run:
	hadoop jar ${JarFile} ${MainFunc} input output

clean:
	hadoop fs -rmr output

output:
	rm -rf ${LocalOutDir}
	hadoop fs -get output ${LocalOutDir}
	cat ${LocalOutDir}/part-r-00000

help:
	@echo "Usage:"
	@echo " make jar     - Build Jar File."
	@echo " make clean   - Clean up Output directory on HDFS."
	@echo " make run     - Run your MapReduce code on Hadoop."
	@echo " make output  - Download and show output file."
	@echo " make help    - Show Makefile options."
	@echo " "
	@echo "Example:"
	@echo " make jar; make run; make output; make clean"

(1) Build the jar file

[shirdrn@localhost hadoop]$ make jar
jar -cvf "WordCount-V0.1.jar" -C bin/ .
added manifest
adding: org/(in = 0) (out= 0)(stored 0%)
adding: org/shirdrn/(in = 0) (out= 0)(stored 0%)
adding: org/shirdrn/hadoop/(in = 0) (out= 0)(stored 0%)
adding: org/shirdrn/hadoop/IntSumReducer.class(in = 2320) (out= 901)(deflated 61%)
adding: org/shirdrn/hadoop/WordCount.class(in = 2022) (out= 1066)(deflated 47%)
adding: org/shirdrn/hadoop/TokenizerMapper.class(in = 2232) (out= 887)(deflated 60%)

(2) Run the program

[shirdrn@localhost hadoop]$ make run
hadoop jar "WordCount-V0.1.jar" "org.shirdrn.hadoop.WordCount" input output
10/10/08 08:46:54 INFO input.FileInputFormat: Total input paths to process : 13
10/10/08 08:46:55 INFO mapred.JobClient: Running job: job_201010080822_0001
10/10/08 08:46:56 INFO mapred.JobClient:  map 0% reduce 0%
10/10/08 08:47:40 INFO mapred.JobClient:  map 15% reduce 0%
10/10/08 08:47:59 INFO mapred.JobClient:  map 30% reduce 0%
10/10/08 08:48:18 INFO mapred.JobClient:  map 46% reduce 10%
10/10/08 08:48:24 INFO mapred.JobClient:  map 61% reduce 15%
10/10/08 08:48:30 INFO mapred.JobClient:  map 76% reduce 15%
10/10/08 08:48:33 INFO mapred.JobClient:  map 76% reduce 20%
10/10/08 08:48:36 INFO mapred.JobClient:  map 92% reduce 20%
10/10/08 08:48:44 INFO mapred.JobClient:  map 100% reduce 25%
10/10/08 08:48:47 INFO mapred.JobClient:  map 100% reduce 30%
10/10/08 08:48:55 INFO mapred.JobClient:  map 100% reduce 100%
10/10/08 08:48:58 INFO mapred.JobClient: Job complete: job_201010080822_0001
10/10/08 08:48:58 INFO mapred.JobClient: Counters: 17
10/10/08 08:48:58 INFO mapred.JobClient:   Job Counters 
10/10/08 08:48:58 INFO mapred.JobClient:     Launched reduce tasks=1
10/10/08 08:48:58 INFO mapred.JobClient:     Launched map tasks=13
10/10/08 08:48:58 INFO mapred.JobClient:     Data-local map tasks=13
10/10/08 08:48:58 INFO mapred.JobClient:   FileSystemCounters
10/10/08 08:48:58 INFO mapred.JobClient:     FILE_BYTES_READ=17108
10/10/08 08:48:58 INFO mapred.JobClient:     HDFS_BYTES_READ=20836
10/10/08 08:48:58 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=34704
10/10/08 08:48:58 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=11807
10/10/08 08:48:58 INFO mapred.JobClient:   Map-Reduce Framework
10/10/08 08:48:58 INFO mapred.JobClient:     Reduce input groups=0
10/10/08 08:48:58 INFO mapred.JobClient:     Combine output records=832
10/10/08 08:48:58 INFO mapred.JobClient:     Map input records=624
10/10/08 08:48:58 INFO mapred.JobClient:     Reduce shuffle bytes=17180
10/10/08 08:48:58 INFO mapred.JobClient:     Reduce output records=0
10/10/08 08:48:58 INFO mapred.JobClient:     Spilled Records=1664
10/10/08 08:48:58 INFO mapred.JobClient:     Map output bytes=27728
10/10/08 08:48:58 INFO mapred.JobClient:     Combine input records=2010
10/10/08 08:48:58 INFO mapred.JobClient:     Map output records=2010
10/10/08 08:48:58 INFO mapred.JobClient:     Reduce input records=832

(3) View the results

[shirdrn@localhost hadoop]$ make output
version="1.0">  1
version="1.0"?> 8
via     2
virtual 3
want    1
when    1
where   2
where,  1
whether 1
which   8
who     1
will    8
with    5
worker  1
would   5
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"        1

The above shows only an excerpt of the output.

 

References

This article follows the implementation given by the author of the article below; it is a personal learning record, for which I am very grateful.

1. "hadoop 0.20 程式開發", http://trac.nchc.org.tw/cloud/wiki/waue/2009/0617

 

 
