Setting up the hadoop-1.2.1 Eclipse development environment

Reposted from: http://blog.csdn.net/poisonchry/article/details/27535333

Installing the hadoop-eclipse plugin

The hadoop-eclipse-1.2.1 plugin normally has to be compiled from source, so to save time I simply downloaded a prebuilt jar from the web; if you need it, the resource can be downloaded from the link in the original post. After downloading the jar, place it in the eclipse/plugins directory and restart Eclipse.
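On Linux, for example, the installation boils down to copying the jar and restarting Eclipse. A quick sketch (the jar name and Eclipse path below are placeholders; use whatever your downloaded file and installation are actually called):

cp hadoop-eclipse-plugin-1.2.1.jar /path/to/eclipse/plugins/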

If you need to compile the plugin yourself, refer to the reference.

Configuring the connection between hadoop-1.2.1 and Eclipse

If nothing has gone wrong, a blue elephant logo should now appear in the top-right corner of Eclipse; click it. A Map/Reduce Locations tab will then appear in the pane at the bottom of the window. Open that tab, right-click inside it, and choose New Hadoop Location.

A dialog should pop up asking you to fill in the following:

  • Location name
  • Map/Reduce Master
  • DFS Master
  • User name

Location name is just a name for this connection and can be anything you like. Map/Reduce Master is the host that runs the MapReduce master, given as a host address and port. DFS Master is the host of the Distributed File System (HDFS) master, again given as a host address and port. User name is the user name used to connect to Hadoop.

Following the design in the previous article, hadoop-1.2.1 cluster setup, the configuration below reuses the settings from that article.

Our settings are therefore as follows:

Parameter            Value                    Notes
Location name        hadoop
Map/Reduce Master    Host: 192.168.145.100    the NameNode's IP address
Map/Reduce Master    Port: 8021               MapReduce port, as set in your mapred-site.xml
DFS Master           Port: 8020               DFS port, as set in your core-site.xml
User name            hadoop

Next, switch to the Advanced parameters tab; the parameters you need to change are:

Parameter            Value                          Notes
fs.default.name      hdfs://192.168.145.100:8020    see core-site.xml
hadoop.tmp.dir       /home/hadoop/hadoopdata/tmp    see core-site.xml
mapred.job.tracker   hdfs://192.168.145.100:8021    see mapred-site.xml
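These values should mirror what is already in the cluster's own configuration files. For comparison, a minimal sketch of the matching entries in core-site.xml and mapred-site.xml might look like this (values assumed from the table above; the actual files on your cluster are authoritative):

<!-- core-site.xml (sketch) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.145.100:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoopdata/tmp</value>
  </property>
</configuration>

<!-- mapred-site.xml (sketch); the JobTracker address is normally written as host:port here -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.145.100:8021</value>
  </property>
</configuration>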

After you confirm, the HDFS directory tree appears on the left side of Eclipse. At this point you can only browse the files; you cannot add or modify them. You therefore still need to log in to the HDFS cluster and change the permissions by hand:

./bin/hadoop fs -chmod -R 777 /
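A quick way to check that the command took effect is to list the root directory and look at the permission bits (opening permissions this wide is only reasonable on a private development cluster):

./bin/hadoop fs -ls /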

With these steps done, all that is left is to configure the development environment itself.

Configuring the development environment

If you are doing remote development from Windows, extract a copy of hadoop-1.2.1.tar.gz. Here, the extracted hadoop-1.2.1 directory is placed under Documents:

C:\Users\ISCAS\Documents\src\hadoop-1.2.1

Then open eclipse -> Preferences -> Hadoop Map/Reduce, set hadoop installation directory to the extracted path, and click Apply for the setting to take effect.

Now we can try to build and run a Hadoop program or two. Create a new project via File -> New -> Map/Reduce Project, or directly through the Project Wizard, and name it Hadoop Test.

Our first program is wordcount; the source code can be found under ..\hadoop-1.2.1\src\examples\org\apache\hadoop\examples.

/**
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */    

package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper 
       extends Mapper<Object, Text, Text, IntWritable>{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer 
       extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, 
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

For convenience, the full code is reproduced above. Once it is in place, click Run to build and submit the program. Before it runs, a small dialog will pop up; choose Run on Hadoop and confirm.

After a short wait, the program compiles and runs, and you will see this message:

Usage: wordcount <in> <out>

The WordCount example needs an input file and an output directory, so we still have to pass arguments to the program. To do this, open Run -> Run Configurations.

In the Arguments tab, fill in the input and output paths in the Program arguments field.

The text we want to count is stored at /Data/words:

Mary had a little lamb
its fleece very white as snow
and everywhere that Mary went
the lamb was sure to go
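If the file is not on HDFS yet, it can be uploaded from the master node first (a quick sketch; the local file name words is an assumption):

./bin/hadoop fs -mkdir /Data
./bin/hadoop fs -put words /Data/words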

So the arguments to set are:

hdfs://192.168.145.100:8020/Data/words hdfs://192.168.145.100:8020/out

With the arguments configured, run the program again. If you are using the Windows version of Eclipse, it will fail with this error:

14/05/29 13:49:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/05/29 13:49:16 ERROR security.UserGroupInformation: PriviledgedActionException as:ISCAS cause:java.io.IOException: Failed to set permissions of path: \tmp\hadoop-ISCAS\mapred\staging\ISCAS1655603947\.staging to 0700
Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-ISCAS\mapred\staging\ISCAS1655603947\.staging to 0700
    at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:691)
    at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:664)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:514)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:349)
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:193)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:126)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Unknown Source)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)

This error does not occur on Linux, so we need to make a small change to the Hadoop source code.

Modifying the Hadoop source

The root cause is how Hadoop checks local file permissions on Windows; the problem does not exist on Linux. If you need to develop on Windows, we suggest patching the Hadoop source as described here.

The offending file is FileUtil.java, located under hadoop-1.2.1\src\core\org\apache\hadoop\fs\.

The fix is to change checkReturnValue to the following, i.e. simply comment out its body:

private static void checkReturnValue(boolean rv, File p,
                                     FsPermission permission)
                                     throws IOException {
    /* The original check is commented out so that setPermission()
     * failures on the local file system (as happens on Windows)
     * no longer abort the job:

    if (!rv) {
        throw new IOException("Failed to set permissions of path: " + p +
                        " to " +
                        String.format("%04o", permission.toShort()));
    }
    */
}

Then recompile the modified file, pack the resulting .class files back into hadoop-core-1.2.1.jar, and refresh the project. A jar with this modification already applied is available for download; if you prefer, download it and replace the original jar in hadoop-1.2.1.
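If you would rather rebuild the jar yourself, the rough procedure is to compile the patched class against the existing jars and then update hadoop-core-1.2.1.jar in place. A sketch only, run from the hadoop-1.2.1 directory on Linux (the classpath may need further jars from lib; on Windows use ; as the path separator):

javac -classpath "hadoop-core-1.2.1.jar:lib/*" src/core/org/apache/hadoop/fs/FileUtil.java
(cd src/core && jar uf ../../hadoop-core-1.2.1.jar org/apache/hadoop/fs/FileUtil*.class)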

Running WordCount again

Run the WordCount example once more, and this time Hadoop starts up normally:

14/05/29 15:13:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/05/29 15:13:59 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/05/29 15:13:59 INFO input.FileInputFormat: Total input paths to process : 1
14/05/29 15:13:59 WARN snappy.LoadSnappy: Snappy native library not loaded
14/05/29 15:13:59 INFO mapred.JobClient: Running job: job_local889277352_0001
14/05/29 15:13:59 INFO mapred.LocalJobRunner: Waiting for map tasks
14/05/29 15:13:59 INFO mapred.LocalJobRunner: Starting task: attempt_local889277352_0001_m_000000_0
14/05/29 15:13:59 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
14/05/29 15:13:59 INFO mapred.MapTask: Processing split: hdfs://192.168.145.100:8020/Data/words:0+109
14/05/29 15:13:59 INFO mapred.MapTask: io.sort.mb = 100
14/05/29 15:13:59 INFO mapred.MapTask: data buffer = 79691776/99614720
14/05/29 15:13:59 INFO mapred.MapTask: record buffer = 262144/327680
14/05/29 15:13:59 INFO mapred.MapTask: Starting flush of map output
14/05/29 15:13:59 INFO mapred.MapTask: Finished spill 0
14/05/29 15:13:59 INFO mapred.Task: Task:attempt_local889277352_0001_m_000000_0 is done. And is in the process of commiting
14/05/29 15:13:59 INFO mapred.LocalJobRunner: 
14/05/29 15:13:59 INFO mapred.Task: Task 'attempt_local889277352_0001_m_000000_0' done.
14/05/29 15:13:59 INFO mapred.LocalJobRunner: Finishing task: attempt_local889277352_0001_m_000000_0
14/05/29 15:13:59 INFO mapred.LocalJobRunner: Map task executor complete.
14/05/29 15:13:59 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
14/05/29 15:13:59 INFO mapred.LocalJobRunner: 
14/05/29 15:13:59 INFO mapred.Merger: Merging 1 sorted segments
14/05/29 15:13:59 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 219 bytes
14/05/29 15:13:59 INFO mapred.LocalJobRunner: 
14/05/29 15:14:00 INFO mapred.Task: Task:attempt_local889277352_0001_r_000000_0 is done. And is in the process of commiting
14/05/29 15:14:00 INFO mapred.LocalJobRunner: 
14/05/29 15:14:00 INFO mapred.Task: Task attempt_local889277352_0001_r_000000_0 is allowed to commit now
14/05/29 15:14:00 INFO output.FileOutputCommitter: Saved output of task 'attempt_local889277352_0001_r_000000_0' to hdfs://192.168.145.100:8020/out
14/05/29 15:14:00 INFO mapred.LocalJobRunner: reduce > reduce
14/05/29 15:14:00 INFO mapred.Task: Task 'attempt_local889277352_0001_r_000000_0' done.
14/05/29 15:14:00 INFO mapred.JobClient:  map 100% reduce 100%
14/05/29 15:14:00 INFO mapred.JobClient: Job complete: job_local889277352_0001
14/05/29 15:14:00 INFO mapred.JobClient: Counters: 19
14/05/29 15:14:00 INFO mapred.JobClient:   Map-Reduce Framework
14/05/29 15:14:00 INFO mapred.JobClient:     Spilled Records=40
14/05/29 15:14:00 INFO mapred.JobClient:     Map output materialized bytes=223
14/05/29 15:14:00 INFO mapred.JobClient:     Reduce input records=20
14/05/29 15:14:00 INFO mapred.JobClient:     Map input records=4
14/05/29 15:14:00 INFO mapred.JobClient:     SPLIT_RAW_BYTES=103
14/05/29 15:14:00 INFO mapred.JobClient:     Map output bytes=195
14/05/29 15:14:00 INFO mapred.JobClient:     Reduce shuffle bytes=0
14/05/29 15:14:00 INFO mapred.JobClient:     Reduce input groups=20
14/05/29 15:14:00 INFO mapred.JobClient:     Combine output records=20
14/05/29 15:14:00 INFO mapred.JobClient:     Reduce output records=20
14/05/29 15:14:00 INFO mapred.JobClient:     Map output records=22
14/05/29 15:14:00 INFO mapred.JobClient:     Combine input records=22
14/05/29 15:14:00 INFO mapred.JobClient:     Total committed heap usage (bytes)=290455552
14/05/29 15:14:00 INFO mapred.JobClient:   File Input Format Counters 
14/05/29 15:14:00 INFO mapred.JobClient:     Bytes Read=109
14/05/29 15:14:00 INFO mapred.JobClient:   FileSystemCounters
14/05/29 15:14:00 INFO mapred.JobClient:     HDFS_BYTES_READ=218
14/05/29 15:14:00 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=137726
14/05/29 15:14:00 INFO mapred.JobClient:     FILE_BYTES_READ=557
14/05/29 15:14:00 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=137
14/05/29 15:14:00 INFO mapred.JobClient:   File Output Format Counters 
14/05/29 15:14:00 INFO mapred.JobClient:     Bytes Written=137

Looking at the newly created out directory on HDFS, you will find the generated part-r-00000 file with the following contents:

Mary    2
a    1
and    1
as    1
everywhere    1
fleece    1
go    1
had    1
its    1
lamb    2
little    1
snow    1
sure    1
that    1
the    1
to    1
very    1
was    1
went    1
white    1
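The same result can also be read directly from the command line on the cluster:

./bin/hadoop fs -cat /out/part-r-00000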