Running MapReduce from Eclipse on Windows: Configuration Notes

While learning Hadoop recently, I got MapReduce programs running from Eclipse on Windows 7. This post summarizes the steps.

1. Download (or build yourself) hadoop-eclipse-plugin-2.2.0.jar; the details are easy to find online. Copy the jar into Eclipse's plugins directory and restart Eclipse.

2. Unpack hadoop.tar.gz to a local disk, then tell Eclipse where that Hadoop installation lives: Window -> Preferences -> Data Management -> Hadoop Map/Reduce, and set the Hadoop installation directory there.

3. Create a new MapReduce project: File -> New -> Map/Reduce Project (the wizard steps are omitted here; a quick compile check follows below).
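If the project is set up correctly, a class written against the Hadoop 2.x mapreduce API should compile cleanly. A minimal sketch just to verify the build path (the class name and logic here are placeholders of my own, not part of the original steps):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Placeholder mapper: emits each line together with its byte length.
// If this compiles, the Hadoop jars are on the project's build path.
public class LineLengthMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    context.write(value, new IntWritable(value.getLength()));
  }
}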

4. Download winutils.exe and hadoop.dll, copy them into C:\Windows\System32, and reboot; see http://blog.csdn.net/cnxieyang/article/details/51272093 for details and download links.
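A general Hadoop-on-Windows note (my addition, not from the linked post): Hadoop's Shell utility locates winutils.exe through the hadoop.home.dir system property or the HADOOP_HOME environment variable, so pointing either one at your unpacked Hadoop directory is an alternative to System32 for that file; hadoop.dll still has to be somewhere on the PATH, which the System32 copy takes care of. A sketch, assuming Hadoop was unpacked to D:\hadoop-2.2.0:

// Run this before any Hadoop class is loaded, e.g. as the first line of main().
// Assumption: winutils.exe sits at D:\hadoop-2.2.0\bin\winutils.exe.
System.setProperty("hadoop.home.dir", "D:\\hadoop-2.2.0");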

5. Obtain the source of Hadoop's NativeIO.java (e.g., clone the Hadoop source with git), create a file with the same package and class name in your project (org.apache.hadoop.io.nativeio.NativeIO) so that it shadows the class inside the Hadoop jar, and change the access method to always return true. The modified method:

public static boolean access(String path, AccessRight desiredAccess)
    throws IOException {
  // Original native Windows permission check, commented out:
  // return access0(path, desiredAccess.accessRight());
  return true;
}

This bypasses the file-permission check for the current Windows process and resolves the following exception:

Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:552)
    at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
    at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
    at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
    at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:131)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:163)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:536)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Unknown Source)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
    at org.springframework.samples.hadoop.mapreduce.MyWordCount.main(MyWordCount.java:68)  

6. Run the WordCount.java example that ships with Hadoop.
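For reference, here is the classic WordCount written against the Hadoop 2.x API (essentially the example from the Hadoop distribution; the input and output paths arrive as program arguments, which you can set under Run -> Run Configurations):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Splits each input line into tokens and emits (word, 1) pairs.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Sums the counts for each word; also reused as the combiner.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Pass an existing input directory and a not-yet-existing output directory as the two arguments; with the setup above the job runs through the LocalJobRunner (the same code path visible in the stack trace) and writes its result to part-r-00000 in the output directory.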

