Common Hadoop errors

1.   Fix: the required JARs were missing from the classpath (the error message itself is a bit misleading here). Either add them to the classpath, or copy the JARs directly into the bin directory.

 JAR location: hadoop-2.6.0-cdh5.8.0\share\hadoop\mapreduce2

 

This one cost quite a bit of time to resolve; a real pitfall.

./hadoop jar /opt/hadoop-2.6.0-cdh5.8.0/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.8.0.jar wordcount  /user/hadoop/input /user/hadoop/inputcount

17/03/08 05:59:29 WARN security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.

java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
    at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
    at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1277)
    at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1273)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:1272)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1301)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
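
The message itself says to check mapreduce.framework.name, so besides fixing the classpath it is worth confirming that mapred-site.xml declares the framework. A typical entry for a YARN setup (shown as an assumption about this cluster, not something verified against it) is:

```xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
```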

 2.

When running a Hadoop program from Eclipse on Windows, the Windows login user (Administrator) is used by default. If the target directory on HDFS belongs to a different user or group, the following exception is thrown: org.apache.hadoop.security.AccessControlException: Permission denied: user=Administrator, access=WRI

 

Workarounds:

a. ./hdfs dfs -chmod 777 /user/hadoop

 

b. Add the following to hdfs-site.xml:

<property>
<name>dfs.permissions</name>
<value>false</value>
<description>
If "true", enable permission checking in HDFS.
If "false", permission checking is turned off,
but all other behavior is unchanged.
Switching from one parameter value to the other does not change the mode,
owner or group of files or directories.
</description>
</property>

 

c. Run as a user that does have the required permissions.

Set the environment variable

HADOOP_USER_NAME

or set it directly in code:

System.setProperty("HADOOP_USER_NAME", "hadoop");
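
Expanded into a minimal sketch (the class name and comments are illustrative; the important point is that the property must be set before the first FileSystem instance is created):

```java
public class HdfsUserExample {
    public static void main(String[] args) {
        // Must run before any org.apache.hadoop.fs.FileSystem is created,
        // because Hadoop resolves the acting user at that point.
        System.setProperty("HADOOP_USER_NAME", "hadoop");
        // Configuration conf = new Configuration();
        // FileSystem fs = FileSystem.get(conf); // would now act as "hadoop"
        System.out.println(System.getProperty("HADOOP_USER_NAME")); // prints "hadoop"
    }
}
```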

3. This one has two likely causes.

Hadoop may have loaded the local hadoop.dll on Windows and therefore behaved as if it were in a cluster environment, but Windows does not support running a Hadoop cluster.

 Running in local mode on Windows 7 produced the exception below:

 

An exception or error caused a run to abort: org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Ljava/lang/String;JJJI)Ljava/io/FileDescriptor; 
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Ljava/lang/String;JJJI)Ljava/io/FileDescriptor;
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileOutputStreamWithMode(NativeIO.java:559)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:219)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
    at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:295)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:388)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:451)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:430)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:920)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:901)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:368)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:341)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
    at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:82)
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1882)
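
One way to diagnose this (a hedged, illustrative sketch, not the definitive fix) is to list the directories where the JVM searches for the native Hadoop library. A stale hadoop.dll found earlier on java.library.path, for example one left in C:\Windows\System32, can shadow the copy that matches your Hadoop JARs and trigger exactly this UnsatisfiedLinkError:

```java
import java.io.File;

public class NativeLibPathCheck {
    public static void main(String[] args) {
        // System.mapLibraryName("hadoop") yields "hadoop.dll" on Windows
        // and "libhadoop.so" on Linux, so the check is portable.
        String libName = System.mapLibraryName("hadoop");
        String libPath = System.getProperty("java.library.path", "");
        for (String dir : libPath.split(File.pathSeparator)) {
            File candidate = new File(dir, libName);
            System.out.println(candidate.getPath() + " exists=" + candidate.exists());
        }
    }
}
```

If more than one copy reports exists=true, make sure the first one on the path matches the version of your Hadoop JARs, or remove the stale copy.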