hadoop (posts by tucailing)
org.apache.hadoop.ipc.RemoteException: User: root is not allowed to impersonate root
org.apache.oozie.servlet.XServletException: E0902: Exception occured: [org.apache.hadoop.ipc.RemoteException: User: root is not allowed to impersonate root] at org.apache.oozie.servlet.BaseJobServle…
Original · 2013-06-24 11:48:20 · 12040 views · 2 comments
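The excerpt is cut off before the fix, but this impersonation error is conventionally resolved by declaring the Oozie service user as a Hadoop proxyuser. A minimal sketch for core-site.xml, assuming the Oozie server runs as root; the wildcard values are an assumption and should be tightened in production:

```xml
<!-- core-site.xml on the NameNode/JobTracker; restart Hadoop after editing. -->
<!-- Sketch: allows the user "root" to impersonate other users. -->
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <!-- hosts from which impersonation is allowed; "*" is permissive -->
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <!-- groups whose members may be impersonated -->
  <value>*</value>
</property>
```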
org.eclipse.core.runtime.CoreException: D:\workspace\hadoop-1.1.2\build.xml:83: Execute failed: java
While building the hadoop project with Ant on Win7, I hit the following error: org.eclipse.core.runtime.CoreException: D:\workspace\hadoop-1.1.2\build.xml:83: Execute failed: java.io.IOException: Cannot run program "sed". Open build.xml and find the s…
Reposted · 2013-08-28 17:25:45 · 1033 views · 0 comments
hadoop.security.AccessControlException: Permission denied user=xxx access=READ_EXECUTE, inode=".staging"
hadoop.security.AccessControlException: Permission denied user=xxx access=READ_EXECUTE, inode=".staging" inode="user":hadoop:supergroup:rwxr-xr-x. Solution: added this entry to conf/hdfs-site.xml: df…
Original · 2013-08-28 16:05:15 · 1477 views · 0 comments
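The solution line is truncated after "df", but the usual entry added to hdfs-site.xml for this error on a dev cluster is the permissions switch. A hedged sketch; note this disables HDFS permission checking entirely, so it is a development-only workaround, not a production fix:

```xml
<!-- conf/hdfs-site.xml; sketch of the common dev-cluster workaround.
     Disables HDFS permission checks, so every user can read/write every path. -->
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
```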
Unknown protocol to name node: org.apache.hadoop.mapred.JobSubmissionProtocol
Cannot connect to the Map/Reduce location: hadoop. java.io.IOException: Unknown protocol to name node: org.apache.hadoop.mapred.JobSubmissionProtocol at org.apache.hadoop.hdfs.server.namenode.NameNod…
Original · 2013-07-26 14:55:06 · 2554 views · 0 comments
Permission denied: user=tucailing, access=READ_EXECUTE inode=".staging":root:supergroup:rwx
For the record, the fix: the mapreduce jobtracker and namenode ports were misconfigured.
Original · 2013-07-26 14:11:28 · 1297 views · 0 comments
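Both this error and the "Unknown protocol to name node" one above come from pointing a MapReduce client at the NameNode's port (or vice versa). A sketch of the conventional split, assuming a single-node setup on the customary ports 9000/9001; hostnames and ports are assumptions, match them to your own cluster:

```xml
<!-- core-site.xml: the HDFS endpoint, served by the NameNode -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<!-- mapred-site.xml: the MapReduce endpoint, served by the JobTracker;
     this must be a different port from the NameNode's -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
```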
FSNamesystem: Not able to place enough replicas, still in need of 1
2013-07-04 01:21:50,677 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
2013-07-04 01:21:50,677 ERROR org.apache.hadoop.security.UserGr…
Original · 2013-07-04 18:20:33 · 6111 views · 0 comments
DataNode: java.io.IOException: Incompatible namespaceIDs in /dfs/dfs/data: namenode namespaceID = 69
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /dfs/dfs/data: namenode namespaceID = 698802253; datanode namespaceID = 817847838 at org.apach…
Original · 2013-07-04 18:43:22 · 1453 views · 0 comments
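One common repair, short of wiping and reformatting the DataNode's data directory, is to overwrite the DataNode's recorded namespaceID with the NameNode's value in its VERSION file. A hedged sketch, demonstrated on a mock VERSION file under /tmp since touching a live cluster is out of scope here; the two IDs are the ones from the log above:

```shell
# Sketch: make the DataNode's namespaceID match the NameNode's.
# On a real cluster the file is <dfs.data.dir>/current/VERSION (here that would
# be /dfs/dfs/data/current/VERSION) and the DataNode must be stopped first.
# Demo on a mock copy so nothing real is modified:
mkdir -p /tmp/dfs-demo/current
printf 'namespaceID=817847838\nstorageType=DATA_NODE\n' > /tmp/dfs-demo/current/VERSION

# Replace the stale DataNode ID (817847838) with the NameNode's (698802253):
sed -i 's/^namespaceID=.*/namespaceID=698802253/' /tmp/dfs-demo/current/VERSION
cat /tmp/dfs-demo/current/VERSION
```

After the real edit, restart the DataNode and it should register cleanly.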
Installing and configuring Pig 0.9.2
1. Download Pig: get pig-0.9.2.tar.gz from http://mirrors.cnnic.cn/apache/pig/; I downloaded it to /usr/local/. 2. Unpack and install: cd into /usr/local and run # tar -zxvf pig-0.9.2.tar.gz, which extracts into pig-0.9.2/. 3. Configure environment variables: # vi /etc/profil…
Original · 2013-07-04 20:06:11 · 850 views · 0 comments
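The excerpt cuts off while opening /etc/profile; a sketch of the lines that step conventionally adds, demoed by exporting in the current shell. The install prefix comes from the post; the variable name PIG_HOME is the conventional one, not quoted from the original:

```shell
# Sketch of the environment-variable step for Pig 0.9.2.
# /usr/local/pig-0.9.2 is the unpack location from the post above.
export PIG_HOME=/usr/local/pig-0.9.2
export PATH=$PATH:$PIG_HOME/bin
# On the real machine these lines go in /etc/profile; after
# `source /etc/profile`, `pig -version` should resolve from this PATH.
echo "$PATH"
```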
Call to localhost/127.0.0.1:9001 failed on connection exception: java.net.ConnectException: Connecti
I forget exactly when this error first appeared; it shows up in roughly two situations: 1. starting eclipse, when DFS reports the error; 2. running a Map/Reduce program. Cannot connect to the Map/Reduce location: hadoop2. Call to localhost/127.0.0.1:9001 failed on connection exception: java.net…
Original · 2013-06-25 20:02:48 · 6234 views · 0 comments
org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /
When starting hadoop, the DataNode would not come up; the log revealed the problem: 2013-06-25 00:22:03,962 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: /************************************************************ STAR…
Original · 2013-06-25 16:09:39 · 1177 views · 0 comments
bash: hadoop: command not found
Running [root@localhost bin]# hadoop fs -ls / reports bash: hadoop: command not found. The fix is to add the hadoop/bin path to PATH. Open the file with [root@localhost bin]# vi ~/.bash_profile and append hadoop's bin directory to the path: PATH=$PATH:$HOME/bin…
Original · 2013-06-24 14:30:20 · 21418 views · 0 comments
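The fix above can be sketched end to end. The excerpt truncates before the hadoop path itself, so the install prefix /usr/local/hadoop-1.1.2 below is an assumption (substitute your own); the demo writes a mock profile so nothing real is modified:

```shell
# Sketch: append hadoop's bin directory to PATH via ~/.bash_profile.
# /usr/local/hadoop-1.1.2 is an assumed install prefix, not from the post.
profile=/tmp/demo_bash_profile          # stand-in for ~/.bash_profile
echo 'PATH=$PATH:$HOME/bin' > "$profile"
echo 'PATH=$PATH:/usr/local/hadoop-1.1.2/bin' >> "$profile"
echo 'export PATH' >> "$profile"
cat "$profile"
# On the real machine: edit ~/.bash_profile, then run
# `source ~/.bash_profile` so `hadoop fs -ls /` resolves.
```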
SSH setup for a pseudo-distributed hadoop node
Goal: ssh localhost without typing a password. Prerequisite: confirm the host has ssh and sshd installed (the ssh client and server, respectively). As the hadoop user, method one: ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa then cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys [r…
Original · 2013-08-29 16:28:13 · 1015 views · 0 comments