Hadoop errors
迷途小码
Big data and backend development
SLF4J: Class path contains multiple SLF4J bindings error
SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [zip:E:/newbea/user_projects/domains/base_domain/servers/AdminServer/tmp/_WL_user/_appsdir_CTS_dir/6ctkfx/war/WEB-INF/lib/sl… (Reposted 2014-03-11)
Incompatible namespaceIDs
Scenario: a NameNode and a DataNode are started on the same machine, but the DataNode fails to come up. Error: inconsistent namespaceIDs. Cause: every `namenode format` creates a new namespaceID, while tmp/dfs/data still holds the ID from the previous format; formatting wipes the NameNode's data but does not clear the DataNode's, so the DataNode fails at startup… (Reposted 2013-09-27)
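A common workaround is either to clear the DataNode's storage directory (losing any blocks on it) or to copy the new namespaceID into the DataNode's VERSION file. A minimal sketch, assuming placeholder paths (`/tmp/hadoop/dfs/...` stands in for whatever `dfs.name.dir` / `dfs.data.dir` point to on your cluster):

```shell
# Option 1: wipe the DataNode's storage so it re-registers with the new ID
# (destroys any blocks stored there -- only safe on a fresh or test cluster).
rm -rf /tmp/hadoop/dfs/data/*

# Option 2: copy the NameNode's namespaceID into the DataNode's VERSION file.
grep namespaceID /tmp/hadoop/dfs/name/current/VERSION
# ...then edit the matching namespaceID line in:
#   /tmp/hadoop/dfs/data/current/VERSION
```

Option 2 preserves the DataNode's existing blocks; option 1 is simpler when the data is disposable.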
Jetty JVM NIO Bug
In the Hadoop logs you will often see output like the following: 2013-05-28 20:07:05,476 INFO org.mortbay.log: org.mortbay.io.nio.SelectorManager$SelectSet@3160e069 JVM BUG(s) - injecting delay 35 times 2013-05-28 20:07:05,476… (Reposted 2013-11-07)
Hadoop HDFS throws an error (java.io.IOException: config())
DEBUG [main] Configuration.&lt;init&gt;(211) | java.io.IOException: config() at org.apache.hadoop.conf.Configuration.&lt;init&gt;(Configuration.java:211) at com.netqin.hdfs.MyHdfs.isExists(MyHdfs.java:20) at com.netqin.hdfs.M… (Reposted 2013-07-18)
TaskTracker: Failed to get system directory...
Cluster hosts: 192.168.14.168 master, 192.168.14.169 slave1, 192.168.14.170 slave2. core-site.xml: fs.default.name hdfs://master:9000 true… (Reposted 2013-11-07)
hadoop namenode连接异常的一种解决方案
在ubuntu下安装完hadoop 1.03版本的伪分布式环境后,在顺利安装成功后,一旦重启电脑,总是会发生如下错误2012-06-22 15:35:49,359 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:500702012-06-22 15:35:49,365 INFO o转载 2013-10-09 09:04:39 · 3006 阅读 · 0 评论 -
Number of Under-Replicated Blocks issue
After running a MapReduce job, 7 under-replicated blocks appeared on the cluster. They show up on the web UI, and also when running `bin/hadoop fsck -blocks` on the master node. Deleting the offending files cleared them; the issue did not seem to cause any real harm. Reference: http://lucene.472066… (Reposted 2014-03-06)
Fixing "Unable to load native-hadoop library" and "Snappy native library not loaded"
These two lines keep showing up in the logs: 13/05/03 11:58:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 13/05/03 11:58:57 WARN snappy.Load… (Reposted 2013-09-16)
No FileSystem for scheme: hdfs
When hadoop-common-2.2.0.jar is pulled in for client-side development, e.g. reading and writing HDFS files, the first run fails with: java.io.IOException: No FileSystem for scheme: hdfs at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)… (Reposted 2014-06-09)
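A frequent cause is that assembled or shaded jars drop the META-INF/services FileSystem registrations. One widely used workaround is to name the HDFS implementation explicitly in the configuration; a sketch of the core-site.xml (or per-job Configuration) entry, using the standard `fs.hdfs.impl` key:

```xml
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
```

Alternatively, make sure hadoop-hdfs is actually on the classpath and that your jar-merging step concatenates the META-INF/services files rather than overwriting them.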
java.io.IOException: could only be replicated to 0 nodes, instead of 1.
The default hadoop.tmp.dir option usually points at /tmp, and on Linux the filesystem type of /tmp is often one Hadoop does not support. So if HDFS still will not start after repeated cycles of namenode format and restart, try pointing hadoop.tmp.dir somewhere else. (Reposted 2014-06-30)
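A sketch of the core-site.xml change; `/data/hadoop/tmp` is a placeholder, any persistent local path the hadoop user can write to will do:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadoop/tmp</value>
</property>
```

Remember that changing this path effectively starts HDFS from scratch unless you move the old name/data directories along with it.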
DataNode failures causing HDFS write failures
These days the Hangzhou cluster is in the transition phase of an upgrade: the workload is heavy and the cluster is small (4 DataNodes), so it keeps running into trouble, causing Flume data-collection errors and data loss. When data goes missing, data collection is the first suspect, so let's look at the Flume error log: Caused by: java.io.IOException: Failed to add a datanode. User may turn off… (Reposted 2014-03-25)
Aggregation is not enabled. Try the nodemanager at IP:HOST
Question: … Answer: … (Translated 2014-06-12)
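This YARN message appears when you try to view the logs of a finished application while log aggregation is switched off; the standard switch lives in yarn-site.xml:

```xml
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
```

With aggregation enabled, container logs are collected into HDFS after the application finishes and become viewable via `yarn logs -applicationId <id>`.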
DataXceiver error processing WRITE_BLOCK operation src: /x.x.x.x:50373 dest: /x.x.x.x:50010
Error: DataXceiver error processing WRITE_BLOCK operation src: /x.x.x.x:50373 dest: /x.x.x.x:50010. Solution: 1. Raise the maximum number of files a process may open, in /etc/security/limits.conf: # End of file * - nofile… (Reposted 2014-03-14)
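A sketch of the limits.conf change. The example appends to a temporary copy so it is safe to run anywhere; on a real DataNode you would append the same two lines to /etc/security/limits.conf and re-login (the user name `hdfs` and the value 65536 are illustrative):

```shell
LIMITS=$(mktemp)                      # stand-in for /etc/security/limits.conf
cat >> "$LIMITS" <<'EOF'
hdfs  soft  nofile  65536
hdfs  hard  nofile  65536
EOF
grep -c nofile "$LIMITS"              # prints 2: both limit lines are present
```

Raising the DataNode-side transfer-thread limit (`dfs.datanode.max.transfer.threads` in newer releases) is the other knob often mentioned for this error.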
org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService for Block pool
Error: 2015-08-13 14:23:14,169 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService for Block pool BP-1708135862-172.16.19.22-1397469359179 (storage id DS-1824083977-172.16… (Original 2015-08-13)
java.lang.RuntimeException: java.lang.ClassNotFoundException: xxxxMapper
The fix: I was recently experimenting with a Hadoop 1.0.2 cluster and hit this problem. After copying the built jar to Linux and running `hadoop jar hadoopTest.jar test.XXXCount input output`, the job printed the warning WARN mapred.JobClient: No job jar file set. User classes may… (Reposted 2013-09-06)
The processing instruction target matching "[xX][mM][lL]" is n...
The processing instruction target matching "[xX][mM][lL]" is not allowed. Exception: org.xml.sax.SAXParseException: The processing instruction target matching "[xX][mM][lL]" is not allowed. An explanation of this exception… (Reposted 2014-02-14)
Disk failure causes the DataNode to terminate abnormally
One DataNode in the cluster went down. The error: 2013-11-18 02:01:13,730 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.1.190:50010, storageID=DS-15565965… (Reposted 2014-03-06)
Premature EOF: no length prefix available
Error: 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no length prefix available. Solution: because the DataNode terminated the blo… (Translated 2014-03-14)
Fixing errors caused by a mismatch between the Hadoop native library and the system version
The cluster runs CentOS 5.8 with Cloudera hadoop-0.20.2-cdh3u3. With gzip and lzo compression enabled on the cluster, Hadoop's native library is needed when reading compressed files or compressing input; by default it lives in $HADOOP_HOME/lib/native/Linux-amd64-64 (64-bit operating system… (Reposted 2014-03-19)
Journal Storage Directory not formatted
If the exception says JournalNode not formatted, and all three nodes report that the JournalNode needs formatting, and this is a brand-new cluster, you can simply reformat the NameNode; you will find that the JournalNode directories were never formatted… If only one JournalNode is unformatted, first check whether its directory permissions are wrong, then from the other JournalNodes… (Reposted 2014-04-15)
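For the single-bad-JournalNode case, one recovery pattern is to stop that JournalNode, copy the edits directory over from a healthy JournalNode, fix ownership, and restart. A sketch with placeholder names (`healthy-jn` and `/data/journal/mycluster` stand in for a working JournalNode host and your `dfs.journalnode.edits.dir`; `hdfs:hadoop` is an assumed owner):

```shell
# On the broken JournalNode, after stopping the journalnode process:
scp -r healthy-jn:/data/journal/mycluster /data/journal/
chown -R hdfs:hadoop /data/journal/mycluster
# Restart the journalnode and watch its log for format/namespaceID errors.
```

Verify the copied directory's namespaceID matches the NameNode's before restarting.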
Incompatible namespaceID for journal Storage Directory ...
Question: 2014-04-15 15:00:02,235 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [172.16.19.21:8485… (Original 2014-04-15)
Recover Hadoop NameNode Failure
In production, you should run the NameNodes in HA mode with a quorum of JournalNodes, or shared HA-NFS storage for the edit-log transaction files. If you do not want or use HA, you need to… (Reposted 2014-04-15)
Fix "421 Maximum login limit has been reached" on hdfs-over-ftp
hdfs-over-ftp is open-source software based on the Apache FtpServer project; when I could not get fuse-dfs to build, it was a good alternative way to access HDFS. In practice, though, it frequently hits "Maximum login limit has been reached". Tracing the code shows the problem lies in FtpServer's ConnectionConfig… (Original 2013-11-05)
TooManyOpenFiles
http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo — Too Many Open Files. You can see this on Linux machines in client-side applications, server code or even in test runs. It is caused by p… (Reposted 2013-09-04)
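Before raising any limits, it helps to check what the current per-process file-descriptor limits actually are; a quick runnable check:

```shell
echo "soft=$(ulimit -Sn)"   # current soft limit on open file descriptors
echo "hard=$(ulimit -Hn)"   # hard ceiling the soft limit can be raised to
```

If the soft limit is the common default of 1024, a busy DataNode or HBase RegionServer can exhaust it quickly.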
Hadoop “Unable to load native-hadoop library for your platform” error
Question: I'm currently configuring Hadoop on a server running CentOS. When I run start-dfs.sh or stop-dfs.sh, I get the following error: WARN util.NativeCodeLoader: Unable to load native-hadoop… (Reposted 2014-01-21)
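The commonly cited fix for this warning is to point the JVM at the native-library directory in $HADOOP_HOME/etc/hadoop/hadoop-env.sh (this assumes $HADOOP_HOME is set and lib/native actually contains binaries built for your platform; on some builds the libraries must first be recompiled for 64-bit):

```shell
export HADOOP_COMMON_LIB_NATIVE_DIR="$HADOOP_HOME/lib/native"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
```

If the warning persists, the shipped native libraries likely do not match your platform and the builtin-java classes will be used, which is harmless but slower for compression codecs.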
fuse_trash.c:119: error: too few arguments to function 'hdfsDelete'
When compiling fuse_dfs with the build command under Hadoop 1.1.2… (Original 2014-04-08)
There was an error collecting ganglia data (127.0.0.1:8652): fsockopen error: Permission denied
Ganglia access fails with: There was an error collecting ganglia data (127.0.0.1:8652): fsockopen error: Permission denied. Fix: SELinux needs to be disabled: edit /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled; you also need to… (Reposted 2014-04-10)
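The same change can be scripted (requires root; `setenforce 0` drops enforcement immediately until the next reboot, while the sed edit makes it permanent — consider a permissive SELinux policy for gmetad instead of disabling it outright):

```shell
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```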
Hadoop FUSE mount problem (fuse-dfs didn't recognize /tmp/hdfs, -2)
When mounting HDFS with fuse-dfs, running ./fuse_dfs_wrapper.sh dfs://10.10.102.41:9000 /tmp/hdfs to mount HDFS under /tmp/hdfs always fails with port=9000,server=10.10.102.41 fus… (Reposted 2014-05-15)
Common errors when building Hadoop 2.2.0 from source
PS: the Hadoop 2.2.0 source has a bug: add the following dependency to /hadoop-common-project/hadoop-auth/pom.xml: org.mortbay.jetty jetty-util tes… (Original 2014-05-19)
Why does Hadoop report “Unhealthy Node local-dirs and log-dirs are bad”?
Question: In the health report, YARN gives the error: 1/1 local-dirs are bad: /tmp/hadoop-hduser/nm-local-dir; 1/1 log-dirs are bad: /usr/local/hadoop/logs/userlogs. I've redone the whole p… (Reposted 2017-05-25)
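The usual cause is the NodeManager's disk health checker marking a directory bad once its disk crosses the utilization threshold (90% by default). Freeing disk space is the real fix; as a stopgap the threshold can be raised in yarn-site.xml (98.5 below is just an example value):

```xml
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>98.5</value>
</property>
```

After changing it, restart the NodeManager and confirm the node returns to RUNNING in the ResourceManager UI.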