Running the data job on two consecutive nights produced this exception; the details are as follows:
2011-08-10 05:51:21,823 ERROR org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction/Split failed for region wb_content,00000000000001231426034_10630922139,1312541358653.19f954eef4d1fa134e71db00f3345f88.
java.io.IOException: Could not seek StoreFileScanner[HFileScanner for reader reader=hdfs://:9000/hbase/wb_content/19f954eef4d1fa134e71db00f3345f88/weiboType/4615137989595256997, compression=none, inMemory=false, firstKey=0000000000000si1231426034_si10630922139/weiboType:weiboType/1312533397173/Put, lastKey=0000000000000si1241801521_si11589283764/weiboType:weiboType/1312539248176/Put, avgKeyLen=68, avgValueLen=1, entries=1397940, length=108760902, cur=null]
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:104)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:106)
    at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:922)
    at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:733)
    at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:770)
    at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:715)
    at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:81)
Caused by: java.io.IOException: Could not obtain block: blk_-7641868513609446401_450401 file=/hbase/wb_content/19f954eef4d1fa134e71db00f3345f88/weiboType/4615137989595256997
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1800)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1948)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:105)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1094)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:1036)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1437)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:139)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:96)
    ... 6 more
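Before tuning anything, it is worth confirming whether the block named in the "Could not obtain block" error is really missing or corrupt on HDFS, or just temporarily unreachable under load. A minimal check, assuming the standard hadoop CLI is available on the cluster (the path below is simply the store file directory taken from the stack trace):

# Report the files, blocks and datanode locations under the affected column family directory;
# a truly lost block such as blk_-7641868513609446401_450401 would show up as missing/corrupt here.
hadoop fsck /hbase/wb_content/19f954eef4d1fa134e71db00f3345f88/weiboType -files -blocks -locations

If fsck reports the filesystem as healthy, the block still has live replicas and the error is more likely a load or contention issue than actual data loss.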
After looking through some references, the conclusion given was this:
The dfs.datanode.max.xcievers parameter in hdfs-site.xml limits how many send and receive tasks a datanode is allowed to run concurrently. The default is 256, and hadoop-defaults.xml normally does not set this parameter at all.
In practice this limit appears to be on the small side; under heavy load, DFSClient throws "could not read from stream" exceptions while putting data.
It is worth noting that the value the code compares against comes from Java's ThreadGroup. According to the JDK API documentation this count is not guaranteed to be accurate and should be used only as an estimate, and Core Java likewise advises against using ThreadGroup, so Hadoop's use of it here is somewhat questionable.
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>2047</value>
  <description>xcievers number</description>
</property>
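Note that the datanodes have to be restarted for the new dfs.datanode.max.xcievers value to take effect. As a rough sanity check (the exact log wording is from memory and may differ between Hadoop versions), grepping the datanode logs shows whether the xceiver limit is actually being hit:

# Assumed DataXceiverServer log wording; adjust the pattern and log path to your installation.
grep -i "exceeds the limit of concurrent xcievers" $HADOOP_HOME/logs/*datanode*.log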
On the first night, after changing this parameter, the exception was still thrown. Further analysis of the exception suggested the cause was the data block being locked: I have two background processes that read and write the same table at the same time. When a write triggers a split or compaction, a concurrent read that touches the affected block gets blocked. After switching to running only one program against each table, the problem has not reappeared so far. Recording this here for now; both the programs and the Hadoop setup still need to be tuned later.
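One more knob that may be worth trying on the client/region server side, though I have not verified it against this particular failure: to my understanding DFSClient retries fetching a block's locations a fixed number of times (dfs.client.max.block.acquire.failures, default 3) before giving up with "Could not obtain block", so raising it in hdfs-site.xml gives the reader a few more chances when a block is only briefly unavailable during a split or compaction:

<property>
  <!-- Assumed property name and default (3): number of DFSClient retries before "Could not obtain block". -->
  <name>dfs.client.max.block.acquire.failures</name>
  <value>10</value>
  <description>client block acquire retries</description>
</property>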