1. Flume writes many small files to HDFS. If these small files are not cleaned up, Hive reports the following error when reading them:
Caused by: java.io.IOException: Cannot obtain block length for LocatedBlock{BP-2182603-172.16.1.54-1475087575490:blk_1073840800_100342; getBlockSize()=621818; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[172.16.1.54:50010,DS-bd377256-22fb-4705-9c17-95318249580a,DISK], DatanodeInfoWithStorage[172.16.1.55:50010,DS-32b7284f-1384-4764-b9cc-ba50324cb18e,DISK], DatanodeInfoWithStorage[172.16.1.56:50010,DS-e53c5c50-e47a-4c2e-b30e-ce99a2825424,DISK]]}
    at org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:427)
    at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:335)
    at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:271)
    at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:263)
    at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1565)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:309)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:305)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:305)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:778)
    at com.hadoop.mapred.DeprecatedLzoLineRecordReader.<init>(DeprecatedLzoLineRecordReader.java:57)
    at com.hadoop.mapred.DeprecatedLzoTextInputFormat.getRecordReader(DeprecatedLzoTextInputFormat.java:158)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:250)
2. These small files are produced when a write to HDFS is interrupted or a similar failure occurs: the file is left stuck in the open-for-write state and is never properly closed, so the length of its last block is never finalized and Hive cannot read it.
Workaround: delete these small files with the script below.
#!/bin/bash
# Delete Flume output files on HDFS that are stuck in the open-for-write state.
# Takes the current date as its only argument and cleans the previous day's partition.
input_date=$1
ydt=$(date -d "$input_date 1 day ago" "+%Y-%m-%d")
echo "${ydt}"

# hadoop fsck -openforwrite lists files whose write leases were never released.
# Filter out fsck's progress dots, keep only MISSING/OPENFORWRITE entries, extract
# the paths, and restrict them to yesterday's consumer_access.log partition.
for flume_file in $(hadoop fsck / -openforwrite | egrep -v '^\.+$' | egrep "MISSING|OPENFORWRITE" | grep -o "/[^ ]*" | sed -e "s/:$//" | sort | uniq | grep "/flume/consumer_access.log/$ydt/")
do
    echo "$flume_file"
    hadoop fs -rm -skipTrash "$flume_file"
done
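
Invoke the script with the current date as its only argument (the file name clean_flume_open_files.sh is illustrative):

sh clean_flume_open_files.sh 2016-10-09   # removes open files under /flume/consumer_access.log/2016-10-08/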
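
Deleting discards whatever data those files contained. If the data should be kept, a less destructive option on Hadoop 2.7+ is hdfs debug recoverLease, which forces the NameNode to release the lease and finalize the last block's length so Hive can read the file. A minimal sketch reusing the same path extraction (narrowing fsck to the Flume directory here is an illustrative choice, not part of the original script):

# Close each open file instead of deleting it (requires Hadoop 2.7+).
for flume_file in $(hadoop fsck /flume/consumer_access.log/$ydt/ -openforwrite | egrep "MISSING|OPENFORWRITE" | grep -o "/[^ ]*" | sed -e "s/:$//" | sort | uniq)
do
    hdfs debug recoverLease -path "$flume_file" -retries 3
done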