Some notes on filtering dirty data in Hive

The following files need to be processed. Each is around 13 GB once decompressed, with seven fields per line separated by spaces (ASCII 32); the most troublesome part is the dirty data mixed in among the rows:

-rw-r--r-- 1 hadoop ifengdev 1895843464 May  6 14:56 feedback201503_201.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1896885848 May  6 14:59 feedback201503_202.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1891790676 May  6 15:00 feedback201503_203.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1894197100 May  6 15:01 feedback201503_204.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1894074074 May  6 15:02 feedback201503_205.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1829224750 May  6 16:13 feedback201504_201.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1831709571 May  6 16:14 feedback201504_202.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1824710879 May  6 16:30 feedback201504_203.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1827164031 May  6 16:31 feedback201504_204.tar.gz
-rw-r--r-- 1 hadoop ifengdev 1827911208 May  6 16:31 feedback201504_205.tar.gz

Loading the data straight into Hive throws an error:

Loading data to table default.tmp_20150506
Failed with exception Wrong file format. Please check the file's format.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
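
For reference, the load in question was just a plain LOAD DATA statement, roughly like this (the path is illustrative):

LOAD DATA LOCAL INPATH 'feedback201503_201.tar.gz' INTO TABLE tmp_20150506;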

There is no getting around the malformed rows themselves, but this particular error is about the table's storage format rather than the data: the files are plain gzipped text, while the table had been declared with the RCFile input/output format. The advice online is that changing the storage format avoids the error. The original DDL:

CREATE  TABLE tmp_20150506(
  dt string,
  unknown1 string,
  unknown2 string,
  reurl string,
  uid string,
  num1 int,
  num2 int)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '32'
  LINES TERMINATED BY '10'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.RCFileInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.RCFileOutputFormat';

Changed to:

CREATE  TABLE tmp_20150506(
  dt string,
  unknown1 string,
  unknown2 string,
  reurl string,
  uid string,
  num1 int,
  num2 int)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '32'
  LINES TERMINATED BY '10'
STORED AS TEXTFILE;

Sure enough, the error is gone; depending on your exact needs this alone can count as a solution, though the dirty rows do still end up in the table.

 

The most direct method:

zcat feedback201503_201.tar.gz | gawk -F ' ' 'NF==7 {print $1 "\t" $2 "\t" $3 "\t" $4 "\t" $5 "\t" $6 "\t" $7}' >> feedback201503_204.log

What this does: it replaces the space separators with tabs and drops the dirty rows that do not have exactly seven fields.

Then just load the cleaned file into Hive.
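
One caveat, which is my own addition: the cleaned file is now tab-delimited, so the target table would need FIELDS TERMINATED BY '\t' (i.e. '9' in the numeric style used above) instead of '32'. Assuming such a table, the load is simply:

LOAD DATA LOCAL INPATH 'feedback201503_204.log' INTO TABLE tmp_20150506;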

The approach above is direct enough, but it feels like too much manual labor (maybe I am just lazy), so I prefer the method below:

The basic idea is to load each line into Hive as a single field and let Hive itself weed out the bad rows:

CREATE  TABLE tmp_20150506_raw(
  allfilds string
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '10'
  LINES TERMINATED BY '10'
STORED AS TEXTFILE;
The trick is simply that both FIELDS TERMINATED BY and LINES TERMINATED BY are set to the newline character ('10'), so each whole line lands in the single allfilds column; once the data is in, Hive itself can do the filtering.
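
For example, loading one raw file and counting the malformed lines looks roughly like this (the path is illustrative):

LOAD DATA LOCAL INPATH 'feedback201503_201.tar.gz' INTO TABLE tmp_20150506_raw;

select count(*) from tmp_20150506_raw where size(split(allfilds, ' ')) != 7;
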
Filter the rows and write them into another table; the follow-up processing for this example is as follows:
from
(
  from
  (
    select allfilds from tmp_20150506_raw where size(split(allfilds, ' ')) = 7
  ) a
  select split(allfilds, ' ')[0] as dt,
         split(allfilds, ' ')[1] as unknown1,
         split(allfilds, ' ')[2] as unknown2,
         split(allfilds, ' ')[3] as reurl,
         split(allfilds, ' ')[4] as uid,
         split(allfilds, ' ')[5] as num1,
         split(allfilds, ' ')[6] as num2
) b
insert overwrite table tmp_20150506 partition(month = '2015-04')
select *;
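
Note that for the partition(month = '2015-04') clause to compile, tmp_20150506 has to be a partitioned table, so the DDL shown earlier would need a PARTITIONED BY clause; a minimal sketch:

CREATE TABLE tmp_20150506(
  dt string,
  unknown1 string,
  unknown2 string,
  reurl string,
  uid string,
  num1 int,
  num2 int)
PARTITIONED BY (month string)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '32'
  LINES TERMINATED BY '10'
STORED AS TEXTFILE;

With that in place, the dirty rows never make it into tmp_20150506 at all.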




 
