Custom functions in Hive come in two flavors: Transform and UDF. A UDF means writing Java code and packaging it into a jar to upload; if you would rather not write Java, you can use Transform instead: write a script, and let the script do the processing.
This post uses a Python script to process JSON data, doing the same job as the UDTF from the previous article.
1: Writing the Python script json-udtf.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Python 2 script: reads rows from stdin, parses the JSON payload,
# and writes tab-separated fields to stdout for Hive TRANSFORM.
import sys
import json
import time

for line in sys.stdin:
    line = line.strip()
    # the stored JSON uses single quotes, so normalize to double quotes first
    ss = json.loads(line.replace("'", '"'))['data'][0]
    product = ss["product"]
    userAgent1 = ss["userAgent"]
    clickElement = ss["clickElement"]
    userId = ss["userId"]
    # clickTime is a Unix timestamp in seconds; format it as a datetime string
    x = time.localtime(long(ss["clickTime"]))
    clickTime = time.strftime('%Y-%m-%d %H:%M:%S', x)
    # "product" may carry its version after a "|", e.g. "myapp|1.2"
    if "|" in product:
        bb = product.split("|")
        product = bb[0]
        pversion = bb[1]
    else:
        pversion = "null"
    # "userAgent" uses the same "name|version" layout; guard the split
    # so a row without "|" does not crash the whole job
    if "|" in userAgent1:
        aa = userAgent1.split("|")
        userAgent = aa[0]
        version = aa[1]
    else:
        userAgent = userAgent1
        version = "null"
    # replace missing values with the literal string "null"
    if product is None:
        product = "null"
    if pversion is None:
        pversion = "null"
    if userAgent is None:
        userAgent = "null"
    if version is None:
        version = "null"
    if clickElement is None:
        clickElement = "null"
    if userId is None:
        userId = "null"
    if clickTime is None:
        clickTime = "null"
    print '\t'.join([product, pversion, userAgent, version, clickElement, userId, clickTime])
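Before adding the script to Hive it is worth a quick local test, since a single bad row can crash the whole job (see the errors section below). A minimal check with one made-up sample row (the field values here are my own illustration, matching the keys the script reads):

echo "{'data': [{'product': 'myapp|1.2', 'userAgent': 'Chrome|55.0', 'clickElement': 'buy_btn', 'userId': '10086', 'clickTime': '1450000000'}]}" | python json-udtf.py

If the script is correct, this prints one tab-separated line with seven fields.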
2: Adding and using the script in Hive
Add the script:
hive> add FILE /usr/local/hive/auxlib/json-udtf.py;
Added resources: [/usr/local/hive/auxlib/json-udtf.py]
hive>
Use it:
hive> select TRANSFORM(name)
> USING 'python json-udtf.py'
> from test_wcf.test_udtf;
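Without an AS clause, Hive splits the script's output into just two columns (key and value) on the first tab. To expose all seven fields as separate columns, name them explicitly; the column names below are my own choice:

hive> select TRANSFORM(name)
    > USING 'python json-udtf.py'
    > AS (product, pversion, useragent, version, clickelement, userid, clicktime)
    > from test_wcf.test_udtf;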
3: Results
(Screenshot of the query output omitted.)
4: Errors encountered
The two most common error messages:
Error 1:
Error: java.lang.RuntimeException: Hive Runtime Error while closing operators
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:217)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20003]: An error occurred when trying to close the Operator running your custom script.
at org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:557)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:199)
... 8 more
Error 2:
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20001]: An error occurred while reading or writing to your custom script. It may have crashed with an error.
at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:453)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:497)
... 9 more
Note: I have not found a definitive explanation for either error, but in my experience the root cause is usually in the Python code itself:
1: Indentation errors in the Python code
2: Logic errors in the Python code, or single-quote vs. double-quote issues
These two conclusions are for reference only; if you have better insight, I welcome the discussion.
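Since Hive only reports that the script failed, it helps to make the script tolerant: wrap the per-row logic in try/except and write bad rows to stderr (which lands in the Hadoop task logs) instead of letting one malformed line kill the job. A minimal sketch of the pattern, not the full script:

#!/usr/bin/env python
# Sketch: log and skip malformed rows instead of crashing the Hive job.
import sys
import json

for line in sys.stdin:
    line = line.strip()
    try:
        ss = json.loads(line.replace("'", '"'))['data'][0]
        # ... extract and format fields exactly as in json-udtf.py ...
        print '\t'.join([ss["product"], ss["userId"]])
    except Exception, e:
        # stderr is captured in the task logs, so the offending row can be found later
        sys.stderr.write("bad row: %s (%s)\n" % (line, e))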
5: UDF vs. Transform
1: The jar a UDF needs can be placed under $HIVE_HOME/auxlib, where Hive loads it automatically at startup, and the UDF can be registered as a permanent function. Transform, by contrast, needs add FILE /usr/local/hive/auxlib/json-udtf.py; in every new session, which is tedious (see the side-by-side sketch after this list).
2: The UDF also runs faster than Transform, and in my experience it is more stable.
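For reference, the two workflows side by side; the UDF class and jar names below are placeholders, not from this post:

-- Transform: the script must be re-added in every new session
ADD FILE /usr/local/hive/auxlib/json-udtf.py;

-- UDF: register once as a permanent function (hypothetical class/jar names)
CREATE FUNCTION json_udtf AS 'com.example.hive.JsonUDTF'
USING JAR 'hdfs:///user/hive/lib/json-udtf.jar';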
Run-time comparison (screenshots of the Transform and UDF runs omitted). Both runs used the same server and the same data.
Finally: pointers from Transform experts are welcome!