Suppose we have a file named data.txt with the following content:
RAVI kumar
Anish kumar
Rakesh jha
Vishal kumar
Ananya ghosh
Each line of the file above is one person's name. We now want to use Hive to get each person's first name and last name separately. The Hive table is created as follows:
CREATE TABLE `mytable`(
`fname` string,
`lname` string
)
Now load the data above into the table:
load data local inpath '/tmp/data.txt' into table mytable;
A plain SELECT against the table returns:
hive (iteblog)> select * from iteblog.mytable;
OK
RAVI kumar NULL
Anish kumar NULL
Rakesh jha NULL
Vishal kumar NULL
Ananya ghosh NULL
Time taken: 0.297 seconds, Fetched: 5 row(s)
This is not what we want: the table's default field delimiter (Ctrl-A) never appears in the file, so each whole line is parsed into the fname column and the lname column is left NULL. To split the names ourselves we write a small Python script:
#!/usr/bin/python
import sys
# Hive streams the selected rows to this script via standard input
for line in sys.stdin:
    line = line.strip()
    # split on the single space between first name and last name
    fname, lname = line.split(' ')
    l_name = lname.lower()
    # write the two output columns back to Hive, separated by a tab
    print('\t'.join([fname, l_name]))
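Before wiring the script into Hive, it is worth a quick local sanity check. A minimal sketch, assuming the script has been saved to /tmp/iteblog.py (the same path we add to Hive below) and data.txt is still under /tmp:
cat /tmp/data.txt | python /tmp/iteblog.py
Each input line should come back as two tab-separated fields (for example, RAVI and kumar on the first line); if the script raises an error here, it will also fail inside Hive.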
The script splits each input line on the space, assigns the two parts to fname and lname, lower-cases the last name, and prints the columns back out separated by a tab. Next we use this Python script as a UDF in Hive through the TRANSFORM clause, whose syntax is:
SELECT TRANSFORM(stuff)
USING 'script'
AS thing1, thing2
or
SELECT TRANSFORM(stuff)
USING 'script'
AS (thing1 INT, thing2 INT)
Everything returned by the Python script is of type String; if you need the columns converted to other types, use the second form and declare the types in the AS clause. So for our example we can run:
hive (iteblog)> add FILE /tmp/iteblog.py;
Added resources: [/tmp/iteblog.py]
hive (iteblog)> select TRANSFORM (fname) USING "python iteblog.py" as (fname,lname) from mytable;
Query ID = iteblog_20180124092916_b8bdcd2d-8828-4803-877c-8586fc36c83d
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1482488541254_14175245, Tracking URL = https://www.iteblog.com:9981/proxy/application_1482488541254_14175245/
Kill Command = /home/iteblog/hadoop/bin/hadoop job -kill job_1482488541254_14175245
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2018-01-24 09:29:44,256 Stage-1 map = 0%, reduce = 0%
2018-01-24 09:29:49,421 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.49 sec
MapReduce Total cumulative CPU time: 1 seconds 490 msec
Ended Job = job_1482488541254_14175245
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Cumulative CPU: 1.49 sec HDFS Read: 3691 HDFS Write: 60 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 490 msec
OK
RAVI kumar
Anish kumar
Rakesh jha
Vishal kumar
Ananya ghosh
Time taken: 33.842 seconds, Fetched: 5 row(s)
As the output above shows, we now get each person's first name and last name, which is exactly what we wanted.
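As a closing note, the typed form of TRANSFORM can make the output schema explicit. A minimal sketch against the same table (the declared types are STRING here because that is all our script emits; had the script produced, say, a numeric column, declaring it as INT would make Hive cast the script's string output to INT, with values that fail the cast becoming NULL):
ADD FILE /tmp/iteblog.py;
SELECT TRANSFORM (fname)
USING 'python iteblog.py'
AS (fname STRING, lname STRING)
FROM mytable;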