Thanks to Mr. Duan Haitao~
First, write a Java class that defines the function logic (a static initializer block simulates a lookup dictionary):
package club.drguo.hive;

import java.util.HashMap;

import org.apache.hadoop.hive.ql.exec.UDF;

// Fully qualified name: club.drguo.hive.PhoneNumToArea
public class PhoneNumToArea extends UDF {
	// Static block simulates a prefix-to-area dictionary
	private static HashMap<String, String> areaMap = new HashMap<>();
	static {
		areaMap.put("136", "Beijing");
		areaMap.put("137", "Nanjing");
		areaMap.put("138", "Tokyo");
	}

	// The evaluate method must be declared public!
	public String evaluate(String phoneNum) {
		String area = areaMap.get(phoneNum.substring(0, 3));
		return area == null ? (phoneNum + "---unknown") : (phoneNum + "---" + area);
	}
}
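Before deploying to Hive, the prefix-lookup logic can be sanity-checked locally. The sketch below mirrors evaluate in a plain class with no Hive dependency (the class name PhoneNumToAreaCheck is illustrative, not part of the UDF):

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sanity check of the same prefix-lookup logic (no Hive dependency).
public class PhoneNumToAreaCheck {
    private static final Map<String, String> AREA_MAP = new HashMap<>();
    static {
        AREA_MAP.put("136", "Beijing");
        AREA_MAP.put("137", "Nanjing");
        AREA_MAP.put("138", "Tokyo");
    }

    // Mirrors PhoneNumToArea.evaluate: known prefix -> "num---area", otherwise "num---unknown".
    static String evaluate(String phoneNum) {
        String area = AREA_MAP.get(phoneNum.substring(0, 3));
        return area == null ? (phoneNum + "---unknown") : (phoneNum + "---" + area);
    }

    public static void main(String[] args) {
        System.out.println(evaluate("13666666666")); // 13666666666---Beijing
        System.out.println(evaluate("13999999999")); // 13999999999---unknown
    }
}
```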
Export the class as a jar package.
Start Hive, switch to your database, and add the jar:
hive> add jar /home/guo/hiveArea.jar;
Added /home/guo/hiveArea.jar to class path
Added resource: /home/guo/hiveArea.jar
Create the function (the quoted string is the fully qualified class name, i.e. package + class):
hive> create temporary function getarea as 'club.drguo.hive.PhoneNumToArea';
OK
Time taken: 5.581 seconds
Create a table:
hive> create table flow(phoneNum string, upflow int, downflow int)
> row format delimited fields terminated by '\t';
OK
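The query output at the end of this post implies flow.data holds four tab-separated rows totaling 80 bytes, which matches the table stats printed during the load. A sketch that recreates such a file (written under /tmp for illustration; the transcript loads it from /home/guo/hivedata/flow.data):

```shell
# Recreate a flow.data matching the rows seen in the query output
# (path is an assumption; the transcript uses /home/guo/hivedata/flow.data).
mkdir -p /tmp/hivedata
printf '13666666666\t200\t300\n' >  /tmp/hivedata/flow.data
printf '13777777777\t180\t700\n' >> /tmp/hivedata/flow.data
printf '13888888888\t219\t923\n' >> /tmp/hivedata/flow.data
printf '13999999999\t213\t823\n' >> /tmp/hivedata/flow.data
wc -c /tmp/hivedata/flow.data   # 80 bytes, matching the table stats
```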
Load the data:
hive> load data local inpath '/home/guo/hivedata/flow.data' into table flow;
Copying data from file:/home/guo/hivedata/flow.data
Copying file: file:/home/guo/hivedata/flow.data
Loading data to table guo.flow
Table guo.flow stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 80, raw_data_size: 0]
OK
Run a query using the custom function:
hive> select getarea(phoneNum),upflow,downflow from flow;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1458987137545_0002, Tracking URL = http://drguo1:8088/proxy/application_1458987137545_0002/
Kill Command = /opt/Hadoop/hadoop-2.7.2/bin/hadoop job -kill job_1458987137545_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2016-03-26 21:50:40,260 Stage-1 map = 0%, reduce = 0%
2016-03-26 21:51:13,964 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.05 sec
2016-03-26 21:51:15,043 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.05 sec
2016-03-26 21:51:16,113 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.05 sec
2016-03-26 21:51:17,179 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.05 sec
MapReduce Total cumulative CPU time: 4 seconds 50 msec
Ended Job = job_1458987137545_0002
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 4.05 sec HDFS Read: 286 HDFS Write: 116 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 50 msec
OK
13666666666---Beijing	200	300
13777777777---Nanjing	180	700
13888888888---Tokyo	219	923
13999999999---unknown	213	823
Time taken: 113.373 seconds, Fetched: 4 row(s)