Definition of a UDF
- A UDF (User-Defined Function) is a Hive function defined by the user. Hive's built-in functions cannot cover every business requirement, so sometimes we need to define our own.
Types of user-defined functions
- UDF: one to one — one row in, one row out (row mapping). A row-level operation, e.g. upper, substr.
- UDAF (User-Defined Aggregate Function): many to one — many input rows produce one output value, e.g. sum, min.
- UDTF (User-Defined Table-Generating Function): one to many — one row in, many rows out, e.g. explode, typically used with LATERAL VIEW.
Of these three, this note only covers writing plain UDFs.
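The walkthrough below registers a function called HelloUDF that prepends "Hello:" to its argument. A minimal sketch of what that class might look like, assuming the classic (Hive 1.x) UDF API from hive-exec — the dependency declared in the pom.xml at the end of this note; the package and class names here are illustrative choices that match the later examples:

```java
package org.apache.hadoop.hive.ql.udf;

import org.apache.hadoop.hive.ql.exec.UDF;

// Minimal sketch of the HelloUDF used in the examples below,
// based on the classic UDF API (one row in, one value out).
public class HelloUDF extends UDF {

    // Hive calls evaluate() once per input row.
    public String evaluate(String input) {
        if (input == null) {
            return null; // mirror built-in behavior: NULL in, NULL out
        }
        return "Hello:" + input;
    }
}
```

Hive resolves the function through the fully qualified class name ('org.apache.hadoop.hive.ql.udf.HelloUDF' here), so the class_name used in CREATE FUNCTION must match the package declaration and class name exactly.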
Build the jar
- With Maven, run mvn clean package against the pom.xml shown at the end of this note.
Upload the jar
- Note: if the jar is placed under the $HIVE_HOME/lib/ directory, Hive picks it up automatically and the add jar step is not needed.
Add the jar to Hive
- Syntax: add jar /path/to/jar/jar-name.jar;
hive> add jar /home/hadoop/data/hive/hadoop_udf.jar;
Create the UDF function in Hive
Create a temporary function — valid only for the current session
CREATE TEMPORARY FUNCTION function_name AS class_name;
function_name: the name of the function
class_name: the fully qualified class name (package + class name) — i.e. whatever follows `package` on the first line of your UDF source, then a dot, then the class name
Example:
hive> CREATE TEMPORARY FUNCTION HelloUDF AS 'org.apache.hadoop.hive.ql.udf.HelloUDF';
OK
Time taken: 0.485 seconds
hive> show functions;   -- HelloUDF should now appear in the list
Test:
hive>select HelloUDF('17');
OK
Hello:17
Drop a temporary function
DROP TEMPORARY FUNCTION [IF EXISTS] function_name;
Test:
hive> DROP TEMPORARY FUNCTION IF EXISTS HelloUDF;
OK
Time taken: 0.003 seconds
hive> select HelloUDF('17');
FAILED: SemanticException [Error 10011]: Line 1:7 Invalid function 'HelloUDF'
Create a permanent function:
CREATE FUNCTION function_name AS class_name USING JAR path;
function_name: the name of the function
class_name: the fully qualified class name (package + class name)
path: the HDFS path of the jar
Upload the jar to HDFS:
[hadoop@hadoop ~]$ hadoop fs -mkdir /lib
[hadoop@hadoop ~]$ hadoop fs -put ~/lib/g6-hadoop-1.0.jar /lib/    # upload the local jar to /lib/ on HDFS
[hadoop@hadoop ~]$ hadoop fs -ls /lib
Create the permanently registered UDF:
CREATE FUNCTION HelloUDF AS 'org.apache.hadoop.hive.ql.udf.HelloUDF'
USING JAR 'hdfs://hadoop001:9000/lib/g6-hadoop-1.0.jar';
Test:
hive> select HelloUDF('17');
OK
Hello:17
pom.xml configuration
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.wsk.bigdata</groupId>
    <artifactId>g6-hadoop</artifactId>
    <version>1.0</version>
    <name>g6-hadoop</name>
    <properties>
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <hadoop.version>2.6.0-cdh5.7.0</hadoop.version>
        <hive.version>1.1.0-cdh5.7.0</hive.version>
    </properties>
    <!-- Add the CDH repository -->
    <repositories>
        <repository>
            <id>nexus-aliyun</id>
            <url>http://maven.aliyun.com/nexus/content/groups/public</url>
        </repository>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
        </repository>
    </repositories>
    <dependencies>
        <!-- Hadoop dependency -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
        <!-- Hive dependency (provides the UDF base class) -->
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
            <version>${hive.version}</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.4</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>