Hive Custom UDF Functions
1. Create a new Maven project.
2. Define a class (the class name is up to you) under your own package in src/main/java.
3. Add the dependencies to pom.xml. The Hive/Hadoop Maven dependencies go inside the <dependencies> </dependencies> tags:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.3</version>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>1.2.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>1.2.1</version>
</dependency>
Use the Aliyun mirror repository for downloading artifacts:
<repositories>
    <repository>
        <id>nexus-aliyun</id>
        <name>Nexus aliyun</name>
        <url>http://maven.aliyun.com/nexus/content/groups/public</url>
    </repository>
</repositories>
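Assembled into a minimal pom.xml, the fragments above look roughly like this (the project's own groupId/artifactId/version are placeholders, not from the original note):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <!-- Placeholder coordinates for your own project -->
    <groupId>com.example</groupId>
    <artifactId>lower-hive-udf</artifactId>
    <version>1.0</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.7.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
            <version>1.2.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-jdbc</artifactId>
            <version>1.2.1</version>
        </dependency>
    </dependencies>

    <repositories>
        <repository>
            <id>nexus-aliyun</id>
            <name>Nexus aliyun</name>
            <url>http://maven.aliyun.com/nexus/content/groups/public</url>
        </repository>
    </repositories>
</project>
```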
4. Write the class: extend Hive's UDF class and implement an evaluate method:
import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// The class name is up to you; LowerUDF is just an example
public class LowerUDF extends UDF {
    // Return the lowercase form of the input; null for null or blank input
    public Text evaluate(Text str) {
        if (str == null || StringUtils.isBlank(str.toString())) {
            return null;
        }
        return new Text(str.toString().toLowerCase());
    }
}
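Before packaging, the evaluate logic can be sanity-checked locally. The sketch below mirrors the same null/blank/lowercase handling with plain Strings, so it runs without Hive or Hadoop jars on the classpath (the class and method names here are made up for this check):

```java
public class LowerLogicCheck {
    // Mirrors the UDF's evaluate() logic using plain Strings
    static String lower(String s) {
        if (s == null || s.trim().isEmpty()) {
            return null;
        }
        return s.toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(lower("SMITH")); // smith
        System.out.println(lower("   "));   // null
    }
}
```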
5. Export the jar (e.g. with mvn package).
6. Upload the jar to the Linux machine (the target directory is up to you, e.g. /opt/datas).
7. Add the jar to Hive: add jar /opt/datas/lower_hive.jar; (the argument is the jar's path on the Linux machine).
8. Create the function in the Hive client:
1) Permanent function:
2) Temporary function: hive > create temporary function function_name as 'package.ClassName';
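The two forms above can be sketched as follows, assuming the UDF class from the earlier step lives in a package such as com.example.udf (the package name, the function name my_lower, and the HDFS path are illustrative, not from the original note):

```sql
-- Temporary function: exists only for the current session
create temporary function my_lower as 'com.example.udf.LowerUDF';

-- Permanent function (supported since Hive 0.13): register the jar
-- from a shared location such as HDFS so the function survives sessions
create function my_lower as 'com.example.udf.LowerUDF'
    using jar 'hdfs:///path/to/lower_hive.jar';
```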
9. Use the function: select empno, ename, my_lower(ename) lower_name from emp; (emp is an existing table).
10. Result of using the function: