hive sql udf 简单翻译

<a name="Y2B8P"></a>

LanguageManual UDF


Hive 运算符和用户定义的函数 (UDF)

Case-insensitive<br />所有 Hive 关键字都不区分大小写,包括 Hive 运算符和函数的名称。<br />在 Beeline 或 CLI 中,使用以下命令显示最新文档:

SHOW FUNCTIONS; 
DESCRIBE FUNCTION <function_name>; 
DESCRIBE FUNCTION EXTENDED <function_name>; 

当 hive.cache.expr.evaluation 设置为 true(这是默认值)时,如果 UDF 嵌套在另一个 UDF 或 Hive 函数中,它可能会给出不正确的结果。此错误影响版本 0.12.0、0.13.0 和 0.13.1。版本 0.14.0 修复了该错误 (HIVE-7314)。 该问题与 UDF 的 getDisplayString 方法实现有关,如 Hive 用户邮件列表中所述。<a name="TlQB8"></a>

内置运算符

<a name="rNHqA"></a>

运算符优先级

| Example | Operators | Description |
| --- | --- | --- |
| A[B] , A.identifier | bracket_op([]), dot(.) | element selector, dot |
| -A | unary(+), unary(-), unary(~) | unary prefix operators |
| A IS [NOT] (NULL&#124;TRUE&#124;FALSE) | IS NULL, IS NOT NULL, ... | unary suffix |
| A ^ B | bitwise xor(^) | bitwise xor |
| A * B | star(*), divide(/), mod(%), div(DIV) | multiplicative operators |
| A + B | plus(+), minus(-) | additive operators |
| A &#124;&#124; B | string concatenate(&#124;&#124;) | string concatenate |
| A & B | bitwise and(&) | bitwise and |
| A &#124; B | bitwise or(&#124;) | bitwise or |

<a name="BDY9V"></a>

关系运算符

以下运算符比较传递的操作数并根据操作数之间的比较是否成立来生成 TRUE 或 FALSE 值。

| Operator | Operand types | Description |
| --- | --- | --- |
| A = B | All primitive types | TRUE if expression A is equal to expression B otherwise FALSE.<br />如果表达式 A 等于表达式 B,则为 TRUE,否则为 FALSE。 |
| A == B | All primitive types | Synonym for the = operator.<br />= 运算符的同义词。 |
| A <=> B | All primitive types | Returns same result with EQUAL(=) operator for non-null operands, but returns TRUE if both are NULL, FALSE if one of them is NULL. (As of version 0.9.0.)<br />对于非空操作数,与 EQUAL(=) 运算符返回相同的结果,但如果两者都为 NULL,则返回 TRUE,如果其中之一为 NULL,则返回 FALSE。(从 0.9.0 版开始。) |
| A <> B | All primitive types | NULL if A or B is NULL, TRUE if expression A is NOT equal to expression B, otherwise FALSE.<br />如果 A 或 B 为 NULL,则为 NULL,如果表达式 A 不等于表达式 B,则为 TRUE,否则为 FALSE。 |
| A != B | All primitive types | Synonym for the <> operator.<br /><> 运算符的同义词。 |
| A < B | All primitive types | NULL if A or B is NULL, TRUE if expression A is less than expression B, otherwise FALSE.<br />如果 A 或 B 为 NULL,则为 NULL,如果表达式 A 小于表达式 B,则为 TRUE,否则为 FALSE。 |
| A <= B | All primitive types | NULL if A or B is NULL, TRUE if expression A is less than or equal to expression B, otherwise FALSE.<br />如果 A 或 B 为 NULL,则为 NULL,如果表达式 A 小于或等于表达式 B,则为 TRUE,否则为 FALSE。 |
| A > B | All primitive types | NULL if A or B is NULL, TRUE if expression A is greater than expression B, otherwise FALSE. |
| A >= B | All primitive types | NULL if A or B is NULL, TRUE if expression A is greater than or equal to expression B, otherwise FALSE. |
| A [NOT] BETWEEN B AND C | All primitive types | NULL if A, B or C is NULL, TRUE if A is greater than or equal to B AND A less than or equal to C, otherwise FALSE. This can be inverted by using the NOT keyword. (As of version 0.9.0.) |
| A IS NULL | All types | TRUE if expression A evaluates to NULL, otherwise FALSE. |
| A IS NOT NULL | All types | FALSE if expression A evaluates to NULL, otherwise TRUE. |
| A IS [NOT] (TRUE&#124;FALSE) | Boolean types | Evaluates to TRUE only if A meets the condition. (Since 3.0.0.)<br />Note: NULL is UNKNOWN, and because of that (UNKNOWN IS TRUE) and (UNKNOWN IS FALSE) both evaluate to FALSE. |
| A [NOT] LIKE B | strings | NULL if A or B is NULL, TRUE if string A matches the SQL simple regular expression B, otherwise FALSE. The comparison is done character by character. The _ character in B matches any character in A (similar to . in posix regular expressions) while the % character in B matches an arbitrary number of characters in A (similar to .* in posix regular expressions). For example, 'foobar' like 'foo' evaluates to FALSE whereas 'foobar' like 'foo___' evaluates to TRUE and so does 'foobar' like 'foo%'.<br />如果 A 或 B 为 NULL,则为 NULL,如果字符串 A 与 SQL 简单正则表达式 B 匹配,则为 TRUE,否则为 FALSE。比较是逐个字符进行的。B 中的 _ 字符匹配 A 中的任何字符(类似于 posix 正则表达式中的 .),而 B 中的 % 字符匹配 A 中的任意数量的字符(类似于 posix 正则表达式中的 .*)。例如,'foobar' like 'foo' 评估为 FALSE,而 'foobar' like 'foo___' 评估为 TRUE,'foobar' like 'foo%' 也是如此。 |
| A RLIKE B | strings | NULL if A or B is NULL, TRUE if any (possibly empty) substring of A matches the Java regular expression B, otherwise FALSE. For example, 'foobar' RLIKE 'foo' evaluates to TRUE and so does 'foobar' RLIKE '^f.*r$'.<br />如果 A 或 B 为 NULL,则为 NULL,如果 A 的任何(可能为空)子字符串与 Java 正则表达式 B 匹配,则为 TRUE,否则为 FALSE。例如,'foobar' RLIKE 'foo' 评估为 TRUE,'foobar' RLIKE '^f.*r$' 也是如此。 |
| A REGEXP B | strings | Same as RLIKE. |
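
下面给出几个可以直接在 Beeline/CLI 中运行的小例子(不依赖任何表,注释中为预期结果,仅作演示),用于对比 =、<=> 与 LIKE/RLIKE 对 NULL 和模式匹配的处理:

SELECT NULL = 1;                  -- NULL:普通等号遇到 NULL 返回 NULL
SELECT NULL <=> NULL;             -- true:<=> 把两个 NULL 视为相等
SELECT 'foobar' LIKE 'foo%';      -- true:% 匹配任意数量的字符
SELECT 'foobar' RLIKE '^f.*r$';   -- true:RLIKE 使用 Java 正则表达式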

<a name="FkKp4"></a>

算术运算符

以下运算符支持对操作数的各种常见算术运算。它们都返回数值类型;如果任何操作数为 NULL,则结果也为 NULL。

| Operator | Operand types | Description |
| --- | --- | --- |
| A + B | All number types | Gives the result of adding A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands. For example since every integer is a float, therefore float is a containing type of integer so the + operator on a float and an int will result in a float. |
| A - B | All number types | Gives the result of subtracting B from A. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands. |
| A * B | All number types | Gives the result of multiplying A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands. Note that if the multiplication causes overflow, you will have to cast one of the operands to a type higher in the type hierarchy. |
| A / B | All number types | Gives the result of dividing A by B. The result is a double type in most cases. When A and B are both integers, the result is a double type except when the hive.compat configuration parameter is set to "0.13" or "latest" in which case the result is a decimal type. |
| A DIV B | Integer types | Gives the integer part resulting from dividing A by B. E.g. 17 div 3 results in 5. |
| A % B | All number types | Gives the remainder resulting from dividing A by B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands. |
| A & B | All number types | Gives the result of bitwise AND of A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands. |
| A &#124; B | All number types | Gives the result of bitwise OR of A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands. |
| A ^ B | All number types | Gives the result of bitwise XOR of A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands. |
| ~A | All number types | Gives the result of bitwise NOT of A. The type of the result is the same as the type of A. |
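
下面的示例对比 /、DIV、% 以及位运算符的结果(不依赖任何表,注释中为预期结果,仅作演示):

SELECT 17 / 3;                   -- 5.666666666666667:/ 通常返回 double
SELECT 17 DIV 3;                 -- 5:DIV 只保留整数部分
SELECT 17 % 3;                   -- 2:取余
SELECT 6 & 4, 6 | 4, 6 ^ 4, ~6;  -- 4、6、2、-7:按位与/或/异或/取反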

<a name="fUcNm"></a>

逻辑运算符

以下运算符为创建逻辑表达式提供支持。根据操作数的布尔值,它们都返回布尔值 TRUE、FALSE 或 NULL。 NULL 表现为“未知”标志,因此如果结果取决于未知的状态,则结果本身是未知的。

| Operator | Operand types | Description |
| --- | --- | --- |
| A AND B | boolean | TRUE if both A and B are TRUE, otherwise FALSE. NULL if A or B is NULL. |
| A OR B | boolean | TRUE if either A or B or both are TRUE, FALSE OR NULL is NULL, otherwise FALSE. |
| NOT A | boolean | TRUE if A is FALSE or NULL if A is NULL. Otherwise FALSE. |
| ! A | boolean | Same as NOT A. |
| A IN (val1, val2, ...) | boolean | TRUE if A is equal to any of the values. As of Hive 0.13 subqueries are supported in IN statements. |
| A NOT IN (val1, val2, ...) | boolean | TRUE if A is not equal to any of the values. As of Hive 0.13 subqueries are supported in NOT IN statements. |
| [NOT] EXISTS (subquery) |  | TRUE if the subquery returns at least one row. Supported as of Hive 0.13. |
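
下面的示例演示 NULL 作为"未知"值在逻辑表达式中的传播方式(注释中为预期结果,仅作演示):

SELECT true AND cast(NULL AS boolean);   -- NULL:结果取决于未知的一侧
SELECT true OR cast(NULL AS boolean);    -- true:OR 只要一侧为 TRUE 即为 TRUE
SELECT NOT cast(NULL AS boolean);        -- NULL
SELECT 1 IN (1, 2, 3);                   -- true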

<a name="bbBZo"></a>

字符串运算符

| Operator | Operand types | Description |
| --- | --- | --- |
| A &#124;&#124; B | strings | Concatenates the operands - shorthand for concat(A,B). Supported as of Hive 2.2.0.<br />连接操作数 - concat(A,B) 的简写。从 Hive 2.2.0 开始支持。 |

<a name="KTHSc"></a>

复杂类型构造函数

以下函数构造复杂类型的实例。

| Constructor Function | Operands | Description |
| --- | --- | --- |
| map | (key1, value1, key2, value2, ...) | Creates a map with the given key/value pairs.<br />使用给定的键/值对创建映射。 |
| struct | (val1, val2, val3, ...) | Creates a struct with the given field values. Struct field names will be col1, col2, ....<br />创建具有给定字段值的结构。结构字段名称将是 col1、col2、…… |
| named_struct | (name1, val1, name2, val2, ...) | Creates a struct with the given field names and values. (As of Hive 0.8.0.)<br />创建具有给定字段名称和值的结构。(从 Hive 0.8.0 开始。) |
| array | (val1, val2, ...) | Creates an array with the given elements.<br />使用给定元素创建一个数组。 |
| create_union | (tag, val1, val2, ...) | Creates a union type with the value that is being pointed to by the tag parameter.<br />使用 tag 参数指向的值创建联合类型。 |

<a name="Ckfu9"></a>

复杂类型的运算符

以下运算符提供访问复杂类型中元素的机制。

| Operator | Operand types | Description |
| --- | --- | --- |
| A[n] | A is an Array and n is an int | Returns the nth element in the array A. The first element has index 0. For example, if A is an array comprising of ['foo', 'bar'] then A[0] returns 'foo' and A[1] returns 'bar'.<br />返回数组 A 中的第 n 个元素。第一个元素的索引为 0。例如,如果 A 是由 ['foo', 'bar'] 组成的数组,则 A[0] 返回 'foo' 而 A[1] 返回 'bar'。 |
| M[key] | M is a Map<K, V> and key has type K | Returns the value corresponding to the key in the map. For example, if M is a map comprising of {'f' -> 'foo', 'b' -> 'bar', 'all' -> 'foobar'} then M['all'] returns 'foobar'.<br />返回映射中键对应的值。例如,如果 M 是由 {'f' -> 'foo', 'b' -> 'bar', 'all' -> 'foobar'} 组成的映射,则 M['all'] 返回 'foobar'。 |
| S.x | S is a struct | Returns the x field of S. For example for the struct foobar {int foo, int bar}, foobar.foo returns the integer stored in the foo field of the struct.<br />返回 S 的 x 字段。例如对于 struct foobar {int foo, int bar},foobar.foo 返回存储在 struct 的 foo 字段中的整数。 |
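
下面的示例先用上一节的构造函数创建复杂类型,再用 [n]、[key] 和 . 取出其中的元素(注释中为预期结果,仅作演示):

SELECT array('foo', 'bar')[0];                                         -- 'foo'
SELECT map('f', 'foo', 'b', 'bar')['b'];                               -- 'bar'
SELECT t.s.foo FROM (SELECT named_struct('foo', 1, 'bar', 2) AS s) t;  -- 1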

<a name="BAqcG"></a>

内置函数

<a name="DLM5d"></a>

数学函数

Hive 支持以下内置数学函数;当参数为 NULL 时,大多数返回 NULL:

| Return Type | Name (Signature) | Description |
| --- | --- | --- |
| DOUBLE | round(DOUBLE a) | Returns the rounded BIGINT value of a.<br />返回 a 的舍入 BIGINT 值。 |
| DOUBLE | round(DOUBLE a, INT d) | Returns a rounded to d decimal places. |
| DOUBLE | bround(DOUBLE a) | Returns the rounded BIGINT value of a using HALF_EVEN rounding mode (as of Hive 1.3.0, 2.0.0). Also known as Gaussian rounding or bankers' rounding. Example: bround(2.5) = 2, bround(3.5) = 4.<br />返回使用 HALF_EVEN 舍入模式的舍入 BIGINT 值(从 Hive 1.3.0、2.0.0 开始)。也称为高斯四舍五入或银行家四舍五入。示例:bround(2.5) = 2,bround(3.5) = 4。 |
| DOUBLE | bround(DOUBLE a, INT d) | Returns a rounded to d decimal places using HALF_EVEN rounding mode (as of Hive 1.3.0, 2.0.0). Example: bround(8.25, 1) = 8.2, bround(8.35, 1) = 8.4. |
| BIGINT | floor(DOUBLE a) | Returns the maximum BIGINT value that is equal to or less than a. |
| BIGINT | ceil(DOUBLE a), ceiling(DOUBLE a) | Returns the minimum BIGINT value that is equal to or greater than a. |
| DOUBLE | rand(), rand(INT seed) | Returns a random number (that changes from row to row) that is distributed uniformly from 0 to 1. Specifying the seed will make sure the generated random number sequence is deterministic. |
| DOUBLE | exp(DOUBLE a), exp(DECIMAL a) | Returns e^a where e is the base of the natural logarithm. Decimal version added in Hive 0.13.0. |
| DOUBLE | ln(DOUBLE a), ln(DECIMAL a) | Returns the natural logarithm of the argument a. Decimal version added in Hive 0.13.0. |
| DOUBLE | log10(DOUBLE a), log10(DECIMAL a) | Returns the base-10 logarithm of the argument a. Decimal version added in Hive 0.13.0. |
| DOUBLE | log2(DOUBLE a), log2(DECIMAL a) | Returns the base-2 logarithm of the argument a. Decimal version added in Hive 0.13.0. |
| DOUBLE | log(DOUBLE base, DOUBLE a)<br />log(DECIMAL base, DECIMAL a) | Returns the base-base logarithm of the argument a. Decimal versions added in Hive 0.13.0. |
| DOUBLE | pow(DOUBLE a, DOUBLE p), power(DOUBLE a, DOUBLE p) | Returns a^p. |
| DOUBLE | sqrt(DOUBLE a), sqrt(DECIMAL a) | Returns the square root of a. Decimal version added in Hive 0.13.0. |
| STRING | bin(BIGINT a) | Returns the number in binary format (see MySQL :: MySQL 8.0 Reference Manual :: 12.8 String Functions and Operators). |
| STRING | hex(BIGINT a) hex(STRING a) hex(BINARY a) | If the argument is an INT or binary, hex returns the number as a STRING in hexadecimal format. Otherwise if the number is a STRING, it converts each character into its hexadecimal representation and returns the resulting STRING. (See MySQL :: MySQL 8.0 Reference Manual :: 12.8 String Functions and Operators, BINARY version as of Hive 0.12.0.) |
| BINARY | unhex(STRING a) | Inverse of hex. Interprets each pair of characters as a hexadecimal number and converts to the byte representation of the number. (BINARY version as of Hive 0.12.0, used to return a string.) |
| STRING | conv(BIGINT num, INT from_base, INT to_base), conv(STRING num, INT from_base, INT to_base) | Converts a number from a given base to another (see MySQL :: MySQL 8.0 Reference Manual :: 12.6.2 Mathematical Functions). |
| DOUBLE | abs(DOUBLE a) | Returns the absolute value. |
| INT or DOUBLE | pmod(INT a, INT b), pmod(DOUBLE a, DOUBLE b) | Returns the positive value of a mod b. |
| DOUBLE | sin(DOUBLE a), sin(DECIMAL a) | Returns the sine of a (a is in radians). Decimal version added in Hive 0.13.0. |
| DOUBLE | asin(DOUBLE a), asin(DECIMAL a) | Returns the arc sin of a if -1<=a<=1 or NULL otherwise. Decimal version added in Hive 0.13.0. |
| DOUBLE | cos(DOUBLE a), cos(DECIMAL a) | Returns the cosine of a (a is in radians). Decimal version added in Hive 0.13.0. |
| DOUBLE | acos(DOUBLE a), acos(DECIMAL a) | Returns the arccosine of a if -1<=a<=1 or NULL otherwise. Decimal version added in Hive 0.13.0. |
| DOUBLE | tan(DOUBLE a), tan(DECIMAL a) | Returns the tangent of a (a is in radians). Decimal version added in Hive 0.13.0. |
| DOUBLE | atan(DOUBLE a), atan(DECIMAL a) | Returns the arctangent of a. Decimal version added in Hive 0.13.0. |
| DOUBLE | degrees(DOUBLE a), degrees(DECIMAL a) | Converts value of a from radians to degrees. Decimal version added in Hive 0.13.0. |
| DOUBLE | radians(DOUBLE a), radians(DECIMAL a) | Converts value of a from degrees to radians. Decimal version added in Hive 0.13.0. |
| INT or DOUBLE | positive(INT a), positive(DOUBLE a) | Returns a. |
| INT or DOUBLE | negative(INT a), negative(DOUBLE a) | Returns -a. |
| DOUBLE or INT | sign(DOUBLE a), sign(DECIMAL a) | Returns the sign of a as '1.0' (if a is positive) or '-1.0' (if a is negative), '0.0' otherwise. The decimal version returns INT instead of DOUBLE. Decimal version added in Hive 0.13.0. |
| DOUBLE | e() | Returns the value of e. |
| DOUBLE | pi() | Returns the value of pi. |
| BIGINT | factorial(INT a) | Returns the factorial of a (as of Hive 1.2.0). Valid a is [0..20]. |
| DOUBLE | cbrt(DOUBLE a) | Returns the cube root of a double value (as of Hive 1.2.0). |
| INT<br />BIGINT | shiftleft(TINYINT&#124;SMALLINT&#124;INT a, INT b)<br />shiftleft(BIGINT a, INT b) | Bitwise left shift (as of Hive 1.2.0). Shifts a b positions to the left.<br />Returns int for tinyint, smallint and int a. Returns bigint for bigint a. |
| INT<br />BIGINT | shiftright(TINYINT&#124;SMALLINT&#124;INT a, INT b)<br />shiftright(BIGINT a, INT b) | Bitwise right shift (as of Hive 1.2.0). Shifts a b positions to the right.<br />Returns int for tinyint, smallint and int a. Returns bigint for bigint a. |
| INT<br />BIGINT | shiftrightunsigned(TINYINT&#124;SMALLINT&#124;INT a, INT b),<br />shiftrightunsigned(BIGINT a, INT b) | Bitwise unsigned right shift (as of Hive 1.2.0). Shifts a b positions to the right.<br />Returns int for tinyint, smallint and int a. Returns bigint for bigint a. |
| T | greatest(T v1, T v2, ...) | Returns the greatest value of the list of values (as of Hive 1.1.0). Fixed to return NULL when one or more arguments are NULL, and strict type restriction relaxed, consistent with ">" operator (as of Hive 2.0.0). |
| T | least(T v1, T v2, ...) | Returns the least value of the list of values (as of Hive 1.1.0). Fixed to return NULL when one or more arguments are NULL, and strict type restriction relaxed, consistent with "<" operator (as of Hive 2.0.0). |
| INT | width_bucket(NUMERIC expr, NUMERIC min_value, NUMERIC max_value, INT num_buckets) | Returns an integer between 0 and num_buckets+1 by mapping expr into the ith equally sized bucket. Buckets are made by dividing [min_value, max_value] into equally sized regions. If expr < min_value, return 1, if expr > max_value return num_buckets+1. See WIDTH_BUCKET (as of Hive 3.0.0). |
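
下面是几个常用数学函数的示例(注释中为预期结果,仅作演示),可以看到 round 与 bround 在 .5 上的区别:

SELECT round(2.5), bround(2.5);   -- round(2.5)=3,bround(2.5)=2(银行家舍入)
SELECT floor(3.7), ceil(3.2);     -- 3 和 4
SELECT pmod(-7, 3);               -- 2:返回正的模
SELECT conv('ff', 16, 10);        -- '255':进制转换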

<a name="OVuD0"></a>

十进制数据类型的数学函数和运算符

Version<br />在 Hive 0.11.0 (HIVE-2693) 中引入了十进制数据类型。<br />所有常规算术运算符(例如 +、-、*、/)和相关的数学 UDF(Floor、Ceil、Round 等)都已更新以处理十进制类型。有关受支持的 UDF 的列表,请参阅 Hive 数据类型中的数学 UDF。<a name="ie4gQ"></a>

集合函数

Hive 支持以下内置集合函数:

| Return Type | Name(Signature) | Description |
| --- | --- | --- |
| int | size(Map<K.V>) | Returns the number of elements in the map type. |
| int | size(Array<T>) | Returns the number of elements in the array type. |
| array<K> | map_keys(Map<K.V>) | Returns an unordered array containing the keys of the input map. |
| array<V> | map_values(Map<K.V>) | Returns an unordered array containing the values of the input map. |
| boolean | array_contains(Array<T>, value) | Returns TRUE if the array contains value. |
| array<T> | sort_array(Array<T>) | Sorts the input array in ascending order according to the natural ordering of the array elements and returns it (as of version 0.9.0). |
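
下面的示例演示集合函数的典型用法(注释中为预期结果,仅作演示):

SELECT size(array(1, 2, 3));                      -- 3
SELECT array_contains(array('a', 'b', 'c'), 'b'); -- true
SELECT sort_array(array(3, 1, 2));                -- [1,2,3]
SELECT map_keys(map('a', 1, 'b', 2));             -- ["a","b"](顺序不保证)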

<a name="z8ZeV"></a>

类型转换函数

Hive 支持以下类型转换函数:

| Return Type | Name(Signature) | Description |
| --- | --- | --- |
| binary | binary(string&#124;binary) | Casts the parameter into a binary. |
| <type> | cast(expr as <type>) | Converts the results of the expression expr to <type>. For example, cast('1' as BIGINT) will convert the string '1' to its integral representation. A null is returned if the conversion does not succeed. cast(expr as boolean) returns true for a non-empty string. |
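
下面的示例演示 cast 成功与失败时的返回值(注释中为预期结果,仅作演示):

SELECT cast('1' AS BIGINT);     -- 1
SELECT cast('abc' AS INT);      -- NULL:转换失败返回 NULL
SELECT cast('abc' AS BOOLEAN);  -- true:非空字符串转 boolean 为 true
SELECT binary('hello');         -- 转为 binary 类型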

<a name="JuTZy"></a>

日期函数

Hive 支持以下内置日期函数:

| Return Type | Name(Signature) | Description |
| --- | --- | --- |
| string | from_unixtime(bigint unixtime[, string format]) | Converts the number of seconds from unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone (using config "hive.local.time.zone") in the format of "uuuu-MM-dd HH:mm:ss", for example "1970-01-01 00:00:00".<br />Prior to Hive 4.0.0 (HIVE-25458), it uses SimpleDateFormat (Java Platform SE 7) and hence the supported formats have changed. |
| bigint | unix_timestamp() | Gets current Unix timestamp in seconds. This function is not deterministic and its value is not fixed for the scope of a query execution, therefore prevents proper optimization of queries - this has been deprecated since 2.0 in favour of CURRENT_TIMESTAMP constant. |
| bigint | unix_timestamp(string date) | Converts time string in format uuuu-MM-dd HH:mm:ss to Unix timestamp (in seconds) via DateTimeFormatter (Java Platform SE 8), using the default timezone and the default locale (using config "hive.local.time.zone"); returns 0 if it fails: unix_timestamp('2009-03-20 11:30:01') = 1237573801.<br />Prior to Hive 4.0.0 (HIVE-25458), it uses SimpleDateFormat (Java Platform SE 7) and hence the string format referred was yyyy-MM-dd HH:mm:ss. |
| bigint | unix_timestamp(string date, string pattern) | Converts time string with given pattern (see DateTimeFormatter (Java Platform SE 8)) to Unix time stamp (in seconds); returns 0 if it fails: unix_timestamp('2009-03-20', 'uuuu-MM-dd') = 1237532400.<br />Prior to Hive 4.0.0 (HIVE-25458), it uses SimpleDateFormat (Java Platform SE 7) and hence the supported patterns have changed. |
| pre 2.1.0: string<br />2.1.0 on: date | to_date(string timestamp) | Returns the date part of a timestamp string (pre-Hive 2.1.0): to_date("1970-01-01 00:00:00") = "1970-01-01". As of Hive 2.1.0, returns a date object.<br />Prior to Hive 2.1.0 (HIVE-13248) the return type was a String because no Date type existed when the method was created. |
| int | year(string date) | Returns the year part of a date or a timestamp string: year("1970-01-01 00:00:00") = 1970, year("1970-01-01") = 1970. |
| int | quarter(date/timestamp/string) | Returns the quarter of the year for a date, timestamp, or string in the range 1 to 4 (as of Hive 1.3.0). Example: quarter('2015-04-08') = 2. |
| int | month(string date) | Returns the month part of a date or a timestamp string: month("1970-11-01 00:00:00") = 11, month("1970-11-01") = 11. |
| int | day(string date) dayofmonth(date) | Returns the day part of a date or a timestamp string: day("1970-11-01 00:00:00") = 1, day("1970-11-01") = 1. |
| int | hour(string date) | Returns the hour of the timestamp: hour('2009-07-30 12:58:59') = 12, hour('12:58:59') = 12. |
| int | minute(string date) | Returns the minute of the timestamp. |
| int | second(string date) | Returns the second of the timestamp. |
| int | weekofyear(string date) | Returns the week number of a timestamp string: weekofyear("1970-11-01 00:00:00") = 44, weekofyear("1970-11-01") = 44. |
| int | extract(field FROM source) | Retrieve fields such as days or hours from source (as of Hive 2.2.0). Source must be a date, timestamp, interval or a string that can be converted into either a date or timestamp. Supported fields include: day, dayofweek, hour, minute, month, quarter, second, week and year.<br />Examples:<br />1. select extract(month from "2016-10-20") results in 10.<br />2. select extract(hour from "2016-10-20 05:06:07") results in 5.<br />3. select extract(dayofweek from "2016-10-20 05:06:07") results in 5.<br />4. select extract(month from interval '1-3' year to month) results in 3.<br />5. select extract(minute from interval '3 12:20:30' day to second) results in 20. |
| int | datediff(string enddate, string startdate) | Returns the number of days from startdate to enddate: datediff('2009-03-01', '2009-02-27') = 2. |
| pre 2.1.0: string<br />2.1.0 on: date | date_add(date/timestamp/string startdate, tinyint/smallint/int days) | Adds a number of days to startdate: date_add('2008-12-31', 1) = '2009-01-01'.<br />Prior to Hive 2.1.0 (HIVE-13248) the return type was a String because no Date type existed when the method was created. |
| pre 2.1.0: string<br />2.1.0 on: date | date_sub(date/timestamp/string startdate, tinyint/smallint/int days) | Subtracts a number of days from startdate: date_sub('2008-12-31', 1) = '2008-12-30'.<br />Prior to Hive 2.1.0 (HIVE-13248) the return type was a String because no Date type existed when the method was created. |
| timestamp | from_utc_timestamp({any primitive type} ts, string timezone) | Converts a timestamp* in UTC to a given timezone (as of Hive 0.8.0).<br />* timestamp is a primitive type, including timestamp/date, tinyint/smallint/int/bigint, float/double and decimal.<br />Fractional values are considered as seconds. Integer values are considered as milliseconds. For example, from_utc_timestamp(2592000.0,'PST'), from_utc_timestamp(2592000000,'PST') and from_utc_timestamp(timestamp '1970-01-30 16:00:00','PST') all return the timestamp 1970-01-30 08:00:00. |
| timestamp | to_utc_timestamp({any primitive type} ts, string timezone) | Converts a timestamp* in a given timezone to UTC (as of Hive 0.8.0).<br />* timestamp is a primitive type, including timestamp/date, tinyint/smallint/int/bigint, float/double and decimal.<br />Fractional values are considered as seconds. Integer values are considered as milliseconds. For example, to_utc_timestamp(2592000.0,'PST'), to_utc_timestamp(2592000000,'PST') and to_utc_timestamp(timestamp '1970-01-30 16:00:00','PST') all return the timestamp 1970-01-31 00:00:00. |
| date | current_date | Returns the current date at the start of query evaluation (as of Hive 1.2.0). All calls of current_date within the same query return the same value. |
| timestamp | current_timestamp | Returns the current timestamp at the start of query evaluation (as of Hive 1.2.0). All calls of current_timestamp within the same query return the same value. |
| string | add_months(string start_date, int num_months, output_date_format) | Returns the date that is num_months after start_date (as of Hive 1.1.0). start_date is a string, date or timestamp. num_months is an integer. If start_date is the last day of the month or if the resulting month has fewer days than the day component of start_date, then the result is the last day of the resulting month. Otherwise, the result has the same day component as start_date. The default output format is 'yyyy-MM-dd'.<br />Before Hive 4.0.0, the time part of the date is ignored.<br />As of Hive 4.0.0, add_months supports an optional argument output_date_format, which accepts a String that represents a valid date format for the output. This allows to retain the time format in the output.<br />For example:<br />add_months('2009-08-31', 1) returns '2009-09-30'.<br />add_months('2017-12-31 14:15:16', 2, 'YYYY-MM-dd HH:mm:ss') returns '2018-02-28 14:15:16'. |
| string | last_day(string date) | Returns the last day of the month which the date belongs to (as of Hive 1.1.0). date is a string in the format 'yyyy-MM-dd HH:mm:ss' or 'yyyy-MM-dd'. The time part of date is ignored. |
| string | next_day(string start_date, string day_of_week) | Returns the first date which is later than start_date and named as day_of_week (as of Hive 1.2.0). start_date is a string/date/timestamp. day_of_week is 2 letters, 3 letters or full name of the day of the week (e.g. Mo, tue, FRIDAY). The time part of start_date is ignored. Example: next_day('2015-01-14', 'TU') = 2015-01-20. |
| string | trunc(string date, string format) | Returns date truncated to the unit specified by the format (as of Hive 1.2.0). Supported formats: MONTH/MON/MM, YEAR/YYYY/YY. Example: trunc('2015-03-17', 'MM') = 2015-03-01. |
| double | months_between(date1, date2) | Returns number of months between dates date1 and date2 (as of Hive 1.2.0). If date1 is later than date2, then the result is positive. If date1 is earlier than date2, then the result is negative. If date1 and date2 are either the same days of the month or both last days of months, then the result is always an integer. Otherwise the UDF calculates the fractional portion of the result based on a 31-day month and considers the difference in time components date1 and date2. date1 and date2 type can be date, timestamp or string in the format 'yyyy-MM-dd' or 'yyyy-MM-dd HH:mm:ss'. The result is rounded to 8 decimal places. Example: months_between('1997-02-28 10:30:00', '1996-10-30') = 3.94959677 |
| string | date_format(date/timestamp/string ts, string fmt) | Converts a date/timestamp/string to a value of string in the format specified by the date format fmt (as of Hive 1.2.0). Supported formats are Java DateTimeFormatter formats – DateTimeFormatter (Java Platform SE 8). The second argument fmt should be constant. Example: date_format('2015-04-08', 'y') = '2015'.<br />date_format can be used to implement other UDFs, e.g.:<br />- dayname(date) is date_format(date, 'EEEE')<br />- dayofyear(date) is date_format(date, 'D')<br />Prior to Hive 4.0.0 (HIVE-25458), it uses SimpleDateFormat (Java Platform SE 7) and hence the supported patterns have changed. |
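
下面是几个日期函数的组合示例(注释中为预期结果,仅作演示;from_unixtime 的结果受系统时区影响,格式字符串按旧版 SimpleDateFormat 风格书写):

SELECT from_unixtime(0, 'yyyy-MM-dd');                        -- UTC 时区下为 '1970-01-01'
SELECT datediff('2009-03-01', '2009-02-27');                  -- 2
SELECT date_add('2008-12-31', 1), date_sub('2008-12-31', 1);  -- '2009-01-01' 和 '2008-12-30'
SELECT months_between('1997-02-28 10:30:00', '1996-10-30');   -- 3.94959677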

<a name="IqakY"></a>

条件函数

| Return Type | Name(Signature) | Description |
| --- | --- | --- |
| T | if(boolean testCondition, T valueTrue, T valueFalseOrNull) | Returns valueTrue when testCondition is true, returns valueFalseOrNull otherwise. |
| boolean | isnull( a ) | Returns true if a is NULL and false otherwise. |
| boolean | isnotnull ( a ) | Returns true if a is not NULL and false otherwise. |
| T | nvl(T value, T default_value) | Returns default value if value is null else returns value (as of Hive 0.11). |
| T | COALESCE(T v1, T v2, ...) | Returns the first v that is not NULL, or NULL if all v's are NULL. |
| T | CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END | When a = b, returns c; when a = d, returns e; else returns f. |
| T | CASE WHEN a THEN b [WHEN c THEN d]* [ELSE e] END | When a = true, returns b; when c = true, returns d; else returns e. |
| T | nullif( a, b ) | Returns NULL if a=b; otherwise returns a (as of Hive 2.3.0).<br />Shorthand for: CASE WHEN a = b then NULL else a |
| void | assert_true(boolean condition) | Throws an exception if 'condition' is not true, otherwise returns null (as of Hive 0.8.0). For example, select assert_true (2<1). |
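
下面的示例演示条件函数的典型用法(注释中为预期结果,仅作演示):

SELECT if(1 < 2, 'yes', 'no');                  -- 'yes'
SELECT nvl(cast(NULL AS string), 'default');    -- 'default'
SELECT coalesce(NULL, NULL, 'first-non-null');  -- 'first-non-null'
SELECT CASE WHEN 1 = 2 THEN 'a' ELSE 'b' END;   -- 'b'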

<a name="HEkzS"></a>

字符串函数

Hive 支持以下内置字符串函数:

| Return Type | Name(Signature) | Description |
| --- | --- | --- |
| int | ascii(string str) | Returns the numeric value of the first character of str. |
| string | base64(binary bin) | Converts the argument from binary to a base 64 string (as of Hive 0.12.0). |
| int | character_length(string str) | Returns the number of UTF-8 characters contained in str (as of Hive 2.2.0). The function char_length is shorthand for this function. |
| string | chr(bigint&#124;double A) | Returns the ASCII character having the binary equivalent to A (as of Hive 1.3.0 and 2.1.0). If A is larger than 256 the result is equivalent to chr(A % 256). Example: select chr(88); returns "X". |
| string | concat(string&#124;binary A, string&#124;binary B...) | Returns the string or bytes resulting from concatenating the strings or bytes passed in as parameters in order. For example, concat('foo', 'bar') results in 'foobar'. Note that this function can take any number of input strings. |
| array<struct<string,double>> | context_ngrams(array<array<string>>, array<string>, int K, int pf) | Returns the top-k contextual N-grams from a set of tokenized sentences, given a string of "context". See StatisticsAndDataMining for more information. |
| string | concat_ws(string SEP, string A, string B...) | Like concat() above, but with custom separator SEP. |
| string | concat_ws(string SEP, array<string>) | Like concat_ws() above, but taking an array of strings. (As of Hive 0.9.0.) |
| string | decode(binary bin, string charset) | Decodes the first argument into a String using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16'). If either argument is null, the result will also be null. (As of Hive 0.12.0.) |
| string | elt(N int, str1 string, str2 string, str3 string, ...) | Returns the string at index number N. For example elt(2,'hello','world') returns 'world'. Returns NULL if N is less than 1 or greater than the number of arguments.<br />(See MySQL :: MySQL 5.7 Reference Manual :: 12.8 String Functions and Operators.) |
| binary | encode(string src, string charset) | Encodes the first argument into a BINARY using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16'). If either argument is null, the result will also be null. (As of Hive 0.12.0.) |
| int | field(val T, val1 T, val2 T, val3 T, ...) | Returns the index of val in the val1,val2,val3,... list or 0 if not found. For example field('world','say','hello','world') returns 3.<br />All primitive types are supported, arguments are compared using str.equals(x). If val is NULL, the return value is 0.<br />(See MySQL :: MySQL 5.7 Reference Manual :: 12.8 String Functions and Operators.) |
| int | find_in_set(string str, string strList) | Returns the first occurrence of str in strList where strList is a comma-delimited string. Returns null if either argument is null. Returns 0 if the first argument contains any commas. For example, find_in_set('ab', 'abc,b,ab,c,def') returns 3. |
| string | format_number(number x, int d) | Formats the number X to a format like '#,###,###.##', rounded to D decimal places, and returns the result as a string. If D is 0, the result has no decimal point or fractional part. (As of Hive 0.10.0; bug with float types fixed in Hive 0.14.0, decimal type support added in Hive 0.14.0.) |
| string | get_json_object(string json_string, string path) | Extracts json object from a json string based on json path specified, and returns json string of the extracted json object. It will return null if the input json string is invalid. NOTE: The json path can only have the characters [0-9a-z_], i.e., no upper-case or special characters. Also, the keys cannot start with numbers. This is due to restrictions on Hive column names. |
| boolean | in_file(string str, string filename) | Returns true if the string str appears as an entire line in filename. |
| int | instr(string str, string substr) | Returns the position of the first occurrence of substr in str. Returns null if either of the arguments are null and returns 0 if substr could not be found in str. Be aware that this is not zero based. The first character in str has index 1. |
| int | length(string A) | Returns the length of the string. |
| int | locate(string substr, string str[, int pos]) | Returns the position of the first occurrence of substr in str after position pos. |
| string | lower(string A) lcase(string A) | Returns the string resulting from converting all characters of A to lower case. For example, lower('fOoBaR') results in 'foobar'. |
| string | lpad(string str, int len, string pad) | Returns str, left-padded with pad to a length of len. If str is longer than len, the return value is shortened to len characters. In case of empty pad string, the return value is null. |
| string | ltrim(string A) | Returns the string resulting from trimming spaces from the beginning (left hand side) of A. For example, ltrim(' foobar ') results in 'foobar '. |
| array<struct<string,double>> | ngrams(array<array<string>>, int N, int K, int pf) | Returns the top-k N-grams from a set of tokenized sentences, such as those returned by the sentences() UDAF. See StatisticsAndDataMining for more information. |
| int | octet_length(string str) | Returns the number of octets required to hold the string str in UTF-8 encoding (since Hive 2.2.0). Note that octet_length(str) can be larger than character_length(str). |
| string | parse_url(string urlString, string partToExtract [, string keyToExtract]) | Returns the specified part from the URL. Valid values for partToExtract include HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, and USERINFO. For example, parse_url('http://facebook.com/path1/p.php?k1=v1&k2=v2#Ref1', 'HOST') returns 'facebook.com'. Also a value of a particular key in QUERY can be extracted by providing the key as the third argument, for example, parse_url('http://facebook.com/path1/p.php?k1=v1&k2=v2#Ref1', 'QUERY', 'k1') returns 'v1'. |
| string | printf(String format, Obj... args) | Returns the input formatted according to printf-style format strings (as of Hive 0.9.0). |
| string | quote(String text) | Returns the quoted string (includes escape character for any single quotes, as of Hive 4.0.0). Examples:<br />quote(NULL) returns NULL<br />quote(DONT) returns 'DONT'<br />quote(DON'T) returns 'DON\'T' |
| string | regexp_extract(string subject, string pattern, int index) | Returns the string extracted using the pattern. For example, regexp_extract('foothebar', 'foo(.*?)(bar)', 2) returns 'bar'. Note that some care is necessary in using predefined character classes: using '\s' as the second argument will match the letter s; '\\s' is necessary to match whitespace, etc. The 'index' parameter is the Java regex Matcher group() method index. See docs/api/java/util/regex/Matcher.html for more information on the 'index' or Java regex group() method. |
| string | regexp_replace(string INITIAL_STRING, string PATTERN, string REPLACEMENT) | Returns the string resulting from replacing all substrings in INITIAL_STRING that match the java regular expression syntax defined in PATTERN with instances of REPLACEMENT. For example, regexp_replace("foobar", "oo&#124;ar", "") returns 'fb'. Note that some care is necessary in using predefined character classes: using '\s' as the second argument will match the letter s; '\\s' is necessary to match whitespace, etc. |
| string | repeat(string str, int n) | Repeats str n times. |
| string | replace(string A, string OLD, string NEW) | Returns the string A with all non-overlapping occurrences of OLD replaced with NEW (as of Hive 1.3.0 and 2.1.0). Example: select replace("ababab", "abab", "Z"); returns "Zab". |
| string | reverse(string A) | Returns the reversed string. |
| string | rpad(string str, int len, string pad) | Returns str, right-padded with pad to a length of len. If str is longer than len, the return value is shortened to len characters. In case of empty pad string, the return value is null. |
| string | rtrim(string A) | Returns the string resulting from trimming spaces from the end (right hand side) of A. For example, rtrim(' foobar ') results in ' foobar'. |
| array<array<string>> | sentences(string str, string lang, string locale) | Tokenizes a string of natural language text into words and sentences, where each sentence is broken at the appropriate sentence boundary and returned as an array of words. The 'lang' and 'locale' are optional arguments. For example, sentences('Hello there! How are you?') returns ( ("Hello", "there"), ("How", "are", "you") ). |
| string | space(int n) | Returns a string of n spaces. |
| array | split(string str, string pat) | Splits str around pat (pat is a regular expression). |
| map<string,string> | str_to_map(text[, delimiter1, delimiter2]) | Splits text into key-value pairs using two delimiters. Delimiter1 separates text into K-V pairs, and Delimiter2 splits each K-V pair. Default delimiters are ',' for delimiter1 and ':' for delimiter2. |
| string | substr(string&#124;binary A, int start) substring(string&#124;binary A, int start) | Returns the substring or slice of the byte array of A starting from start position till the end of string A. For example, substr('foobar', 4) results in 'bar' (see MySQL :: MySQL 8.0 Reference Manual :: 12.8 String Functions and Operators). |
| string | substr(string&#124;binary A, int start, int len) substring(string&#124;binary A, int start, int len) | Returns the substring or slice of the byte array of A starting from start position with length len. For example, substr('foobar', 4, 1) results in 'b' (see MySQL :: MySQL 8.0 Reference Manual :: 12.8 String Functions and Operators). |
| string | substring_index(string A, string delim, int count) | Returns the substring from string A before count occurrences of the delimiter delim (as of Hive 1.3.0). If count is positive, everything to the left of the final delimiter (counting from the left) is returned. If count is negative, everything to the right of the final delimiter (counting from the right) is returned. Substring_index performs a case-sensitive match when searching for delim. Example: substring_index('www.apache.org', '.', 2) = 'www.apache'. |
| string | translate(string&#124;char&#124;varchar input, string&#124;char&#124;varchar from, string&#124;char&#124;varchar to) | Translates the input string by replacing the characters present in the from string with the corresponding characters in the to string. This is similar to the translate function in PostgreSQL. If any of the parameters to this UDF are NULL, the result is NULL as well. (Available as of Hive 0.10.0, for string types.)<br />Char/varchar support added as of Hive 0.14.0. |
| string | trim(string A) | Returns the string resulting from trimming spaces from both ends of A. For example, trim(' foobar ') results in 'foobar'. |
| binary | unbase64(string str) | Converts the argument from a base 64 string to BINARY. (As of Hive 0.12.0.) |
| string | upper(string A) ucase(string A) | Returns the string resulting from converting all characters of A to upper case. For example, upper('fOoBaR') results in 'FOOBAR'. |
| string | initcap(string A) | Returns string, with the first letter of each word in uppercase, all other letters in lowercase. Words are delimited by whitespace. (As of Hive 1.1.0.) |
| int | levenshtein(string A, string B) | Returns the Levenshtein distance between two strings (as of Hive 1.2.0). For example, levenshtein('kitten', 'sitting') results in 3. |
| string | soundex(string A) | Returns soundex code of the string (as of Hive 1.2.0). For example, soundex('Miller') results in M460. |
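
下面是几个常用字符串函数的示例(注释中为预期结果,仅作演示):

SELECT concat_ws('-', 'a', 'b', 'c');                    -- 'a-b-c'
SELECT split('a,b,c', ',');                              -- ["a","b","c"]
SELECT regexp_extract('foothebar', 'foo(.*?)(bar)', 2);  -- 'bar'
SELECT str_to_map('k1:v1,k2:v2');                        -- {"k1":"v1","k2":"v2"}
SELECT lpad('7', 3, '0');                                -- '007'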

<a name="ZMKnv"></a>

数据屏蔽功能

Hive 支持以下内置数据屏蔽函数:

| Return Type | Name(Signature) | Description |
| --- | --- | --- |
| string | mask(string str[, string upper[, string lower[, string number]]]) | Returns a masked version of str (as of Hive 2.1.0). By default, upper case letters are converted to "X", lower case letters are converted to "x" and numbers are converted to "n". For example mask("abcd-EFGH-8765-4321") results in xxxx-XXXX-nnnn-nnnn. You can override the characters used in the mask by supplying additional arguments: the second argument controls the mask character for upper case letters, the third argument for lower case letters and the fourth argument for numbers. For example, mask("abcd-EFGH-8765-4321", "U", "l", "#") results in llll-UUUU-####-####. |
| string | mask_first_n(string str[, int n]) | Returns a masked version of str with the first n values masked (as of Hive 2.1.0). Upper case letters are converted to "X", lower case letters are converted to "x" and numbers are converted to "n". For example, mask_first_n("1234-5678-8765-4321", 4) results in nnnn-5678-8765-4321. |
| string | mask_last_n(string str[, int n]) | Returns a masked version of str with the last n values masked (as of Hive 2.1.0). Upper case letters are converted to "X", lower case letters are converted to "x" and numbers are converted to "n". For example, mask_last_n("1234-5678-8765-4321", 4) results in 1234-5678-8765-nnnn. |
| string | mask_show_first_n(string str[, int n]) | Returns a masked version of str, showing the first n characters unmasked (as of Hive 2.1.0). Upper case letters are converted to "X", lower case letters are converted to "x" and numbers are converted to "n". For example, mask_show_first_n("1234-5678-8765-4321", 4) results in 1234-nnnn-nnnn-nnnn. |
| string | mask_show_last_n(string str[, int n]) | Returns a masked version of str, showing the last n characters unmasked (as of Hive 2.1.0). Upper case letters are converted to "X", lower case letters are converted to "x" and numbers are converted to "n". For example, mask_show_last_n("1234-5678-8765-4321", 4) results in nnnn-nnnn-nnnn-4321. |
| string | mask_hash(string&#124;char&#124;varchar str) | Returns a hashed value based on str (as of Hive 2.1.0). The hash is consistent and can be used to join masked values together across tables. This function returns null for non-string types. |
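
下面的示例演示脱敏函数的默认掩码与自定义掩码字符(注释中为预期结果,仅作演示):

SELECT mask('abcd-EFGH-8765-4321');                 -- 'xxxx-XXXX-nnnn-nnnn'
SELECT mask('abcd-EFGH-8765-4321', 'U', 'l', '#');  -- 'llll-UUUU-####-####'
SELECT mask_show_first_n('1234-5678-8765-4321', 4); -- '1234-nnnn-nnnn-nnnn'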

<a name="PV6vO"></a>

杂项函数 (Misc. Functions)

| Return Type | Name(Signature) | Description |
| --- | --- | --- |
| varies | java_method(class, method[, arg1[, arg2..]]) | Synonym for reflect. (As of Hive 0.9.0.) |
| varies | reflect(class, method[, arg1[, arg2..]]) | Calls a Java method by matching the argument signature, using reflection. (As of Hive 0.7.0.) See Reflect (Generic) UDF for examples. |
| int | hash(a1[, a2...]) | Returns a hash value of the arguments. (As of Hive 0.4.) |
| string | current_user() | Returns current user name from the configured authenticator manager (as of Hive 1.2.0). Could be the same as the user provided when connecting, but with some authentication managers (for example HadoopDefaultAuthenticator) it could be different. |
| string | logged_in_user() | Returns current user name from the session state (as of Hive 2.2.0). This is the username provided when connecting to Hive. |
| string | current_database() | Returns current database name (as of Hive 0.13.0). |
| string | md5(string/binary) | Calculates an MD5 128-bit checksum for the string or binary (as of Hive 1.3.0). The value is returned as a string of 32 hex digits, or NULL if the argument was NULL. Example: md5('ABC') = '902fbdd2b1df0c4f70b4a5d23525e932'. |
| string | sha1(string/binary)<br />sha(string/binary) | Calculates the SHA-1 digest for string or binary and returns the value as a hex string (as of Hive 1.3.0). Example: sha1('ABC') = '3c01bdbb26f358bab27f267924aa2c9a03fcfdb8'. |
| bigint | crc32(string/binary) | Computes a cyclic redundancy check value for string or binary argument and returns bigint value (as of Hive 1.3.0). Example: crc32('ABC') = 2743272264. |
| string | sha2(string/binary, int) | Calculates the SHA-2 family of hash functions (SHA-224, SHA-256, SHA-384, and SHA-512) (as of Hive 1.3.0). The first argument is the string or binary to be hashed. The second argument indicates the desired bit length of the result, which must have a value of 224, 256, 384, 512, or 0 (which is equivalent to 256). SHA-224 is supported starting from Java 8. If either argument is NULL or the hash length is not one of the permitted values, the return value is NULL. Example: sha2('ABC', 256) = 'b5d4045c3f466fa91fe2cc6abe79232a1a57cdf104f7a26e716e0a1e2789df78'. |
| binary | aes_encrypt(input string/binary, key string/binary) | Encrypt input using AES (as of Hive 1.3.0). Key lengths of 128, 192 or 256 bits can be used. 192 and 256 bits keys can be used if Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files are installed. If either argument is NULL or the key length is not one of the permitted values, the return value is NULL. Example: base64(aes_encrypt('ABC', '1234567890123456')) = 'y6Ss+zCYObpCbgfWfyNWTw=='. |
| binary | aes_decrypt(input binary, key string/binary) | Decrypt input using AES (as of Hive 1.3.0). Key lengths of 128, 192 or 256 bits can be used. 192 and 256 bits keys can be used if Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files are installed. If either argument is NULL or the key length is not one of the permitted values, the return value is NULL. Example: aes_decrypt(unbase64('y6Ss+zCYObpCbgfWfyNWTw=='), '1234567890123456') = 'ABC'. |
| string | version() | Returns the Hive version (as of Hive 2.1.0). The string contains 2 fields, the first being a build number and the second being a build hash. Example: "select version();" might return "2.1.0.2.5.0.0-1245 r027527b9c5ce1a3d7d0b6d2e6de2378fb0c39232". Actual results will depend on your build. |
| bigint | surrogate_key([write_id_bits, task_id_bits]) | Automatically generate numerical Ids for rows as you enter data into a table. Can only be used as default value for acid or insert-only tables. |
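
下面的示例演示几个杂项函数(注释中的哈希结果取自上表,仅作演示):

SELECT md5('ABC');          -- '902fbdd2b1df0c4f70b4a5d23525e932'
SELECT crc32('ABC');        -- 2743272264
SELECT current_database();  -- 当前数据库名,例如 'default'
SELECT version();           -- 具体结果取决于所用的 Hive 构建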

<a name="nEXY7"></a>

xpath

LanguageManual XPathUDF 中描述了以下函数:

  • xpath, xpath_short, xpath_int, xpath_long, xpath_float, xpath_double, xpath_number, xpath_string

get_json_object

支持 JSONPath 的有限版本:

  • $ : Root object

  • . : Child operator

  • [] : Subscript operator for array

    • * : Wildcard for []

值得注意的是,以下语法不受支持:

  • '' : Zero length string as key

  • .. : Recursive descent

  • @ : Current object/element

  • () : Script expression

  • ?() : Filter (script) expression.

  • [,] : Union operator

  • [start:end:step] : array slice operator

示例:src_json 表是单列(json)、单行表:

+----+
                               json
+----+
{"store":
  {"fruit":[{"weight":8,"type":"apple"},{"weight":9,"type":"pear"}],
   "bicycle":{"price":19.95,"color":"red"}
  },
 "email":"amy@only_for_json_udf_test.net",
 "owner":"amy"
}
+----+

可以使用以下查询提取 json 对象的字段:

hive> SELECT get_json_object(src_json.json, '$.store.fruit[0]') FROM src_json;
{"weight":8,"type":"apple"}

hive> SELECT get_json_object(src_json.json, '$.non_exist_key') FROM src_json;
NULL

<a name="M6KMQ"></a>

内置聚合函数 (UDAF)

Hive 支持以下内置聚合函数:

| Return Type | Name(Signature) | Description |
| --- | --- | --- |
| BIGINT | count(*), count(expr), count(DISTINCT expr[, expr...]) | count(*) - Returns the total number of retrieved rows, including rows containing NULL values.<br />count(expr) - Returns the number of rows for which the supplied expression is non-NULL.<br />count(DISTINCT expr[, expr]) - Returns the number of rows for which the supplied expression(s) are unique and non-NULL. Execution of this can be optimized with hive.optimize.distinct.rewrite. |
| DOUBLE | sum(col), sum(DISTINCT col) | Returns the sum of the elements in the group or the sum of the distinct values of the column in the group. |
| DOUBLE | avg(col), avg(DISTINCT col) | Returns the average of the elements in the group or the average of the distinct values of the column in the group. |
| DOUBLE | min(col) | Returns the minimum of the column in the group. |
| DOUBLE | max(col) | Returns the maximum value of the column in the group. |
| DOUBLE | variance(col), var_pop(col) | Returns the variance of a numeric column in the group. |
| DOUBLE | var_samp(col) | Returns the unbiased sample variance of a numeric column in the group. |
| DOUBLE | stddev_pop(col) | Returns the standard deviation of a numeric column in the group. |
| DOUBLE | stddev_samp(col) | Returns the unbiased sample standard deviation of a numeric column in the group. |
| DOUBLE | covar_pop(col1, col2) | Returns the population covariance of a pair of numeric columns in the group. |
| DOUBLE | covar_samp(col1, col2) | Returns the sample covariance of a pair of numeric columns in the group. |
| DOUBLE | corr(col1, col2) | Returns the Pearson coefficient of correlation of a pair of numeric columns in the group. |
| DOUBLE | percentile(BIGINT col, p) | Returns the exact pth percentile of a column in the group (does not work with floating point types). p must be between 0 and 1. NOTE: A true percentile can only be computed for integer values. Use PERCENTILE_APPROX if your input is non-integral. |
| array<double> | percentile(BIGINT col, array(p1 [, p2]...)) | Returns the exact percentiles p1, p2, ... of a column in the group (does not work with floating point types). pi must be between 0 and 1. NOTE: A true percentile can only be computed for integer values. Use PERCENTILE_APPROX if your input is non-integral. |
| DOUBLE | percentile_approx(DOUBLE col, p [, B]) | Returns an approximate pth percentile of a numeric column (including floating point types) in the group. The B parameter controls approximation accuracy at the cost of memory. Higher values yield better approximations, and the default is 10,000. When the number of distinct values in col is smaller than B, this gives an exact percentile value. |
| array<double> | percentile_approx(DOUBLE col, array(p1 [, p2]...) [, B]) | Same as above, but accepts and returns an array of percentile values instead of a single one. |
| double | regr_avgx(independent, dependent) | Equivalent to avg(dependent). As of Hive 2.2.0. |
| double | regr_avgy(independent, dependent) | Equivalent to avg(independent). As of Hive 2.2.0. |
| double | regr_count(independent, dependent) | Returns the number of non-null pairs used to fit the linear regression line. As of Hive 2.2.0. |
| double | regr_intercept(independent, dependent) | Returns the y-intercept of the linear regression line, i.e. the value of b in the equation dependent = a * independent + b. As of Hive 2.2.0. |
| double | regr_r2(independent, dependent) | Returns the coefficient of determination for the regression. As of Hive 2.2.0. |
| double | regr_slope(independent, dependent) | Returns the slope of the linear regression line, i.e. the value of a in the equation dependent = a * independent + b. As of Hive 2.2.0. |
| double | regr_sxx(independent, dependent) | Equivalent to regr_count(independent, dependent) * var_pop(dependent). As of Hive 2.2.0. |
| double | regr_sxy(independent, dependent) | Equivalent to regr_count(independent, dependent) * covar_pop(independent, dependent). As of Hive 2.2.0. |
| double | regr_syy(independent, dependent) | Equivalent to regr_count(independent, dependent) * var_pop(independent). As of Hive 2.2.0. |
| array<struct {'x','y'}> | histogram_numeric(col, b) | Computes a histogram of a numeric column in the group using b non-uniformly spaced bins. The output is an array of size b of double-valued (x,y) coordinates that represent the bin centers and heights. |
| array | collect_set(col) | Returns a set of objects with duplicate elements eliminated. |
| array | collect_list(col) | Returns a list of objects with duplicates. (As of Hive 0.13.0.) |
| INTEGER | ntile(INTEGER x) | Divides an ordered partition into x groups called buckets and assigns a bucket number to each row in the partition. This allows easy calculation of tertiles, quartiles, deciles, percentiles and other common summary statistics. (As of Hive 0.11.0.) |
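
下面给出一个把多个聚合函数组合使用的示例。这里假设存在一张示例表 orders,包含 user_id 和 amount 两列(该表为演示用的假设,并非文档原有内容):

SELECT user_id,
       count(*)                       AS cnt,               -- 每个用户的行数
       sum(amount)                    AS total,             -- 金额合计
       avg(amount)                    AS avg_amount,        -- 平均金额
       percentile_approx(amount, 0.5) AS median_approx,     -- 近似中位数
       collect_set(amount)            AS distinct_amounts   -- 去重后的金额集合
FROM orders
GROUP BY user_id;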

<a name="qfHbK"></a>

内置表生成函数 (UDTF)

普通的用户定义函数,例如 concat(),接受单个输入行并输出单个输出行。相反,表生成函数将单个输入行转换为多个输出行。

| Row-set columns types | Name(Signature) | Description |
| --- | --- | --- |
| T | explode(ARRAY<T> a) | Explodes an array to multiple rows. Returns a row-set with a single column (col), one row for each element from the array. |
| Tkey,Tvalue | explode(MAP<Tkey,Tvalue> m) | Explodes a map to multiple rows. Returns a row-set with two columns (key,value), one row for each key-value pair from the input map. (As of Hive 0.8.0.) |
| int,T | posexplode(ARRAY<T> a) | Explodes an array to multiple rows with additional positional column of int type (position of items in the original array, starting with 0). Returns a row-set with two columns (pos,val), one row for each element from the array. |
| T1,...,Tn | inline(ARRAY<STRUCT<f1:T1,...,fn:Tn>> a) | Explodes an array of structs to multiple rows. Returns a row-set with N columns (N = number of top level elements in the struct), one row per struct from the array. (As of Hive 0.10.) |
| T1,...,Tn/r | stack(int r, T1 V1, ..., Tn/r Vn) | Breaks up n values V1,...,Vn into r rows. Each row will have n/r columns. r must be constant. |
| string1,...,stringn | json_tuple(string jsonStr, string k1, ..., string kn) | Takes JSON string and a set of n keys, and returns a tuple of n values. This is a more efficient version of the get_json_object UDF because it can get multiple keys with just one call. |
| string1,...,stringn | parse_url_tuple(string urlStr, string p1, ..., string pn) | Takes URL string and a set of n URL parts, and returns a tuple of n values. This is similar to the parse_url() UDF but can extract multiple parts at once out of a URL. Valid part names are: HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, USERINFO, QUERY:<KEY>. |

<a name="Ir2Vo"></a>

使用示例

<a name="jQ5hl"></a>

explode (array)

select explode(array('A','B','C'));
select explode(array('A','B','C')) as col;
select tf.* from (select 0) t lateral view explode(array('A','B','C')) tf;
select tf.* from (select 0) t lateral view explode(array('A','B','C')) tf as col;

<a name="EEbMm"></a>

explode (map)

select explode(map('A',10,'B',20,'C',30));
select explode(map('A',10,'B',20,'C',30)) as (key,value);
select tf.* from (select 0) t lateral view explode(map('A',10,'B',20,'C',30)) tf;
select tf.* from (select 0) t lateral view explode(map('A',10,'B',20,'C',30)) tf as key,value;

<a name="VXz43"></a>

posexplode (array)

select posexplode(array('A','B','C'));
select posexplode(array('A','B','C')) as (pos,val);
select tf.* from (select 0) t lateral view posexplode(array('A','B','C')) tf;
select tf.* from (select 0) t lateral view posexplode(array('A','B','C')) tf as pos,val;

<a name="emwce"></a>

inline (array of structs)

select inline(array(struct('A',10,date '2015-01-01'),struct('B',20,date '2016-02-02')));
select inline(array(struct('A',10,date '2015-01-01'),struct('B',20,date '2016-02-02'))) as (col1,col2,col3);
select tf.* from (select 0) t lateral view inline(array(struct('A',10,date '2015-01-01'),struct('B',20,date '2016-02-02'))) tf;
select tf.* from (select 0) t lateral view inline(array(struct('A',10,date '2015-01-01'),struct('B',20,date '2016-02-02'))) tf as col1,col2,col3;

<a name="krt5d"></a>

stack (values)

select stack(2,'A',10,date '2015-01-01','B',20,date '2016-01-01');
select stack(2,'A',10,date '2015-01-01','B',20,date '2016-01-01') as (col0,col1,col2);
select tf.* from (select 0) t lateral view stack(2,'A',10,date '2015-01-01','B',20,date '2016-01-01') tf;
select tf.* from (select 0) t lateral view stack(2,'A',10,date '2015-01-01','B',20,date '2016-01-01') tf as col0,col1,col2;

使用语法“SELECT udtf(col) AS colAlias...”有一些限制:

  • SELECT 中不允许使用其他表达式

    • SELECT pageid, explode(adid_list) AS myCol... 不受支持

  • UDTF 不能嵌套

    • 不支持 SELECT explode(explode(adid_list)) AS myCol...

  • 不支持 GROUP BY / CLUSTER BY / DISTRIBUTE BY / SORT BY

    • SELECT explode(adid_list) AS myCol ... GROUP BY myCol 不受支持

请参阅 LanguageManual LateralView 了解没有这些限制的替代语法。 如果要创建自定义 UDTF,另请参阅编写 UDTF。<a name="OuCjU"></a>

explode

explode() 将数组(或 map)作为输入,并将数组(map)的元素作为单独的行输出。UDTF 可以在 SELECT 表达式列表中使用,也可以作为 LATERAL VIEW 的一部分。<br /> 作为在 SELECT 表达式列表中使用 explode() 的示例,请考虑一个名为 myTable 的表,该表具有单列 (myCol) 和两行:

Array<int> myCol
[100,200,300]
[400,500,600]

然后运行查询:

SELECT explode(myCol) AS myNewCol FROM myTable;

将产生:

(int) myNewCol
100
200
300
400
500
600

与 Maps 的用法类似:

SELECT explode(myMap) AS (myMapKey, myMapValue) FROM myMapTable;

<a name="u5VmL"></a>

posexplode

Version<br />Available as of Hive 0.13.0. See HIVE-4943.<br />posexplode() 类似于 explode ,但它不仅返回数组的元素,还返回元素及其在原始数组中的位置。<br />作为在 SELECT 表达式列表中使用 posexplode() 的示例,请考虑一个名为 myTable 的表,该表具有单列 (myCol) 和两行:

Array<int> myCol
[100,200,300]
[400,500,600]

Then running the query:

SELECT posexplode(myCol) AS pos, myNewCol FROM myTable;

will produce:

| (int) pos | (int) myNewCol |
| --- | --- |
| 1 | 100 |
| 2 | 200 |
| 3 | 300 |
| 1 | 400 |
| 2 | 500 |
| 3 | 600 |

<a name="NtqO5"></a>

json_tuple

Hive 0.7 中引入了新的 json_tuple() UDTF。它接受一组名称(键)和一个 JSON 字符串,并在一次函数调用中返回一组值。这比多次调用 GET_JSON_OBJECT 从单个 JSON 字符串中检索多个键要高效得多:只要同一个 JSON 字符串需要被解析多次,改为只解析一次就能让查询更高效,而这正是 JSON_TUPLE 的用途。由于 JSON_TUPLE 是一个 UDTF,您需要使用 LATERAL VIEW 语法来实现相同的目标。<br />For example,

select a.timestamp, get_json_object(a.appevents, '$.eventid'), get_json_object(a.appevents, '$.eventname') from log a;

should be changed to:

select a.timestamp, b.*
from log a lateral view json_tuple(a.appevent, 'eventid', 'eventname') b as f1, f2;

<a name="nBIP1"></a>

parse_url_tuple

parse_url_tuple() UDTF 类似于 parse_url(),但可以提取给定 URL 的多个部分,以元组的形式返回数据。可以通过向 partToExtract 参数附加冒号和键来提取 QUERY 中特定键的值,例如,parse_url_tuple('http://facebook.com/path1/p.php?k1=v1&k2=v2#Ref1' , 'QUERY:k1', 'QUERY:k2') 返回一个值为 'v1','v2' 的元组。这比多次调用 parse_url() 更有效。所有输入参数和输出列类型都是字符串。

SELECT b.*
FROM src LATERAL VIEW parse_url_tuple(fullurl, 'HOST', 'PATH', 'QUERY', 'QUERY:id') b as host, path, query, query_id LIMIT 1;

<a name="cBD6P"></a>

GROUPing and SORTing on f(column)

典型的 OLAP 模式是:您有一个时间戳列,并且希望按天或其他粒度较粗的日期窗口分组,而不是按秒分组。因此,您可能想要选择 concat(year(dt),month(dt)),然后按这个 concat() 的结果分组。但是,如果您尝试对已应用函数并取了别名的列进行 GROUP BY 或 SORT BY,如下所示:

select f(col) as fc, count(*) from table_name group by fc;

you will get an error:

FAILED: Error in semantic analysis: line 1:69 Invalid Table Alias or Column Reference fc

因为您无法对已应用函数的列别名进行 GROUP BY 或 SORT BY 。有两种解决方法。首先,您可以使用子查询重新构造此查询,这有点复杂:

select sq.fc,col1,col2,...,colN,count(*) from
  (select f(col) as fc,col1,col2,...,colN from table_name) sq
group by sq.fc,col1,col2,...,colN;

或者您可以确保不使用列别名,这更简单:

select f(col) as fc, count(*) from table_name group by f(col);

如果您想更详细地讨论这个问题,请联系 RiotGames dot com 的 Tim Ellis (tellis)。<a name="ZX72p"></a>

Utility Functions

| Function Name | Return Type | Description | To Run |
| --- | --- | --- | --- |
| version | String | Provides the Hive version Details (Package built version) | select version(); |
| buildversion | String | Extension of the Version function which includes the checksum | select buildversion(); |

<a name="Wpkry"></a>

UDF 内部结构

UDF 的评估方法的上下文是一次一行。一个简单的 UDF 调用,例如

SELECT length(string_col) FROM table_name;

将在作业的 map 阶段针对每个 string_col 值计算其长度。在 map 端执行 UDF 的一个副作用是您无法控制发送到 mapper 的行的顺序——这个顺序就是发送到 mapper 的文件分片被反序列化的顺序。任何 reduce 端操作(例如 SORT BY、ORDER BY、常规 JOIN 等)都将应用于 UDF 的输出,就好像它只是表的另一列一样。这没有问题,因为 UDF 的评估方法的上下文是一次一行。<br />如果您想控制哪些行被发送到同一个 UDF(以及可能以什么顺序),您会希望在 reduce 阶段对 UDF 求值。这可以通过使用 DISTRIBUTE BY、DISTRIBUTE BY + SORT BY、CLUSTER BY 来实现。一个示例查询是:

SELECT reducer_udf(my_col, distribute_col, sort_col) FROM
(SELECT my_col, distribute_col, sort_col FROM table_name DISTRIBUTE BY distribute_col SORT BY distribute_col, sort_col) t

但是,有人可能会争辩说,控制发送到同一 UDF 的一组行的要求的前提是在该 UDF 中进行聚合。在这种情况下,使用用户定义的聚合函数 (UDAF) 是更好的选择。您可以在此处阅读有关编写 UDAF 的更多信息。或者,您可以使用自定义 reduce 脚本使用 Hive 的转换功能来完成相同的操作。这两个选项都会在 reduce 端进行聚合。<a name="c3SmL"></a>

Creating Custom UDFs

有关如何创建自定义 UDF 的信息,请参阅 Hive 插件和创建函数。

2.原文链接

LanguageManual UDF - Apache Hive - Apache Software Foundation
