3. Hive Select
Syntax:
SELECT [ALL | DISTINCT] select_expr, select_expr, ...
FROM table_reference
[WHERE where_condition]
[GROUP BY col_list]
[ CLUSTER BY col_list
| [DISTRIBUTE BY col_list] [SORT BY col_list]
]
[LIMIT number]
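Putting the clauses together, a minimal sketch (assuming a pv_users table with userid, age, and gender columns, as in the examples later in this section):

```sql
-- Distinct genders of adult users, capped at 10 rows
SELECT DISTINCT gender
FROM pv_users
WHERE age >= 18
LIMIT 10;
```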
3.1 Group By
Basic syntax:
groupByClause:
GROUP BY groupByExpression (, groupByExpression)*
groupByExpression: expression
groupByQuery:
SELECT expression (, expression)* FROM src groupByClause?
Advanced features:
The output of an aggregation can be written to multiple tables, or even to files on Hadoop's DFS (which can then be manipulated with HDFS utilities). For example, we can count distinct users by gender and, in the same query, count distinct users by age, writing the latter to an HDFS directory:
FROM pv_users
INSERT OVERWRITE TABLE pv_gender_sum
SELECT pv_users.gender, count(DISTINCT pv_users.userid)
GROUP BY pv_users.gender
INSERT OVERWRITE DIRECTORY '/user/facebook/tmp/pv_age_sum'
SELECT pv_users.age, count(DISTINCT pv_users.userid)
GROUP BY pv_users.age;
hive.map.aggr controls how the aggregation is performed. It defaults to true, in which case Hive performs the first-level (partial) aggregation directly in the map tasks. This usually gives better efficiency, but may require more memory to run successfully.
set hive.map.aggr=true;
SELECT COUNT(*) FROM table2;
P.S.: this may improve efficiency in specific scenarios. In my own test, however, it was much slower than simply setting it to false.
3.2 Order / Sort By
ORDER BY syntax:
colOrder: ( ASC | DESC )
orderBy: ORDER BY colName colOrder? (',' colName colOrder?)*
query: SELECT expression (',' expression)* FROM src orderBy
SORT BY syntax:
The sort order depends on the column type: numeric columns sort in numeric order, string columns in lexicographic order. Note that ORDER BY enforces a total order on the result (through a single reducer), while SORT BY only sorts rows within each reducer.
colOrder: ( ASC | DESC )
sortBy: SORT BY colName colOrder? (',' colName colOrder?)*
query: SELECT expression (',' expression)* FROM src sortBy
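Because SORT BY only orders rows within each reducer, it is commonly paired with DISTRIBUTE BY so that related rows land on the same reducer. A sketch, assuming the same hypothetical pv_users table:

```sql
-- Route all rows with the same userid to one reducer, then sort
-- within each reducer by userid and descending age.
SELECT userid, age
FROM pv_users
DISTRIBUTE BY userid
SORT BY userid, age DESC;
```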
4. Hive Join
Syntax:
join_table:
table_reference JOIN table_factor [join_condition]
| table_reference {LEFT|RIGHT|FULL} [OUTER] JOIN table_reference join_condition
| table_reference LEFT SEMI JOIN table_reference join_condition
table_reference:
table_factor
| join_table
table_factor:
tbl_name [alias]
| table_subquery alias
| ( table_references )
join_condition:
ON equality_expression ( AND equality_expression )*
equality_expression:
expression = expression
Hive supports only equality joins, outer joins, and left/right joins. It does not support non-equality joins, because they are very difficult to translate into map/reduce jobs. Hive does, however, support joining more than two tables.
When writing join queries, a few key points deserve attention:
1. Only equality joins are supported.
eg:
SELECT a.* FROM a JOIN b ON (a.id = b.id)
SELECT a.* FROM a JOIN b ON (a.id = b.id AND a.department = b.department)
are valid, whereas:
SELECT a.* FROM a JOIN b ON (a.id b.id)
is invalid.
2. More than two tables can be joined.
eg:
SELECT a.val, b.val, c.val FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key2)
If every table in the join uses the same join key, the join is compiled into a single map/reduce job.
eg:
SELECT a.val, b.val, c.val FROM a JOIN b
ON (a.key = b.key1) JOIN c
ON (c.key = b.key1)
is compiled into a single map/reduce job, because the join uses only b.key1 as the join key.
SELECT a.val, b.val, c.val FROM a JOIN b ON (a.key = b.key1)
JOIN c ON (c.key = b.key2)
whereas this join is compiled into two map/reduce jobs, because b.key1 is used in the first join condition and b.key2 in the second.
3. How each map/reduce job behaves during a join:
The reducer buffers the records of every table in the join sequence except the last one, and streams the records of the last table through while writing the join result to the file system. This implementation helps reduce memory usage on the reduce side. In practice, put the largest table last; otherwise buffering the others wastes a large amount of memory. For example:
SELECT a.val, b.val, c.val FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key1)
all tables use the same join key (computed in one map/reduce job). The reduce side buffers the records of a and b, then computes the join result once for each record of c it reads. Similarly:
SELECT a.val, b.val, c.val FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key2)
uses two map/reduce jobs: the first buffers a's records and streams b's through; the second buffers the result of the first job and streams c's records through.
4. The LEFT, RIGHT, and FULL OUTER keywords control how unmatched (NULL-producing) records are handled in a join.
eg:
SELECT a.val, b.val FROM a LEFT OUTER JOIN b ON (a.key=b.key)
outputs one row for every record in a. When a.key = b.key, the output is a.val, b.val; when no matching b.key is found for an a.key, the output is a.val, NULL. Write the clause "FROM a LEFT OUTER JOIN b" on one line to see how it works: a is to the left of b, so every record of a is kept;
"a RIGHT OUTER JOIN b" keeps every record of b. OUTER JOIN semantics follow the standard SQL spec.
Joins happen before the WHERE clause, so if you want to restrict a join's output, the filter belongs in the WHERE clause or in the join clause itself. A point of frequent confusion involves partitioned tables:
SELECT a.val, b.val FROM a LEFT OUTER JOIN b ON (a.key=b.key)
WHERE a.ds='2009-07-07' AND b.ds='2009-07-07'
joins a to b (an OUTER JOIN), listing a.val and b.val. The WHERE clause may reference other columns as filters. But, as described above, when no record of b matches a record of a, every column of b comes out NULL, including ds. Those rows then fail the predicate b.ds='2009-07-07' and are filtered out, so the WHERE clause throws away exactly the rows the LEFT OUTER join was meant to keep, making the OUTER semantics irrelevant. The solution is to move the condition into the OUTER JOIN clause:
SELECT a.val, b.val FROM a LEFT OUTER JOIN b
ON (a.key=b.key AND
b.ds='2009-07-07' AND
a.ds='2009-07-07')
The result of this query is pre-filtered during the join phase, so the problem above does not arise. The same logic applies to RIGHT and FULL joins.
SELECT a.val1, a.val2, b.val, c.val
FROM a
JOIN b ON (a.key = b.key)
LEFT OUTER JOIN c ON (a.key = c.key)
first joins a to b, discarding every record whose join key has no match, and then joins this intermediate result with c. A subtle consequence: when a key exists in both a and c but not in b, the whole record (including a.val1, a.val2, and a.key) is dropped in the first join (a JOIN b); then, in the join with c, a matching c.key yields the row NULL, NULL, NULL, c.val.
5. LEFT SEMI JOIN is a more efficient implementation of the IN/EXISTS subquery.
Hive does not currently implement IN/EXISTS subqueries, so you can rewrite such subqueries with LEFT SEMI JOIN. Its restriction is that the right-hand table may only be referenced in the ON clause; it cannot be filtered in the WHERE clause, selected in the SELECT clause, or referenced anywhere else.
SELECT a.key, a.value
FROM a
WHERE a.key in
(SELECT b.key
FROM b);
can be rewritten as:
SELECT a.key, a.val
FROM a LEFT SEMI JOIN b on (a.key = b.key)
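To illustrate the restriction above, any extra filter on the right-hand table has to live in the ON clause (b.value is an assumed column here):

```sql
-- Valid: b appears only in the ON clause
SELECT a.key, a.val
FROM a LEFT SEMI JOIN b ON (a.key = b.key AND b.value > 100);

-- Invalid: b may not be referenced in SELECT or WHERE
-- SELECT a.key, b.value FROM a LEFT SEMI JOIN b ON (a.key = b.key);
-- SELECT a.key FROM a LEFT SEMI JOIN b ON (a.key = b.key) WHERE b.value > 100;
```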
5. Hive Parameter Settings
When developing Hive applications, you inevitably need to set Hive parameters, whether to tune the execution efficiency of HQL code or to help diagnose problems. A question that often comes up in practice is: why did the parameter I set have no effect? This is usually caused by setting it the wrong way.
For ordinary parameters, there are three ways to set them:
- Configuration files
- Command-line parameters
- Parameter declarations
Configuration files: Hive's configuration files include
- User-defined configuration file: $HIVE_CONF_DIR/hive-site.xml
- Default configuration file: $HIVE_CONF_DIR/hive-default.xml
User-defined settings override the defaults. Hive also reads Hadoop's configuration, since Hive starts as a Hadoop client. Hadoop's configuration files include
- $HADOOP_CONF_DIR/hadoop-site.xml
- $HADOOP_CONF_DIR/hadoop-default.xml
Hive's configuration overrides Hadoop's.
Settings made in configuration files apply to every Hive process started on the machine.
Command-line parameters: when starting Hive (in client or server mode), you can set parameters on the command line with -hiveconf param=value,
for example: bin/hive -hiveconf hive.root.logger=INFO,console
This setting is effective for the session being started (for server mode, for the sessions of all requests).
Parameter declarations: parameters can be set in HQL with the SET keyword, for example:
set mapred.reduce.tasks=100;
The scope of this setting is likewise session-level.
These three methods are listed in increasing order of priority: parameter declarations override command-line parameters, and command-line parameters override configuration files. Note that certain system-level parameters, such as the log4j settings, must be set by the first two methods, because they are read before the session is established.
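As a sketch of the precedence (the parameter name is chosen only for illustration), suppose Hive was started with bin/hive -hiveconf mapred.reduce.tasks=50, overriding whatever hive-site.xml says:

```sql
SET mapred.reduce.tasks;      -- shows 50, the command-line value
SET mapred.reduce.tasks=100;  -- session-level declaration, highest priority
SET mapred.reduce.tasks;      -- now shows 100
```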
Additionally, SerDe parameters must be written in the DDL (CREATE TABLE) statement. For example:
create table if not exists t_dummy(
dummy string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'field.delim'='\t',
'escape.delim'='\\',
'serialization.null.format'=' '
) STORED AS TEXTFILE;
Parameters like serialization.null.format must be associated with a specific table or partition; declaring them outside the DDL has no effect.
6. Hive UDF
6.1 Basic Functions
SHOW FUNCTIONS;
DESCRIBE FUNCTION <function_name>;
1 Relational Operators
Operator | Operand types | Description |
A = B | All primitive types | TRUE if expression A is equal to expression B otherwise FALSE |
A == B | None! | Fails because of invalid syntax. SQL uses =, not == |
A <> B | All primitive types | NULL if A or B is NULL, TRUE if expression A is NOT equal to expression B otherwise FALSE |
A < B | All primitive types | NULL if A or B is NULL, TRUE if expression A is less than expression B otherwise FALSE |
A <= B | All primitive types | NULL if A or B is NULL, TRUE if expression A is less than or equal to expression B otherwise FALSE |
A > B | All primitive types | NULL if A or B is NULL, TRUE if expression A is greater than expression B otherwise FALSE |
A >= B | All primitive types | NULL if A or B is NULL, TRUE if expression A is greater than or equal to expression B otherwise FALSE |
A IS NULL | all types | TRUE if expression A evaluates to NULL otherwise FALSE |
A IS NOT NULL | All types | TRUE if expression A evaluates to a non-NULL value otherwise FALSE |
A LIKE B | strings | NULL if A or B is NULL, TRUE if string A matches the SQL simple regular expression B, otherwise FALSE. The comparison is done character by character. The _ character in B matches any character in A(similar to . in posix regular expressions) while the % character in B matches an arbitrary number of characters in A(similar to .* in posix regular expressions) e.g. 'foobar' like 'foo' evaluates to FALSE where as 'foobar' like 'foo_ _ _' evaluates to TRUE and so does 'foobar' like 'foo%' |
A RLIKE B | strings | NULL if A or B is NULL, TRUE if string A matches the Java regular expression B(See Java regular expressions syntax), otherwise FALSE e.g. 'foobar' rlike 'foo' evaluates to FALSE where as 'foobar' rlike '^f.*r$' evaluates to TRUE |
A REGEXP B | strings | Same as RLIKE |
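A quick sketch of the difference between the SQL-style and Java-regex operators (src stands for any convenient one-row table, as in the grammar sketches above):

```sql
SELECT 'foobar' LIKE 'foo%',    -- TRUE:  % matches any run of characters
       'foobar' LIKE 'foo_',    -- FALSE: _ matches exactly one character
       'foobar' RLIKE '^f.*r$'  -- TRUE:  Java regular expression match
FROM src LIMIT 1;
```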
2 Arithmetic Operators
All of these return a numeric type; if either operand is NULL, the result is NULL.
Operator | Operand types | Description |
A + B | All number types | Gives the result of adding A and B. The type of the result is the common parent (in the type hierarchy) of the operand types; e.g. since every integer can be represented as a float, float is a containing type of integer, so the + operator on a float and an int results in a float. |
A - B | All number types | Gives the result of subtracting B from A. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands. |
A * B | All number types | Gives the result of multiplying A and B. The type of the result is the common parent (in the type hierarchy) of the operand types. Note that if the multiplication causes overflow, you will have to cast one of the operands to a type higher in the type hierarchy. |
A / B | All number types | Gives the result of dividing A by B. The result is a double type. |
A % B | All number types | Gives the remainder of dividing A by B. The type of the result is the common parent (in the type hierarchy) of the operand types. |
A & B | All number types | Gives the result of bitwise AND of A and B. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands. |
A | B | All number types | Gives the result of bitwise OR of A and B. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands. |
A ^ B | All number types | Gives the result of bitwise XOR of A and B. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands. |
~A | All number types | Gives the result of bitwise NOT of A. The type of the result is the same as the type of A. |
3 Logical Operators ?
4 Complex Type Constructors
Constructor Function | Operands | Description |
Map | (key1, value1, key2, value2, ...) | Creates a map with the given key/value pairs |
Struct | (val1, val2, val3, ...) | Creates a struct with the given field values. Struct field names will be col1, col2, ... |
Array | (val1, val2, ...) | Creates an array with the given elements |
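These constructors can be combined with the indexing and field-access operators; a sketch (src again stands for any one-row table):

```sql
SELECT array(1, 2, 3)[0],         -- arrays are 0-indexed
       map('a', 1, 'b', 2)['b'],  -- map lookup by key
       struct(1, 'x').col1        -- struct fields are named col1, col2, ...
FROM src LIMIT 1;
```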
5 Built-in Functions ?
6 Mathematical Functions ?
7 Collection Functions ?
8 Type Conversion ?
9 Date Functions
Return type | Name | Description |
string | from_unixtime(int unixtime) | Converts a timestamp (seconds since the unix epoch) to a date-time string, e.g. from_unixtime(0) = '1970-01-01 00:00:00' |
bigint | unix_timestamp() | Gets the current timestamp |
bigint | unix_timestamp(string date) | Gets the timestamp represented by date |
string | to_date(string timestamp) | Returns the date part of a timestamp string, e.g. to_date('1970-01-01 00:00:00') = '1970-01-01' |
int | year(string date) | Returns the year, e.g. year('1970-01-01 00:00:00') = 1970, year('1970-01-01') = 1970 |
int | month(string date) | Returns the month |
int | day(string date) dayofmonth(date) | Returns the day of the month |
int | hour(string date) | Returns the hour |
int | minute(string date) | Returns the minute |
int | second(string date) | Returns the second |
int | weekofyear(string date) | Returns the week number of the year |
int | datediff(string enddate, string startdate) | Returns the number of days from startdate to enddate, e.g. datediff('2009-03-01', '2009-02-27') = 2 |
string | date_add(string startdate, int days) | Adds days to startdate: date_add('2008-12-31', 1) = '2009-01-01' |
string | date_sub(string startdate, int days) | Subtracts days from startdate: date_sub('2008-12-31', 1) = '2008-12-30' |
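Combining a few of the functions above into one query (src stands for any one-row table):

```sql
SELECT to_date('2009-03-01 10:30:00'),        -- '2009-03-01'
       year('2009-03-01'),                    -- 2009
       datediff('2009-03-01', '2009-02-27'),  -- 2
       date_add('2008-12-31', 1)              -- '2009-01-01'
FROM src LIMIT 1;
```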
10 Conditional Functions
Return type | Name | Description |
T | if(boolean testCondition, T valueTrue, T valueFalseOrNull) | Returns valueTrue when testCondition is true, and valueFalseOrNull when it is false or NULL |
T | COALESCE(T v1, T v2, ...) | Returns the first non-NULL element in the list, or NULL if all elements are NULL |
T | CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END | Returns c when a = b, e when a = d, otherwise f |
T | CASE WHEN a THEN b [WHEN c THEN d]* [ELSE e] END | Returns b when a is true, d when c is true, otherwise e |
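A sketch that combines CASE and COALESCE, assuming the hypothetical pv_users table from the earlier examples:

```sql
SELECT userid,
       CASE WHEN age < 18 THEN 'minor'
            WHEN age < 60 THEN 'adult'
            ELSE 'senior'
       END AS age_band,
       COALESCE(gender, 'unknown') AS gender  -- default when gender is NULL
FROM pv_users;
```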
11 String Functions
The following built-in string functions are supported in Hive:
Return type | Name | Description |
Int | length(string A) | Returns the length of the string |
String | reverse(string A) | Reverses the string |
String | concat(string A, string B...) | Concatenates strings, e.g. concat('foo', 'bar') = 'foobar'. Note that this function accepts any number of arguments |
String | substr(string A, int start) substring(string A, int start) | Returns the substring starting at start, e.g. substr('foobar', 4) = 'bar' |
String | substr(string A, int start, int len) substring(string A, int start, int len) | Returns the substring of the given length, e.g. substr('foobar', 4, 1) = 'b' |
String | upper(string A) ucase(string A) | Converts to upper case |
String | lower(string A) lcase(string A) | Converts to lower case |
String | trim(string A) | Trims whitespace from both ends |
String | ltrim(string A) | Trims whitespace from the left end |
String | rtrim(string A) | Trims whitespace from the right end |
String | regexp_replace(string A, string B, string C) | Returns the string resulting from replacing all substrings in B that match the Java regular expression syntax(See Java regular expressions syntax) with C e.g. regexp_replace("foobar", "oo|ar", "") returns 'fb.' Note that some care is necessary in using predefined character classes: using '\s' as the second argument will match the letter s; '\\s' is necessary to match whitespace, etc. |
String | regexp_extract(string subject, string pattern, int index) | Returns the substring extracted with the regular expression, e.g. regexp_extract('foothebar', 'foo(.*?)(bar)', 2) = 'bar'. Note the escaping rules for predefined character classes: '\s' matches the letter 's'; to match whitespace you must write '\\s', and so on. |
String | parse_url(string urlString, string partToExtract) | Parses a URL string. Valid partToExtract values are: HOST, PATH, QUERY, REF, PROTOCOL, FILE, AUTHORITY, USERINFO. For example: |
parse_url('http://facebook.com/path/p1.php?query=1', 'HOST') = 'facebook.com'
parse_url('http://facebook.com/path/p1.php?query=1', 'PATH') = '/path/p1.php'
parse_url('http://facebook.com/path/p1.php?query=1', 'QUERY') = 'query=1'; a specific key can be requested with a third argument, as in QUERY:<KEY_NAME>
parse_url('http://facebook.com/path/p1.php?query=1&field=2', 'QUERY', 'query') = '1', extracting the value for the given query-string key
parse_url('http://facebook.com/path/p1.php?query=1&field=2', 'QUERY', 'field') = '2'
parse_url('http://facebook.com/path/p1.php?query=1#Ref', 'REF') = 'Ref'
parse_url('http://facebook.com/path/p1.php?query=1#Ref', 'PROTOCOL') = 'http'
String | get_json_object(string json_string, string path) | Parses a JSON string; returns NULL if the source JSON is invalid. The path argument supports a subset of JSONPath, including the following tokens: |
$ : Root object
[] : Subscript operator for array
* : Wildcard for []
. : Child operator
String | space(int n) | Returns a string of n spaces |
String | repeat(string str, int n) | Repeats str n times |
Int | ascii(string str) | Returns the ascii code of the first character of str |
String | lpad(string str, int len, string pad) | Left-pads str to length len with the string pad |
String | rpad(string str, int len, string pad) | Right-pads str to length len with the string pad |
Array | split(string str, string pat) | Splits str around matches of the regular expression pat and returns the resulting array, e.g. split('foobar', 'o')[2] = 'bar' (the split produces ['f', '', 'bar']: the empty string between the two o's occupies index 1) |
Int | find_in_set(string str, string strList) | Returns the first occurance of str in strList where strList is a comma-delimited string. Returns null if either argument is null. Returns 0 if the first argument contains any commas. e.g. find_in_set('ab', 'abc,b,ab,c,def') returns 3 |
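A few of the string functions above combined (src stands for any one-row table):

```sql
SELECT concat(upper('foo'), lpad('7', 3, '0')),  -- 'FOO007'
       regexp_replace('foobar', 'oo|ar', ''),    -- 'fb'
       split('a,b,c', ',')[1]                    -- 'b'
FROM src LIMIT 1;
```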
6.2 UDTF
UDTFs are built-in table-generating functions.
There are several restrictions on using them:
1. No other columns are allowed in the SELECT clause.
e.g. SELECT pageid, explode(adid_list) AS myCol... is not supported.
2. UDTFs cannot be nested.
e.g. SELECT explode(explode(adid_list)) AS myCol... is not supported.
3. GROUP BY / CLUSTER BY / DISTRIBUTE BY / SORT BY are not supported.
e.g. SELECT explode(adid_list) AS myCol ... GROUP BY myCol is not supported.
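The usual way around restriction 1 is LATERAL VIEW, which joins each input row to the rows the UDTF generates, so other columns can be selected alongside the exploded one (the pageAds table and its columns are hypothetical):

```sql
SELECT pageid, adid
FROM pageAds LATERAL VIEW explode(adid_list) adTable AS adid;
```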
eg: exploding an array column into rows:
1. create table test2(mycol array<int>);
2. INSERT OVERWRITE TABLE test2 SELECT * FROM (SELECT array(1,2,3) FROM a UNION ALL SELECT array(7,8,9) FROM d) c;
3. hive> select * from test2;
OK
[1,2,3]
[7,8,9]
hive> SELECT explode(myCol) AS myNewCol FROM test2;
OK
1
2
3
7
8
9