Hive – Functions and Window Functions

1 Hive – Functions

Hive built-in functions

1.1 Commonly Used Hive Functions
1.1.1 Null-replacement function: nvl
  • When a column contains NULLs and a default value is needed, nvl can substitute one
  • It is typically added in aggregations to keep NULL rows from skewing the result
  • Create the table and load the data
CREATE TABLE IF NOT EXISTS bigdata.people(
name string,
sex string,
salary int
)ROW FORMAT
DELIMITED FIELDS TERMINATED BY ',' 
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

load data local inpath '/home/work/data/hive/people.txt' overwrite into table bigdata.people;
hive> select * from bigdata.people;
OK
HADOOP	M	10000
SPARK	M	NULL
HBASE	F	12000
KAFKA	M	11000
FLINK	F	NULL
Time taken: 0.125 seconds, Fetched: 5 row(s)
  • Using the nvl function
hive> select salary,nvl(salary,0) from bigdata.people;
10000	10000
NULL	0
12000	12000
11000	11000
NULL	0

-- compare the averages computed with and without nvl
hive> select avg(salary) as avg1,avg(nvl(salary,0)) as avg2 from bigdata.people;
  avg1   avg2
11000.0	6600.0
  • As the results show, avg(salary) leaves the NULL rows out of the calculation entirely (11000.0 is the average of the three non-NULL salaries), while avg(nvl(salary,0)) counts them as 0 (6600.0 averages over all five rows)
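  • nvl takes exactly two arguments; Hive's coalesce is a more general alternative that returns the first non-NULL value among any number of arguments (a minimal sketch against the same table):
select name, coalesce(salary, 0) as salary_filled from bigdata.people;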
1.2 concat/concat_ws
  • Both functions concatenate strings
  • Check the function help to see how the two differ
  • concat
hive> desc function extended concat;
OK
concat(str1, str2, ... strN) - returns the concatenation of str1, str2, ... strN or concat(bin1, bin2, ... binN) - returns the concatenation of bytes in binary data  bin1, bin2, ... binN
Returns NULL if any argument is NULL.
Example:
  > SELECT concat('abc', 'def') FROM src LIMIT 1;
  'abcdef'
  • concat only accepts string (or binary) arguments, and if any argument is NULL the whole result is NULL, as the quick check below shows
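  • A quick check of the NULL-propagation rule (per the help text above, this should return NULL):
select concat('abc', null, 'def');
-- expected: NULL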
  • concat_ws
hive> desc function extended concat_ws;
OK
concat_ws(separator, [string | array(string)]+) - returns the concatenation of the strings separated by the separator.
Example:
  > SELECT concat_ws('.', 'www', array('facebook', 'com')) FROM src LIMIT 1;
  'www.facebook.com'
  • concat_ws accepts strings as well as arrays of strings, and its first parameter specifies the separator
  • Test what concat_ws returns when the values being concatenated include a NULL
hive> select concat_ws(",",null,"111",array("222","333"));
111,222,333
  • If any value passed to concat_ws is NULL, it is skipped rather than concatenated
1.3 case when/if
  • Both provide conditional logic: case when can chain multiple conditions, while if evaluates a single one
  • case when example
select 
name,
sex,
(case when nvl(salary,0) >= 12000 then 'A' when nvl(salary,0) >= 10000 then 'B' else 'C' end)
from bigdata.people;
  • Result
HADOOP	M	B
SPARK	M	C
HBASE	F	A
KAFKA	M	B
FLINK	F	C
  • if example (note that without nvl, a NULL salary makes both comparisons evaluate to NULL, which if treats as false, so those rows fall through to 'C')
select 
name,
sex,
if(salary >= 12000,'A',if(salary >= 10000,'B','C'))
from bigdata.people;
  • Result
HADOOP	M	B
SPARK	M	C
HBASE	F	A
KAFKA	M	B
FLINK	F	C
1.4 Hive – Word Count (WC)
  • Create the table and load the data
CREATE TABLE IF NOT EXISTS bigdata.wc(
words string
)ROW FORMAT
DELIMITED FIELDS TERMINATED BY '\t' 
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
load data local inpath '/home/work/data/hive/wc.txt' overwrite into table bigdata.wc;
select * from bigdata.wc;
    words
hadoop,spark,flink
hbase,hadoop,spark,flink
spark
hadoop
hadoop,spark,flink
hbase,hadoop,spark,flink
spark
hadoop
hbase,hadoop,spark,flink
  • SQL
select
    tmp.word,
    sum(tmp.num) as cnt
from(
    SELECT 
        word,
        1 as num
    from bigdata.wc lateral view explode(split(words,",")) tmp as word
) as tmp
group by tmp.word;
  • This uses a columns-to-rows operation (lateral view explode); the goal here is just to get the result, with row/column transposition examined properly below. A simpler formulation follows the results
  • Result
flink	5
hadoop	7
hbase	3
spark	7
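  • The same count can be written more directly, without the helper 1 as num column (a minimal sketch):
select word, count(*) as cnt
from bigdata.wc lateral view explode(split(words, ",")) tmp as word
group by word;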
1.5 Hive – Columns to Rows
  • Many articles online define columns-to-rows differently
  • As I understand it, columns-to-rows is what the WC example above does: first split a string into an array on a fixed delimiter, then use explode to blow the array apart into one row per element
  • Columns-to-rows example
  • First split the data on the fixed delimiter
SELECT 
split(words,",")
from bigdata.wc;
  • Result
["hadoop","spark","flink"]
["hbase","hadoop","spark","flink"]
["spark"]
["hadoop"]
["hadoop","spark","flink"]
["hbase","hadoop","spark","flink"]
["spark"]
["hadoop"]
["hbase","hadoop","spark","flink"]
  • Use explode to flatten the arrays into rows
SELECT 
explode(split(words,",")) as word
from bigdata.wc;
  • Result
hadoop
spark
flink
hbase
hadoop
spark
flink
spark
hadoop
hadoop
spark
flink
hbase
hadoop
spark
flink
spark
hadoop
hbase
hadoop
spark
flink
1.6 Hive – Rows to Columns
  • Rows-to-columns is an aggregation: multiple rows are collapsed into a single column value
  • It relies on the collect_list aggregate function
  • Requirement: group the people table by sex and concatenate the names into one field separated by |
hive> select * from bigdata.people;
OK
HADOOP	M	10000
SPARK	M	NULL
HBASE	F	12000
KAFKA	M	11000
FLINK	F	NULL
  • First use collect_list to aggregate the individual rows into an array
  • SQL
select 
sex,
collect_list(name)
from bigdata.people
group by sex;
  • Result
F	["HBASE","FLINK"]
M	["HADOOP","SPARK","KAFKA"]
  • Then join the array into a single string using concat_ws
  • SQL
select 
sex,
concat_ws('|',collect_list(name))
from bigdata.people
group by sex;
  • Result
F	HBASE|FLINK
M	HADOOP|SPARK|KAFKA
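  • A closely related function, collect_set, performs the same aggregation but removes duplicate values (a minimal sketch):
select sex, concat_ws('|', collect_set(name)) from bigdata.people group by sex;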

2 Hive – Window Functions

Hive official documentation – window functions

2.1 LEAD/LAG
The number of rows to lead can optionally be specified. If the number of rows to lead is not specified, the lead is one row.
Returns null when the lead for the current row extends beyond the end of the window.
  • Given a partition and an ordering, these functions fetch a field's value from a row N positions away; a default can be supplied if no such row exists
  • The first parameter is the column to fetch; the second is how many rows ahead (LEAD) or behind (LAG) to look; the third is the default value used when that row does not exist
  • Create the table and load the data
CREATE TABLE IF NOT EXISTS bigdata.window1(
cookie string,
time string,
url string
)ROW FORMAT
DELIMITED FIELDS TERMINATED BY ',' 
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
load data local inpath '/home/work/data/hive/window.txt' overwrite into table bigdata.window1;
select * from bigdata.window1;
cookie1	2015-04-10 10:00:02	url2
cookie1	2015-04-10 10:00:00	url1
cookie1	2015-04-10 10:03:04	1url3
cookie1	2015-04-10 10:50:05	url6
cookie1	2015-04-10 11:00:00	url7
cookie1	2015-04-10 10:10:00	url4
cookie1	2015-04-10 10:50:01	url5
cookie2	2015-04-10 10:00:02	url22
cookie2	2015-04-10 10:00:00	url11
cookie2	2015-04-10 10:03:04	1url33
cookie2	2015-04-10 10:50:05	url66
cookie2	2015-04-10 11:00:00	url77
cookie2	2015-04-10 10:10:00	url44
cookie2	2015-04-10 10:50:01	url55
  • Use lead to find the previous visit time for the same cookie (the window is ordered by time desc, so lead(time,1) is the chronologically earlier visit); if there is none, default to 1970-01-01 00:00:00
select
cookie,
time,
url,
lead(time,1,'1970-01-01 00:00:00') over(partition by cookie order by time desc) as last_time
from bigdata.window1
order by cookie,time;
  • Result
cookie1	2015-04-10 10:00:00	url1	1970-01-01 00:00:00
cookie1	2015-04-10 10:00:02	url2	2015-04-10 10:00:00
cookie1	2015-04-10 10:03:04	1url3	2015-04-10 10:00:02
cookie1	2015-04-10 10:10:00	url4	2015-04-10 10:03:04
cookie1	2015-04-10 10:50:01	url5	2015-04-10 10:10:00
cookie1	2015-04-10 10:50:05	url6	2015-04-10 10:50:01
cookie1	2015-04-10 11:00:00	url7	2015-04-10 10:50:05
cookie2	2015-04-10 10:00:00	url11	1970-01-01 00:00:00
cookie2	2015-04-10 10:00:02	url22	2015-04-10 10:00:00
cookie2	2015-04-10 10:03:04	1url33	2015-04-10 10:00:02
cookie2	2015-04-10 10:10:00	url44	2015-04-10 10:03:04
cookie2	2015-04-10 10:50:01	url55	2015-04-10 10:10:00
cookie2	2015-04-10 10:50:05	url66	2015-04-10 10:50:01
cookie2	2015-04-10 11:00:00	url77	2015-04-10 10:50:05
  • Compared with lead, lag accesses the row after the current one under the same partition and ordering (with the desc ordering here, that is the chronologically later visit); a typical use follows the results
The number of rows to lag can optionally be specified. If the number of rows to lag is not specified, the lag is one row.
Returns null when the lag for the current row extends before the beginning of the window.
select
cookie,
time,
url,
lag(time,1,'1970-01-01 00:00:00') over(partition by cookie order by time desc) as next_time
from bigdata.window1
order by cookie,time;
  • Result
cookie1	2015-04-10 10:00:00	url1	2015-04-10 10:00:02
cookie1	2015-04-10 10:00:02	url2	2015-04-10 10:03:04
cookie1	2015-04-10 10:03:04	1url3	2015-04-10 10:10:00
cookie1	2015-04-10 10:10:00	url4	2015-04-10 10:50:01
cookie1	2015-04-10 10:50:01	url5	2015-04-10 10:50:05
cookie1	2015-04-10 10:50:05	url6	2015-04-10 11:00:00
cookie1	2015-04-10 11:00:00	url7	1970-01-01 00:00:00
cookie2	2015-04-10 10:00:00	url11	2015-04-10 10:00:02
cookie2	2015-04-10 10:00:02	url22	2015-04-10 10:03:04
cookie2	2015-04-10 10:03:04	1url33	2015-04-10 10:10:00
cookie2	2015-04-10 10:10:00	url44	2015-04-10 10:50:01
cookie2	2015-04-10 10:50:01	url55	2015-04-10 10:50:05
cookie2	2015-04-10 10:50:05	url66	2015-04-10 11:00:00
cookie2	2015-04-10 11:00:00	url77	1970-01-01 00:00:00
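  • A typical use of lag: the gap in seconds between consecutive visits of the same cookie (a minimal sketch; unix_timestamp parses the 'yyyy-MM-dd HH:mm:ss' format used here, and each cookie's first visit gets NULL since there is no prior row):
select
cookie,
time,
unix_timestamp(time) - unix_timestamp(lag(time,1) over(partition by cookie order by time)) as gap_seconds
from bigdata.window1;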
2.2 FIRST_VALUE/LAST_VALUE
  • FIRST_VALUE
This takes at most two parameters. The first parameter is the column for which you want the first value, the second (optional) parameter must be a boolean which is false by default. If set to true it skips null values.
  • The first parameter is the column to fetch; the second defaults to false and can be set to true to skip NULL values
  • Query the earliest time for each cookie in this dataset
select
cookie,
time,
url,
FIRST_VALUE(time) over(partition by cookie order by time) as first_time
from bigdata.window1
order by cookie,time;
  • Result
cookie1	2015-04-10 10:00:00	url1	2015-04-10 10:00:00
cookie1	2015-04-10 10:00:02	url2	2015-04-10 10:00:00
cookie1	2015-04-10 10:03:04	1url3	2015-04-10 10:00:00
cookie1	2015-04-10 10:10:00	url4	2015-04-10 10:00:00
cookie1	2015-04-10 10:50:01	url5	2015-04-10 10:00:00
cookie1	2015-04-10 10:50:05	url6	2015-04-10 10:00:00
cookie1	2015-04-10 11:00:00	url7	2015-04-10 10:00:00
cookie2	2015-04-10 10:00:00	url11	2015-04-10 10:00:00
cookie2	2015-04-10 10:00:02	url22	2015-04-10 10:00:00
cookie2	2015-04-10 10:03:04	1url33	2015-04-10 10:00:00
cookie2	2015-04-10 10:10:00	url44	2015-04-10 10:00:00
cookie2	2015-04-10 10:50:01	url55	2015-04-10 10:00:00
cookie2	2015-04-10 10:50:05	url66	2015-04-10 10:00:00
cookie2	2015-04-10 11:00:00	url77	2015-04-10 10:00:00
  • LAST_VALUE
select
cookie,
time,
url,
LAST_VALUE(time) over(partition by cookie order by time) as last_time
from bigdata.window1
order by cookie,time;
  • Result
cookie1	2015-04-10 10:00:00	url1	2015-04-10 10:00:00
cookie1	2015-04-10 10:00:02	url2	2015-04-10 10:00:02
cookie1	2015-04-10 10:03:04	1url3	2015-04-10 10:03:04
cookie1	2015-04-10 10:10:00	url4	2015-04-10 10:10:00
cookie1	2015-04-10 10:50:01	url5	2015-04-10 10:50:01
cookie1	2015-04-10 10:50:05	url6	2015-04-10 10:50:05
cookie1	2015-04-10 11:00:00	url7	2015-04-10 11:00:00
cookie2	2015-04-10 10:00:00	url11	2015-04-10 10:00:00
cookie2	2015-04-10 10:00:02	url22	2015-04-10 10:00:02
cookie2	2015-04-10 10:03:04	1url33	2015-04-10 10:03:04
cookie2	2015-04-10 10:10:00	url44	2015-04-10 10:10:00
cookie2	2015-04-10 10:50:01	url55	2015-04-10 10:50:01
cookie2	2015-04-10 10:50:05	url66	2015-04-10 10:50:05
cookie2	2015-04-10 11:00:00	url77	2015-04-10 11:00:00
  • Although the value matches the current row's time, that is not what LAST_VALUE means semantically
  • LAST_VALUE returns the last time value within the window frame; with an ORDER BY and no explicit frame, the frame only extends from the start of the partition to the current row, which is why it always equals the current row's time here (a sketch with an explicit frame follows)
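  • To get the actual last time of each cookie, widen the frame explicitly (a minimal sketch):
select
cookie,
time,
url,
LAST_VALUE(time) over(partition by cookie order by time ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as last_time
from bigdata.window1
order by cookie,time;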
2.3 COUNT, SUM, MIN, MAX, AVG
  • I will not dwell on what these functions mean; the point here is how to use them over windows
  • COUNT serves as the running example; a MAX sketch closes the section
  • Use a partition to count the urls of each cookie
select
cookie,
time,
url,
count(url) over(partition by cookie) as cnt
from bigdata.window1
order by cookie,time;
  • Result
cookie1	2015-04-10 10:00:00	url1	7
cookie1	2015-04-10 10:00:02	url2	7
cookie1	2015-04-10 10:03:04	1url3	7
cookie1	2015-04-10 10:10:00	url4	7
cookie1	2015-04-10 10:50:01	url5	7
cookie1	2015-04-10 10:50:05	url6	7
cookie1	2015-04-10 11:00:00	url7	7
cookie2	2015-04-10 10:00:00	url11	7
cookie2	2015-04-10 10:00:02	url22	7
cookie2	2015-04-10 10:03:04	1url33	7
cookie2	2015-04-10 10:10:00	url44	7
cookie2	2015-04-10 10:50:01	url55	7
cookie2	2015-04-10 10:50:05	url66	7
cookie2	2015-04-10 11:00:00	url77	7
  • After the partition clause, OVER can additionally specify a window frame (ROWS)
(ROWS | RANGE) BETWEEN (UNBOUNDED | [num]) PRECEDING AND ([num] PRECEDING | CURRENT ROW | (UNBOUNDED | [num]) FOLLOWING)
(ROWS | RANGE) BETWEEN CURRENT ROW AND (CURRENT ROW | (UNBOUNDED | [num]) FOLLOWING)
(ROWS | RANGE) BETWEEN [num] FOLLOWING AND (UNBOUNDED | [num]) FOLLOWING
  • UNBOUNDED after BETWEEN means no boundary: the frame starts at the first row of the partition; a number ([num] PRECEDING) can be given instead to start the frame that many rows before the current row

  • CURRENT ROW refers to the current row

  • After AND, [num] FOLLOWING sets the lower boundary of the frame to that many rows after the current row

  • Suppose the requirement is a running count from the start of each partition up to the current row

select
cookie,
time,
url,
count(url) over(partition by cookie order by time ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as cnt
from bigdata.window1
order by cookie,time;
  • Result
cookie1	2015-04-10 10:00:00	url1	1
cookie1	2015-04-10 10:00:02	url2	2
cookie1	2015-04-10 10:03:04	1url3	3
cookie1	2015-04-10 10:10:00	url4	4
cookie1	2015-04-10 10:50:01	url5	5
cookie1	2015-04-10 10:50:05	url6	6
cookie1	2015-04-10 11:00:00	url7	7
cookie2	2015-04-10 10:00:00	url11	1
cookie2	2015-04-10 10:00:02	url22	2
cookie2	2015-04-10 10:03:04	1url33	3
cookie2	2015-04-10 10:10:00	url44	4
cookie2	2015-04-10 10:50:01	url55	5
cookie2	2015-04-10 10:50:05	url66	6
cookie2	2015-04-10 11:00:00	url77	7
  • The following concrete example compares the ROWS frame variants
select 
cookie, time,
count(url) OVER (partition by cookie order by time) pv1,
count(url) OVER (partition by cookie order by time ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) pv2,
count(url) OVER (partition by cookie)  pv3,
count(url) OVER (partition by cookie order by time  ROWS BETWEEN 3 PRECEDING AND CURRENT ROW)  pv4,
count(url) OVER (partition by cookie order by time  ROWS BETWEEN 3 PRECEDING AND 1 FOLLOWING)  pv5,
count(url) OVER (partition by cookie order by time  ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)  pv6
from bigdata.window1
order by cookie,time;
  • pv1: partition by cookie, order by time, default frame (RANGE from the first row to the current row and its peers; identical to pv2 here because there are no tied time values in this result set)
  • pv2: partition by cookie, order by time, explicit ROWS frame from the first row to the current row
  • pv3: partition by cookie with no frame, counting every url in the partition
  • pv4: partition by cookie, from 3 rows before the current row to the current row
  • pv5: partition by cookie, from 3 rows before the current row to 1 row after it
  • pv6: partition by cookie, order by time, from the current row to the last row of the partition
  • Result
  cookie	time				pv1		pv2		pv3		pv4		pv5		pv6
cookie1	2015-04-10 10:00:00		1		1		7		1		2		7
cookie1	2015-04-10 10:00:02		2		2		7		2		3		6
cookie1	2015-04-10 10:03:04		3		3		7		3		4		5
cookie1	2015-04-10 10:10:00		4		4		7		4		5		4
cookie1	2015-04-10 10:50:01		5		5		7		4		5		3
cookie1	2015-04-10 10:50:05		6		6		7		4		5		2
cookie1	2015-04-10 11:00:00		7		7		7		4		4		1
cookie2	2015-04-10 10:00:00		1		1		7		1		2		7
cookie2	2015-04-10 10:00:02		2		2		7		2		3		6
cookie2	2015-04-10 10:03:04		3		3		7		3		4		5
cookie2	2015-04-10 10:10:00		4		4		7		4		5		4
cookie2	2015-04-10 10:50:01		5		5		7		4		5		3
cookie2	2015-04-10 10:50:05		6		6		7		4		5		2
cookie2	2015-04-10 11:00:00		7		7		7		4		4		1
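  • The other aggregates follow the same pattern; for example, MAX over the partition tags every row with its cookie's latest time (a minimal sketch; MAX compares the strings lexicographically, which for this timestamp format matches chronological order):
select
cookie,
time,
url,
max(time) over(partition by cookie) as max_time
from bigdata.window1
order by cookie,time;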
2.4 RANK, ROW_NUMBER, DENSE_RANK, CUME_DIST, PERCENT_RANK, NTILE
2.4.1 ROW_NUMBER, RANK, DENSE_RANK
  • Partition by cookie, order by time, and run the three window functions side by side to see how they differ
select 
cookie, time,
ROW_NUMBER() OVER (partition by cookie order by time) rn1,
RANK() OVER (partition by cookie order by time) rn2,
DENSE_RANK() OVER (partition by cookie order by time) rn3
from bigdata.window1
order by cookie,time;
  • Result
cookie		time				rn1		rn2		rn3
cookie1	2015-04-10 10:00:00		1		1		1
cookie1	2015-04-10 10:00:02		2		2		2
cookie1	2015-04-10 10:00:02		3		2		2
cookie1	2015-04-10 10:03:04		4		4		3
cookie1	2015-04-10 10:10:00		5		5		4
cookie1	2015-04-10 10:50:01		6		6		5
cookie1	2015-04-10 10:50:05		7		7		6
cookie1	2015-04-10 11:00:00		8		8		7
cookie2	2015-04-10 10:00:00		1		1		1
cookie2	2015-04-10 10:00:02		2		2		2
cookie2	2015-04-10 10:03:04		3		3		3
cookie2	2015-04-10 10:10:00		4		4		4
cookie2	2015-04-10 10:50:01		5		5		5
cookie2	2015-04-10 10:50:01		6		5		5
cookie2	2015-04-10 10:50:05		7		7		6
cookie2	2015-04-10 11:00:00		8		8		7
  • cookie1 and cookie2 each contain one duplicated row in this dataset
  • ROW_NUMBER numbers the rows in the order they are returned; even for duplicates the numbering neither repeats nor skips
  • RANK gives tied rows the same rank and skips the rank(s) after them, so the highest rank still equals the row count
  • DENSE_RANK also gives tied rows the same rank but skips nothing, so the highest rank shrinks
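  • A common use of ROW_NUMBER is top-N per group, e.g. keeping only each cookie's first visit (a minimal sketch):
select cookie, time, url
from (
    select
    cookie,
    time,
    url,
    ROW_NUMBER() over(partition by cookie order by time) as rn
    from bigdata.window1
) t
where t.rn = 1;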
2.4.2 CUME_DIST, PERCENT_RANK
  • CUME_DIST: (rows with a value less than or equal to the current row's) / (total rows in the group); a worked check follows the results
select 
cookie,time,url,
round(CUME_DIST() over(order by time),2) rn1,
round(CUME_DIST() over(partition by cookie order by time),2) rn2
from bigdata.window1
order by cookie,time;
  • Result
cookie				time			url			rn1		rn2
cookie1		2015-04-10 10:00:00		url1		0.13	0.13
cookie1		2015-04-10 10:00:02		url2		0.31	0.38
cookie1		2015-04-10 10:00:02		url2		0.31	0.38
cookie1		2015-04-10 10:03:04		1url3		0.44	0.5
cookie1		2015-04-10 10:10:00		url4		0.56	0.63
cookie1		2015-04-10 10:50:01		url5		0.75	0.75
cookie1		2015-04-10 10:50:05		url6		0.88	0.88
cookie1		2015-04-10 11:00:00		url7		1.0		1.0
cookie2		2015-04-10 10:00:00		url11		0.13	0.13
cookie2		2015-04-10 10:00:02		url22		0.31	0.25
cookie2		2015-04-10 10:03:04		1url33		0.44	0.38
cookie2		2015-04-10 10:10:00		url44		0.56	0.5
cookie2		2015-04-10 10:50:01		url55		0.75	0.75
cookie2		2015-04-10 10:50:01		url55		0.75	0.75
cookie2		2015-04-10 10:50:05		url66		0.88	0.88
cookie2		2015-04-10 11:00:00		url77		1.0		1.0
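  • A worked check against the formula: cookie1's partition has 8 rows; the first row has 1 row ≤ its value, so 1/8 = 0.125 ≈ 0.13, and the two tied rows at 10:00:02 each have 3 rows ≤ their value, so 3/8 = 0.375 ≈ 0.38, matching rn2 above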
  • PERCENT_RANK: (current row's rank within the group − 1) / (total rows in the group − 1); a worked check follows the results
select 
cookie,time,url,
round(PERCENT_RANK() over(order by time),2) rn1,
round(PERCENT_RANK() over(partition by cookie order by time),2) rn2
from bigdata.window1
order by cookie,time;
  • Result
cookie				time			url			rn1		rn2
cookie1		2015-04-10 10:00:00		url1		0.0		0.0 
cookie1		2015-04-10 10:00:02		url2		0.13	0.14
cookie1		2015-04-10 10:00:02		url2		0.13	0.14
cookie1		2015-04-10 10:03:04		1url3		0.33	0.43
cookie1		2015-04-10 10:10:00		url4		0.47	0.57
cookie1		2015-04-10 10:50:01		url5		0.6		0.71
cookie1		2015-04-10 10:50:05		url6		0.8		0.86
cookie1		2015-04-10 11:00:00		url7		0.93	1.0
cookie2		2015-04-10 10:00:00		url11		0.0		0.0
cookie2		2015-04-10 10:00:02		url22		0.13	0.14
cookie2		2015-04-10 10:03:04		1url33		0.33	0.29
cookie2		2015-04-10 10:10:00		url44		0.47	0.43
cookie2		2015-04-10 10:50:01		url55		0.6		0.57
cookie2		2015-04-10 10:50:01		url55		0.6		0.57
cookie2		2015-04-10 10:50:05		url66		0.8		0.86
cookie2		2015-04-10 11:00:00		url77		0.93	1.0
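  • A worked check: in cookie1's 8-row partition, the tied rows at 10:00:02 have rank 2, so (2 − 1)/(8 − 1) = 1/7 ≈ 0.14, and the row at 10:03:04 has rank 4, so (4 − 1)/(8 − 1) = 3/7 ≈ 0.43, matching rn2 above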
2.4.3 NTILE
  • NTILE(n) slices each group's ordered rows into n roughly equal buckets
select 
cookie,time,url,
NTILE(2) OVER (partition by cookie order by time) rn
from bigdata.window1
order by cookie,time;
  • Partition by cookie, order by time, and split the data into two buckets; the returned value is the bucket number (1 or 2), not a row number. An uneven-split sketch follows the results
  • Result
cookie				time			url			rn
cookie1		2015-04-10 10:00:00		url1		1
cookie1		2015-04-10 10:00:02		url2		1
cookie1		2015-04-10 10:00:02		url2		1
cookie1		2015-04-10 10:03:04		1url3		1
cookie1		2015-04-10 10:10:00		url4		2
cookie1		2015-04-10 10:50:01		url5		2
cookie1		2015-04-10 10:50:05		url6		2
cookie1		2015-04-10 11:00:00		url7		2
cookie2		2015-04-10 10:00:00		url11		1
cookie2		2015-04-10 10:00:02		url22		1
cookie2		2015-04-10 10:03:04		1url33		1
cookie2		2015-04-10 10:10:00		url44		1
cookie2		2015-04-10 10:50:01		url55		2
cookie2		2015-04-10 10:50:01		url55		2
cookie2		2015-04-10 10:50:05		url66		2
cookie2		2015-04-10 11:00:00		url77		2
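  • When the rows do not divide evenly, NTILE assigns the extra rows to the lower-numbered buckets; e.g. NTILE(3) over these 8-row partitions should yield buckets of 3, 3, and 2 rows (a minimal sketch):
select
cookie,
time,
url,
NTILE(3) OVER (partition by cookie order by time) rn
from bigdata.window1
order by cookie,time;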