A brief introduction to Hive partition tables, covering static partitions and dynamic partitions


Hive partition tables offer none of the elaborate partition types found in other databases (range, list, hash, composite, and so on). Partition columns are also not real fields of the table but one or more pseudo-columns: the table's data files do not actually contain the partition columns or their values.

(1) The following statement creates a simple partitioned table:

create table partition_test (
    member_id string,
    name string
)
partitioned by (
    stat_date string,
    province string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

(2) This example defines two partition columns, stat_date and province. Normally a partition must be created before it can be used, for example:
hive> alter table partition_test add partition (stat_date='20110728',province='zhejiang');
OK
Time taken: 0.39 seconds
2.1 That creates one partition. Hive creates a corresponding directory for it in HDFS:
localhost:result_data a6$ hadoop fs -ls /user/hive/warehouse/yyz_workdb.db/partition_test
17/11/04 19:13:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x   - a6 supergroup          0 2017-11-04 19:11 /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110728
localhost:result_data a6$ hadoop fs -ls /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110728
17/11/04 19:14:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x   - a6 supergroup          0 2017-11-04 19:11 /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110728/province=zhejiang
localhost:result_data a6$
2.2 Each partition gets its own directory, which holds all of that partition's data files. In this example stat_date is the top-level partition and province the second level, so every partition with stat_date='20110728' (whatever its province) lives under /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110728, while partitions with different stat_date values sit side by side under /user/hive/warehouse/yyz_workdb.db/partition_test. For example, suppose we add two more partitions:
hive> alter table partition_test add partition (stat_date='20110728',province='henan');
OK
Time taken: 0.256 seconds
hive> alter table partition_test add partition (stat_date='20110730',province='beijing');
OK
Time taken: 0.113 seconds
2.3 The table's HDFS storage directory then has the following structure:
localhost:result_data a6$ hadoop fs -ls /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110728
17/11/04 19:24:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
drwxr-xr-x   - a6 supergroup          0 2017-11-04 19:20 /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110728/province=henan
drwxr-xr-x   - a6 supergroup          0 2017-11-04 19:11 /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110728/province=zhejiang
localhost:result_data a6$ hadoop fs -ls /user/hive/warehouse/yyz_workdb.db/partition_test
17/11/04 19:24:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
drwxr-xr-x   - a6 supergroup          0 2017-11-04 19:20 /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110728
drwxr-xr-x   - a6 supergroup          0 2017-11-04 19:21 /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110730
localhost:result_data a6$ hadoop fs -ls /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110730
17/11/04 19:24:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x   - a6 supergroup          0 2017-11-04 19:21 /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110730/province=beijing
localhost:result_data a6$
Note that because partition values are turned into directory names, any special character in a value, such as '%', ':', '/', or '#', is escaped as '%' followed by the character's two-digit hexadecimal ASCII code (URL-style encoding), so '/' becomes %2F. For example:
hive>  alter table partition_test add partition (stat_date='2011/07/28',province='zhejiang');
OK
Time taken: 0.135 seconds
The table's partition directories in HDFS now look like this:
localhost:result_data a6$ hadoop fs -ls /user/hive/warehouse/yyz_workdb.db/partition_test
17/11/04 19:26:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
drwxr-xr-x   - a6 supergroup          0 2017-11-04 19:26 /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=2011%2F07%2F28
drwxr-xr-x   - a6 supergroup          0 2017-11-04 19:20 /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110728
drwxr-xr-x   - a6 supergroup          0 2017-11-04 19:21 /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110730
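Incidentally, dropping such a partition later takes the same unescaped value; a minimal sketch (not part of the original session):

alter table partition_test drop partition (stat_date='2011/07/28', province='zhejiang');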
(3) Next, let's use an auxiliary non-partitioned table, partition_test_input, to feed data into partition_test.
3.1 Create the non-partitioned table:
create table partition_test_input (
    member_id string,
    name string,
    stat_date string,
    province string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
3.2 Load data into it and verify:
hive> load data local inpath '/Users/a6/Applications/apache-hive-2.3.0-bin/result_data/partition_test_input.txt' overwrite into table partition_test_input;
Loading data to table yyz_workdb.partition_test_input
OK
Time taken: 0.867 seconds
hive> select * from partition_test_input;
OK
1	liujiannan	20110526	liaoning
2	wangchaoqun	20110526	hubei
3	xuhongxing	20110728	sichuan
4	zhudaoyong	20110728	henan
5	zhouchengyu	20110728	heilongjiang
Time taken: 0.08 seconds, Fetched: 5 row(s)
3.3 Then insert data into one of partition_test's partitions:
hive> select * from partition_test;
OK
Time taken: 0.132 seconds
hive> insert overwrite table partition_test partition(stat_date='20110728',province='henan') select member_id,name from partition_test_input where stat_date='20110728' and province='henan';
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = a6_20171104200454_cbad387f-05f3-4bba-81a3-0a185396512d
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1509763925736_0007, Tracking URL = http://localhost:8088/proxy/application_1509763925736_0007/
Kill Command = /Users/a6/Applications/hadoop-2.6.5/bin/hadoop job  -kill job_1509763925736_0007
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2017-11-04 20:05:02,062 Stage-1 map = 0%,  reduce = 0%
2017-11-04 20:05:08,361 Stage-1 map = 100%,  reduce = 0%
Ended Job = job_1509763925736_0007
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://localhost:9002/user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110728/province=henan/.hive-staging_hive_2017-11-04_20-04-54_622_5274372985717082223-1/-ext-10000
Loading data to table yyz_workdb.partition_test partition (stat_date=20110728, province=henan)
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   HDFS Read: 5037 HDFS Write: 128 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 15.232 seconds
hive> select * from partition_test;
OK
4	zhudaoyong	20110728	henan
Time taken: 0.1 seconds, Fetched: 1 row(s)
hive>
3.4 You can also insert into several partitions at once. From version 0.7 onward, partitions that don't exist yet are created automatically; for 0.6 and earlier, the official documentation says partitions must be created beforehand.
To verify this, first list the partitions that partition_test currently has:
hive> show partitions partition_test;
OK
stat_date=2011%2F07%2F28/province=zhejiang
stat_date=20110728/province=henan
stat_date=20110728/province=zhejiang
stat_date=20110730/province=beijing
Time taken: 0.101 seconds, Fetched: 4 row(s)
Then run a single multi-insert statement that writes to three partitions, none of which exists yet:

from partition_test_input
insert overwrite table partition_test
    partition (stat_date = '20110526', province = 'liaoning')
    select member_id, name
    where stat_date = '20110526' and province = 'liaoning'
insert overwrite table partition_test
    partition (stat_date = '20110728', province = 'sichuan')
    select member_id, name
    where stat_date = '20110728' and province = 'sichuan'
insert overwrite table partition_test
    partition (stat_date = '20110728', province = 'heilongjiang')
    select member_id, name
    where stat_date = '20110728' and province = 'heilongjiang';
3.5 Sure enough, after the statement runs the previously missing partitions have been created automatically. (A multi-insert like this also scans the source table only once, no matter how many partitions it writes to.)
hive> show partitions partition_test;
OK
stat_date=2011%2F07%2F28/province=zhejiang
stat_date=20110526/province=liaoning
stat_date=20110728/province=heilongjiang
stat_date=20110728/province=henan
stat_date=20110728/province=sichuan
stat_date=20110728/province=zhejiang
stat_date=20110730/province=beijing
Time taken: 0.077 seconds, Fetched: 7 row(s)
hive>
Be careful here: in most other databases, the system checks whether inserted rows actually match the target partition and raises an error if they don't. In Hive, what lands in a partition is entirely up to you, because the partition keys are pseudo-columns that are never stored in the data files. For example:
hive> insert overwrite table partition_test partition(stat_date='20110527',province='liaoning') select member_id,name from partition_test_input;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = a6_20171104201912_291312cf-da20-4b67-a746-1889b297f0bd
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1509763925736_0009, Tracking URL = http://localhost:8088/proxy/application_1509763925736_0009/
Kill Command = /Users/a6/Applications/hadoop-2.6.5/bin/hadoop job  -kill job_1509763925736_0009
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2017-11-04 20:19:20,577 Stage-1 map = 0%,  reduce = 0%
2017-11-04 20:19:26,869 Stage-1 map = 100%,  reduce = 0%
Ended Job = job_1509763925736_0009
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://localhost:9002/user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110527/province=liaoning/.hive-staging_hive_2017-11-04_20-19-12_640_370650644218657006-1/-ext-10000
Loading data to table yyz_workdb.partition_test partition (stat_date=20110527, province=liaoning)
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   HDFS Read: 4507 HDFS Write: 185 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 15.737 seconds
hive> select * from partition_test;
OK
1	liujiannan	20110526	liaoning
1	liujiannan	20110527	liaoning
2	wangchaoqun	20110527	liaoning
3	xuhongxing	20110527	liaoning
4	zhudaoyong	20110527	liaoning
5	zhouchengyu	20110527	liaoning
5	zhouchengyu	20110728	heilongjiang
4	zhudaoyong	20110728	henan
3	xuhongxing	20110728	sichuan
Time taken: 0.104 seconds, Fetched: 9 row(s)
hive>
The 5 rows in partition_test_input have assorted stat_date and province values, yet after being inserted into partition(stat_date='20110527',province='liaoning') they all report the same stat_date and province, because those two columns are read back from the directory name rather than from the data file:
localhost:result_data a6$ hadoop fs -ls /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110527/province=liaoning
17/11/04 20:22:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rwxr-xr-x   1 a6 supergroup         67 2017-11-04 20:19 /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110527/province=liaoning/000000_0
localhost:result_data a6$ hadoop dfs -cat /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110527/province=liaoning/000000_0
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

17/11/04 20:22:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
1	liujiannan
2	wangchaoqun
3	xuhongxing
4	zhudaoyong
5	zhouchengyu
localhost:result_data a6$
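A corollary worth noting: since partition values exist only in the directory path, filtering on a partition column lets Hive read just the matching directories instead of scanning the whole table. A hypothetical query against the data above, which would touch only the stat_date=20110527/province=liaoning directory:

select member_id, name from partition_test where stat_date='20110527' and province='liaoning';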
(4) Now let's look at dynamic partitions. With the method above, a large source data set means writing one insert per partition, which is tedious. Worse, in earlier versions every partition had to be created by hand before inserting, so you first had to know exactly which values the source data contained.
Dynamic partitions solve both problems: rows are matched automatically to the appropriate partition based on the values the query produces.

4.1 To use dynamic partitions, first set hive.exec.dynamic.partition to true; it defaults to false, i.e., disabled:
hive> set hive.exec.dynamic.partition;
hive.exec.dynamic.partition=false
hive> set hive.exec.dynamic.partition=true;
hive> set hive.exec.dynamic.partition;
hive.exec.dynamic.partition=true
Using a dynamic partition is straightforward. Suppose I want to insert data under the stat_date='20110728' partition but let Hive decide which province sub-partition each row belongs in. I can write:
hive> insert overwrite table partition_test partition(stat_date='20110728',province)
    > select member_id,name,province from partition_test_input where stat_date='20110728';
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = a6_20171104202556_5adab2fb-bc07-4dd9-b61c-2fb87176da63
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1509763925736_0010, Tracking URL = http://localhost:8088/proxy/application_1509763925736_0010/
Kill Command = /Users/a6/Applications/hadoop-2.6.5/bin/hadoop job  -kill job_1509763925736_0010
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2017-11-04 20:26:05,043 Stage-1 map = 0%,  reduce = 0%
2017-11-04 20:26:11,465 Stage-1 map = 100%,  reduce = 0%
Ended Job = job_1509763925736_0010
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://localhost:9002/user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110728/.hive-staging_hive_2017-11-04_20-25-56_419_3312543230075088948-1/-ext-10000
Loading data to table yyz_workdb.partition_test partition (stat_date=20110728, province=null)

Loaded : 3/3 partitions.
	 Time taken to load dynamic partitions: 0.359 seconds
	 Time taken for adding to write entity : 0.001 seconds
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   HDFS Read: 5218 HDFS Write: 322 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 16.814 seconds
4.2 Here stat_date is called a static partition column and province a dynamic partition column. The dynamic partition columns must appear at the end of the select clause, in partition order; the static ones are not listed. All rows with stat_date='20110728' are then routed by province into the appropriate sub-directories under /user/hive/warehouse/yyz_workdb.db/partition_test/stat_date=20110728/. Any province sub-partition the source data needs but that doesn't exist yet is created automatically, which is very convenient and avoids the risks of hand-mapping rows to partitions.

Note that dynamic partitioning does not allow a dynamic parent partition column with a static child partition column; that would imply creating the static child partition under every possible parent partition, and Hive rejects it:
hive>  insert overwrite table partition_test partition(stat_date,province='liaoning')
    > select member_id,name,province from partition_test_input where province='liaoning';
FAILED: SemanticException [Error 10094]: Line 1:49 Dynamic partition cannot be the parent of a static partition ''liaoning''
hive>
All of the partition columns may be dynamic, but that first requires the parameter hive.exec.dynamic.partition.mode, which defaults to strict; strict mode insists on at least one static partition column, while nonstrict lifts that restriction, as sketched below:
hive> set hive.exec.dynamic.partition.mode;
hive.exec.dynamic.partition.mode=strict
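A minimal sketch of the all-dynamic case (not part of the session above): switch to nonstrict mode and let Hive derive both stat_date and province from the last two columns of the select list:

set hive.exec.dynamic.partition.mode=nonstrict;
insert overwrite table partition_test partition (stat_date, province)
select member_id, name, stat_date, province from partition_test_input;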
(5) Three more parameters are worth introducing:
5.1 Their default values and meanings:
hive.exec.max.dynamic.partitions.pernode (default 100): the maximum number of dynamic partitions each mapper or reducer node may create; exceeding it fails the job.
hive.exec.max.dynamic.partitions (default 1000): the maximum total number of dynamic partitions a single DML statement may create.
hive.exec.max.created.files (default 100000): the maximum total number of HDFS files that all mappers and reducers in a MapReduce job may create.
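Before a heavy dynamic-partition load you might raise these limits for the session; the values below are purely illustrative, not recommendations:

set hive.exec.max.dynamic.partitions.pernode=1000;
set hive.exec.max.dynamic.partitions=5000;
set hive.exec.max.created.files=200000;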

5.2 An example. When the source table is large, the rows produced within a single MapReduce job may be scattered across many partition-column values. As a simple illustration, suppose the following table is processed by 3 mappers:
1
1
1
2
2
2
3
3
3
If the data happens to be distributed like this, each mapper only needs to create 1 partition:
         |1
map1 --> |1 
         |1 

         |2
map2 --> |2 
         |2 

         |3
map3 --> |3 
         |3

But if it is distributed like this, every mapper has to create 3 partitions:

         |1
map1 --> |2 
         |3 

         |1
map2 --> |2 
         |3 

         |1
map3 --> |2 
         |3
5.3 Here is an example that trips the per-node limit (the output below is from an older Hive release, hence the different log format):
hive> set hive.exec.max.dynamic.partitions.pernode=4;
hive> insert overwrite table partition_test partition(stat_date,province)
> select member_id,name,stat_date,province from partition_test_input distribute by stat_date,province;
Total MapReduce jobs = 1
...
[Fatal Error] Operator FS_4 (id=4): Number of dynamic partitions exceeded hive.exec.max.dynamic.partitions.pernode.. Killing the job.
Ended Job = job_201107251641_0083 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask

To make rows with identical partition-column values land in the same task, so that each task creates as few new partitions and files as possible, you can use distribute by to group the data by the partition columns:

hive> insert overwrite table partition_test partition(stat_date,province)
> select member_id,name,stat_date,province from partition_test_input distribute by stat_date,province;
Total MapReduce jobs = 1
...
18 Rows loaded to partition_test
OK


Reference: http://blog.sina.com.cn/s/blog_6ff05a2c0100tah0.html