Quickly Copying a Partitioned Table's Structure and Data in Hive

First, a quick review of the three ways to create a table in Hive:
1. Create a table directly: create table table_name (column1 type1, column2 type2, …)
2. Create a new table that copies only an existing table's structure: create table new_table_name like old_table_name
3. Create a new table that copies both structure and data: create table new_table_name as select * from old_table_name
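As a quick illustration, here are the three forms with made-up table names (orders, orders_bak, and orders_copy are hypothetical):

create table orders (order_id int, amount double);    -- 1. define from scratch
create table orders_bak like orders;                  -- 2. copy structure only
create table orders_copy as select * from orders;     -- 3. copy structure and data

Note that in many Hive versions a CTAS statement does not carry over partition definitions, which is part of why the third form alone does not work for partitioned tables.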

Given this requirement and the three creation methods above: for a non-partitioned table, the third method copies the table in a single statement.
In real development, however, most Hive tables are partitioned. For a partitioned table we can use the second method to copy the old table's structure, then use dynamic partitioning to insert the old table's partition data into the new one. This works, but it is far from the most efficient approach. A minimal sketch of it follows.
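A minimal sketch of that slower approach, using the table names from the test below (old_partition is partitioned by daystr):

set hive.exec.dynamic.partition.mode=nonstrict;
create table new_partition like old_partition;
insert into table new_partition partition (daystr)
select shop_id, stat_date, ordamt, daystr from old_partition;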

For better performance, use the following steps instead:

1. create table new_partition like old_partition;

2. Use hdfs dfs -cp to copy every partition directory under old_partition's HDFS location into the directory of new_partition (see the consolidated sketch after this list).

3. Run msck repair table new_partition; to repair the new table's partition metadata.
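Put together, the whole fast copy is just three commands. The warehouse paths below match the test environment used later in this article; adjust them to your own setup.

In Hive:
create table new_partition like old_partition;

In a shell:
hdfs dfs -cp /user/hive/warehouse-3.1.1/db_window.db/old_partition/* /user/hive/warehouse-3.1.1/db_window.db/new_partition

Back in Hive:
msck repair table new_partition;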

Let's test it:

Our database contains the following table:

> select * from rt_data;
OK
rt_data.shop_id	rt_data.stat_date	rt_data.ordamt
10026	201501120030	5170.0
10026	201501120100	5669.0
10026	201501120130	2396.0
10026	201501120200	1498.0
10026	201501120230	1997.0
10026	201501120300	1188.0
10026	201501120330	598.0
10026	201501120400	479.0
10026	201501120430	1587.0
10026	201501120530	799.0
10027	201501120030	2170.0
10027	201501120100	1623.0
10027	201501120130	3397.0
10027	201501120200	1434.0
10027	201501120230	1001.0
10028	201501120300	1687.0
10028	201501120330	1298.0
10028	201501120400	149.0
10029	201501120430	2587.0
10029	201501120530	589.0
Time taken: 12.712 seconds, Fetched: 20 row(s)

Create a partitioned table:

> create table old_partition (shop_id int, stat_date string, ordamt double) partitioned by (daystr string); 
OK
Time taken: 0.17 seconds

Enable non-strict dynamic partition mode:

set hive.exec.dynamic.partition.mode=nonstrict;
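Depending on your Hive version, you may also need to enable dynamic partitioning itself (it defaults to true in recent releases):

set hive.exec.dynamic.partition=true;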

Dynamically insert the data from rt_data into old_partition:

> insert into table old_partition partition (daystr)
> select shop_id, stat_date, ordamt, stat_date from rt_data;
Query ID = rdedu_20180729105529_0d16042f-5d27-4616-826f-bc5e02067767
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1532570467965_0057, Tracking URL = http://bigdata-training01.erongda.com:8088/proxy/application_1532570467965_0057/
Kill Command = /opt/cdh-5.7.6/hadoop-2.6.0-cdh5.7.6/bin/mapred job  -kill job_1532570467965_0057
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-07-29 10:59:14,083 Stage-1 map = 0%,  reduce = 0%
2018-07-29 11:00:04,874 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 20.9 sec
2018-07-29 11:00:17,029 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 23.07 sec
MapReduce Total cumulative CPU time: 23 seconds 70 msec
Ended Job = job_1532570467965_0057
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://bigdata-training01.erongda.com:8020/user/hive/warehouse-3.1.1/db_window.db/old_partition/.hive-staging_hive_2018-07-29_10-55-29_054_1770048812106415069-1/-ext-10000
Loading data to table db_window.old_partition partition (daystr=null)


	 Time taken to load dynamic partitions: 1.588 seconds
	 Time taken for adding to write entity : 0.014 seconds
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 23.07 sec   HDFS Read: 18383 HDFS Write: 3075 SUCCESS
Total MapReduce CPU Time Spent: 23 seconds 70 msec
OK
shop_id	stat_date	ordamt	stat_date
Time taken: 292.625 seconds

Note: this dynamic insert took 292.625 s for barely twenty rows. With tens of millions of rows, the processing time would be dramatically longer.

Check the partitions:

> show partitions old_partition;
OK
partition
daystr=201501120030
daystr=201501120100
daystr=201501120130
daystr=201501120200
daystr=201501120230
daystr=201501120300
daystr=201501120330
daystr=201501120400
daystr=201501120430
daystr=201501120530
Time taken: 0.112 seconds, Fetched: 10 row(s)

Set hive.mapred.mode=nonstrict first, then query the data in old_partition.
If this parameter is left at strict, every select on a partitioned table must include a partition filter, otherwise Hive raises an error.
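For example, even in strict mode a query that filters on the partition column is still allowed:

select * from old_partition where daystr = '201501120030';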

> set hive.mapred.mode=nonstrict;
> select * from old_partition;
OK
old_partition.shop_id	old_partition.stat_date	old_partition.ordamt	old_partition.daystr
10026	201501120030	5170.0	201501120030
10027	201501120030	2170.0	201501120030
10026	201501120100	5669.0	201501120100
10027	201501120100	1623.0	201501120100
10026	201501120130	2396.0	201501120130
10027	201501120130	3397.0	201501120130
10026	201501120200	1498.0	201501120200
10027	201501120200	1434.0	201501120200
10026	201501120230	1997.0	201501120230
10027	201501120230	1001.0	201501120230
10026	201501120300	1188.0	201501120300
10028	201501120300	1687.0	201501120300
10026	201501120330	598.0	201501120330
10028	201501120330	1298.0	201501120330
10026	201501120400	479.0	201501120400
10028	201501120400	149.0	201501120400
10026	201501120430	1587.0	201501120430
10029	201501120430	2587.0	201501120430
10026	201501120530	799.0	201501120530
10029	201501120530	589.0	201501120530
Time taken: 0.425 seconds, Fetched: 20 row(s)

Now copy the old partitioned table's structure and data using the second, faster method:

> create table new_partition like old_partition;
OK
Time taken: 0.224 seconds

> show create table new_partition;
OK
createtab_stmt
CREATE TABLE `new_partition`(
  `shop_id` int, 
  `stat_date` string, 
  `ordamt` double)
PARTITIONED BY ( 
  `daystr` string)
ROW FORMAT SERDE 
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' 
STORED AS INPUTFORMAT 
  'org.apache.hadoop.mapred.TextInputFormat' 
OUTPUTFORMAT 
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  'hdfs://bigdata-training01.fuyun.com:8020/user/hive/warehouse-3.1.1/db_window.db/new_partition'
TBLPROPERTIES (
  'transient_lastDdlTime'='1532834516')
Time taken: 0.17 seconds, Fetched: 16 row(s)

Use hdfs dfs -cp to copy all of the partition directories under old_partition's HDFS location into new_partition's directory:

cd /opt/cdh-5.7.6/hadoop-2.6.0-cdh5.7.6

bin/hdfs dfs -cp /user/hive/warehouse-3.1.1/db_window.db/old_partition/* /user/hive/warehouse-3.1.1/db_window.db/new_partition

bin/hdfs dfs -ls /user/hive/warehouse-3.1.1/db_window.db/new_partition
Found 10 items
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120030
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120100
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120130
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120200
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120230
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120300
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120330
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120400
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120430
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120530
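For a table this small, hdfs dfs -cp is fine, but it copies through a single client. For very large tables, DistCp copies in parallel as a MapReduce job and is usually faster. A sketch, using -update so that the contents of the source directory are copied into the already-existing destination directory:

hadoop distcp -update /user/hive/warehouse-3.1.1/db_window.db/old_partition /user/hive/warehouse-3.1.1/db_window.db/new_partition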

Back in Hive, check the partitions of new_partition:

> show partitions new_partition;
OK
partition
Time taken: 0.124 seconds

No partitions are found yet. Run msck repair table new_partition; to repair the partition metadata, then check again:

> msck repair table new_partition;
OK
Partitions not in metastore:	new_partition:daystr=201501120030	new_partition:daystr=201501120100	new_partition:daystr=201501120130	new_partition:daystr=201501120200	new_partition:daystr=201501120230	new_partition:daystr=201501120300	new_partition:daystr=201501120330	new_partition:daystr=201501120400	new_partition:daystr=201501120430	new_partition:daystr=201501120530
Repair: Added partition to metastore new_partition:daystr=201501120530
Repair: Added partition to metastore new_partition:daystr=201501120330
Repair: Added partition to metastore new_partition:daystr=201501120230
Repair: Added partition to metastore new_partition:daystr=201501120030
Repair: Added partition to metastore new_partition:daystr=201501120130
Repair: Added partition to metastore new_partition:daystr=201501120100
Repair: Added partition to metastore new_partition:daystr=201501120200
Repair: Added partition to metastore new_partition:daystr=201501120300
Repair: Added partition to metastore new_partition:daystr=201501120400
Repair: Added partition to metastore new_partition:daystr=201501120430
Time taken: 0.742 seconds, Fetched: 11 row(s)

> show partitions new_partition;
OK
partition
daystr=201501120030
daystr=201501120100
daystr=201501120130
daystr=201501120200
daystr=201501120230
daystr=201501120300
daystr=201501120330
daystr=201501120400
daystr=201501120430
daystr=201501120530
Time taken: 0.082 seconds, Fetched: 10 row(s)
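If only a few partitions had been copied, an alternative to msck repair is to register them individually, for example:

alter table new_partition add partition (daystr='201501120030');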

The new partitioned table is now fully copied: it has the same structure and data as the old one. Comparing the two tests above, the first copy method is clearly much slower than the second, even though the test data is only a few dozen rows; with tens of millions of rows the gap would be far more pronounced.
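As a final sanity check, you can compare row counts between the two tables (both should return 20 for this test data):

select count(*) from old_partition;
select count(*) from new_partition;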
