Hive Managed Tables vs. External Tables (When to Use External Tables)

Explanation from the official documentation

Managed and External Tables
By default Hive creates managed tables, where files, metadata and statistics are managed by internal Hive processes. A managed table is stored under the hive.metastore.warehouse.dir path property, by default in a folder path similar to /apps/hive/warehouse/databasename.db/tablename/. The default location can be overridden by the location property during table creation. If a managed table or partition is dropped, the data and metadata associated with that table or partition are deleted. If the PURGE option is not specified, the data is moved to a trash folder for a defined duration.
Use managed tables when Hive should manage the lifecycle of the table, or when generating temporary tables.
An external table describes the metadata / schema on external files. External table files can be accessed and managed by processes outside of Hive. External tables can access data stored in sources such as Azure Storage Volumes (ASV) or remote HDFS locations. If the structure or partitioning of an external table is changed, an MSCK REPAIR TABLE table_name statement can be used to refresh metadata information.
Use external tables when files are already present or in remote locations, and the files should remain even if the table is dropped.
Managed or external tables can be identified using the DESCRIBE FORMATTED table_name command, which will display either MANAGED_TABLE or EXTERNAL_TABLE depending on table type.
Statistics can be managed on internal and external tables and partitions for query optimization. 
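
As a quick illustration of the DESCRIBE FORMATTED check mentioned above, the table type can be read straight from the Hive CLI (output trimmed; the exact layout varies slightly across Hive versions):

hive (default)> describe formatted emp;
...
Table Type:             MANAGED_TABLE
...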

From this documentation we can see that Hive involves two kinds of data: the data itself and the metadata.
The data is stored on HDFS, while the metadata is stored in MySQL (the metastore database). Keeping these two apart is the key to understanding managed and external tables.
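
For reference, the default HDFS location for managed tables is controlled by hive.metastore.warehouse.dir and can be checked from the Hive CLI (the value shown below is the common default; a cluster may override it in hive-site.xml):

hive (default)> set hive.metastore.warehouse.dir;
hive.metastore.warehouse.dir=/user/hive/warehouse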

I. Managed Tables
1. Create a managed table

hive (default)> create table emp_manager(
              > empno int, 
              > ename string, 
              > job string, 
              > mgr int, 
              > hiredate string, 
              > sal double, 
              > comm double, 
              > deptno int
              > )row format delimited fields terminated by '\t';
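
The table can then be populated with LOAD DATA; a minimal sketch, assuming a local tab-delimited file emp.txt under /root/data (the path is only an example):

hive (default)> load data local inpath '/root/data/emp.txt' into table emp_manager;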

2. Inspect the data and metadata
On HDFS we can see the emp_manager table's data:

[root@hadoop001 maven-3.3.9]# hadoop fs -ls /user/hive/warehouse/
Found 2 items
drwxr-xr-x   - root supergroup          0 2017-10-07 21:55 /user/hive/warehouse/emp
drwxr-xr-x   - root supergroup          0 2017-10-07 23:33 /user/hive/warehouse/emp_manager

In MySQL we can see the emp_manager table's metadata:

mysql> select * from TBLS \G
*************************** 1. row ***************************
            TBL_ID: 1
       CREATE_TIME: 1506370951
             DB_ID: 1
  LAST_ACCESS_TIME: 0
             OWNER: root
         RETENTION: 0
             SD_ID: 1
          TBL_NAME: emp
          TBL_TYPE: MANAGED_TABLE
VIEW_EXPANDED_TEXT: NULL
VIEW_ORIGINAL_TEXT: NULL
*************************** 2. row ***************************
            TBL_ID: 6
       CREATE_TIME: 1507390381
             DB_ID: 1
  LAST_ACCESS_TIME: 0
             OWNER: root
         RETENTION: 0
             SD_ID: 6
          TBL_NAME: emp_manager
          TBL_TYPE: MANAGED_TABLE
VIEW_EXPANDED_TEXT: NULL
VIEW_ORIGINAL_TEXT: NULL
2 rows in set (0.00 sec)

3. Drop emp_manager, then check the data and metadata again
Drop the table:

hive (default)> drop table emp_manager;
OK
Time taken: 2.738 seconds

The data is deleted along with the table:

[root@hadoop001 maven-3.3.9]# hadoop fs -ls /user/hive/warehouse/
Found 1 items
drwxr-xr-x   - root supergroup          0 2017-10-07 21:55 /user/hive/warehouse/emp

The metadata is deleted as well:

mysql> select * from TBLS \G
*************************** 1. row ***************************
            TBL_ID: 1
       CREATE_TIME: 1506370951
             DB_ID: 1
  LAST_ACCESS_TIME: 0
             OWNER: root
         RETENTION: 0
             SD_ID: 1
          TBL_NAME: emp
          TBL_TYPE: MANAGED_TABLE
VIEW_EXPANDED_TEXT: NULL
VIEW_ORIGINAL_TEXT: NULL
1 row in set (0.00 sec)

II. External Tables
1. Create an external table (placed under a directory we specify ourselves)

hive (default)> create external table emp_external(
              > empno int, 
              > ename string, 
              > job string, 
              > mgr int, 
              > hiredate string, 
              > sal double, 
              > comm double, 
              > deptno int
              > )row format delimited fields terminated by '\t'
              > location '/hive/external_table/';

Load data into emp_external:

[root@hadoop001 data]# hadoop fs -put emp.txt /hive/external_table/
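
Because the file is already under the table's location, no LOAD DATA statement is needed; a quick sanity check from the Hive side would look like this (output omitted):

hive (default)> select * from emp_external limit 5;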

2. Inspect the data and metadata
On HDFS we can see the emp_external table's data:

[root@hadoop001 data]# hadoop fs -ls /hive/external_table/
Found 1 items
-rw-r--r--   1 root supergroup        700 2017-10-07 23:50 /hive/external_table/emp.txt

In MySQL we can see the emp_external table's metadata:

mysql> select * from TBLS \G
*************************** 1. row ***************************
            TBL_ID: 1
       CREATE_TIME: 1506370951
             DB_ID: 1
  LAST_ACCESS_TIME: 0
             OWNER: root
         RETENTION: 0
             SD_ID: 1
          TBL_NAME: emp
          TBL_TYPE: MANAGED_TABLE
VIEW_EXPANDED_TEXT: NULL
VIEW_ORIGINAL_TEXT: NULL
*************************** 2. row ***************************
            TBL_ID: 7
       CREATE_TIME: 1507391060
             DB_ID: 1
  LAST_ACCESS_TIME: 0
             OWNER: root
         RETENTION: 0
             SD_ID: 7
          TBL_NAME: emp_external
          TBL_TYPE: EXTERNAL_TABLE
VIEW_EXPANDED_TEXT: NULL
VIEW_ORIGINAL_TEXT: NULL
2 rows in set (0.00 sec)

3. Drop emp_external, then check the data and metadata again
Drop the table:

hive (default)> drop table emp_external;
OK

The data is not deleted along with the table:

[root@hadoop001 data]# hadoop fs -ls /hive/external_table/
Found 1 items
-rw-r--r--   1 root supergroup        700 2017-10-07 23:50 /hive/external_table/emp.txt

The metadata, however, is deleted:

mysql> select * from TBLS \G
*************************** 1. row ***************************
            TBL_ID: 1
       CREATE_TIME: 1506370951
             DB_ID: 1
  LAST_ACCESS_TIME: 0
             OWNER: root
         RETENTION: 0
             SD_ID: 1
          TBL_NAME: emp
          TBL_TYPE: MANAGED_TABLE
VIEW_EXPANDED_TEXT: NULL
VIEW_ORIGINAL_TEXT: NULL
1 row in set (0.00 sec)

So if an external table is dropped by accident, you can simply create a new table with location '/hive/external_table/' pointing at the same directory, and the data is recovered.
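
A sketch of that recovery: re-run the same DDL against the original location, and the file that was left on HDFS is immediately visible again (for a partitioned external table you would also run MSCK REPAIR TABLE afterwards):

hive (default)> create external table emp_external(
              > empno int, 
              > ename string, 
              > job string, 
              > mgr int, 
              > hiredate string, 
              > sal double, 
              > comm double, 
              > deptno int
              > )row format delimited fields terminated by '\t'
              > location '/hive/external_table/';
hive (default)> select * from emp_external limit 5;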

III. When to Use External Tables
For raw log files that several departments operate on at the same time, external tables are the right choice (see the sketch below): even if the metadata is accidentally deleted, the data is still on HDFS and can be recovered, which makes the data safer.
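
A hedged sketch of that scenario: two teams define their own external tables over the same raw log directory, so either table can be dropped without touching the files (the table names and the path /data/logs/access/ are hypothetical):

hive (default)> create external table bi_access_log(
              > line string
              > )location '/data/logs/access/';

hive (default)> create external table ops_access_log(
              > line string
              > )location '/data/logs/access/';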
