Hive Study Notes (2) -- Hive Data Definition Language (DDL)

Hive Data Definition Language (DDL)

The official Hive CREATE TABLE syntax is shown below (see LanguageManual DDL):

CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name    -- (Note: TEMPORARY available in Hive 0.14.0 and later)
  [(col_name data_type [COMMENT col_comment], ... [constraint_specification])]
  [COMMENT table_comment]
  [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
  [CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
  [SKEWED BY (col_name, col_name, ...)                  -- (Note: Available in Hive 0.10.0 and later)]
     ON ((col_value, col_value, ...), (col_value, col_value, ...), ...)
     [STORED AS DIRECTORIES]
  [
   [ROW FORMAT row_format] 
   [STORED AS file_format]
     | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]  -- (Note: Available in Hive 0.6.0 and later)
  ]
  [LOCATION hdfs_path]
  [TBLPROPERTIES (property_name=property_value, ...)]   -- (Note: Available in Hive 0.6.0 and later)
  [AS select_statement];   -- (Note: Available in Hive 0.5.0 and later; not supported for external tables)
 
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
  LIKE existing_table_or_view_name
  [LOCATION hdfs_path];
 
data_type
  : primitive_type
  | array_type
  | map_type
  | struct_type
  | union_type  -- (Note: Available in Hive 0.7.0 and later)
 
primitive_type
  : TINYINT
  | SMALLINT
  | INT
  | BIGINT
  | BOOLEAN
  | FLOAT
  | DOUBLE
  | DOUBLE PRECISION -- (Note: Available in Hive 2.2.0 and later)
  | STRING
  | BINARY      -- (Note: Available in Hive 0.8.0 and later)
  | TIMESTAMP   -- (Note: Available in Hive 0.8.0 and later)
  | DECIMAL     -- (Note: Available in Hive 0.11.0 and later)
  | DECIMAL(precision, scale)  -- (Note: Available in Hive 0.13.0 and later)
  | DATE        -- (Note: Available in Hive 0.12.0 and later)
  | VARCHAR     -- (Note: Available in Hive 0.12.0 and later)
  | CHAR        -- (Note: Available in Hive 0.13.0 and later)
 
array_type
  : ARRAY < data_type >
 
map_type
  : MAP < primitive_type, data_type >
 
struct_type
  : STRUCT < col_name : data_type [COMMENT col_comment], ...>
 
union_type
   : UNIONTYPE < data_type, data_type, ... >  -- (Note: Available in Hive 0.7.0 and later)
 
row_format
  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
        [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
        [NULL DEFINED AS char]   -- (Note: Available in Hive 0.13 and later)
  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, property_name=property_value, ...)]
 
file_format:
  : SEQUENCEFILE
  | TEXTFILE    -- (Default, depending on hive.default.fileformat configuration)
  | RCFILE      -- (Note: Available in Hive 0.6.0 and later)
  | ORC         -- (Note: Available in Hive 0.11.0 and later)
  | PARQUET     -- (Note: Available in Hive 0.13.0 and later)
  | AVRO        -- (Note: Available in Hive 0.14.0 and later)
  | JSONFILE    -- (Note: Available in Hive 4.0.0 and later)
  | INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname
 
constraint_specification:
  : [, PRIMARY KEY (col_name, ...) DISABLE NOVALIDATE ]
    [, CONSTRAINT constraint_name FOREIGN KEY (col_name, ...) REFERENCES table_name(col_name, ...) DISABLE NOVALIDATE ]
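
The grammar above also includes two shorthand forms worth calling out: CREATE TABLE ... LIKE copies only the schema of an existing table or view, and CREATE TABLE ... AS SELECT (CTAS) creates the table and populates it in one statement (not supported for external tables). A minimal sketch with hypothetical table names:

 -- Copy only the schema (no data) of an existing table
 CREATE TABLE employees_backup LIKE employees;

 -- CTAS: the new table's schema is derived from the SELECT and its result is loaded into it
 CREATE TABLE us_employees AS
 SELECT * FROM employees WHERE country = 'US';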

1. Creating an Internal (Managed) Table

 CREATE TABLE employees(                          
   name string,                                   
   salary float,                                  
   subordinates array<string>,                    
   deductions map<string,float>,                  
   address struct<street:string,city:string,state:string,zip:int>) 
 PARTITIONED BY ( country string, state string)                                  
   ROW FORMAT DELIMITED 
   FIELDS TERMINATED BY '\001'
   COLLECTION ITEMS TERMINATED BY '\002'
   MAP KEYS TERMINATED BY '\003'
   LINES TERMINATED BY '\n' 
   STORED AS TEXTFILE; 
  • ROW FORMAT DELIMITED: reserved keywords
  • FIELDS TERMINATED BY: column delimiter (^A, usually written as '\001')
  • COLLECTION ITEMS TERMINATED BY: delimiter between collection elements (^B, usually written as '\002')
  • MAP KEYS TERMINATED BY: delimiter between a map key and its value (^C, usually written as '\003')
  • LINES TERMINATED BY: row delimiter (\n)
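To make these delimiters concrete, here is a sketch of what a single record of the employees table could look like in the underlying text file (^A/^B/^C denote \001/\002/\003; all values are made up), followed by a query that touches each complex column:

 John Doe^A8000.0^AMary^BTom^AInsurance^C500.0^BTax^C200.0^A1 Main St^BSeattle^BWA^B98100

 -- country/state are partition columns, so their values below are hypothetical
 SELECT name,
        subordinates[0],     -- first element of the array
        deductions['Tax'],   -- value for one map key
        address.city         -- one field of the struct
 FROM employees
 WHERE country = 'US' AND state = 'WA';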

2. Creating an External Table

  • The files are stored under the /user/hivetest/logs directory:
10180,2019-01-01 20:30:10,2019-01-01,1241,1101003
10180,2019-01-02 20:40:10,2019-01-02,1221,1101123
10180,2019-01-03 20:50:16,2019-01-03,1341,1101014
10180,2019-01-04 20:40:20,2019-01-04,1051,1101869
  • Create an external table and specify the file storage path:
CREATE EXTERNAL TABLE tmp_logs(
    domain_id INT ,
    log_time STRING ,
    log_date STRING,
    log_type INT,
    uin BIGINT )
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS textfile
LOCATION '/user/hivetest/logs';
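Because the table is EXTERNAL, Hive only records metadata pointing at the files that already exist under /user/hivetest/logs. A couple of quick sanity checks (a sketch):

 SELECT log_date, COUNT(*) AS cnt   -- the table immediately sees the files under LOCATION
 FROM tmp_logs
 GROUP BY log_date;

 DESCRIBE FORMATTED tmp_logs;       -- shows Table Type: EXTERNAL_TABLE and the LOCATION

 -- Dropping an external table removes only the metadata; the files stay on HDFS
 -- DROP TABLE tmp_logs;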

3. Using Different File Formats

  • When you create or alter a table, you can specify a different file format for the table and for individual partitions (see the ALTER TABLE sketch after this list)
  • Storage Formats
    • stored as file_format
      • stored as parquet (no delimiters need to be specified; the format is parsed directly)
      • stored as orc (no delimiters need to be specified; the format is parsed directly)
      • stored as sequencefile
      • stored as avro
      • stored as textfile (delimiters must be specified for text files)
    • stored by 'storage.handler.class.name' with serdeproperties(...)
      • A custom format via a storage handler, which supports additional storage systems (expanded into a full statement after this list)
        STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
        WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,a:b,a:c,d:e");

  • Row Formats
  • SerDe
  • For text formats you also need to specify delimiters
    • row delimiter, column delimiter
    • collection element delimiter, key/value delimiter
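Two sketches for the points above. The first changes the file format for a table or a single partition with ALTER TABLE; the second expands the storage-handler fragment into a complete statement (the table names, column names and hbase.table.name are hypothetical, and the Hive-HBase handler jars must be available on the cluster):

    -- Metadata-only change: existing files are not converted
    ALTER TABLE logs SET FILEFORMAT ORC;
    ALTER TABLE logs PARTITION (log_time='2016-01-12-06') SET FILEFORMAT ORC;

    -- A Hive table backed by HBase through the storage handler
    CREATE TABLE hbase_logs(key STRING, b STRING, c STRING, e STRING)
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,a:b,a:c,d:e")
    TBLPROPERTIES ("hbase.table.name" = "my_hbase_logs");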

4. Row Storage vs. Column Storage


  • Row-oriented storage
    • Pros: a record's data is stored together; INSERT/UPDATE is easy
    • Cons: even when a query involves only a few columns, all of the data is read
  • Column-oriented storage
    • Pros: only the columns involved in a query are read; any column can serve as an index
    • Cons: the selected columns must be reassembled after the query completes; INSERT/UPDATE is cumbersome
  • Traditional row-oriented storage (text, sequence file)

    • Data is stored row by row
    • Queries without an index require large amounts of IO
    • Building indexes and materialized views costs considerable time and resources
    • For query-oriented workloads, the database has to be bloated substantially to meet performance requirements
  • Column-oriented storage (ORCFile, Parquet)

    • Data is stored by column; each column is stored separately
    • The data itself serves as the index
    • Only the columns involved in a query are accessed, which greatly reduces system IO
    • Each column can be processed by its own thread, allowing concurrent processing within a query
    • Values in a column share the same data type and similar characteristics, which enables efficient compression
  • Storage space comparison of the different storage formats

5. How to Create a Compressed ORC Table: Steps


  • Create a staging table

    • The raw files look like this:
      10180,2019-01-01 20:30:10,2019-01-01,1241,1101003
      10180,2019-01-02 20:40:10,2019-01-02,1221,1101123
      10180,2019-01-03 20:50:16,2019-01-03,1341,1101014
      10180,2019-01-04 20:40:20,2019-01-04,1051,1101869
      
    • Create the staging table:
      CREATE EXTERNAL TABLE tmp_logs(
          domain_id INT ,
          log_time STRING ,
          log_date STRING,
          log_type INT,
          uin BIGINT )
      ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ','
      STORED AS textfile
      LOCATION '/user/hivetest/logs';
      
  • Create the ORC table

    • Load the raw log files into the staging table

      LOAD DATA INPATH '/ngnix/logs/2016011206' OVERWRITE INTO TABLE tmp_logs;
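      Note that with an HDFS path, LOAD DATA INPATH moves the files into the table's storage location rather than copying them. To copy from the local filesystem instead, the LOCAL keyword can be used (a sketch with a hypothetical local path):

      LOAD DATA LOCAL INPATH '/tmp/logs/2016011206' INTO TABLE tmp_logs;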
      
    • Insert the data from the staging table into the ORC-format table

      CREATE TABLE logs(
          domain_id INT ,
          log_date STRING,
          log_type INT,
          uin BIGINT )
      PARTITIONED BY(log_time string)
      STORED AS ORC
      tblproperties ("orc.compress"="SNAPPY");
      
      INSERT INTO TABLE logs PARTITION(log_time='2016-01-12-06')
      SELECT domain_id, log_date, log_type, uin FROM tmp_logs;
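      A quick sanity check that the data landed in the ORC table and its partition (assuming the statements above succeeded):

      SHOW PARTITIONS logs;
      SELECT COUNT(*) FROM logs WHERE log_time = '2016-01-12-06';
      DESCRIBE FORMATTED logs;   -- shows the ORC input/output format and orc.compress=SNAPPY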
      
  • Commonly used compression algorithms
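    For ORC tables the codec is chosen through the orc.compress table property (ZLIB, SNAPPY and NONE are the usual values); for text or sequence-file output, compression is enabled through configuration instead. A sketch (codec availability depends on the cluster):

    -- Affects data written from now on; already-written ORC files keep their original codec
    ALTER TABLE logs SET TBLPROPERTIES ("orc.compress" = "ZLIB");

    -- Compress query output for text/sequence-file tables
    SET hive.exec.compress.output=true;
    SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;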
