HiveQL: Data Definition

I. Official HiveQL Documentation
The official HiveQL documentation: https://cwiki.apache.org/confluence/display/Hive/Home#Home-UserDocumentation
HiveQL is the query language used by Hive. It resembles SQL and is closest to MySQL's dialect, although neither fully complies with the ANSI SQL standard. Comparing HiveQL with MySQL's SQL:
1. Hive does not support transactions.
2. Hive does not support row-level insert, update, or delete operations.
3. A more detailed comparison is shown in the figure below.
(Figure: comparison of HiveQL and MySQL SQL)

Hive 1.2 reserved keywords

ALL, ALTER, AND, ARRAY, AS, AUTHORIZATION, BETWEEN, BIGINT, BINARY, BOOLEAN, BOTH, BY, CASE, CAST, CHAR, COLUMN, CONF, CREATE, CROSS, CUBE, CURRENT, CURRENT_DATE, CURRENT_TIMESTAMP, CURSOR, DATABASE, DATE, DECIMAL, DELETE, DESCRIBE, DISTINCT, DOUBLE, DROP, ELSE, END, EXCHANGE, EXISTS, EXTENDED, EXTERNAL, FALSE, FETCH, FLOAT, FOLLOWING, FOR, FROM, FULL, FUNCTION, GRANT, GROUP, GROUPING, HAVING, IF, IMPORT, IN, INNER, INSERT, INT, INTERSECT, INTERVAL, INTO, IS, JOIN, LATERAL, LEFT, LESS, LIKE, LOCAL, MACRO, MAP, MORE, NONE, NOT, NULL, OF, ON, OR, ORDER, OUT, OUTER, OVER, PARTIALSCAN, PARTITION, PERCENT, PRECEDING, PRESERVE, PROCEDURE, RANGE, READS, REDUCE, REVOKE, RIGHT, ROLLUP, ROW, ROWS, SELECT, SET, SMALLINT, TABLE, TABLESAMPLE, THEN, TIMESTAMP, TO, TRANSFORM, TRIGGER, TRUE, TRUNCATE, UNBOUNDED, UNION, UNIQUEJOIN, UPDATE, USER, USING, UTC_TMESTAMP, VALUES, VARCHAR, WHEN, WHERE, WINDOW, WITH

Non-reserved keywords

ADD, ADMIN, AFTER, ANALYZE, ARCHIVE, ASC, BEFORE, BUCKET, BUCKETS, CASCADE, CHANGE, CLUSTER, CLUSTERED, CLUSTERSTATUS, COLLECTION, COLUMNS, COMMENT, COMPACT, COMPACTIONS, COMPUTE, CONCATENATE, CONTINUE, DATA, DATABASES, DATETIME, DAY, DBPROPERTIES, DEFERRED, DEFINED, DELIMITED, DEPENDENCY, DESC, DIRECTORIES, DIRECTORY, DISABLE, DISTRIBUTE, ELEM_TYPE, ENABLE, ESCAPED, EXCLUSIVE, EXPLAIN, EXPORT, FIELDS, FILE, FILEFORMAT, FIRST, FORMAT, FORMATTED, FUNCTIONS, HOLD_DDLTIME, HOUR, IDXPROPERTIES, IGNORE, INDEX, INDEXES, INPATH, INPUTDRIVER, INPUTFORMAT, ITEMS, JAR, KEYS, KEY_TYPE, LIMIT, LINES, LOAD, LOCATION, LOCK, LOCKS, LOGICAL, LONG, MAPJOIN, MATERIALIZED, METADATA, MINUS, MINUTE, MONTH, MSCK, NOSCAN, NO_DROP, OFFLINE, OPTION, OUTPUTDRIVER, OUTPUTFORMAT, OVERWRITE, OWNER, PARTITIONED, PARTITIONS, PLUS, PRETTY, PRINCIPALS, PROTECTION, PURGE, READ, READONLY, REBUILD, RECORDREADER, RECORDWRITER, REGEXP, RELOAD, RENAME, REPAIR, REPLACE, REPLICATION, RESTRICT, REWRITE, RLIKE, ROLE, ROLES, SCHEMA, SCHEMAS, SECOND, SEMI, SERDE, SERDEPROPERTIES, SERVER, SETS, SHARED, SHOW, SHOW_DATABASE, SKEWED, SORT, SORTED, SSL, STATISTICS, STORED, STREAMTABLE, STRING, STRUCT, TABLES, TBLPROPERTIES, TEMPORARY, TERMINATED, TINYINT, TOUCH, TRANSACTIONS, UNARCHIVE, UNDO, UNIONTYPE, UNLOCK, UNSET, UNSIGNED, URI, USE, UTC, UTCTIMESTAMP, VALUE_TYPE, VIEW, WHILE, YEAR

II. Databases in Hive
1. If the user does not explicitly specify a database, the default database, default, is used.
2. A database in Hive is essentially just a directory or namespace for tables. On large clusters with many teams and applications, databases avoid table-name collisions and let you organize production tables into logical groups.
3. Creating a database:

hive> CREATE DATABASE IF NOT EXISTS financials;

IF NOT EXISTS: if the financials database already exists, it is simply not created again; without this clause, the statement fails with an error when financials already exists.

hive (default)> SHOW DATABASES;
OK
default
financials
test_hive
Time taken: 0.007 seconds, Fetched: 3 row(s)

If there are many databases, the list can be filtered with a wildcard pattern:

hive (financials)> SHOW DATABASES LIKE 'test.*';
OK
test_hive
Time taken: 0.007 seconds, Fetched: 1 row(s)
hive (financials)> SHOW DATABASES LIKE '.*test.*';
OK
hive_test1
hive_test2
Time taken: 0.007 seconds, Fetched: 2 row(s)
hive (financials)> SHOW DATABASES LIKE '*test*';
OK
hive_test1
hive_test2
test_hive
Time taken: 0.007 seconds, Fetched: 3 row(s)

4. Hive creates a directory for each database (named DBNAME.db); each table in the database gets its own subdirectory under it. The default database, however, has no directory of its own.
5. The parent directory for databases is set by the property hive.metastore.warehouse.dir.

Default value: hdfs://master:9000/user/hive/warehouse.
For example: hdfs://master:9000/user/hive/warehouse/hive_test2.db

The default location can be overridden when the database is created:
hive (financials)> CREATE DATABASE IF NOT EXISTS hive_test2
COMMENT 'this is hive test db'
LOCATION 'hdfs://master:9000/hive_test2';

COMMENT: attaches a description to the database at creation time.
LOCATION: the directory for the database. It is best to give an absolute HDFS URI; if only a relative path is given, it is resolved against hdfs://master:9000/user/hive/. For example, a relative location of test2 corresponds to the HDFS path hdfs://master:9000/user/hive/test2.db.

    hive (financials)> desc database hive_test2;
    OK
    hive_test2      this is hive test db    hdfs://master:9000/hive_test2   root    USER
    Time taken: 0.008 seconds, Fetched: 1 row(s)
    DESC and DESCRIBE are equivalent. Without the DATABASE (or SCHEMA) keyword, DESCRIBE describes a table by default.

The official CREATE DATABASE syntax:

    CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
    [COMMENT database_comment]
    [LOCATION hdfs_path]
    [WITH DBPROPERTIES (property_name=property_value, ...)];

6. The USE command switches the current database.
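For example:

```sql
-- Make financials the current database; unqualified table names now resolve there
USE financials;
-- Switch back to the default database
USE default;
```

Setting hive.cli.print.current.db=true makes the CLI prompt display the current database, which is how the hive (dbname)> prompts in this article show it.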

7. Dropping a database:

hive (financials)> DROP DATABASE IF EXISTS hive_test2;
OK
Time taken: 0.126 seconds
If hive_test2 exists, it is dropped.
The corresponding directory on HDFS is removed as well:
[root@master ~]# hadoop fs -ls hdfs://master:9000/hive_test2
ls: `hdfs://master:9000/hive_test2': No such file or directory
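Note that by default Hive refuses to drop a database that still contains tables (RESTRICT behavior); adding CASCADE drops the contained tables first:

```sql
-- Drops hive_test2 together with any tables it still contains
DROP DATABASE IF EXISTS hive_test2 CASCADE;
```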

8. Modifying a database:

ALTER (DATABASE|SCHEMA) database_name SET DBPROPERTIES (property_name=property_value, ...);   -- (Note: SCHEMA added in Hive 0.14.0)
ALTER (DATABASE|SCHEMA) database_name SET OWNER [USER|ROLE] user_or_role;   -- (Note: Hive 0.13.0 and later; SCHEMA added in Hive 0.14.0)
ALTER (DATABASE|SCHEMA) database_name SET LOCATION hdfs_path; -- (Note: Hive 2.2.1, 2.4.0 and later)

DBPROPERTIES sets key/value pairs that describe the database.
Changing a database's location only affects tables created afterwards; existing tables, including their partition data, stay in their original location. For this reason it is best not to change a database's directory.
Other database metadata cannot be changed: the database name cannot be altered, and there is no way to delete or "reset" database properties.
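A small sketch of setting database properties (the property names here are illustrative):

```sql
-- Attach descriptive key/value properties to the database
ALTER DATABASE financials SET DBPROPERTIES ('edited-by' = 'me', 'purpose' = 'demo');
-- The new properties show up in the output of:
-- DESCRIBE DATABASE EXTENDED financials;
```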

III. Hive's CREATE TABLE syntax:

CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name    -- (Note: TEMPORARY available in Hive 0.14.0 and later)
      [(col_name data_type [COMMENT col_comment], ... [constraint_specification])]
      [COMMENT table_comment]
      [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
      [CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
      [SKEWED BY (col_name, col_name, ...)                  -- (Note: Available in Hive 0.10.0 and later)]
         ON ((col_value, col_value, ...), (col_value, col_value, ...), ...)
         [STORED AS DIRECTORIES]
      [
       [ROW FORMAT row_format] 
       [STORED AS file_format]
         | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]  -- (Note: Available in Hive 0.6.0 and later)
      ]
      [LOCATION hdfs_path]
      [TBLPROPERTIES (property_name=property_value, ...)]   -- (Note: Available in Hive 0.6.0 and later)
      [AS select_statement];   -- (Note: Available in Hive 0.5.0 and later; not supported for external tables)

    CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
      LIKE existing_table_or_view_name
      [LOCATION hdfs_path];

    data_type
      : primitive_type
      | array_type
      | map_type
      | struct_type
      | union_type  -- (Note: Available in Hive 0.7.0 and later)

    primitive_type
      : TINYINT
      | SMALLINT
      | INT
      | BIGINT
      | BOOLEAN
      | FLOAT
      | DOUBLE
      | DOUBLE PRECISION -- (Note: Available in Hive 2.2.0 and later)
      | STRING
      | BINARY      -- (Note: Available in Hive 0.8.0 and later)
      | TIMESTAMP   -- (Note: Available in Hive 0.8.0 and later)
      | DECIMAL     -- (Note: Available in Hive 0.11.0 and later)
      | DECIMAL(precision, scale)  -- (Note: Available in Hive 0.13.0 and later)
      | DATE        -- (Note: Available in Hive 0.12.0 and later)
      | VARCHAR     -- (Note: Available in Hive 0.12.0 and later)
      | CHAR        -- (Note: Available in Hive 0.13.0 and later)

    array_type
      : ARRAY < data_type >

    map_type
      : MAP < primitive_type, data_type >

    struct_type
      : STRUCT < col_name : data_type [COMMENT col_comment], ...>

    union_type
       : UNIONTYPE < data_type, data_type, ... >  -- (Note: Available in Hive 0.7.0 and later)

    row_format
      : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
            [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
            [NULL DEFINED AS char]   -- (Note: Available in Hive 0.13 and later)
      | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, property_name=property_value, ...)]

    file_format:
      : SEQUENCEFILE
      | TEXTFILE    -- (Default, depending on hive.default.fileformat configuration)
      | RCFILE      -- (Note: Available in Hive 0.6.0 and later)
      | ORC         -- (Note: Available in Hive 0.11.0 and later)
      | PARQUET     -- (Note: Available in Hive 0.13.0 and later)
      | AVRO        -- (Note: Available in Hive 0.14.0 and later)
      | INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname

    constraint_specification:
      : [, PRIMARY KEY (col_name, ...) DISABLE NOVALIDATE ]
        [, CONSTRAINT constraint_name FOREIGN KEY (col_name, ...) REFERENCES table_name(col_name, ...) DISABLE NOVALIDATE

1. An example:

hive (financials)> CREATE TABLE IF NOT EXISTS mydb.employees 
            > (
            > name STRING COMMENT 'Employee name',
            > salary FLOAT COMMENT 'Employee salary',
            > subordinates ARRAY<STRING> COMMENT 'Name of subordinates',
            > deductions MAP<STRING,FLOAT> COMMENT 'keys are deductions name,values are percentages',
            > address STRUCT<street:STRING,city:STRING,state:STRING,zip:INT> COMMENT 'Home address'
            > )
            > COMMENT 'Description of the table'
            > LOCATION '/user/hive/warehoust/mydb.db/employees'
            > TBLPROPERTIES('creator'='me','created_at'='2018-01-29 16:21:20');

(1) The order of the clauses cannot be changed, or the statement fails.
(2) A table can be created in mydb even when it is not the current database, by qualifying the table name with the database name.
(3) With IF NOT EXISTS, if the table already exists with a schema different from the one given in the CREATE TABLE statement, Hive ignores the statement and does not change the table's schema.
(4) TBLPROPERTIES attaches extra documentation to the table as key/value pairs (Hive 0.10.0 and later).

    hive (mydb)> SHOW TBLPROPERTIES employees;
    OK
    comment Description of the table
    created_at      2018-01-29 16:21:20
    creator me
    transient_lastDdlTime   1517214768
    Time taken: 0.04 seconds, Fetched: 4 row(s)

(5) The HDFS directory:

[root@master ~]# hadoop fs -ls hdfs://master:9000/user/hive/warehoust/mydb.db
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-01-29 08:32 hdfs://master:9000/user/hive/warehoust/mydb.db/employees

The employees table corresponds to this HDFS directory. The table is currently empty; once data is loaded, files appear under the directory. Loading data into a Hive table amounts to copying/uploading files into the table's directory.
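Loading data is exactly that copy (the local file /tmp/employees.txt is a hypothetical example, and it would have to use the table's default field delimiters, e.g. Ctrl-A between columns):

```sql
-- Copies the local file into the employees table's directory on HDFS
LOAD DATA LOCAL INPATH '/tmp/employees.txt' INTO TABLE mydb.employees;
```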

2. Copying an existing table's schema without copying its data:

    hive (mydb)> CREATE TABLE IF NOT EXISTS employees2
           > LIKE employees
           > LOCATION '/user/root/employees2';

(1) The corresponding HDFS directory:

    [root@master ~]# hadoop fs -ls hdfs://master:9000/user/root
    Found 1 items
    drwxr-xr-x   - root supergroup          0 2018-01-29 09:58 hdfs://master:9000/user/root/employees2

(2) Verifying that the two tables' schemas match and that the second table is empty:

    hive (mydb)> desc employees2;
    OK
    name                    string                  Employee name       
    salary                  float                   Employee salary     
    subordinates            array<string>           Name of subordinates
    deductions              map<string,float>       keys are deductions name,values are percentages
    address                 struct<street:string,city:string,state:string,zip:int>  Home address        
    Time taken: 0.04 seconds, Fetched: 5 row(s)
    hive (mydb)> desc employees;
    OK
    name                    string                  Employee name       
    salary                  float                   Employee salary     
    subordinates            array<string>           Name of subordinates
    deductions              map<string,float>       keys are deductions name,values are percentages
    address                 struct<street:string,city:string,state:string,zip:int>  Home address        
    Time taken: 0.039 seconds, Fetched: 5 row(s)
    hive (mydb)> select * from employees2;
    OK
    Time taken: 0.112 seconds
    As shown above, the second table has the schema but no data.

(3) Two points to note when copying a table schema:
CREATE [EXTERNAL] TABLE IF NOT EXISTS new_table LIKE old_table;
① Without the EXTERNAL keyword, the new table is a managed (internal) table, regardless of whether the source table is managed or external.
② With the EXTERNAL keyword, the new table is an external table, regardless of whether the source table is managed or external.
③ So to create an external table this way, the EXTERNAL keyword cannot be omitted.
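The two cases can be sketched as follows (the new table names and the LOCATION are illustrative):

```sql
-- A managed table, even if employees were external
CREATE TABLE IF NOT EXISTS employees_managed LIKE employees;

-- An external table, even though employees is managed
CREATE EXTERNAL TABLE IF NOT EXISTS employees_ext
LIKE employees
LOCATION '/user/root/employees_ext';
```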
3. Listing tables, optionally with a wildcard pattern:

    hive (mydb)> SHOW TABLES IN mydb;
    OK
    employees
    employees2
    Time taken: 0.012 seconds, Fetched: 2 row(s)

Even when the current database is not mydb, SHOW TABLES IN mydb lists all of its tables. A pattern can also be used to filter the table names:

    hive (mydb)> SHOW TABLES 'empl.*';
    OK
    employees
    employees2
    Time taken: 0.011 seconds, Fetched: 2 row(s)

Hive does not support full regular-expression syntax here. Hive 1.2.2 supports combining IN database_name with a table-name pattern; older versions may not support using both at once.

    hive (mydb)> SHOW TABLES IN mydb 'empl.*';
    OK
    employees
    employees2
    Time taken: 0.01 seconds, Fetched: 2 row(s)
    hive (mydb)> SHOW TABLES IN mydb;
    OK
    employees
    employees2
    test1
    test2

4. Viewing a table's schema:
DESCRIBE employees; # basic schema information
DESCRIBE EXTENDED employees; # detailed schema information, but hard to read
DESCRIBE FORMATTED employees; # very detailed and nicely formatted schema information

5. Creating an external table:

    CREATE EXTERNAL TABLE IF NOT EXISTS stocks
    (
    exchanges STRING,
    symbol STRING,
    ymd STRING,
    price_open FLOAT,
    price_high FLOAT,
    price_low FLOAT,
    price_close FLOAT,
    volume INT,
    price_adj_close FLOAT
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/user/hive/stocks';

Differences between managed (internal) and external tables:
(1) An external table requires the EXTERNAL keyword at creation time; without it, a managed table is created.
(2) For a managed table, DROP TABLE table_name deletes both the schema and the data; for an external table it deletes only the schema and leaves the data in place.
(3) Some HiveQL constructs do not apply to external tables.
(4) This command shows whether a table is managed or external:
hive -S -e 'DESCRIBE FORMATTED DB_NAME.TABLE_NAME' | grep 'Table Type:'
For example:
hive -S -e 'DESCRIBE FORMATTED mydb.employees2' | grep 'Table Type:'
Table Type: MANAGED_TABLE # managed table (also called an internal table)
Table Type: EXTERNAL_TABLE # external table
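Point (2) can be sketched with the external stocks table defined above:

```sql
-- Removes only the table metadata; the files under /user/hive/stocks
-- stay on HDFS and can be re-attached by re-running the CREATE EXTERNAL
-- TABLE statement with the same schema and LOCATION
DROP TABLE IF EXISTS stocks;
```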

6. Creating a partitioned table requires specifying PARTITIONED BY explicitly. The purpose of partitioning is to optimize Hive query performance (only the relevant partition directories are scanned):

CREATE TABLE test(
id STRING,
num INT,
day INT
)
PARTITIONED BY (month STRING,year STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

Raw data: 3 columns. Note that the raw data must not itself contain the month and year columns; directories named like 201701, one per month, carry those values instead, which is what implements the partitioning.
Column 1: id, column 2: num, column 3: day (the number of days per month varies). Each month is stored in its own directory:
1001,12,1
1002,15,1
1003,64,1
1004,32,1
1005,54,3
1006,98,3
1007,45,3
1008,12,3

Loading the data into the test table:

LOAD DATA LOCAL INPATH '/tmp/201701/data.csv' OVERWRITE INTO TABLE test PARTITION(year='2017',month='1');
LOAD DATA LOCAL INPATH '/tmp/201702/data.csv' OVERWRITE INTO TABLE test PARTITION(year='2017',month='2');
LOAD DATA LOCAL INPATH '/tmp/201703/data.csv' OVERWRITE INTO TABLE test PARTITION(year='2017',month='3');
LOAD DATA LOCAL INPATH '/tmp/201704/data.csv' OVERWRITE INTO TABLE test PARTITION(year='2017',month='4');
LOAD DATA LOCAL INPATH '/tmp/201705/data.csv' OVERWRITE INTO TABLE test PARTITION(year='2017',month='5');
LOAD DATA LOCAL INPATH '/tmp/201706/data.csv' OVERWRITE INTO TABLE test PARTITION(year='2017',month='6');
LOAD DATA LOCAL INPATH '/tmp/201707/data.csv' OVERWRITE INTO TABLE test PARTITION(year='2017',month='7');
LOAD DATA LOCAL INPATH '/tmp/201708/data.csv' OVERWRITE INTO TABLE test PARTITION(year='2017',month='8');
LOAD DATA LOCAL INPATH '/tmp/201709/data.csv' OVERWRITE INTO TABLE test PARTITION(year='2017',month='9');
LOAD DATA LOCAL INPATH '/tmp/201710/data.csv' OVERWRITE INTO TABLE test PARTITION(year='2017',month='10');
LOAD DATA LOCAL INPATH '/tmp/201711/data.csv' OVERWRITE INTO TABLE test PARTITION(year='2017',month='11');
LOAD DATA LOCAL INPATH '/tmp/201712/data.csv' OVERWRITE INTO TABLE test PARTITION(year='2017',month='12');

The corresponding directories on HDFS:

[root@master tmp]# hadoop fs -ls /user/hive/warehouse/mydb.db/test
    Found 12 items
    drwxr-xr-x   - root supergroup          0 2018-01-30 08:37 /user/hive/warehouse/mydb.db/test/month=1
    drwxr-xr-x   - root supergroup          0 2018-01-30 08:39 /user/hive/warehouse/mydb.db/test/month=10
    drwxr-xr-x   - root supergroup          0 2018-01-30 08:39 /user/hive/warehouse/mydb.db/test/month=11
    drwxr-xr-x   - root supergroup          0 2018-01-30 08:39 /user/hive/warehouse/mydb.db/test/month=12
    drwxr-xr-x   - root supergroup          0 2018-01-30 08:39 /user/hive/warehouse/mydb.db/test/month=2
    drwxr-xr-x   - root supergroup          0 2018-01-30 08:39 /user/hive/warehouse/mydb.db/test/month=3
    drwxr-xr-x   - root supergroup          0 2018-01-30 08:39 /user/hive/warehouse/mydb.db/test/month=4
    drwxr-xr-x   - root supergroup          0 2018-01-30 08:39 /user/hive/warehouse/mydb.db/test/month=5
    drwxr-xr-x   - root supergroup          0 2018-01-30 08:39 /user/hive/warehouse/mydb.db/test/month=6
    drwxr-xr-x   - root supergroup          0 2018-01-30 08:39 /user/hive/warehouse/mydb.db/test/month=7
    drwxr-xr-x   - root supergroup          0 2018-01-30 08:39 /user/hive/warehouse/mydb.db/test/month=8
    drwxr-xr-x   - root supergroup          0 2018-01-30 08:39 /user/hive/warehouse/mydb.db/test/month=9

    [root@master tmp]# hadoop fs -ls /user/hive/warehouse/mydb.db/test_partition/month=1/year=2017/  
    Found 1 items
    -rwxr-xr-x   3 root supergroup        491 2018-01-31 07:28 /user/hive/warehouse/mydb.db/test_partition/month=1/year=2017/000000_0
    [root@master tmp]# hadoop fs -text /user/hive/warehouse/mydb.db/test_partition/month=1/year=2017/000000_0 # the partition's data has 3 columns
    1001,12,1
    1002,15,1
    1003,64,1
    1004,32,1
    1005,54,3
    1006,98,3
    1007,45,3
    1008,12,3
    1009,65,3
    1010,45,3
    ....
    Viewing the partitions of the test table:
    hive (mydb)> show partitions test;
    OK
    month=1/year=2017
    month=10/year=2017
    month=11/year=2017
    month=12/year=2017
    month=2/year=2017
    month=3/year=2017
    month=4/year=2017
    month=5/year=2017
    month=6/year=2017
    month=7/year=2017
    month=8/year=2017
    month=9/year=2017

If the raw data already contains the year and month columns (5 columns in total) and needs to go into a partitioned table, there are two options: split the raw data into per-month files following the directory scheme above (201701, ...), or create a staging table with the same number of columns as the raw data (without partitioning) and then populate the partitioned table with a query over the staging table:

    INSERT OVERWRITE TABLE test_partition
    PARTITION(month = 12,year = 2017)
    SELECT id,num,day FROM test
    WHERE year=2017 AND month=12;
    hive (mydb)> desc test;
    id                      string                                      
    num                     int                                         
    day                     int                                         
    month                   int                                         
    year                    int 
    hive (mydb)> desc test_partition;
    id                      string                                      
    num                     int                                         
    day                     int                                         
    month                   int                                         
    year                    int                                         

    # Partition Information          
    # col_name              data_type               comment             

    month                   int                                         
    year                    int 
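The static INSERT above names the partition values explicitly. Hive can also derive them from the query itself (dynamic partitioning); a sketch, assuming the staging table test and the partitioned table test_partition from above:

```sql
-- Dynamic partitioning must be enabled, and set to nonstrict mode
-- for an insert where every partition column is dynamic
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- The partition columns (month, year) must come last in the SELECT,
-- in the same order as in the PARTITION clause
INSERT OVERWRITE TABLE test_partition PARTITION (month, year)
SELECT id, num, day, month, year FROM test;
```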

IV. Altering Tables
1. Renaming a table:

hive (mydb)> ALTER TABLE test RENAME TO test1;

2. Adding a partition:

hive (mydb)> ALTER TABLE test1 ADD IF NOT EXISTS PARTITION (year='2018',month='1'); 

Verify with this command:

show partitions test1;

3. Renaming a column, adding a comment, and changing the column's position:

hive (mydb)> ALTER TABLE stocks CHANGE COLUMN price_close close INT COMMENT 'change price_close to close' AFTER price_open;

4. Adding columns:

ALTER TABLE stocks ADD COLUMNS ( app_name STRING COMMENT 'Application name',session_id INT COMMENT 'The current session id');
# Note: LONG is not a Hive column type; use BIGINT for long integers. If a new column ends up in the wrong position, move it with the CHANGE COLUMN ... AFTER ... statement above.

5. Removing columns:

ALTER TABLE stocks REPLACE COLUMNS ( id int); 

This statement removes all of the table's existing columns and adds a single new column, id, leaving the table with just one column. To remove only some columns, list every column you want to keep, with its type, inside the parentheses.
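For example, to drop only price_adj_close from the stocks table while keeping everything else (a sketch that assumes stocks still has its original schema; REPLACE COLUMNS changes only the table metadata, not the underlying data files):

```sql
ALTER TABLE stocks REPLACE COLUMNS (
  exchanges STRING,
  symbol STRING,
  ymd STRING,
  price_open FLOAT,
  price_high FLOAT,
  price_low FLOAT,
  price_close FLOAT,
  volume INT
);
```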
6. Modifying table properties:

ALTER TABLE stocks SET TBLPROPERTIES ( 'key1' = 'value1');  # this updates the table's "Table Parameters:" section

Verify with the following command:

hive -e 'DESCRIBE FORMATTED mydb.stocks' | sed -n '/Table Parameters:/,+10p'

Breaking this down: sed searches for the pattern /Table Parameters:/ and prints the matching line plus the 10 lines that follow it (a GNU sed address extension).
7. Changing storage properties:

ALTER TABLE test1 PARTITION(year = '2017') SET FILEFORMAT SEQUENCEFILE;

Storage formats supported by Hive include:
(1) TEXTFILE
(2) SEQUENCEFILE
(3) RCFILE
(4) AVRO, etc.
8. Protecting a partition so it cannot be dropped:

ALTER TABLE test1 PARTITION (year = 2017,month = 1) ENABLE NO_DROP; 

Try dropping the year='2017', month='1' partition to verify:

ALTER TABLE test1 DROP IF EXISTS PARTITION (year = '2017',month='1'); 

This fails with a "Partition protected" error.

ALTER TABLE test1 DROP IF EXISTS PARTITION (year = '2017',month='2'); 

This one does not fail.

SHOW PARTITIONS test1; 

This command shows whether the (year='2017', month='2') partition has been dropped.
Enabling/disabling OFFLINE mode for a partition:

ALTER TABLE test1 PARTITION (year = 2017,month = 1) ENABLE OFFLINE;

ENABLE turns the protection on.
DISABLE turns it off.
These operations apply only to partitioned tables.