# Hive Operations

This post works through some commonly used shell and HQL commands, based on the official Hive documentation.

## Hive Command Line

Run `hive -H` to display the help message.

| Option | Function |
| --- | --- |
| `-d,--define` | Define a variable that can be referenced in the Hive interactive shell (usage shown below), e.g. `-d A=B` |
| `--database` | Specify the database to use when entering the Hive interactive shell; defaults to `default` |
| `-e` | Execute a SQL statement given on the command line |
| `-f` | Execute the HQL statements saved in the given file |
| `-H,--help` | Print help information |
| `-h` | Host of a remote Hive Server to connect to |
| `--hiveconf` | Set a Hive runtime configuration property on the command line; higher priority than hive-site.xml, but lower than `SET` in the Hive interactive shell |
| `--hivevar` | Same as `--define` |
| `-i` | Execute the HQL statements in the given file before entering the interactive shell |
| `-p` | Port of the remote Hive Server |
| `-S,--silent` | Silent mode: suppress progress information and print only the final results |
| `-v,--verbose` | Verbose mode: additionally print the HQL statements being executed |

#### -d, --define

`hive -d A=B` defines variable A with value B; after entering the Hive shell the variable can be referenced:

```sql
SELECT '${A}';
```

#### --hiveconf

Sets a Hive runtime configuration property, equivalent to using the `set` command in the Hive interactive shell. For example, the following two are equivalent:

```shell
hive --hiveconf mapreduce.job.reduces=20
```

```sql
SET mapreduce.job.reduces = 20;
```

#### --hivevar

```shell
hive --hivevar A=B
```

```sql
SELECT '${A}';
```

## Hive Interactive Shell Commands

| Command | Function |
| --- | --- |
| `quit` / `exit` | Exit the shell |
| `reset` | Reset all parameters to the hive-site.xml configuration, discarding values changed with `set` |
| `set` | Set a Hive parameter; highest priority, effective only for the current session |
| `set -v` | Print the values of all Hive and Hadoop configuration parameters |
| `add`/`list`/`delete` FILE/JAR/ARCHIVE | Add/list/delete one or more files, jar packages, or archives in the DistributedCache; once added, they can be used in map and reduce tasks. For example, a custom UDF packaged as a jar must be added with `add jar` before the function is created, otherwise a class-not-found error is raised. |
| `!` | Execute a Linux operating system command from the interactive shell and print its output |
| `source` | Execute a script from within the interactive shell |

HiveServer2 ships a new CLI called Beeline; the original Hive CLI will eventually be deprecated.
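As a minimal sketch of the `add jar` workflow mentioned in the table above (the jar path, class name, and function name are all hypothetical placeholders, not from the original post):

```sql
-- Make the UDF jar available to map/reduce tasks (hypothetical path)
ADD JAR /tmp/my_udfs.jar;
LIST JARS;

-- Register the UDF; the class name stands in for your own implementation
CREATE TEMPORARY FUNCTION my_upper AS 'com.example.hive.udf.MyUpper';

SELECT my_upper('hello');
```

Without the `ADD JAR` step, the `CREATE TEMPORARY FUNCTION` statement would fail with a class-not-found error.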

## DDL Statements

- CREATE DATABASE/SCHEMA, TABLE, VIEW, FUNCTION, INDEX
- DROP DATABASE/SCHEMA, TABLE, VIEW, INDEX
- TRUNCATE TABLE
- ALTER DATABASE/SCHEMA, TABLE, VIEW
- MSCK REPAIR TABLE (or ALTER TABLE RECOVER PARTITIONS)
- SHOW DATABASES/SCHEMAS, TABLES, TBLPROPERTIES, PARTITIONS, FUNCTIONS, INDEX[ES], COLUMNS, CREATE TABLE
- DESCRIBE DATABASE/SCHEMA, table_name, view_name

## Database/Schema

SCHEMA and DATABASE are interchangeable; they mean the same thing.

#### Create Database

```sql
CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
  [COMMENT database_comment]
  [LOCATION hdfs_path]
  [WITH DBPROPERTIES (property_name=property_value, ...)];
```

If no LOCATION is specified, a corresponding directory is created under the warehouse path from the configuration.

```sql
CREATE DATABASE IF NOT EXISTS test_db
COMMENT "Test Database created"
WITH DBPROPERTIES('Date' = '2016-08-26','Creator' = 'Mike Leong','Email' = 'mike@test.com');
```

#### Show/Describe Database

```sql
SHOW (DATABASES|SCHEMAS) [LIKE identifier_with_wildcards];
DESCRIBE|DESC (DATABASE|SCHEMA) [EXTENDED] database_name;
```

Examples:

```sql
SHOW DATABASES LIKE 'test_*';
DESC DATABASE EXTENDED test_db;
```

#### Show the Current Database

```sql
--In the Hive CLI this can be shown via a setting
SET hive.cli.print.current.db=true;
--Both the CLI and Beeline can query it with the statement below
SELECT CURRENT_DATABASE();
```

#### Alter Database

```sql
ALTER (DATABASE|SCHEMA) database_name SET DBPROPERTIES (property_name=property_value, ...);
ALTER (DATABASE|SCHEMA) database_name SET OWNER [USER|ROLE] user_or_role;
```

Example:

```sql
ALTER DATABASE test_db SET DBPROPERTIES ('Modified by' = 'Test User');
```

#### Drop/Use Database

```sql
DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];
USE database_name;
```

- RESTRICT is the default; the drop fails if the database still contains tables.
- CASCADE drops the tables together with the database.

Examples:

```sql
USE test_schema;
SELECT CURRENT_DATABASE();

DROP DATABASE test_db RESTRICT;
```

> Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. InvalidOperationException(message:Database test_db is not empty. One or more tables exist.) (state=08S01,code=1)

```sql
DROP DATABASE test_db CASCADE;
```

## Table

Hive tables come in the following types:

- **Managed tables**: the default table type; data is managed by Hive, and when the table is dropped its data files are removed from HDFS as well.
- **External tables**: a data path is specified at creation time, so imported data is not moved into Hive's own warehouse (without a specified location, the data files would be kept under hive/warehouse); dropping an external table does not delete the data files.
- **Temporary tables**: behave like temporary tables in other relational databases.
- **Partitioned tables**
- **Bucketed sorted tables**
- **Skewed tables**

```sql
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
  [(col_name data_type [COMMENT col_comment], ...)]
  [COMMENT table_comment]
  [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
  [CLUSTERED BY (col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
  [SKEWED BY (col_name, ...) ON ([(col_value, ...), ...|col_value, ...])
    [STORED AS DIRECTORIES] ]
  [
   [ROW FORMAT row_format]
   [STORED AS file_format]
   | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]
  ]
  [LOCATION hdfs_path]
  [TBLPROPERTIES (property_name=property_value, ...)]
  [AS select_statement];
```

- TEMPORARY: create a temporary table
- EXTERNAL: create an external table
- IF NOT EXISTS: create the table only if it does not already exist; otherwise ignore the statement
- db_name: optional
- COMMENT: comment, same as for DATABASE
- PARTITIONED BY: partitioning
- CLUSTERED BY: bucketing
- SKEWED BY: skewed data
- STORED AS: file format; ORC is recommended

#### Create Table As Select (CTAS) / Like

CTAS has two parts: the SELECT part determines the columns to generate, and the CREATE part builds the table structure around them.

CTAS cannot create partitioned, external, or bucketed tables.

LIKE copies the table definition exactly, without copying any data.

```sql
--CTAS
CREATE TABLE bmu_value
ROW FORMAT SERDE "org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe"
STORED AS RCFILE
AS
SELECT bmu,country_code FROM comm_bmu_market;
--LIKE
CREATE TABLE bmu_value_like LIKE bmu_value;

SELECT bmu,country_code FROM bmu_value LIMIT 10;
SELECT bmu,country_code FROM bmu_value_like LIMIT 10;
```

Partitioning splits data by condition; multiple partition columns may be specified, and a partitioned table's files are laid out according to those columns. For example:

```sql
CREATE EXTERNAL TABLE mid_distribution_coupons_push_date_partition
(lib_coupon_id string,coupon_id string,distribution_id string,pushed boolean, used boolean,used_date string,day_of_week int
,month_of_year string,used_place string,state string,expire_date string,country string,state_short string,city string,udts bigint,perform_id string,coupon_code string,trigger_type string,rowkey string)
PARTITIONED BY (push_date string);

--Load the data
INSERT OVERWRITE TABLE mid_distribution_coupons_push_date_partition
PARTITION(push_date)
SELECT lib_coupon_id,coupon_id,distribution_id,pushed,used,used_date,day_of_week,month_of_year,used_place,state,expire_date,country,state_short,city,udts,perform_id,coupon_code,trigger_type,rowkey, CONCAT(YEAR(push_date),
CASE WHEN MONTH(push_date) < 10 THEN CONCAT(0,MONTH(push_date)) ELSE MONTH(push_date) END, CASE WHEN DAY(push_date) < 10 THEN CONCAT(0,DAY(push_date)) ELSE DAY(push_date) END) push_date
FROM mid_distribution_coupons_hive;
```

An external table example:

```sql
CREATE EXTERNAL TABLE page_view(viewTime INT, userid BIGINT,
  page_url STRING, referrer_url STRING,
  ip STRING COMMENT 'IP Address of the User',
  country STRING COMMENT 'country of origination')
COMMENT 'This is the staging page view table'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054'
STORED AS TEXTFILE
LOCATION '';
```

Some properties of temporary tables:

- A temporary table is only visible to the current session.
- Its data is stored under hive.exec.scratchdir and is deleted when the session ends. Since 1.1, hive.exec.temporary.table.storage can direct the storage to memory, ssd, or default.
- They are useful as intermediate tables when copying data from one table to another, and can be dropped as soon as the copy finishes.

Notes:

- Temporary tables do not support partitions or indexes.
- If a permanent table with the same name already exists in the database, that table becomes inaccessible within the session until the temporary table is dropped.

```sql
create table test_tmp_tb(id string);
insert into test_tmp_tb values(1);

create temporary table test_tmp_tb(id string);
insert into test_tmp_tb values("temporary");

--The result is: temporary
select * from test_tmp_tb;

--Drop the temporary table first, then query again
drop table test_tmp_tb;
select * from test_tmp_tb;
```

#### Drop/Truncate Table

```sql
DROP TABLE [IF EXISTS] table_name [PURGE];

TRUNCATE TABLE table_name [PARTITION partition_spec];

partition_spec:
  (partition_column = partition_col_value, partition_column = partition_col_value, ...)
```

- DROP removes the metadata completely and moves the stored data to .Trash.
- With PURGE, the stored data is also deleted permanently.
- No warning is given if the table is referenced by a view.
- TRUNCATE removes all rows from the table or the specified partitions; the target must be a native managed table, otherwise it fails with: Error: Error while compiling statement: FAILED: SemanticException [Error 10146]: Cannot truncate non-managed table comm_bmu_market_partition. (state=42000,code=10146)

```sql
TRUNCATE TABLE comm_bmu_market_partition PARTITION( bmu='UKP');
```

#### Rename Table

```sql
ALTER TABLE table_name RENAME TO new_table_name;
```

#### Alter Table Properties

```sql
ALTER TABLE table_name SET TBLPROPERTIES table_properties;

table_properties:
  (property_name = property_value, property_name = property_value, ... )
```

Use DESCRIBE EXTENDED table_name to view more table information.
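A minimal sketch of setting and then inspecting a table property, reusing the `bmu_value` table created earlier (the property value is an arbitrary placeholder):

```sql
-- Attach a free-form property to the table's metadata
ALTER TABLE bmu_value SET TBLPROPERTIES ('comment' = 'new comment');

-- The new property appears in the table's detailed information
DESCRIBE EXTENDED bmu_value;
```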

#### Add SerDe Properties

```sql
ALTER TABLE table_name [PARTITION partition_spec] SET SERDE serde_class_name [WITH SERDEPROPERTIES serde_properties];

ALTER TABLE table_name [PARTITION partition_spec] SET SERDEPROPERTIES serde_properties;

serde_properties:
  : (property_name = property_value, property_name = property_value, ... )
```

#### Alter Table Storage Properties

```sql
ALTER TABLE table_name CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name, ...)]
  INTO num_buckets BUCKETS;
```

This changes the table's physical storage properties.

Note: this only changes the table's metadata; existing data is not reorganized, so users must make sure the actual data layout conforms to the new metadata.

## Alter Partition

#### Add Partition

```sql
ALTER TABLE table_name ADD [IF NOT EXISTS] PARTITION partition_spec
  [LOCATION 'location1'] partition_spec [LOCATION 'location2'] ...;

partition_spec:
  : (partition_column = partition_col_value, partition_column = partition_col_value, ...)
```

ADD PARTITION only changes the table's metadata and does not load any data, so queries return empty results unless data actually exists at the partition's location.
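A hedged example of registering partitions on the partitioned table created earlier (the HDFS paths and date values are hypothetical):

```sql
-- Register two partitions; only metadata changes, no data is moved
ALTER TABLE mid_distribution_coupons_push_date_partition
  ADD IF NOT EXISTS
  PARTITION (push_date='20160825') LOCATION '/data/coupons/20160825'
  PARTITION (push_date='20160826') LOCATION '/data/coupons/20160826';
```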

#### Rename Partition

```sql
ALTER TABLE table_name PARTITION partition_spec RENAME TO PARTITION new_partition_spec;
```

#### Exchange Partition

```sql
ALTER TABLE table_name_1 EXCHANGE PARTITION (partition_spec) WITH TABLE table_name_2;
-- multiple partitions
ALTER TABLE table_name_1 EXCHANGE PARTITION (partition_spec, partition_spec2, ...) WITH TABLE table_name_2;
```

This statement moves the data of a partition from table_name_2 into table_name_1; if the partition already exists in table_name_1, an exception is thrown.

```sql
ALTER TABLE comm_bmu_market_partition EXCHANGE PARTITION (bmu='UKK') WITH TABLE comm_bmu_market_partition2;
```

#### Recover Partitions (MSCK REPAIR TABLE)

Hive stores each partition's metadata in the metastore. If data is moved into the table's HDFS directory directly with Hadoop commands, the metastore knows nothing about those partitions unless each one is added with an ALTER TABLE table_name ADD PARTITION command. Instead, the table can be checked and repaired in one step:

```sql
MSCK REPAIR TABLE table_name;
```

For every partition missing from the metastore, Hive adds the corresponding metadata. If a large number of partitions need to be added, hive.msck.repair.batch.size can be set to add them in batches; its default value is 0.

Starting with Hive 1.3, MSCK throws an exception if a partition name contains disallowed characters. This validation can be managed by setting hive.msck.path.validation to skip or ignore.

Example:

```sql
--After manually adding bmu=INK to comm_bmu_market_partition on HDFS
SELECT * FROM comm_bmu_market_partition WHERE bmu='INK';
MSCK REPAIR TABLE comm_bmu_market_partition;
SELECT * FROM comm_bmu_market_partition WHERE bmu='INK';
```

#### Drop Partitions

```sql
ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec[, PARTITION partition_spec, ...] [PURGE];
```

With PURGE, both the metadata and the data files are deleted permanently; otherwise the data is moved to the /user/username/.Trash/Current directory.
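For instance, dropping one of the partitions used in the examples above (the partition value is taken from the earlier listing):

```sql
-- Moves the partition's data to .Trash since PURGE is not specified
ALTER TABLE comm_bmu_market_partition DROP IF EXISTS PARTITION (bmu='ZAK');
```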

## Alter Either Table or Partition

#### Alter Table/Partition File Format

```sql
ALTER TABLE table_name [PARTITION partition_spec] SET FILEFORMAT file_format;
```

Supported formats include SEQUENCEFILE, TEXTFILE, RCFILE, ORC, and INPUTFORMAT, among others.

#### Alter Table/Partition Location

```sql
ALTER TABLE table_name [PARTITION partition_spec] SET LOCATION "new location";
```

#### Alter Table/Partition Touch

```sql
ALTER TABLE table_name TOUCH [PARTITION partition_spec];
```

TOUCH reads the metadata and writes it back unchanged. This triggers the execution of any pre/post hooks. Suppose there is a hook that logs table or partition modifications, plus an external script that modifies the files on HDFS directly. Because the script changes the files outside of Hive, the hook would not record the change; the script can therefore invoke TOUCH to fire the hook and mark the table or partition as modified.

#### Alter Table/Partition Protections

```sql
ALTER TABLE table_name [PARTITION partition_spec] ENABLE|DISABLE NO_DROP [CASCADE];

ALTER TABLE table_name [PARTITION partition_spec] ENABLE|DISABLE OFFLINE;
```

Data protection can be set at the table or partition level. Enabling NO_DROP prevents drops: if any partition has NO_DROP enabled, the table cannot be dropped. Conversely, if the table has NO_DROP enabled, its partitions can still be dropped, whereas with NO_DROP CASCADE a partition can only be dropped if the drop command specifies IGNORE PROTECTION or PURGE. OFFLINE blocks queries against a table or partition, although the metadata remains accessible even while the table is offline.

Examples:

```sql
ALTER TABLE comm_bmu_market_partition2 ENABLE NO_DROP;
DROP TABLE comm_bmu_market_partition2;
```

> Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Table comm_bmu_market_partition2 is protected from being dropped (state=08S01,code=1)

```sql
ALTER TABLE comm_bmu_market_partition2 DISABLE NO_DROP;

ALTER TABLE comm_bmu_market_partition2 PARTITION(bmu='INK') ENABLE NO_DROP;
DESCRIBE FORMATTED comm_bmu_market_partition2 PARTITION(bmu='INK');

--Drop Table/Partition both throw the exception below
ALTER TABLE comm_bmu_market_partition2 DROP PARTITION(bmu='INK');
DROP TABLE comm_bmu_market_partition2;
```

> Error: Error while compiling statement: FAILED: SemanticException [Error 30011]: Partition protected from being dropped default@comm_bmu_market_partition2@bmu=INK (state=42000,code=30011)

```sql
ALTER TABLE comm_bmu_market_partition2 PARTITION(bmu='INK') DISABLE NO_DROP;

ALTER TABLE comm_bmu_market_partition2 ENABLE NO_DROP CASCADE;
ALTER TABLE comm_bmu_market_partition2 DROP IF EXISTS PARTITION(bmu='INK') IGNORE PROTECTION;
ALTER TABLE comm_bmu_market_partition2 DROP IF EXISTS PARTITION(bmu='ZAP');
```

> INFO : Dropped the partition bmu=INK
> FAILED: SemanticException [Error 30011]: Partition protected from being dropped default@comm_bmu_market_partition2@bmu=ZAP

#### Alter Table/Partition Compact

```sql
ALTER TABLE table_name [PARTITION (partition_key = 'partition_value' [, ...])]
  COMPACT 'compaction_type'
  [WITH OVERWRITE TBLPROPERTIES ("property"="value" [, ...])];
```
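A sketch of requesting a compaction; note that compaction only applies to transactional (ACID) tables, and the table and partition names here are hypothetical:

```sql
-- Queue a major compaction for one partition of a transactional table
ALTER TABLE tx_orders PARTITION (ds='2016-08-26') COMPACT 'major';
```

The compaction type is either 'minor' or 'major'.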

## Alter Column

Column names are case-insensitive.

```sql
ALTER TABLE table_name [PARTITION partition_spec] CHANGE [COLUMN] col_old_name col_new_name column_type
  [COMMENT col_comment] [FIRST|AFTER column_name] [CASCADE|RESTRICT];
```

This statement changes a column's name, type, comment, or position. It only modifies Hive's metadata and does not touch the table data; users must make sure the actual data layout conforms to the metadata definition.

CASCADE cascades the metadata change to all partitions.

Examples:

```sql
ALTER TABLE comm_bmu_market_partition CHANGE country_code coutryCode string CASCADE;
```

#### Add/Replace Columns

```sql
ALTER TABLE table_name [PARTITION partition_spec]
  ADD|REPLACE COLUMNS (col_name data_type [COMMENT col_comment], ...) [CASCADE|RESTRICT]
```

ADD COLUMNS appends new columns after the existing columns but before the partition columns. REPLACE COLUMNS removes the existing columns and then adds the new ones, so it can also be used to drop columns: given columns a, b, and c, `ALTER TABLE test_change REPLACE COLUMNS (a int, b int);` effectively removes column c.
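A minimal sketch of the add-then-drop flow described above; the table `test_change` and its columns follow the snippet in the text but are otherwise hypothetical:

```sql
CREATE TABLE test_change (a int, b int);

-- Append column c after the existing columns
ALTER TABLE test_change ADD COLUMNS (c int COMMENT 'newly added');

-- Redefine the column list, effectively dropping c again
ALTER TABLE test_change REPLACE COLUMNS (a int, b int);
```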

## View

#### Create View

```sql
CREATE VIEW [IF NOT EXISTS] [db_name.]view_name [(column_name [COMMENT column_comment], ...) ]
  [COMMENT view_comment]
  [TBLPROPERTIES (property_name = property_value, ...)]
  AS SELECT ...;
```

Example:

```sql
CREATE VIEW onion_referrers(url COMMENT 'URL of Referring page')
COMMENT 'Referrers to The Onion website'
AS
SELECT DISTINCT referrer_url
FROM page_view
WHERE page_url='http://www.theonion.com';
```

#### Drop/Alter View As Select

```sql
DROP VIEW [IF EXISTS] [db_name.]view_name;
ALTER VIEW [db_name.]view_name AS select_statement;
```

#### Alter View Properties

```sql
ALTER VIEW [db_name.]view_name SET TBLPROPERTIES table_properties;

table_properties:
  : (property_name = property_value, property_name = property_value, ...)
```

## Index

#### Create Index

```sql
CREATE INDEX index_name
  ON TABLE base_table_name (col_name, ...)
  AS index_type
  [WITH DEFERRED REBUILD]
  [IDXPROPERTIES (property_name=property_value, ...)]
  [IN TABLE index_table_name]
  [
   [ ROW FORMAT ...] STORED AS ...
   | STORED BY ...
  ]
  [LOCATION hdfs_path]
  [TBLPROPERTIES (...)]
  [COMMENT "index comment"];
```

#### Drop/Rebuild Index

```sql
DROP INDEX [IF EXISTS] index_name ON table_name;
ALTER INDEX index_name ON table_name [PARTITION partition_spec] REBUILD;
```

Example:

```sql
--Create the index
CREATE INDEX idx_coupon_code
ON TABLE mid_distribution_coupons_hive(coupon_code)
AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
WITH DEFERRED REBUILD;

--With DEFERRED REBUILD the index table starts out empty; build the index with:
ALTER INDEX idx_coupon_code
ON mid_distribution_coupons_hive
REBUILD;

--Show the index
SHOW FORMATTED INDEX ON mid_distribution_coupons_hive;

--Drop the index
DROP INDEX idx_coupon_code on mid_distribution_coupons_hive;

SELECT * FROM mid_distribution_coupons_hive AS mdc WHERE mdc.coupon_code = '6FLG';
```

## Show

#### Show Databases

```sql
SHOW (DATABASES|SCHEMAS) [LIKE 'identifier_with_wildcards'];
```

#### Show Tables

```sql
SHOW TABLES [IN database_name] ['identifier_with_wildcards'];
```

Example:

```sql
SHOW TABLES IN test_schema;
```

#### Show Partitions

```sql
SHOW PARTITIONS table_name;
SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)];
```

Examples:

```sql
SHOW PARTITIONS comm_bmu_market_partition;
+------------+--+
| partition  |
+------------+--+
| bmu=INK    |
| bmu=UKK    |
| bmu=UKP    |
| bmu=ZAK    |
| bmu=ZAP    |
+------------+--+

SHOW PARTITIONS comm_bmu_market_partition PARTITION(bmu='INK');
+------------+--+
| partition  |
+------------+--+
| bmu=INK    |
+------------+--+
```

#### Show Table/Partition Extended

```sql
SHOW TABLE EXTENDED [IN|FROM database_name] LIKE 'identifier_with_wildcards' [PARTITION(partition_spec)];
```

Example:

```sql
SHOW TABLE EXTENDED LIKE comm_bmu_market_partition;
```

```
tableName:comm_bmu_market_partition
owner:phoenix
location:hdfs://hadoop01.easternphoenix.com:8020/apps/hive/warehouse/comm_bmu_market_partition
inputformat:org.apache.hadoop.mapred.TextInputFormat
outputformat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
columns:struct columns { i64 id, string bmu_label, string country_name, string coutrycode, bool enabled, string years, string languages}
partitioned:true
partitionColumns:struct partition_columns { string bmu}
totalNumberFiles:8
totalFileSize:1714
maxFileSize:749
minFileSize:0
lastAccessTime:1472203894364
lastUpdateTime:1472203894407
```

#### Show Table Properties

```sql
SHOW TBLPROPERTIES tblname;
SHOW TBLPROPERTIES tblname("foo");

SHOW TBLPROPERTIES comm_bmu_market_partition;
+------------------------+--------------+--+
| prpt_name              | prpt_value   |
+------------------------+--------------+--+
| comment                | new comment  |
| last_modified_by       | Chin         |
| last_modified_time     | 1472439992   |
| transient_lastDdlTime  | 1472439992   |
+------------------------+--------------+--+
```

#### Show Indexes

```sql
SHOW [FORMATTED] (INDEX|INDEXES) ON table_with_index [(FROM|IN) db_name];

SHOW INDEX ON mid_distribution_coupons_idx;
```

```
idx_name:      idx_coupon_code
tab_name:      mid_distribution_coupons_idx
col_names:     coupon_code
idx_tab_name:  default__mid_distribution_coupons_idx_idx_coupon_code__
idx_type:      compact
comment:
```

#### Show Columns

```sql
SHOW COLUMNS (FROM|IN) table_name [(FROM|IN) db_name];
```

#### Show Functions

```sql
SHOW FUNCTIONS "a.*";
```

#### Show Conf

```sql
SHOW CONF 'configuration_name';
```

#### Show Transactions

```sql
SHOW TRANSACTIONS;
```

## Describe

#### Describe Database

```sql
DESCRIBE (DATABASE|SCHEMA) [EXTENDED] db_name;
```

#### Describe Table/View/Column

```sql
DESCRIBE [EXTENDED|FORMATTED]
  table_name[.col_name ( [.field_name] | [.'$elem$'] | [.'$key$'] | [.'$value$'] )* ];

--With an explicit database
DESCRIBE [EXTENDED|FORMATTED]
  [db_name.]table_name[ col_name ( [.field_name] | [.'$elem$'] | [.'$key$'] | [.'$value$'] )* ];
```

#### Display Column Statistics

```sql
DESCRIBE FORMATTED [db_name.]table_name column_name;
DESCRIBE FORMATTED [db_name.]table_name column_name PARTITION (partition_spec);
```

#### Describe Partition

```sql
DESCRIBE [EXTENDED|FORMATTED] table_name[.column_name] PARTITION partition_spec;

--With an explicit database
DESCRIBE [EXTENDED|FORMATTED] [db_name.]table_name [column_name] PARTITION partition_spec;
```

## Abort

```sql
ABORT TRANSACTIONS transactionID [, ...];
```
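A hedged usage sketch: the transaction ID is hypothetical and would come from the SHOW TRANSACTIONS output.

```sql
-- Inspect open/aborted transactions, then abort a specific one by ID
SHOW TRANSACTIONS;
ABORT TRANSACTIONS 42;
```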

