1. Viewing table information in Hive
Method 1: view a table's column information
desc table_name;
Method 2: view a table's column information and its metadata storage path
desc extended table_name;
Method 3: view a table's column information and its metadata storage path, in a readable layout
desc formatted table_name;
Note: when you need a table's metadata storage path, Method 3 is recommended; its output is the clearest.
hive> desc parquet;
OK
member_id               string
name                    string
stat_date               string
province                string
add_item                string          add new item comment
Time taken: 0.096 seconds, Fetched: 5 row(s)
hive> desc extended parquet;
OK
member_id               string
name                    string
stat_date               string
province                string
add_item                string          add new item comment
Detailed Table Information Table(tableName:parquet, dbName:yyz_workdb, owner:a6, createTime:1510023589, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:member_id, type:string, comment:null), FieldSchema(name:name, type:string, comment:null), FieldSchema(name:stat_date, type:string, comment:null), FieldSchema(name:province, type:string, comment:null), FieldSchema(name:add_item, type:string, comment:add new item comment )], location:hdfs://localhost:9002/user/hive/warehouse/yyz_workdb.db/parquet, inputFormat:org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe, parameters:{serialization.format= , field.delim=
Time taken: 0.15 seconds, Fetched: 7 row(s)
hive> desc formatted parquet;
OK
# col_name              data_type       comment
member_id               string
name                    string
stat_date               string
province                string
add_item                string          add new item comment
# Detailed Table Information
Database: yyz_workdb
Owner: a6
CreateTime: Tue Nov 07 10:59:49 CST 2017
LastAccessTime: UNKNOWN
Retention: 0
Location: hdfs://localhost:9002/user/hive/warehouse/yyz_workdb.db/parquet
Table Type: MANAGED_TABLE
Table Parameters:
COLUMN_STATS_ACCURATE {\"BASIC_STATS\":\"true\"}
last_modified_by a6
last_modified_time 1526612655
numFiles 1
numRows 5
rawDataSize 20
totalSize 792
transient_lastDdlTime 1526612655
# Storage Information
SerDe Library: org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
InputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
Compressed: No
Num Buckets: -1
Bucket Columns: []
Sort Columns: []
Storage Desc Params:
field.delim \t
serialization.format \t
Time taken: 0.101 seconds, Fetched: 37 row(s)
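When the storage path is needed inside a script, the Location line can be pulled straight out of the formatted output. A minimal sketch, assuming the hive CLI is on the PATH and using the example table above (hive's startup chatter goes to stderr and is discarded):
$ hive -e 'desc formatted yyz_workdb.parquet;' 2>/dev/null | grep 'Location:' | awk '{print $2}'
hdfs://localhost:9002/user/hive/warehouse/yyz_workdb.db/parquet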
2. Checking a table's size
Method 1: to get the total size of a Hive table's files (in bytes), a one-line script does the job:
# size of a non-partitioned table
$ hadoop fs -ls /user/hive/warehouse/table_name | awk -F ' ' '{print $5}' | awk '{a+=$1} END {print a}'
48
This saves adding the sizes up by hand; to list the table's files in detail instead:
$ hadoop fs -ls /user/hive/warehouse/table_name
# size of a single partition of a partitioned table, in GB
$ hadoop fs -ls /user/hive/warehouse/table_name/yyyymm=201601 | awk -F ' ' '{print $5}' | awk '{a+=$1} END {print a/(1024*1024*1024)}'
39.709
Again, to list the partition's files in detail:
$ hadoop fs -ls /user/hive/warehouse/table_name/yyyymm=201601
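As a side note, the two-stage awk pipelines above can each be collapsed into a single pass that sums the size field (column 5 of hadoop fs -ls) directly; a minimal sketch against the same warehouse path:
$ hadoop fs -ls /user/hive/warehouse/table_name | awk '{a += $5} END {print a}'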
Method 2: total size of the table, in GB
$ hadoop fs -du /user/hive/warehouse/table_name | awk '{ SUM += $1 } END { print SUM/(1024*1024*1024) }'
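On recent Hadoop releases, hadoop fs -du also accepts -s (print one aggregate total) and -h (human-readable units), which lets HDFS do the summing and formatting itself; a sketch under that assumption:
$ hadoop fs -du -s -h /user/hive/warehouse/table_name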
Method 3: list each entry under the table directory with a human-readable unit (Gb/Mb/Kb); entries smaller than 1 KB print nothing:
$ hadoop fs -du /user/hive/warehouse/table_name/ | awk '{ sum=$1; dir2=$2; hum[1024**3]="Gb"; hum[1024**2]="Mb"; hum[1024]="Kb"; for (x=1024**3; x>=1024; x/=1024) { if (sum>=x) { printf "%.2f %s \t %s\n", sum/x, hum[x], dir2; break } } }'
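Hive also records the aggregate size in the totalSize table parameter (visible in the desc formatted output of section 1), so the number can be fetched without touching HDFS at all, assuming table statistics are up to date; for the example table this returns the 792 bytes shown earlier:
hive> show tblproperties parquet('totalSize');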
Reposted from: https://blog.csdn.net/helloxiaozhe/article/details/80363893