1. Viewing table information in Hive
Method 1: show the table's columns
desc table_name;
Method 2: show the columns plus detailed table metadata (one dense line, including the storage path)
desc extended table_name;
Method 3: show the columns plus detailed table metadata (nicely formatted)
desc formatted table_name;
Note: when looking up a table's storage path, method 3 is recommended; its output is the easiest to read.
hive> desc dept_partition;
OK
deptno int
dname string
loc string
month string
# Partition Information
# col_name data_type comment
month string
Time taken: 0.08 seconds, Fetched: 9 row(s)
hive> desc extended dept_partition;
OK
deptno int
dname string
loc string
month string
# Partition Information
# col_name data_type comment
month string
Detailed Table Information Table(tableName:dept_partition, dbName:luomk, owner:user_w, createTime:1535531416, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:deptno, type:int, comment:null), FieldSchema(name:dname, type:string, comment:null), FieldSchema(name:loc, type:string, comment:null), FieldSchema(name:month, type:string, comment:null)], location:hdfs://emr-header-1.cluster-68953:9000/user/hive/warehouse/luomk.db/dept_partition, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , field.delim=
Time taken: 0.034 seconds, Fetched: 11 row(s)
hive> desc formatted dept_partition;
OK
# col_name data_type comment
deptno int
dname string
loc string
# Partition Information
# col_name data_type comment
month string
# Detailed Table Information
Database: luomk
Owner: user_w
CreateTime: Wed Aug 29 16:30:16 CST 2018
LastAccessTime: UNKNOWN
Retention: 0
Location: hdfs://emr-header-1.cluster-68953:9000/user/hive/warehouse/luomk.db/dept_partition
Table Type: MANAGED_TABLE
Table Parameters:
numFiles 5
numPartitions 3
numRows 0
rawDataSize 0
totalSize 345
transient_lastDdlTime 1535531416
# Storage Information
SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat: org.apache.hadoop.mapred.TextInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Compressed: No
Num Buckets: -1
Bucket Columns: []
Sort Columns: []
Storage Desc Params:
field.delim \t
serialization.format \t
Time taken: 0.05 seconds, Fetched: 38 row(s)
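When scripting, the storage path can be extracted from the `desc formatted` output instead of being read by eye. A minimal sketch; in practice the input would come from `hive -e 'desc formatted dept_partition;'`, but here a few lines of that output are simulated with `printf` so the extraction step can be run on its own:

```shell
# Simulated fragment of `desc formatted` output (real source:
#   hive -e 'desc formatted dept_partition;')
desc_out='Database: luomk
Owner: user_w
Location: hdfs://emr-header-1.cluster-68953:9000/user/hive/warehouse/luomk.db/dept_partition'
# Keep only the value of the Location: line
printf '%s\n' "$desc_out" | awk '/^Location:/ {print $2}'
```

This prints just the HDFS path, which can then be fed to `hadoop fs` commands like the ones in the next section.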
2. Checking a table's total size
Method 1: to get the total size (in bytes) of the files behind a Hive table, a one-line script does the job:
# size of a regular (non-partitioned) table
$ hadoop fs -ls /user/hive/warehouse/table_name | awk -F ' ' '{print $5}'|awk '{a+=$1}END {print a}'
48
This saves summing the sizes by hand. The following command lists the table's files in detail:
$ hadoop fs -ls /user/hive/warehouse/table_name
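To see why summing column 5 gives the byte count, here is the same pipeline run on two simulated lines of `hadoop fs -ls` output (file names and sizes are made up; in the real output, column 5 is the file size in bytes):

```shell
# Simulated `hadoop fs -ls` lines: perms, replication, owner, group, SIZE, date, time, path
ls_out='-rw-r--r--   2 user_w hadoop         20 2018-08-29 16:30 /user/hive/warehouse/table_name/f1
-rw-r--r--   2 user_w hadoop         28 2018-08-29 16:30 /user/hive/warehouse/table_name/f2'
# Same pipeline as above: take column 5, then sum it
printf '%s\n' "$ls_out" | awk -F ' ' '{print $5}' | awk '{a+=$1} END {print a}'
# → 48
```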
# size of a single partition of a partitioned table (converted to GB here)
$ hadoop fs -ls /user/hive/warehouse/table_name/yyyymm=201601 | awk -F ' ' '{print $5}'|awk '{a+=$1}END {print a/(1024*1024*1024)}'
39.709
Again this saves manual addition. The following command lists that partition's files in detail:
$ hadoop fs -ls /user/hive/warehouse/table_name/yyyymm=201601
Method 2: total table size, in GB
$ hadoop fs -du /user/hive/warehouse/table_name|awk '{ SUM += $1 } END { print SUM/(1024*1024*1024)}'
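On this Hadoop version, each `hadoop fs -du` line is a `<bytes> <path>` pair (newer releases add a replicated-size column), so the awk simply sums column 1 and divides by 1024³. A quick check of that conversion on simulated input, using two made-up half-GB files:

```shell
# Simulated `hadoop fs -du` output: <size-in-bytes> <path>
du_out='536870912 /user/hive/warehouse/table_name/part-00000
536870912 /user/hive/warehouse/table_name/part-00001'
printf '%s\n' "$du_out" | awk '{ SUM += $1 } END { print SUM/(1024*1024*1024) }'
# → 1
```

If your Hadoop version supports it, `hadoop fs -du -s -h /user/hive/warehouse/table_name` prints the aggregated size with a human-readable unit directly, with no awk needed.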
Method 3: print each entry with a human-readable unit (Kb/Mb/Gb); note that entries smaller than 1 KB produce no output from this awk.
$ hadoop fs -du /user/hive/warehouse/table_name/ | awk '{ sum=$1 ;dir2=$2 ; hum[1024**3]="Gb";hum[1024**2]="Mb";hum[1024]="Kb"; for (x=1024**3; x>=1024; x/=1024){ if (sum>=x) { printf "%.2f %s \t %s\n",sum/x,hum[x],dir2;break } }}'