First of all, Hive stores its data on HDFS, so measuring the size of a Hive table comes down to measuring the size of its files.
The Hive admin UI shows the total space the HDFS files occupy, but how do you see how much space each individual table takes?
After some searching I found no ready-made Hive command for this, so I decided to roll my own.
Python implementation
The code below is very inefficient. Because I didn't know the Hadoop commands well, I took a detour, but it does work: for about 20 TB of files across a hundred-odd tables, 41 threads needed 2 hours to finish.
The core is just one command: `hadoop fs -ls /path`
```
# hadoop fs -ls /user/hive/warehouse
Found 1 items
drwxr-xr-x - root supergroup 0 2020-05-11 19:46 /user/hive/warehouse/test.db
```
Then drill down level by level: database → table → files (or partition directories, and then their files).
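Each `hadoop fs -ls` line has a fixed column layout: the permission string (whose first character distinguishes directories, `d`, from plain files, `-`), the replication factor, owner, group, size in bytes, modification date and time, and finally the path. A minimal sketch of the parsing the script below relies on, using the sample line from above:

```python
# One output line from `hadoop fs -ls`, whitespace-split into 8 fields:
line = 'drwxr-xr-x - root supergroup 0 2020-05-11 19:46 /user/hive/warehouse/test.db'
fields = line.split()
print(fields[7])       # /user/hive/warehouse/test.db -- the path
print(int(fields[4]))  # 0 -- the size in bytes (directories report 0)
```

The full script: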
```python
import subprocess
import threading
import logging
import time


def get_logger(name):
    logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler = logging.FileHandler(filename=name)
    logger = logging.getLogger('jimo_' + name)
    logger.addHandler(handler)
    return logger


log = get_logger('info.log')
data_writer = get_logger('result.txt')


def list_dir_path(path):
    """List the sub-directory paths (field 8 of `hadoop fs -ls`) under path."""
    byte_str = subprocess.check_output('hadoop fs -ls ' + path, shell=True)
    return [line.split()[7] for line in [ss.decode('utf-8') for ss in byte_str.splitlines()]
            if line.startswith('d')]


def list_dir_files(path):
    """List the raw `hadoop fs -ls` lines for files and directories under path."""
    byte_str = subprocess.check_output('hadoop fs -ls ' + path, shell=True)
    return [line for line in [ss.decode('utf-8') for ss in byte_str.splitlines()]
            if line.startswith('-') or line.startswith('d')]


def get_one_file_size(line):
    # Field 5 of an `-ls` line is the size in bytes.
    return int(line.split()[4])


class TableSizeStat(threading.Thread):
    def __init__(self, name, db_name, table_path_list):
        threading.Thread.__init__(self)
        self.name = name
        self.db_name = db_name
        self.table_path_list = table_path_list

    def run(self):
        log.info('Thread {} started at {}'.format(self.name, time.time()))
        self.count_size()
        log.info('Thread {} finished at {}'.format(self.name, time.time()))

    def count_size(self):
        for table in self.table_path_list:
            lines = list_dir_files(table)
            # If an entry is a directory, descend one more level (tables with a
            # lot of data are partitioned, and each partition is a directory);
            # otherwise just add the file's size.
            table_size = 0
            for line in lines:
                if line.startswith('d'):
                    files = list_dir_files(line.split()[7])
                    for file in files:
                        table_size += get_one_file_size(file)
                elif line.startswith('-'):
                    table_size += get_one_file_size(line)
            log.info('Database {} table {} size: {} MB'.format(self.db_name, table, table_size / 1024 / 1024))
            data_writer.info('{},{},MB'.format(table, table_size / 1024 / 1024))


def cal():
    db_paths = list_dir_path('/user/hive/warehouse')
    table_paths = []
    for p in db_paths:
        table_paths.extend(list_dir_path(p))
    log.info('{0} tables in total'.format(len(table_paths)))
    run_thread(table_paths)


def run_thread(table_paths):
    print(table_paths)
    max_thread = 16
    i = 0
    step = int(len(table_paths) / max_thread) + 1
    ts = []
    for j in range(min(len(table_paths), max_thread)):
        frag = table_paths[i:i + step]
        if len(frag) == 0:
            break  # no paths left to hand out
        log.info('Starting thread {}: {}'.format(j, frag))
        t = TableSizeStat('thread' + str(j), '', frag)
        t.start()
        ts.append(t)
        i += step
    log.info('Waiting for all threads to finish')
    for t in ts:
        t.join()


if __name__ == '__main__':
    cal()
    log.info('Done: {}'.format(time.time()))
```
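In hindsight, the manual recursion (and most of the runtime) can be avoided by letting HDFS do the summing itself. A minimal sketch using `hadoop fs -du -s`, which prints a single summary line per path; the first whitespace-separated field of that line is the total size in bytes, not counting replicas:

```python
import subprocess


def table_size_bytes(table_path):
    # `hadoop fs -du -s <path>` sums the whole subtree in one call,
    # so partition directories need no special handling.
    out = subprocess.check_output('hadoop fs -du -s ' + table_path, shell=True)
    return int(out.decode('utf-8').split()[0])


print(table_size_bytes('/user/hive/warehouse/test.db/t1') / 1024 / 1024, 'MB')
```

Which brings us to the simpler approach below.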
Shell implementation
Later I found a much simpler command that directly reports the total space taken by all files under a directory:
```
hadoop fs -count /path
```

For example:

```
# hadoop fs -count /user/hive/warehouse/test.db
2 1 704 /user/hive/warehouse/test.db
# hadoop fs -ls /user/hive/warehouse/test.db
Found 1 items
drwxr-xr-x - root supergroup 0 2020-05-11 19:50 /user/hive/warehouse/test.db/t1
# hadoop fs -ls /user/hive/warehouse/test.db/t1
Found 1 items
-rwxr-xr-x 2 root supergroup 704 2020-05-11 19:50 /user/hive/warehouse/test.db/t1/data.csv
```
- The first number, 2, is the directory count under test.db (it counts test.db itself plus its sub-directory t1).
- The second number, 1, is the file count (here: data.csv).
- The third number, 704, is the total size in bytes of the files under the directory, not multiplied by the replication factor.
With `-h`, `hadoop fs -count` prints the size in human-readable units, so I rewrote the whole thing as a short shell script:
```bash
#!/bin/bash
# Loop over every database directory, then every table directory,
# and let `hadoop fs -count` do the summing (column 3 is the size).
dbs=$(hadoop fs -ls /user/hive/warehouse | awk '{print $8}')
for db in $dbs
do
    echo "Counting database: $db"
    tables=$(hadoop fs -ls "$db" | awk '{print $8}')
    for table in $tables
    do
        echo "Counting table: $table"
        size=$(hadoop fs -count -h "$table" | awk '{print $3}')
        echo "Table $table uses: $size"
        echo "$table,$size" >> result.csv
    done
done
```
At roughly one command per second, the whole run finishes in about 2 minutes.
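To rank the tables afterwards, a few lines of Python will do. A sketch, assuming the script above was run without `-h` so that the size column in result.csv is raw bytes:

```python
import csv

with open('result.csv') as f:
    rows = [(path, int(size)) for path, size in csv.reader(f)]

# Largest tables first.
for path, size in sorted(rows, key=lambda r: r[1], reverse=True)[:10]:
    print('{:>10.1f} MB  {}'.format(size / 1024 / 1024, path))
```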