HDFS Small File Analysis: A Detailed Step-by-Step Guide

Storage resources
1. Analyzing expired host logs:
Count expired log files (modification time > 48h):
find /var/log/ -type f -mtime +1 | wc -l
Total size of expired log files:
find /var/log/ -type f -mtime +1 | xargs du --time | sort -h | awk -F " " '{sum+=$1}END{print sum/1024/1024 "G"}'
Count expired small log files (<10M):
find /var/log/ -type f -size -10M -mtime +1 | wc -l
Batch execution across all hosts (a sketch of runcmd.sh follows after section 2):
./runcmd.sh "find /var/log/ -type f -mtime +1 | xargs du --time | sort -h | awk '{sum+=\$1}END{print sum/1024/1024 \"G\"}'" all

2. Analyzing expired files under /tmp on each host:
./runcmd.sh "find /tmp/ -type f -mtime +1 | wc -l" all
./runcmd.sh "find /tmp/ -type f -mtime +1 | xargs du --time | sort -h | awk '{sum+=\$1}END{print sum/1024/1024 \"G\"}'" all
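runcmd.sh itself is not included in these notes. A minimal sketch of what it might look like, assuming passwordless SSH and one hosts file per group (the path /opt/scripts/hosts_all.txt is an assumption):
#!/bin/bash
# runcmd.sh -- run a shell command on every host in a group over SSH (hypothetical helper)
# Usage: ./runcmd.sh "<command>" <group>    e.g. ./runcmd.sh "df -h" all
CMD="$1"
GROUP="$2"
HOSTFILE="/opt/scripts/hosts_${GROUP}.txt"   # one hostname per line (assumed layout)
while read -r host; do
  echo "===== ${host} ====="
  ssh -o ConnectTimeout=5 "${host}" "${CMD}"
done < "${HOSTFILE}"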

Linux commands:
find ./ -mtime +1    (modification time > 48h)
Find expired files in a directory and sort by size; -type filters by type (d for directories, f for files):
find ./ -mtime +1 | xargs du -sh | sort -h
find /var/log/ -type f -mtime +1 | xargs du --time | sort -h | awk -F " " '{sum+=$1}END{print sum/1024/1024 "G"}'
find /var/log/ -type f -size +100M -mtime +1 | xargs du --time | sort -h | awk '{print $1/1024 "M"}' | tail -n 3
find /var/log/ -type f -size +100M -mtime +1 | xargs du -sh --time | sort -h | tail -n 3
find /var/log/ -type f -mtime +1 | wc -l
Find the largest and smallest files in a directory:
du -h * | sort -h
Find subdirectories one level below a directory and sort them by size, ascending:
sudo find /var/log/ -maxdepth 1 -type d -exec du -sh {} \; | sort -h
Find files larger than a given size:
sudo find . -type f -size +200M -exec ls -lh {} \;

HDFS small file handling:
Offline FSImage analysis:
1. Fetch the fsimage file:
hdfs dfsadmin -fetchImage /home/sunxy

2. Parse the binary fsimage into delimited text:
hdfs oiv -i /home/sunxy/fsimage_0000000012255765347 -t /home/sunxy/tmpdir -o /home/sunxy/fs_distribution -p Delimited -delimiter ","
(-t spools intermediate data to temporary files; without it everything is held in memory, which easily OOMs)
(For an overall file-size distribution: hdfs oiv -p FileDistribution -i fsimage_0000000012255765347 -o fs_distribution)
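Before loading, it is worth a quick look at the delimited output. Depending on the Hadoop version, the Delimited processor may emit a header row (Path,Replication,...), which would otherwise land in the Hive table as a junk record; the check and the sed strip below are a sketch under that assumption:
head -n 2 /home/sunxy/fs_distribution    # check whether the first line is a "Path,Replication,..." header
wc -l /home/sunxy/fs_distribution        # rough record count
sed -i '1d' /home/sunxy/fs_distribution  # only if a header row is actually present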

3. Create a Hive table to hold the HDFS metadata
create database if not exists hdfsinfo;
use hdfsinfo;
CREATE TABLE fsimage_info_csv(
path string,
replication int,
modificationtime string,
accesstime string,
preferredblocksize bigint,
blockscount int,
filesize bigint,
nsquota string,
dsquota string,
permission string,
username string,
groupname string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'field.delim'=',',
'serialization.format'=',')
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';

hdfs dfs -ls /user/hive/warehouse/hdfsinfo.db

4. Load the data into Hive
hdfs dfs -put /home/sunxy/fs_distribution /user/hive/warehouse/hdfsinfo.db/fsimage_info_csv/
hdfs dfs -ls /user/hive/warehouse/hdfsinfo.db/fsimage_info_csv
Hive: MSCK REPAIR TABLE fsimage_info_csv;
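A quick sanity check after loading (a sketch; assumes the hive CLI is available on this node):
hive -e "use hdfsinfo; select count(*) from fsimage_info_csv;"
hive -e "use hdfsinfo; select path, filesize, permission from fsimage_info_csv limit 5;"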

5. Count files smaller than 4 MB, one directory level at a time (here aggregated at the fifth path level)
CREATE TABLE smallfile_info(
dir_path string,
small_file_num int
);

insert overwrite table smallfile_info
SELECT
dir_path ,
COUNT(*) AS small_file_num
FROM
( SELECT
relative_size,
dir_path
FROM
( SELECT
(
CASE filesize < 4194304
WHEN TRUE
THEN 'small'
ELSE 'large'
END) AS relative_size,
concat('/',split(PATH,'/')[1], '/',split(PATH,'/')[2], '/'
,split(PATH,'/')[3], '/',split(PATH,'/')[4], '/', split(
PATH,'/')[5]) AS dir_path
FROM
hdfsinfo.fsimage_info_csv
WHERE
permission not LIKE 'd%'
) t1
WHERE
relative_size='small') t2
GROUP BY
dir_path
ORDER BY
dir_path, small_file_num desc;

6. Break down the <4 MB file counts by Hive database, table, and partition
(example path: /user/hive/warehouse/hdfsinfo.db/fsimage_info_csv/ , where path level 4 is the database and level 5 is the table)
First-level directory statistics:
SELECT
dir_path1
,sum(small_file_num) as smallfile_sum
from (
select concat('/',split(dir_path,'/')[1]) as dir_path1, small_file_num
FROM hdfsinfo.smallfile_info
) t1
group by dir_path1
ORDER BY smallfile_sum desc;
Results (the NULL bucket is likely files whose paths have fewer than five levels, since concat() over a missing split() element returns NULL):
/user 12294693
/tmp 1015850
/tmp1 399648
NULL 231318
/flink 160
/hbase 50
/flinkcdc 35
/ods_offline 20
/home 1

The two /tmp directories hold roughly 1.4 million small temporary files. Temp-directory usage needs to be standardized and cleaned up regularly; doing so would free about 44.4 TB of HDFS data, roughly 133.3 TB of raw storage once replication is counted (a cleanup sketch follows the du output below).
[hdfs@IT-CDH-Node01 tmp]$ hdfs dfs -du -s -h /tmp
40.3 T 120.9 T /tmp
[hdfs@IT-CDH-Node01 tmp]$ hdfs dfs -du -s -h /tmp1
4.1 T 12.4 T /tmp1
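One hedged way to drive the cleanup is to list /tmp recursively and filter on the modification-date column of hdfs dfs -ls; the 30-day retention window and the review-then-delete flow below are assumptions, not a policy stated in these notes:
CUTOFF=$(date -d "-30 days" +%Y-%m-%d)
# files (permission string starts with '-') under /tmp whose modification date (column 6) is older than the cutoff
hdfs dfs -ls -R /tmp | awk -v cutoff="$CUTOFF" '$1 ~ /^-/ && $6 < cutoff {print $8}' > /tmp/hdfs_tmp_expired.list
wc -l /tmp/hdfs_tmp_expired.list
# review the list first, then delete in batches, e.g.:
# while read -r p; do hdfs dfs -rm -skipTrash "$p"; done < /tmp/hdfs_tmp_expired.list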

Continuing the analysis with file usage under /user:
SELECT
dir_path2
,sum(small_file_num) as smallfile_sum
from (
select concat('/',split(dir_path,'/')[1], '/',split(dir_path,'/')[2]
) as dir_path2, small_file_num
FROM
hdfsinfo.smallfile_info
where concat('/',split(dir_path,'/')[1] ) = '/user'
) t1
group by dir_path2
ORDER BY smallfile_sum desc;
Results:
/user/hive 11266237
/user/root 757109
/user/history 267902
/user/oozie 1741
/user/flink 1036
/user/super 600
/user/impala 56
/user/hdfs 10
/user/hue 2

Continuing with file usage under /user/hive:
SELECT
dir_path3
,sum(small_file_num) as smallfile_sum
from (
select concat('/',split(dir_path,'/')[1], '/',split(dir_path,'/')[2], '/',split(dir_path,'/')[3]
) as dir_path3, small_file_num
FROM
hdfsinfo.smallfile_info
where concat('/',split(dir_path,'/')[1], '/',split(dir_path,'/')[2] ) = '/user/hive'
) t1
group by dir_path3
ORDER BY smallfile_sum desc;
Results:
/user/hive/warehouse 10456478
/user/hive/.Trash 805175
/user/hive/.staging 4050
/user/hive/.flink 281
/user/hive/.sparkStaging 253
The trash directory needs to be cleaned out promptly (an expunge sketch follows the du output below).
[hdfs@IT-CDH-Node01 tmp]$ hdfs dfs -du -s -h /user/hive/.Trash
3.7 T 11.1 T /user/hive/.Trash
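hdfs dfs -expunge removes trash checkpoints older than fs.trash.interval and checkpoints the current trash so it is removed on the next cycle; a sketch, run as the owning user:
sudo -u hive hdfs dfs -expunge
# verify afterwards
hdfs dfs -du -s -h /user/hive/.Trash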
Hive data files account for roughly 10.45 million small files (<4 MB).

Continuing with /user/hive/warehouse at the fourth path level, i.e. per database:
SELECT
dir_path ,
COUNT(*) AS small_file_num
FROM
( SELECT
relative_size,
dir_path
FROM
( SELECT
(
CASE filesize < 4194304
WHEN TRUE
THEN 'small'
ELSE 'large'
END) AS relative_size,
concat('/',split(PATH,'/')[1], '/',split(PATH,'/')[2], '/'
,split(PATH,'/')[3], '/',split(PATH,'/')[4]) AS dir_path
FROM
hdfsinfo.fsimage_info_csv
WHERE
permission not LIKE 'd%'
) t1
WHERE
relative_size='small') t2
GROUP BY
dir_path
ORDER BY
small_file_num desc
limit 100;

Continuing with /user/hive/warehouse at the fifth path level, i.e. per table:
SELECT
dir_path ,
COUNT(*) AS small_file_num
FROM
( SELECT
relative_size,
dir_path
FROM
( SELECT
(
CASE filesize < 4194304
WHEN TRUE
THEN 'small'
ELSE 'large'
END) AS relative_size,
concat('/',split(PATH,'/')[1], '/',split(PATH,'/')[2], '/'
,split(PATH,'/')[3], '/',split(PATH,'/')[4], '/', split(
PATH,'/')[5]) AS dir_path
FROM
hdfsinfo.fsimage_info_csv
WHERE
permission not LIKE 'd%'
) t1
WHERE
relative_size='small') t2
GROUP BY
dir_path
ORDER BY
small_file_num desc
limit 500;
The worst Hive table has 770k+ small files; 12 tables exceed 100k small files, 42 exceed 50k, and 158 exceed 10k (a compaction sketch follows).
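For the worst offenders, one common mitigation is to rewrite the data with Hive's file-merge settings enabled so output files are concatenated up to a target size. This is only a sketch: it assumes the MapReduce engine and a non-transactional table, some_db.some_table is a placeholder, partitioned tables need a PARTITION clause, and the rewrite should be tested on a copy first.
# placeholder table name; review carefully before running on production data
hive -e "
SET hive.merge.mapfiles=true;
SET hive.merge.mapredfiles=true;
SET hive.merge.smallfiles.avgsize=134217728;
SET hive.merge.size.per.task=268435456;
INSERT OVERWRITE TABLE some_db.some_table SELECT * FROM some_db.some_table;
"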
