Stress Testing in Practice: Load-Testing the HDFS NameNode with nnbench

Author: 九月

This round of stress testing uses nnbench to benchmark NameNode performance under load. nnbench issues a large volume of HDFS metadata requests, putting heavy pressure on the NameNode; the test exercises file create, read, rename, and delete operations on HDFS.
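
nnbench ships in the MapReduce jobclient test JAR. Running it with no arguments prints its usage text, which is a quick way to confirm the options your version supports (the JAR path below matches the HDP 3.1.5 layout used throughout this article):

# Prints nnbench usage and the full option list
hadoop jar /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench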

The nnbench parameters are as follows:


-operation                  operation to run: create_write, open_read, rename, delete
-maps                       number of map tasks
-reduces                    number of reduce tasks
-startTime                  start time of the test
-blockSize                  block size
-bytesToWrite               bytes to write per file (in bytes)
-bytesPerChecksum           bytes per checksum
-numberOfFiles              number of files to generate
-replicationFactorPerFile   replication factor for each file
-baseDir                    base directory on HDFS
-readFileAfterOpen          whether to read the file back after opening it (true/false)

Note: if the cluster has security authentication enabled, complete the authentication first, then run the stress test.
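
A minimal sketch for a Kerberos-secured cluster; the keytab path and principal below are placeholders for your environment, not cluster defaults:

# Authenticate with a keytab before submitting the benchmark job
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@EXAMPLE.COM
klist   # confirm a valid ticket was obtained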

Steps

I. Stress test with the default configuration

The NameNode of the cluster under test currently has a 1 GB heap.
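
To confirm the heap the NameNode is actually running with, you can read the -Xmx flag off its JVM process; a rough sketch (the grep pattern may need adjusting on your hosts):

# Show the NameNode's configured max heap from its command line
ps -ef | grep '[N]ameNode' | grep -o -- '-Xmx[^ ]*'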


create_write operation

Example: run with 100 maps and 5 reduces, creating 10,000 files. (Samples of the metric dashboards to watch are shown below.)

hadoop jar /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation create_write \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 10000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench
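
Besides the dashboards, nnbench writes a plain-text summary to NNBench_results.log in the local working directory of the submitting shell; the TPS and average execution time lines are the numbers worth recording per run (exact field labels may vary slightly between versions):

# Summary report written by nnbench on the submitting host
cat NNBench_results.log
# Pull out just the throughput and latency lines
grep -E 'TPS|Avg exec time|Avg Lat' NNBench_results.log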

(sample dashboard screenshots)

HDFS metrics (dashboard screenshots)
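
If you want raw numbers rather than dashboard screenshots, the NameNode exposes the same metrics over its JMX endpoint; a sketch assuming the default NameNode HTTP port 9870 (replace <namenode-host> with your host):

# JVM heap usage of the NameNode while the test is running
curl -s 'http://<namenode-host>:9870/jmx?qry=Hadoop:service=NameNode,name=JvmMetrics'
# Namespace counters: FilesTotal, BlocksTotal, etc.
curl -s 'http://<namenode-host>:9870/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'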

1. Run with 100 maps and 5 reduces, creating 1,000,000 files.

hadoop jar /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation create_write \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 1000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

2. Run with 100 maps and 5 reduces, creating 5,000,000 files.

hadoop jar /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation create_write \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 5000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

3. Run with 100 maps and 5 reduces, creating 10,000,000 files.

hadoop jar /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation create_write \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 10000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

4. Run with 100 maps and 5 reduces, creating 30,000,000 files.

hadoop jar /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation create_write \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 30000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

At this point one node's memory usage has reached 93.2% and there is a real risk of the node going down; increase the NameNode heap before continuing the stress test.
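
A sketch of raising the heap by hand; on an Ambari-managed HDP cluster you would normally change the NameNode Java heap size in Ambari and restart, but the underlying setting is the JVM options in hadoop-env.sh (variable name per Hadoop 3.x; values are examples):

# hadoop-env.sh: give the NameNode a 2 GB heap (example value)
export HDFS_NAMENODE_OPTS="-Xms2g -Xmx2g ${HDFS_NAMENODE_OPTS}"
# Restart the NameNode afterwards for the new heap to take effect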

open_read operation

1. Run with 100 maps and 5 reduces, opening and reading 1,000,000 files.

hadoop jar  /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation open_read \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 1000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

For the specific metrics, see the sample dashboard screenshots in the example above.

2. Run with 100 maps and 5 reduces, opening and reading 5,000,000 files.

hadoop jar /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation open_read \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 5000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

3. Run with 100 maps and 5 reduces, opening and reading 10,000,000 files.

hadoop jar /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation open_read \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 10000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

4. Run with 100 maps and 5 reduces, opening and reading 30,000,000 files.

hadoop jar /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation open_read \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 30000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

rename operation

1. Run with 100 maps and 5 reduces, renaming 1,000,000 files.

hadoop jar  /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation rename \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 1000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

2. Run with 100 maps and 5 reduces, renaming 5,000,000 files.

hadoop jar  /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation rename \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 5000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

3. Run with 100 maps and 5 reduces, renaming 10,000,000 files.

hadoop jar  /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation rename \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 10000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

4. Run with 100 maps and 5 reduces, renaming 30,000,000 files.

hadoop jar  /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation rename \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 30000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

delete operation

1. Run with 100 maps and 5 reduces, deleting 1,000,000 files.

hadoop jar  /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation delete \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 1000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

2. Run with 100 maps and 5 reduces, deleting 5,000,000 files.

hadoop jar  /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation delete \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 5000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

3. Run with 100 maps and 5 reduces, deleting 10,000,000 files.

hadoop jar  /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation delete \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 10000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench

4. Run with 100 maps and 5 reduces, deleting 30,000,000 files.

hadoop jar  /usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar nnbench \
-operation delete \
-maps 100 \
-reduces 5 \
-blockSize 1 \
-bytesToWrite 1024 \
-numberOfFiles 30000000 \
-replicationFactorPerFile 3 \
-readFileAfterOpen true \
-baseDir /benchmarks/NNBench
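
Since open_read, rename, and delete operate on the files that an earlier create_write pass laid down under -baseDir, the four operations naturally run as a sequence for each file count. A sketch that wraps the commands above into one loop (same fixed parameters and JAR path as this article; the file count is taken as an argument):

#!/usr/bin/env bash
# Run the four nnbench operations in order for one file count,
# so each later pass works on the files create_write produced.
JAR=/usr/hdp/3.1.5.0-152/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.5.0-152-tests.jar
FILES=${1:-1000000}   # e.g. 1000000 / 5000000 / 10000000 / 30000000

for OP in create_write open_read rename delete; do
  hadoop jar "$JAR" nnbench \
    -operation "$OP" \
    -maps 100 \
    -reduces 5 \
    -blockSize 1 \
    -bytesToWrite 1024 \
    -numberOfFiles "$FILES" \
    -replicationFactorPerFile 3 \
    -readFileAfterOpen true \
    -baseDir /benchmarks/NNBench
done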

II. Increase the resource configuration and re-run the stress test

For example, increase the NameNode heap to 2 GB and test the create_write / open_read / rename / delete operations in turn; end the stress test once memory or CPU load reaches a bottleneck.

III. Summary

Under the maximum hardware resources the cluster can provide (for example, if the NameNode heap can go up to 8 GB at most, beyond which it would crowd out memory needed by other components), the stress test yields the maximum number of files this cluster can handle in parallel. Keep running jobs below that maximum, and consider using it as a threshold to throttle how many files jobs operate on in parallel. You can also collect the TPS reported by each run and plot it as a curve to observe how TPS trends under different variables.
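
A sketch of collecting TPS per run into a CSV for that trend chart; it assumes the TPS line format nnbench appends to NNBench_results.log, so verify against your version's output:

#!/usr/bin/env bash
# Usage: ./collect_tps.sh <run-label>   e.g. ./collect_tps.sh create_1000000
# Appends one "label,tps" row to nnbench_tps.csv for later plotting.
TPS=$(grep 'TPS' NNBench_results.log | tail -1 | awk -F': ' '{print $NF}')   # most recent run's line
echo "${1:-run},$TPS" >> nnbench_tps.csv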

Sample trend chart (screenshot)

For more technical information, see the Yunche website: https://yunche.pro/?t=yrgw
