Hadoop Performance Testing

I. Hadoop's built-in benchmark tools

(a) TestDFSIO

1. Write performance test
(1) If necessary, clean up data from previous runs first
$hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.1.2-tests.jar TestDFSIO -clean
(2) Run the test
$hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.1.2-tests.jar TestDFSIO -write -nrFiles 5 -fileSize 20
(3) View the results: each run produces one result block, which is appended to TestDFSIO_results.log
$cat TestDFSIO_results.log
----- TestDFSIO ----- : write
Date & time: Mon May 11 09:41:34 HKT 2015
Number of files:
Total MBytes processed: 100.0
Throughput mb/sec: 21.468441391155004
Average IO rate mb/sec: 25.366744995117188
IO rate std deviation: 12.744636924030177
Test exec time sec: 27.585

----- TestDFSIO ----- : write
Date & time: Mon May 11 09:42:28 HKT 2015
Number of files: 5
Total MBytes processed: 100.0
Throughput mb/sec: 22.779043280182233
Average IO rate mb/sec: 25.440486907958984
IO rate std deviation: 9.930490103638768
Test exec time sec: 26.67

(4) Explanation of the results
Total MBytes processed: the total volume of data written, 100 MB here
Throughput mb/sec: total data written / sum of the per-map write times (this sum is far smaller than Test exec time sec) => 100 / (map1 write time + map2 write time + ...)
Average IO rate mb/sec: the sum of the per-map rates (data per map / that map's write time) divided by the number of maps => (20/map1 write time + 20/map2 write time + ...)/5, which is why this value always differs somewhat from Throughput.
IO rate std deviation: the standard deviation of the per-map IO rates
Test exec time sec: the wall-clock execution time of the whole job
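The relationship between Throughput and Average IO rate can be sketched in Python; the per-map write times below are made-up illustrative values, not taken from the runs above:

```python
# Hypothetical per-map write times in seconds for 5 maps, 20 MB each.
map_times = [3.1, 4.0, 5.2, 3.8, 4.4]
mb_per_map = 20.0
total_mb = mb_per_map * len(map_times)

# Throughput mb/sec: total data divided by the SUM of per-map IO times.
throughput = total_mb / sum(map_times)

# Average IO rate mb/sec: arithmetic mean of the per-map rates.
avg_io_rate = sum(mb_per_map / t for t in map_times) / len(map_times)

print(throughput, avg_io_rate)
```

Throughput is effectively the harmonic mean of the per-map rates, while Average IO rate is their arithmetic mean, so Average IO rate is at least as large as Throughput whenever the map times differ.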

2. Read performance test
(1) Run the test
$ hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.1.2-tests.jar TestDFSIO -read -nrFiles 5 -fileSize 20
(2) View the results
$ cat TestDFSIO_results.log

----- TestDFSIO ----- : read
Date & time: Mon May 11 09:53:27 HKT 2015
Number of files: 5
Total MBytes processed: 100.0
Throughput mb/sec: 534.75935828877
Average IO rate mb/sec: 540.4888916015625
IO rate std deviation: 53.93029580221512
Test exec time sec: 26.704
(3) Explanation of the results
The fields mean the same as for the write test. The read rate here is far higher than the write rate, yet the total execution times are nearly identical; for a meaningful benchmark, run with a much larger data volume so the real difference between reads and writes shows up.
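Since every run appends another block to TestDFSIO_results.log, it can be handy to pull the metrics out programmatically. A minimal Python sketch, using the read result above as sample input:

```python
# One result block copied from TestDFSIO_results.log.
sample = """----- TestDFSIO ----- : read
Date & time: Mon May 11 09:53:27 HKT 2015
Number of files: 5
Total MBytes processed: 100.0
Throughput mb/sec: 534.75935828877
Average IO rate mb/sec: 540.4888916015625
IO rate std deviation: 53.93029580221512
Test exec time sec: 26.704"""

def parse_result(block):
    """Turn one TestDFSIO result block into a dict of metric -> string value."""
    result = {}
    for line in block.splitlines():
        # Skip the "----- TestDFSIO -----" banner; split the rest at the first colon.
        if ":" in line and not line.startswith("-----"):
            key, _, value = line.partition(":")
            result[key.strip()] = value.strip()
    return result

metrics = parse_result(sample)
print(metrics["Throughput mb/sec"])
```

With the blocks parsed into dicts, successive runs can be compared or averaged without reading the log by hand.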

(b) Sort benchmark

Search the API documentation for terasort to find the relevant details.
The sort benchmark has three basic steps:
generate random data ——> sort ——> validate the sorted output
For a more detailed look at how TeraSort works, see http://blog.csdn.net/yuesichiu/article/details/17298563

1. Generate random data
$ hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.3.0-cdh5.1.2.jar teragen -Dmapreduce.job.maps=5 10000000 /tmp/hadoop/terasort
This step writes the generated data to /tmp/hadoop/terasort in HDFS:
$ hadoop fs -ls /tmp/hadoop/terasort
Found 6 items
-rw-r----- 3 hadoop supergroup 0 2015-05-11 11:32 /tmp/hadoop/terasort/_SUCCESS
-rw-r----- 3 hadoop supergroup 200000000 2015-05-11 11:32 /tmp/hadoop/terasort/part-m-00000
-rw-r----- 3 hadoop supergroup 200000000 2015-05-11 11:32 /tmp/hadoop/terasort/part-m-00001
-rw-r----- 3 hadoop supergroup 200000000 2015-05-11 11:32 /tmp/hadoop/terasort/part-m-00002
-rw-r----- 3 hadoop supergroup 200000000 2015-05-11 11:32 /tmp/hadoop/terasort/part-m-00003
-rw-r----- 3 hadoop supergroup 200000000 2015-05-11 11:32 /tmp/hadoop/terasort/part-m-00004
$ hadoop fs -du -s -h /tmp/hadoop/terasort
953.7 M /tmp/hadoop/terasort
Each of the 5 generated files is 200 MB, not 10 MB as one might expect: teragen's numeric argument is a row count, not a byte count, and each row is 100 bytes, so 10,000,000 rows = 1 GB in total, split evenly across the 5 map tasks.
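The sizes reported above follow from TeraGen's fixed record format (100-byte rows), as a quick calculation confirms:

```python
ROW_BYTES = 100          # TeraGen writes fixed 100-byte rows
rows = 10_000_000        # the numeric argument passed to teragen
maps = 5                 # -Dmapreduce.job.maps=5

total_bytes = rows * ROW_BYTES
per_map_bytes = total_bytes // maps

print(total_bytes)       # 1000000000 bytes, i.e. ~953.7 MiB as hadoop fs -du shows
print(per_map_bytes)     # 200000000 bytes = one 200 MB part file per map
```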

2. Run the sort
$hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.3.0-cdh5.1.2.jar terasort -Dmapreduce.job.maps=5 /tmp/hadoop/terasort /tmp/hadoop/terasort_out
Spent 354ms computing base-splits.
Spent 8ms computing TeraScheduler splits.
Computing input splits took 365ms
Sampling 10 splits of 10
Making 1 from 100000 sampled records
Computing parititions took 6659ms
Spent 7034ms computing partitions.

3. Validate the result
$ hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.3.0-cdh5.1.2.jar teravalidate /tmp/hadoop/terasort_out /tmp/hadoop/terasort_report

Spent 44ms computing base-splits.

Spent 7ms computing TeraScheduler splits.

II. HiBench
HiBench 4.0 did not work in testing, so version 3.0 was used instead.

1. Download and extract

wget https://codeload.github.com/intel-hadoop/HiBench/zip/HiBench-3.0.0

unzip HiBench-3.0.0

2. Edit bin/hibench-config.sh; the main variables to set are:

export JAVA_HOME=/home/hadoop/jdk1.7.0_67

export HADOOP_HOME=/home/hadoop/hadoop

export HADOOP_EXECUTABLE=/home/hadoop/hadoop/bin/hadoop

export HADOOP_CONF_DIR=/home/hadoop/conf

export HADOOP_EXAMPLES_JAR=/home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.3.0-cdh5.1.2.jar

export MAPRED_EXECUTABLE=/home/hadoop/hadoop/bin/mapred

#Set the variable below only in YARN mode

export HADOOP_JOBCLIENT_TESTS_JAR=/home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.1.2-tests.jar

3. Edit conf/benchmarks.lst and comment out any benchmarks you do not want to run.

4. Run

bin/run-all.sh

5. View the results

A hibench.report file is generated in the current directory, with contents like:

Type          Date        Time      Input_data_size  Duration(s)  Throughput(bytes/s)  Throughput/node
WORDCOUNT     2015-05-12  19:32:33                   251.248
DFSIOE-READ   2015-05-12  19:54:29  54004092852      463.863      116422505            38807501
DFSIOE-WRITE  2015-05-12  20:02:57  27320849148      498.132      54846605             18282201
PAGERANK      2015-05-12  20:27:25                   711.391
SORT          2015-05-12  20:33:21                   243.603
TERASORT      2015-05-12  20:40:34  10000000000      266.796      37481821             12493940
SLEEP         2015-05-12  20:40:40  0                .177         0                    0
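The Throughput(bytes/s) column is simply Input_data_size divided by Duration(s), and the Throughput/node values suggest this run used a 3-node cluster (each per-node figure is the throughput divided by 3). A quick check against the TERASORT row:

```python
# Values taken from the TERASORT row of hibench.report.
input_bytes = 10_000_000_000   # Input_data_size
duration_s = 266.796           # Duration(s)
nodes = 3                      # inferred from Throughput / Throughput_per_node

throughput = int(input_bytes / duration_s)   # bytes per second
per_node = throughput // nodes

print(throughput, per_node)
```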
