Hadoop Read/Write Performance Testing

I. Operating System Disk I/O Test

1. Disk write:

[root@hadoop13 ~]# time dd if=/dev/zero of=/data/test.txt bs=1M count=4096

4096+0 records in

4096+0 records out

4294967296 bytes (4.3 GB) copied, 12.6505 s, 340 MB/s

real    0m12.777s

user    0m0.011s

sys     0m3.241s

[root@hadoop13 ~]#
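Note that without a sync flag, the dd figure above mostly measures writes into the Linux page cache rather than to the disk itself. It is worth cross-checking with one of the following variants (same file and size as above), which make the timing reflect actual disk writes:

[root@hadoop13 ~]# time dd if=/dev/zero of=/data/test.txt bs=1M count=4096 conv=fdatasync   # flush data to disk before dd exits

[root@hadoop13 ~]# time dd if=/dev/zero of=/data/test.txt bs=1M count=4096 oflag=direct     # bypass the page cache entirely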

2. Disk read:

[root@hadoop13 ~]# hdparm -tT --direct /dev/sda1

/dev/sda1:

 Timing O_DIRECT cached reads:   3210 MB in  2.00 seconds = 1604.96 MB/sec

 Timing O_DIRECT disk reads: 300 MB in  0.17 seconds = 1725.86 MB/sec

[root@hadoop13 ~]#
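hdparm reads from the raw device; to read back a specific file instead (reusing /data/test.txt from the write test above), drop the page cache first so the read actually hits the disk:

[root@hadoop13 ~]# sync && echo 3 > /proc/sys/vm/drop_caches   # drop clean page/dentry/inode caches

[root@hadoop13 ~]# time dd if=/data/test.txt of=/dev/null bs=1M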

 

II. Hadoop's Built-in Performance Benchmark Tools
(I) TestDFSIO
1. Testing write performance
(1) If necessary, clean up data from earlier runs first

[hdfs@hadoop13 sbin]$ ./hadoop-daemon.sh start namenode

[hdfs@hadoop13 sbin]$ ./hadoop-daemon.sh start datanode

[hdfs@hadoop13 sbin]$ hdfs dfs -ls /

[hdfs@hadoop13 sbin]$ hdfs dfsadmin -safemode leave

Safe mode is OFF

[hdfs@hadoop13 sbin]$ hadoop jar /usr/hdp/2.5.3.0-37/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.2.5.3.0-37-tests.jar TestDFSIO -clean

17/04/15 16:19:34 INFO fs.TestDFSIO: TestDFSIO.1.8

17/04/15 16:19:34 INFO fs.TestDFSIO: nrFiles = 1

17/04/15 16:19:34 INFO fs.TestDFSIO: nrBytes (MB) = 1.0

17/04/15 16:19:34 INFO fs.TestDFSIO: bufferSize = 1000000

17/04/15 16:19:34 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO

17/04/15 16:19:37 INFO fs.TestDFSIO: Cleaning up test files

[hdfs@hadoop13 sbin]$

(2) Run the test

[hdfs@hadoop13 sbin]$ hadoop jar /usr/hdp/2.5.3.0-37/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.2.5.3.0-37-tests.jar TestDFSIO -write -nrFiles 1 -fileSize 20

17/04/15 17:47:39 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write

17/04/15 17:47:39 INFO fs.TestDFSIO:            Date & time: Sat Apr 15 17:47:39 CST 2017

17/04/15 17:47:39 INFO fs.TestDFSIO:        Number of files: 1

17/04/15 17:47:39 INFO fs.TestDFSIO: Total MBytes processed: 20.0

17/04/15 17:47:39 INFO fs.TestDFSIO:      Throughput mb/sec: 21.367521367521366

17/04/15 17:47:39 INFO fs.TestDFSIO: Average IO rate mb/sec: 21.367521286010742

17/04/15 17:47:39 INFO fs.TestDFSIO:  IO rate std deviation: 0.004240117520584055

17/04/15 17:47:39 INFO fs.TestDFSIO:     Test exec time sec: 54.455

17/04/15 17:47:39 INFO fs.TestDFSIO:

[hdfs@hadoop13 ~]$

 

(3) View the results: each run appends its result to TestDFSIO_results.log
$ cat TestDFSIO_results.log

----- TestDFSIO ----- : write

           Date & time: Sat Apr 15 18:09:00 CST 2017

       Number of files: 1

Total MBytes processed: 50.0

     Throughput mb/sec: 21.44082332761578

Average IO rate mb/sec: 21.44082260131836

 IO rate std deviation: 0.0033935993692489072

    Test exec time sec: 49.12

----- TestDFSIO ----- : write

           Date & time: Sat Apr 15 18:11:17 CST 2017

       Number of files: 1

Total MBytes processed: 50.0

     Throughput mb/sec: 17.83803068141277

Average IO rate mb/sec: 17.838029861450195

 IO rate std deviation: 0.0017782044668213728

    Test exec time sec: 44.57

----- TestDFSIO ----- : write

           Date & time: Sat Apr 15 18:12:38 CST 2017

       Number of files: 1

Total MBytes processed: 50.0

     Throughput mb/sec: 24.740227610094013

Average IO rate mb/sec: 24.74022674560547

 IO rate std deviation: 0.004799713888696845

    Test exec time sec: 44.007

(4) Interpreting the results
Total MBytes processed: the total volume of data written, e.g. 100 MB for 5 maps writing 20 MB each.
Throughput mb/sec: total data written / sum of each map task's actual write time (this sum is far smaller than Test exec time sec) => 100 / (map1 write time + map2 write time + ...).
Average IO rate mb/sec: the mean of each map's own rate (data the map wrote / that map's actual write time) => (20/map1 write time + 20/map2 write time + ...) / number of map tasks. Because it weights each map equally rather than by elapsed time, this value always differs slightly from the previous one.
IO rate std deviation: the standard deviation of the per-map IO rates above.
Test exec time sec: the wall-clock execution time of the whole job.
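A small worked example of the two formulas, using hypothetical per-map write times (not taken from the runs above) for 5 maps of 20 MB each; the two metrics diverge as soon as the map times differ:

echo "0.8 0.9 1.0 1.1 1.2" | awk '{
  for (i = 1; i <= NF; i++) { sum_t += $i; sum_r += 20 / $i }   # accumulate map times and per-map rates
  printf "Throughput mb/sec: %.2f\n", (NF * 20) / sum_t         # 100 MB / total write time = 20.00
  printf "Average IO rate mb/sec: %.2f\n", sum_r / NF           # mean of per-map rates = 20.41
}'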

2. Testing read performance
(1) Run the test

[hdfs@hadoop13 sbin]$ hadoop jar /usr/hdp/2.5.3.0-37/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.2.5.3.0-37-tests.jar TestDFSIO -read -nrFiles 1 -fileSize 20

(2) View the results: each run appends its result to TestDFSIO_results.log
$ cat TestDFSIO_results.log

----- TestDFSIO ----- : read

           Date & time: Sat Apr 15 18:14:55 CST 2017

       Number of files: 1

Total MBytes processed: 20.0

     Throughput mb/sec: 232.5581395348837

Average IO rate mb/sec: 232.55813598632812

 IO rate std deviation: 0.03817309896339223

    Test exec time sec: 39.832

----- TestDFSIO ----- : read

           Date & time: Sat Apr 15 18:16:27 CST 2017

       Number of files: 1

Total MBytes processed: 20.0

     Throughput mb/sec: 208.33333333333334

Average IO rate mb/sec: 208.3333282470703

 IO rate std deviation: 0.041051777732915615

    Test exec time sec: 43.653

----- TestDFSIO ----- : read

           Date & time: Sat Apr 15 18:18:19 CST 2017

       Number of files: 1

Total MBytes processed: 20.0

     Throughput mb/sec: 148.14814814814815

Average IO rate mb/sec: 148.1481475830078

 IO rate std deviation: 0.024195075192209984

    Test exec time sec: 43.222

(3) Interpreting the results
The fields have the same meaning as in the write test. The read rate is much higher than the write rate, yet the total execution times are very close, since job overhead dominates at this small size. A real benchmark should use a much larger data volume to expose the difference between the two.
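For instance, a larger run might look like the following (sizes are illustrative; -fileSize is in MB):

[hdfs@hadoop13 sbin]$ hadoop jar /usr/hdp/2.5.3.0-37/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.2.5.3.0-37-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 1000

[hdfs@hadoop13 sbin]$ hadoop jar /usr/hdp/2.5.3.0-37/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.2.5.3.0-37-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 1000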

 

(II) Sort Test (TeraSort)
Search for "terasort" in the API docs for related information.
A sort benchmark has three basic steps:
generate random data > sort > validate the sorted output
For a more detailed look at how TeraSort works, see http://blog.csdn.net/yuesichiu/article/details/17298563
1. Generate random data

[hdfs@hadoop13 sbin]$ hadoop jar /usr/hdp/2.5.3.0-37/hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3.2.5.3.0-37.jar teragen -Dmapreduce.job.maps=5 10000000 /tmp/hadoop/terasort

This step generates the data under /tmp/hadoop/terasort in HDFS:

[hdfs@hadoop13 sbin]$ hadoop fs -ls /tmp/hadoop/terasort

 

[hdfs@hadoop13 sbin]$ hadoop fs -du -s -h /tmp/hadoop/terasort
953.7 M  /tmp/hadoop/terasort
The 5 generated files are about 200 MB each rather than the 10 MB one might expect: teragen's numeric argument is the number of rows, not bytes, and each row is 100 bytes, so 10,000,000 rows = 1,000,000,000 bytes ≈ 953.7 MiB in total, split across the 5 maps.

2. Run the test
[hdfs@hadoop13 sbin]$ hadoop jar /usr/hdp/2.5.3.0-37/hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3.2.5.3.0-37.jar terasort -Dmapreduce.job.maps=5 /tmp/hadoop/terasort /tmp/hadoop/terasort_out

3. Validate the results

[hdfs@hadoop13 sbin]$ hadoop jar /usr/hdp/2.5.3.0-37/hadoop-mapreduce/hadoop-mapreduce-examples-2.7.3.2.5.3.0-37.jar teravalidate /tmp/hadoop/terasort_out /tmp/hadoop/terasort_report
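teravalidate writes its verdict into the report directory: if the data is globally sorted the report contains little more than a checksum record, otherwise it lists the misordered keys. Assuming the default single reducer (so one part file), a quick check is:

[hdfs@hadoop13 sbin]$ hadoop fs -cat /tmp/hadoop/terasort_report/part-r-00000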

 

 

III. HiBench Test
HiBench 3.0 is used here.
1. Download and unpack
wget https://codeload.github.com/intel-hadoop/HiBench/zip/HiBench-3.0.0
unzip HiBench-3.0.0
2. Edit bin/hibench-config.sh; the key variables are:
export JAVA_HOME=/home/hadoop/jdk1.7.0_67
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_EXECUTABLE=/home/hadoop/hadoop/bin/hadoop
export HADOOP_CONF_DIR=/home/hadoop/conf
export HADOOP_EXAMPLES_JAR=/home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.3.0-cdh5.1.2.jar
export MAPRED_EXECUTABLE=/home/hadoop/hadoop/bin/mapred
# Set the variable below only in YARN mode
export HADOOP_JOBCLIENT_TESTS_JAR=/home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.1.2-tests.jar
3. Edit conf/benchmarks.lst and comment out any benchmarks you do not want to run.
4. Run
bin/run-all.sh
5. View the results
A hibench.report file is generated in the current directory, with contents like the following:
Type          Date        Time      Input_data_size  Duration(s)  Throughput(bytes/s)  Throughput/node
WORDCOUNT     2015-05-12  19:32:33                   251.248
DFSIOE-READ   2015-05-12  19:54:29  54004092852      463.863      116422505            38807501
DFSIOE-WRITE  2015-05-12  20:02:57  27320849148      498.132      54846605             18282201
PAGERANK      2015-05-12  20:27:25                   711.391
SORT          2015-05-12  20:33:21                   243.603
TERASORT      2015-05-12  20:40:34  10000000000      266.796      37481821             12493940
SLEEP         2015-05-12  20:40:40  0                0.177        0                    0
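The Throughput(bytes/s) column is easier to compare with the TestDFSIO figures after converting to MB/s; a quick sketch, assuming the column layout shown above (rows without an Input_data_size value are skipped):

awk 'NR > 1 && NF >= 7 { printf "%-13s %10.1f MB/s\n", $1, $6 / 1048576 }' hibench.report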
