[root@slave1 hadoop-0.20.2]# hadoop fs -mkdir /user/username
[root@slave1 hadoop-0.20.2]# hadoop fs -chown username /user/username
At this point it makes sense to set a space quota on the directory. For example, give the user directory a 1 TB limit:
[root@slave1 hadoop-0.20.2]# hadoop dfsadmin -setSpaceQuota 1t /user/username
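The `1t` suffix is a binary prefix, so the limit recorded by HDFS should be 2^40 bytes, not 10^12. A quick sanity check of that number with plain shell arithmetic (the `hadoop fs -count -q` comment is the usual way to inspect an applied quota):

```shell
# 1t in binary-prefix notation is 2^40 bytes
echo $((1 << 40))    # prints 1099511627776

# On the cluster, the applied quota can be inspected with:
#   hadoop fs -count -q /user/username
```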
1. Benchmarking HDFS with TestDFSIO
[root@slave1 hadoop-0.20.2]# hadoop jar hadoop-0.20.2-test.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
The following is the output of a TestDFSIO benchmark run. The results are written to the console and also appended to a local log file:
[root@slave1 hadoop-0.20.2]# cat TestDFSIO_results.log
----- TestDFSIO ----- : write
Date & time: Thu Aug 26 11:22:17 CST 2010
Number of files: 10
Total MBytes processed: 10000
Throughput mb/sec: 13.260489046836048
Average IO rate mb/sec: 14.472989082336426
Test exec time sec: 463.276
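When the benchmark is run repeatedly, it helps to pull the headline number out of the log programmatically rather than by eye. A minimal sketch with awk, using the log excerpt above as stand-in input (in practice you would read `TestDFSIO_results.log` instead of the heredoc):

```shell
# Extract the write throughput figure from a TestDFSIO results log
awk -F': *' '/Throughput/ {print $2}' <<'EOF'
----- TestDFSIO ----- : write
           Number of files: 10
    Total MBytes processed: 10000
         Throughput mb/sec: 13.260489046836048
EOF
# prints 13.260489046836048
```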
After the benchmark completes, all files it generated can be deleted from HDFS with the -clean argument:
[root@slave1 hadoop-0.20.2]# hadoop jar hadoop-0.20.2-test.jar TestDFSIO -clean
2. Testing MapReduce with a sort
[root@slave1 hadoop-0.20.2]# hadoop jar hadoop-0.20.2-examples.jar randomwriter random-data
Before validating, sort the random data with the sort example program, then run testmapredsort against the input and output directories:
[root@slave1 hadoop-0.20.2]# hadoop jar hadoop-0.20.2-examples.jar sort random-data sorted-data
[root@slave1 hadoop-0.20.2]# hadoop jar hadoop-0.20.2-test.jar testmapredsort -sortInput random-data \
-sortOutput sorted-data
SUCCESS! Validated the MapReduce framework's 'sort' successfully.
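What testmapredsort checks for a MapReduce job is analogous to what `sort -c` checks for a local file: that the output is actually in order. A loose local analogue (not part of the Hadoop test suite; the `/tmp` file names are illustrative):

```shell
# Generate a few unsorted lines, sort them, then verify the result is ordered
printf 'banana\napple\ncherry\n' > /tmp/random-data.txt
sort /tmp/random-data.txt > /tmp/sorted-data.txt
sort -c /tmp/sorted-data.txt && echo "SUCCESS: output is sorted"
# prints SUCCESS: output is sorted
```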