Hadoop NNBENCH

Test environment: VM

[bigdata@bigdata hadoop]$ cat /proc/cpuinfo

processor : 0

vendor_id : GenuineIntel

cpu family : 6

model : 13

model name : QEMU Virtual CPU version (cpu64-rhel6)

stepping : 3

cpu MHz : 1995.223

cache size : 4096 KB

fpu : yes

fpu_exception : yes

cpuid level : 4

wp : yes

flags : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm unfair_spinlock pni cx16 hypervisor lahf_lm

bogomips : 3990.44

clflush size : 64

cache_alignment : 64

address sizes : 40 bits physical, 48 bits virtual

power management:

 

processor : 1

vendor_id : GenuineIntel

cpu family : 6

model : 13

model name : QEMU Virtual CPU version (cpu64-rhel6)

stepping : 3

cpu MHz : 1995.223

cache size : 4096 KB

fpu : yes

fpu_exception : yes

cpuid level : 4

wp : yes

flags : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm unfair_spinlock pni cx16 hypervisor lahf_lm

bogomips : 3990.44

clflush size : 64

cache_alignment : 64

address sizes : 40 bits physical, 48 bits virtual

power management:

[bigdata@bigdata hadoop]$ free -m

             total       used       free     shared    buffers     cached

Mem:          3832       3302        529          0         57        631

-/+ buffers/cache:       2613       1218

Swap:         8191        143       8048

 

[bigdata@bigdata hadoop]$ df -lh

Filesystem            Size  Used Avail Use% Mounted on

/dev/vda3             288G   12G  262G   5% /

tmpfs                 1.9G  836K  1.9G   1% /dev/shm

/dev/vda1              97M   31M   61M  34% /boot

[bigdata@bigdata hadoop]$ hadoop jar hadoop-test-1.0.4.jar nnbench -operation create_write

Warning: $HADOOP_HOME is deprecated.

 

NameNode Benchmark 0.4

13/04/21 12:33:48 INFO hdfs.NNBench: Test Inputs:

13/04/21 12:33:48 INFO hdfs.NNBench:            Test Operation: create_write

13/04/21 12:33:48 INFO hdfs.NNBench:                Start time: 2013-04-21 12:35:48,45

13/04/21 12:33:48 INFO hdfs.NNBench:            Number of maps: 1

13/04/21 12:33:48 INFO hdfs.NNBench:         Number of reduces: 1

13/04/21 12:33:48 INFO hdfs.NNBench:                Block Size: 1

13/04/21 12:33:48 INFO hdfs.NNBench:            Bytes to write: 0

13/04/21 12:33:48 INFO hdfs.NNBench:        Bytes per checksum: 1

13/04/21 12:33:48 INFO hdfs.NNBench:           Number of files: 1

13/04/21 12:33:48 INFO hdfs.NNBench:        Replication factor: 1

13/04/21 12:33:48 INFO hdfs.NNBench:                  Base dir: /benchmarks/NNBench

13/04/21 12:33:48 INFO hdfs.NNBench:      Read file after open: false

13/04/21 12:33:48 INFO hdfs.NNBench: Deleting data directory

13/04/21 12:33:48 INFO hdfs.NNBench: Creating 1 control files

13/04/21 12:33:49 INFO mapred.FileInputFormat: Total input paths to process : 1

13/04/21 12:33:49 INFO mapred.JobClient: Running job: job_201304060732_0007

13/04/21 12:33:50 INFO mapred.JobClient:  map 0% reduce 0%

13/04/21 12:35:53 INFO mapred.JobClient:  map 100% reduce 0%

13/04/21 12:36:05 INFO mapred.JobClient:  map 100% reduce 100%

13/04/21 12:36:10 INFO mapred.JobClient: Job complete: job_201304060732_0007

13/04/21 12:36:10 INFO mapred.JobClient: Counters: 30

13/04/21 12:36:10 INFO mapred.JobClient:   Job Counters

13/04/21 12:36:10 INFO mapred.JobClient:     Launched reduce tasks=1

13/04/21 12:36:10 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=118573

13/04/21 12:36:10 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

13/04/21 12:36:10 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

13/04/21 12:36:10 INFO mapred.JobClient:     Launched map tasks=1

13/04/21 12:36:10 INFO mapred.JobClient:     Data-local map tasks=1

13/04/21 12:36:10 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=10698

13/04/21 12:36:10 INFO mapred.JobClient:   File Input Format Counters

13/04/21 12:36:10 INFO mapred.JobClient:     Bytes Read=124

13/04/21 12:36:10 INFO mapred.JobClient:   File Output Format Counters

13/04/21 12:36:10 INFO mapred.JobClient:     Bytes Written=164

13/04/21 12:36:10 INFO mapred.JobClient:   FileSystemCounters

13/04/21 12:36:10 INFO mapred.JobClient:     FILE_BYTES_READ=184

13/04/21 12:36:10 INFO mapred.JobClient:     HDFS_BYTES_READ=245

13/04/21 12:36:10 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=45589

13/04/21 12:36:10 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=164

13/04/21 12:36:10 INFO mapred.JobClient:   Map-Reduce Framework

13/04/21 12:36:10 INFO mapred.JobClient:     Map output materialized bytes=184

13/04/21 12:36:10 INFO mapred.JobClient:     Map input records=1

13/04/21 12:36:10 INFO mapred.JobClient:     Reduce shuffle bytes=184

13/04/21 12:36:10 INFO mapred.JobClient:     Spilled Records=14

13/04/21 12:36:10 INFO mapred.JobClient:     Map output bytes=164

13/04/21 12:36:10 INFO mapred.JobClient:     Total committed heap usage (bytes)=220266496

13/04/21 12:36:10 INFO mapred.JobClient:     CPU time spent (ms)=2510

13/04/21 12:36:10 INFO mapred.JobClient:     Map input bytes=38

13/04/21 12:36:10 INFO mapred.JobClient:     SPLIT_RAW_BYTES=121

13/04/21 12:36:10 INFO mapred.JobClient:     Combine input records=0

13/04/21 12:36:10 INFO mapred.JobClient:     Reduce input records=7

13/04/21 12:36:10 INFO mapred.JobClient:     Reduce input groups=7

13/04/21 12:36:10 INFO mapred.JobClient:     Combine output records=0

13/04/21 12:36:10 INFO mapred.JobClient:     Physical memory (bytes) snapshot=248139776

13/04/21 12:36:10 INFO mapred.JobClient:     Reduce output records=7

13/04/21 12:36:10 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2519277568

13/04/21 12:36:10 INFO mapred.JobClient:     Map output records=7

13/04/21 12:36:10 INFO hdfs.NNBench: -------------- NNBench -------------- :

13/04/21 12:36:10 INFO hdfs.NNBench:                                Version: NameNode Benchmark 0.4

13/04/21 12:36:10 INFO hdfs.NNBench:                            Date & time: 2013-04-21 12:36:10,314

13/04/21 12:36:10 INFO hdfs.NNBench:

13/04/21 12:36:10 INFO hdfs.NNBench:                         Test Operation: create_write

13/04/21 12:36:10 INFO hdfs.NNBench:                             Start time: 2013-04-21 12:35:48,45

13/04/21 12:36:10 INFO hdfs.NNBench:                            Maps to run: 1

13/04/21 12:36:10 INFO hdfs.NNBench:                         Reduces to run: 1

13/04/21 12:36:10 INFO hdfs.NNBench:                     Block Size (bytes): 1

13/04/21 12:36:10 INFO hdfs.NNBench:                         Bytes to write: 0

13/04/21 12:36:10 INFO hdfs.NNBench:                     Bytes per checksum: 1

13/04/21 12:36:10 INFO hdfs.NNBench:                        Number of files: 1

13/04/21 12:36:10 INFO hdfs.NNBench:                     Replication factor: 1

13/04/21 12:36:10 INFO hdfs.NNBench:             Successful file operations: 1

13/04/21 12:36:10 INFO hdfs.NNBench:

13/04/21 12:36:10 INFO hdfs.NNBench:         # maps that missed the barrier: 0

13/04/21 12:36:10 INFO hdfs.NNBench:                           # exceptions: 0

13/04/21 12:36:10 INFO hdfs.NNBench:

13/04/21 12:36:10 INFO hdfs.NNBench:                TPS: Create/Write/Close: 47

13/04/21 12:36:10 INFO hdfs.NNBench: Avg exec time (ms): Create/Write/Close: 42.0

13/04/21 12:36:10 INFO hdfs.NNBench:             Avg Lat (ms): Create/Write: 39.0

13/04/21 12:36:10 INFO hdfs.NNBench:                    Avg Lat (ms): Close: 3.0

13/04/21 12:36:10 INFO hdfs.NNBench:

13/04/21 12:36:10 INFO hdfs.NNBench:                  RAW DATA: AL Total #1: 39

13/04/21 12:36:10 INFO hdfs.NNBench:                  RAW DATA: AL Total #2: 3

13/04/21 12:36:10 INFO hdfs.NNBench:               RAW DATA: TPS Total (ms): 42

13/04/21 12:36:10 INFO hdfs.NNBench:        RAW DATA: Longest Map Time (ms): 42.0

13/04/21 12:36:10 INFO hdfs.NNBench:                    RAW DATA: Late maps: 0

13/04/21 12:36:10 INFO hdfs.NNBench:              RAW DATA: # of exceptions: 0
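A few notes on the run above. The command was launched with only -operation create_write, so every other parameter fell back to its default (1 map, 1 reduce, 1 file, block size 1, replication 1), which is why the job exercises only a single create/close cycle against the namenode. Below is a sketch of a more representative invocation; the option names come from the nnbench usage text shipped with Hadoop 1.0.4 (check with nnbench -help), but the particular values are illustrative assumptions, not taken from this run:

hadoop jar hadoop-test-1.0.4.jar nnbench -operation create_write \
    -maps 12 -reduces 6 -blockSize 1 -bytesToWrite 0 \
    -numberOfFiles 1000 -replicationFactorPerFile 3 \
    -baseDir /benchmarks/NNBench

On the results side, the reported 47 TPS is consistent with the raw data: the one successful file operation amounts to two namenode calls (create/write plus close) completing within the 42 ms "TPS Total" window, and 2 × 1000 / 42 ≈ 47.6, truncated to 47. The latencies add up the same way: 39 ms average for create/write plus 3 ms for close gives the 42 ms average exec time.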
