[Distributed Cluster] A hadoop-2.6.0 Cluster Example (continued from the previous post)

This is the word-count example run in a hadoop 2.6.0 cluster environment. Some of these commands do not apply to Hadoop versions below 2.0; if that is your situation, please consult other references.

First, create a folder under the current user's home directory and switch into it:

mkdir ~/hadooptest
cd ~/hadooptest

Create two text files:

echo "hecllo world" >test1.txt

echo "hello hadoop" >test2.txt


Check whether an /in directory already exists under the HDFS root:

hdfs dfs -ls /
15/07/21 20:08:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
drwxr-xr-x   - shamrock supergroup          0 2015-07-21 19:50 /in

If it does not exist, create the directory:

hdfs dfs -mkdir -p /in
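
A quick way to confirm the directory is there (hdfs dfs -test -d returns success when the path exists as a directory; the echo is just illustrative):

hdfs dfs -test -d /in && echo "/in exists"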


Upload the .txt files you just created into /in:

hdfs dfs -put ~/hadooptest/*.txt /in
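
You can verify the upload before launching the job (optional):

hdfs dfs -ls /in
hdfs dfs -cat /in/test1.txt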

Now run the built-in wordcount example, reading input from /in and writing results to /out:

hadoop jar ~/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /in /out

15/07/21 19:50:56 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/07/21 19:50:56 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/07/21 19:50:56 INFO input.FileInputFormat: Total input paths to process : 2
15/07/21 19:50:56 INFO mapreduce.JobSubmitter: number of splits:2
15/07/21 19:50:56 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local652756729_0001
15/07/21 19:50:57 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/07/21 19:50:57 INFO mapreduce.Job: Running job: job_local652756729_0001
15/07/21 19:50:57 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/07/21 19:50:57 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/07/21 19:50:57 INFO mapred.LocalJobRunner: Waiting for map tasks
15/07/21 19:50:57 INFO mapred.LocalJobRunner: Starting task: attempt_local652756729_0001_m_000000_0
15/07/21 19:50:57 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
15/07/21 19:50:57 INFO mapred.MapTask: Processing split: hdfs://master:9000/in/test1.txt:0+13
15/07/21 19:50:57 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/07/21 19:50:57 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/07/21 19:50:57 INFO mapred.MapTask: soft limit at 83886080
15/07/21 19:50:57 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/07/21 19:50:57 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/07/21 19:50:57 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/07/21 19:50:57 INFO mapred.LocalJobRunner:
15/07/21 19:50:57 INFO mapred.MapTask: Starting flush of map output
15/07/21 19:50:57 INFO mapred.MapTask: Spilling map output
15/07/21 19:50:57 INFO mapred.MapTask: bufstart = 0; bufend = 21; bufvoid = 104857600
15/07/21 19:50:57 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
15/07/21 19:50:57 INFO mapred.MapTask: Finished spill 0
15/07/21 19:50:57 INFO mapred.Task: Task:attempt_local652756729_0001_m_000000_0 is done. And is in the process of committing
15/07/21 19:50:57 INFO mapred.LocalJobRunner: map
15/07/21 19:50:57 INFO mapred.Task: Task 'attempt_local652756729_0001_m_000000_0' done.
15/07/21 19:50:57 INFO mapred.LocalJobRunner: Finishing task: attempt_local652756729_0001_m_000000_0
15/07/21 19:50:57 INFO mapred.LocalJobRunner: Starting task: attempt_local652756729_0001_m_000001_0
15/07/21 19:50:57 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
15/07/21 19:50:57 INFO mapred.MapTask: Processing split: hdfs://master:9000/in/test2.txt:0+13
15/07/21 19:50:57 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/07/21 19:50:57 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/07/21 19:50:57 INFO mapred.MapTask: soft limit at 83886080
15/07/21 19:50:57 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/07/21 19:50:57 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/07/21 19:50:57 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/07/21 19:50:57 INFO mapred.LocalJobRunner:
15/07/21 19:50:57 INFO mapred.MapTask: Starting flush of map output
15/07/21 19:50:57 INFO mapred.MapTask: Spilling map output
15/07/21 19:50:57 INFO mapred.MapTask: bufstart = 0; bufend = 21; bufvoid = 104857600
15/07/21 19:50:57 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
15/07/21 19:50:57 INFO mapred.MapTask: Finished spill 0
15/07/21 19:50:57 INFO mapred.Task: Task:attempt_local652756729_0001_m_000001_0 is done. And is in the process of committing
15/07/21 19:50:57 INFO mapred.LocalJobRunner: map
15/07/21 19:50:57 INFO mapred.Task: Task 'attempt_local652756729_0001_m_000001_0' done.
15/07/21 19:50:57 INFO mapred.LocalJobRunner: Finishing task: attempt_local652756729_0001_m_000001_0
15/07/21 19:50:57 INFO mapred.LocalJobRunner: map task executor complete.
15/07/21 19:50:57 INFO mapred.LocalJobRunner: Waiting for reduce tasks
15/07/21 19:50:57 INFO mapred.LocalJobRunner: Starting task: attempt_local652756729_0001_r_000000_0
15/07/21 19:50:57 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
15/07/21 19:50:57 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@1ff5c98
15/07/21 19:50:57 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=334063200, maxSingleShuffleLimit=83515800, mergeThreshold=220481728, ioSortFactor=10, memToMemMergeOutputsThreshold=10
15/07/21 19:50:57 INFO reduce.EventFetcher: attempt_local652756729_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
15/07/21 19:50:57 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local652756729_0001_m_000000_0 decomp: 27 len: 31 to MEMORY
15/07/21 19:50:57 INFO reduce.InMemoryMapOutput: Read 27 bytes from map-output for attempt_local652756729_0001_m_000000_0
15/07/21 19:50:57 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 27, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->27
15/07/21 19:50:57 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local652756729_0001_m_000001_0 decomp: 27 len: 31 to MEMORY
15/07/21 19:50:57 INFO reduce.InMemoryMapOutput: Read 27 bytes from map-output for attempt_local652756729_0001_m_000001_0
15/07/21 19:50:57 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 27, inMemoryMapOutputs.size() -> 2, commitMemory -> 27, usedMemory ->54
15/07/21 19:50:57 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
15/07/21 19:50:57 INFO mapred.LocalJobRunner: 2 / 2 copied.
15/07/21 19:50:57 INFO reduce.MergeManagerImpl: finalMerge called with 2 in-memory map-outputs and 0 on-disk map-outputs
15/07/21 19:50:57 INFO mapred.Merger: Merging 2 sorted segments
15/07/21 19:50:57 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 36 bytes
15/07/21 19:50:57 INFO reduce.MergeManagerImpl: Merged 2 segments, 54 bytes to disk to satisfy reduce memory limit
15/07/21 19:50:57 INFO reduce.MergeManagerImpl: Merging 1 files, 56 bytes from disk
15/07/21 19:50:57 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
15/07/21 19:50:57 INFO mapred.Merger: Merging 1 sorted segments
15/07/21 19:50:57 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 43 bytes
15/07/21 19:50:57 INFO mapred.LocalJobRunner: 2 / 2 copied.
15/07/21 19:50:57 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
15/07/21 19:50:58 INFO mapred.Task: Task:attempt_local652756729_0001_r_000000_0 is done. And is in the process of committing
15/07/21 19:50:58 INFO mapred.LocalJobRunner: 2 / 2 copied.
15/07/21 19:50:58 INFO mapred.Task: Task attempt_local652756729_0001_r_000000_0 is allowed to commit now
15/07/21 19:50:58 INFO output.FileOutputCommitter: Saved output of task 'attempt_local652756729_0001_r_000000_0' to hdfs://master:9000/out/_temporary/0/task_local652756729_0001_r_000000
15/07/21 19:50:58 INFO mapred.LocalJobRunner: reduce > reduce
15/07/21 19:50:58 INFO mapred.Task: Task 'attempt_local652756729_0001_r_000000_0' done.
15/07/21 19:50:58 INFO mapred.LocalJobRunner: Finishing task: attempt_local652756729_0001_r_000000_0
15/07/21 19:50:58 INFO mapred.LocalJobRunner: reduce task executor complete.
15/07/21 19:50:58 INFO mapreduce.Job: Job job_local652756729_0001 running in uber mode : false
15/07/21 19:50:58 INFO mapreduce.Job:  map 100% reduce 100%
15/07/21 19:50:58 INFO mapreduce.Job: Job job_local652756729_0001 completed successfully
15/07/21 19:50:58 INFO mapreduce.Job: Counters: 38
        File System Counters
                FILE: Number of bytes read=812365
                FILE: Number of bytes written=1548626
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=65
                HDFS: Number of bytes written=34
                HDFS: Number of read operations=25
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=5
        Map-Reduce Framework
                Map input records=2
                Map output records=4
                Map output bytes=42
                Map output materialized bytes=62
                Input split bytes=192
                Combine input records=4
                Combine output records=4
                Reduce input groups=4
                Reduce shuffle bytes=62
                Reduce input records=4
                Reduce output records=4
                Spilled Records=8
                Shuffled Maps =2
                Failed Shuffles=0
                Merged Map outputs=2
                GC time elapsed (ms)=5
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
                Total committed heap usage (bytes)=670760960
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=26
        File Output Format Counters
                Bytes Written=34

The job ran successfully!
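
If you want to re-run the job, keep in mind that MapReduce refuses to write to an output directory that already exists; delete /out first (assuming you no longer need the previous results):

hdfs dfs -rm -r /out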

Check the results:

hdfs dfs -ls /out

-rw-r--r--   2 shamrock supergroup          0 2015-07-21 19:50 /out/_SUCCESS
-rw-r--r--   2 shamrock supergroup         34 2015-07-21 19:50 /out/part-r-00000

View the output:

hdfs dfs -cat /out/part-r-00000

hadoop  1
hecllo  1
hello   1
world   1
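
To keep a local copy of the result file, one option is hdfs dfs -get (the local path here is just an example):

hdfs dfs -get /out/part-r-00000 ~/hadooptest/wordcount_result.txt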

(Note that "hecllo" shows up as its own word because of the typo in test1.txt; wordcount simply counts whatever tokens it reads.)

A small program, but plenty of wisdom in it. Keep climbing, one step at a time!



