Running Hadoop's first computation job: wordcount

1. Upload a file to HDFS

[hadoop@master example]$ cat mytest.txt # file contents
Hello world ,you are My lunky! 
Hello you are my friend!
hello ,are you OKay!
hello ,GGG?
no GG
Upload the file to HDFS:

[hadoop@master example]$ hadoop fs -put mytest.txt /example/data/  # upload to the directory /example/data/
[hadoop@master example]$ hadoop fs -cat  /example/data/mytest.txt # view the uploaded contents
Hello world ,you are My lunky! 
Hello you are my friend!
hello ,are you OKay!
hello ,GGG?
no GG
[hadoop@master example]$ 
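
(If the target directory does not already exist on HDFS, create it first, for example with hadoop fs -mkdir -p /example/data, before running -put.)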

2. Run the computation

[hadoop@master mapreduce]$ pwd
/home/hadoop/hadoop-2.8.1/share/hadoop/mapreduce
[hadoop@master mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.8.1.jar wordcount /example/data/mytest.txt /output
17/09/14 10:51:57 INFO client.RMProxy: Connecting to ResourceManager at master/10.0.1.118:18040
17/09/14 10:51:58 INFO input.FileInputFormat: Total input files to process : 1
17/09/14 10:51:58 INFO mapreduce.JobSubmitter: number of splits:1
17/09/14 10:51:58 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1505390401611_0002
17/09/14 10:51:59 INFO impl.YarnClientImpl: Submitted application application_1505390401611_0002
17/09/14 10:51:59 INFO mapreduce.Job: The url to track the job: http://master:18088/proxy/application_1505390401611_0002/
17/09/14 10:51:59 INFO mapreduce.Job: Running job: job_1505390401611_0002
17/09/14 10:52:13 INFO mapreduce.Job: Job job_1505390401611_0002 running in uber mode : false
17/09/14 10:52:13 INFO mapreduce.Job:  map 0% reduce 0%
17/09/14 10:52:21 INFO mapreduce.Job:  map 100% reduce 0%
17/09/14 10:52:28 INFO mapreduce.Job:  map 100% reduce 100%
17/09/14 10:52:29 INFO mapreduce.Job: Job job_1505390401611_0002 completed successfully
17/09/14 10:52:30 INFO mapreduce.Job: Counters: 49
     File System Counters
        FILE: Number of bytes read=171
        FILE: Number of bytes written=272737
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=203
        HDFS: Number of bytes written=105
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
     Job Counters 
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=6215
        Total time spent by all reduces in occupied slots (ms)=4850
        Total time spent by all map tasks (ms)=6215
        Total time spent by all reduce tasks (ms)=4850
        Total vcore-milliseconds taken by all map tasks=6215
        Total vcore-milliseconds taken by all reduce tasks=4850
        Total megabyte-milliseconds taken by all map tasks=6364160
        Total megabyte-milliseconds taken by all reduce tasks=4966400
     Map-Reduce Framework
        Map input records=5
        Map output records=19
        Map output bytes=171
        Map output materialized bytes=171
        Input split bytes=107
        Combine input records=19
        Combine output records=15
        Reduce input groups=15
        Reduce shuffle bytes=171
        Reduce input records=15
        Reduce output records=15
        Spilled Records=30
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=180
        CPU time spent (ms)=1620
        Physical memory (bytes) snapshot=294346752
        Virtual memory (bytes) snapshot=4173053952
        Total committed heap usage (bytes)=139882496
     Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
     File Input Format Counters 
        Bytes Read=96
     File Output Format Counters 
        Bytes Written=105
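
The wordcount driver invoked from hadoop-mapreduce-examples-2.8.1.jar is essentially the classic WordCount program sketched below (a minimal reference sketch that mirrors the standard example, not code extracted from the jar). The mapper splits each input line on whitespace and emits (word, 1); the same summing class is used as both combiner and reducer. This lines up with the counters above: 5 map input records (the 5 lines of mytest.txt), 19 map output records (the 19 whitespace-separated tokens), and 15 reduce output records (the 15 distinct tokens).

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: split each line on whitespace and emit (word, 1).
    // Tokenization is whitespace-only and case-sensitive, so punctuation stays
    // attached to words ("Hello"/"hello" and ",are"/"are" are different keys).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);   // e.g. ("Hello", 1)
            }
        }
    }

    // Reducer (also used as the combiner): sum the 1s for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);     // e.g. ("Hello", 2)
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // the combiner is why "Combine input records=19, output records=15"
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /example/data/mytest.txt
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. /output (must not exist yet)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}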


3. View the results


[hadoop@master mapreduce]$ hadoop fs -cat /output/part-r-00000
,GGG?   1
,are 1
,you 1
GG   1
Hello   2
My   1
OKay!   1
are  2
friend! 1
hello   2
lunky!  1
my   1
no   1
world   1
you  2
[hadoop@master mapreduce]$ 
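
Note that ",are" and "are" (and "Hello" and "hello") are counted as different words, because the example splits lines only on whitespace and is case-sensitive. Also, if you want to re-run the job, delete the output directory first (for example with hadoop fs -rm -r /output): MapReduce will not write into an output directory that already exists.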


Running the wordcount example that ships with Hadoop is a simple way to get an initial feel for how Hadoop works.


