hadoop-0.20.2-examples.jar grep example

1. Run the job
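The run below assumes the input directory already exists in HDFS. If you are following the usual quickstart scenario that this log matches (17 input files, pattern 'dfs[a-z.]+'), input can be created by copying Hadoop's conf directory into HDFS first, for example:

root@ubuntu:/usr/hadoop# bin/hadoop fs -put conf input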
root@ubuntu:/usr/hadoop#  bin/hadoop jar hadoop-0.20.2-examples.jar grep input output 'dfs[a-z.]+'
10/06/20 05:58:07 INFO mapred.FileInputFormat: Total input paths to process : 17
10/06/20 05:58:08 INFO mapred.JobClient: Running job: job_201006200542_0001
10/06/20 05:58:09 INFO mapred.JobClient:  map 0% reduce 0%
10/06/20 05:58:46 INFO mapred.JobClient:  map 11% reduce 0%
10/06/20 05:59:00 INFO mapred.JobClient:  map 23% reduce 0%
10/06/20 05:59:07 INFO mapred.JobClient:  map 35% reduce 7%
10/06/20 05:59:09 INFO mapred.JobClient:  map 47% reduce 7%
10/06/20 05:59:15 INFO mapred.JobClient:  map 58% reduce 11%
10/06/20 05:59:19 INFO mapred.JobClient:  map 64% reduce 11%
10/06/20 05:59:22 INFO mapred.JobClient:  map 76% reduce 19%
10/06/20 05:59:25 INFO mapred.JobClient:  map 88% reduce 19%
10/06/20 05:59:28 INFO mapred.JobClient:  map 100% reduce 21%
10/06/20 05:59:34 INFO mapred.JobClient:  map 100% reduce 31%
10/06/20 05:59:40 INFO mapred.JobClient:  map 100% reduce 100%
10/06/20 05:59:45 INFO mapred.JobClient: Job complete: job_201006200542_0001
10/06/20 05:59:49 INFO mapred.JobClient: Counters: 18
10/06/20 05:59:49 INFO mapred.JobClient:   Job Counters 
10/06/20 05:59:49 INFO mapred.JobClient:     Launched reduce tasks=1
10/06/20 05:59:49 INFO mapred.JobClient:     Launched map tasks=17
10/06/20 05:59:49 INFO mapred.JobClient:     Data-local map tasks=17
10/06/20 05:59:49 INFO mapred.JobClient:   FileSystemCounters
10/06/20 05:59:49 INFO mapred.JobClient:     FILE_BYTES_READ=184
10/06/20 05:59:49 INFO mapred.JobClient:     HDFS_BYTES_READ=21571
10/06/20 05:59:49 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=1008
10/06/20 05:59:49 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=280
10/06/20 05:59:49 INFO mapred.JobClient:   Map-Reduce Framework
10/06/20 05:59:49 INFO mapred.JobClient:     Reduce input groups=7
10/06/20 05:59:49 INFO mapred.JobClient:     Combine output records=8
10/06/20 05:59:49 INFO mapred.JobClient:     Map input records=651
10/06/20 05:59:49 INFO mapred.JobClient:     Reduce shuffle bytes=280
10/06/20 05:59:49 INFO mapred.JobClient:     Reduce output records=7
10/06/20 05:59:49 INFO mapred.JobClient:     Spilled Records=16
10/06/20 05:59:49 INFO mapred.JobClient:     Map output bytes=217
10/06/20 05:59:49 INFO mapred.JobClient:     Map input bytes=21571
10/06/20 05:59:49 INFO mapred.JobClient:     Combine input records=11
10/06/20 05:59:49 INFO mapred.JobClient:     Map output records=11
10/06/20 05:59:49 INFO mapred.JobClient:     Reduce input records=8
10/06/20 05:59:52 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/06/20 05:59:55 INFO mapred.FileInputFormat: Total input paths to process : 1
10/06/20 06:00:00 INFO mapred.JobClient: Running job: job_201006200542_0002
10/06/20 06:00:01 INFO mapred.JobClient:  map 0% reduce 0%
10/06/20 06:00:10 INFO mapred.JobClient:  map 100% reduce 0%
10/06/20 06:00:23 INFO mapred.JobClient:  map 100% reduce 100%
10/06/20 06:00:25 INFO mapred.JobClient: Job complete: job_201006200542_0002
10/06/20 06:00:25 INFO mapred.JobClient: Counters: 18
10/06/20 06:00:25 INFO mapred.JobClient:   Job Counters 
10/06/20 06:00:25 INFO mapred.JobClient:     Launched reduce tasks=1
10/06/20 06:00:25 INFO mapred.JobClient:     Launched map tasks=1
10/06/20 06:00:25 INFO mapred.JobClient:     Data-local map tasks=1
10/06/20 06:00:25 INFO mapred.JobClient:   FileSystemCounters
10/06/20 06:00:25 INFO mapred.JobClient:     FILE_BYTES_READ=158
10/06/20 06:00:25 INFO mapred.JobClient:     HDFS_BYTES_READ=280
10/06/20 06:00:25 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=348
10/06/20 06:00:25 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=96
10/06/20 06:00:25 INFO mapred.JobClient:   Map-Reduce Framework
10/06/20 06:00:25 INFO mapred.JobClient:     Reduce input groups=3
10/06/20 06:00:25 INFO mapred.JobClient:     Combine output records=0
10/06/20 06:00:25 INFO mapred.JobClient:     Map input records=7
10/06/20 06:00:25 INFO mapred.JobClient:     Reduce shuffle bytes=158
10/06/20 06:00:25 INFO mapred.JobClient:     Reduce output records=7
10/06/20 06:00:25 INFO mapred.JobClient:     Spilled Records=14
10/06/20 06:00:25 INFO mapred.JobClient:     Map output bytes=138
10/06/20 06:00:25 INFO mapred.JobClient:     Map input bytes=194
10/06/20 06:00:25 INFO mapred.JobClient:     Combine input records=0
10/06/20 06:00:25 INFO mapred.JobClient:     Map output records=7
10/06/20 06:00:25 INFO mapred.JobClient:     Reduce input records=7
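Seeing two jobs in the log is expected: the grep example chains a "grep-search" job, which counts every match of the regular expression, with a "grep-sort" job, which orders the matches by count in descending order (the WARN about GenericOptionsParser between them is harmless). The sketch below illustrates that chain using the old org.apache.hadoop.mapred API that ships with 0.20.2; it is a simplified rewrite for illustration, not the shipped Grep.java, and the intermediate path name grep-temp is chosen here arbitrarily.

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.mapred.lib.*;

public class GrepSketch {
  public static void main(String[] args) throws Exception {
    Path tmp = new Path("grep-temp");                 // intermediate output between the two jobs

    // Job 1 (grep-search): emit (match, 1) for every regex hit and sum the counts.
    JobConf grepJob = new JobConf(GrepSketch.class);
    grepJob.setJobName("grep-search");
    FileInputFormat.setInputPaths(grepJob, new Path("input"));
    grepJob.setMapperClass(RegexMapper.class);        // emits (matched text, 1)
    grepJob.set(RegexMapper.PATTERN, "dfs[a-z.]+");   // the pattern given on the command line
    grepJob.setCombinerClass(LongSumReducer.class);
    grepJob.setReducerClass(LongSumReducer.class);    // sums the counts per distinct match
    FileOutputFormat.setOutputPath(grepJob, tmp);
    grepJob.setOutputFormat(SequenceFileOutputFormat.class);
    grepJob.setOutputKeyClass(Text.class);
    grepJob.setOutputValueClass(LongWritable.class);
    JobClient.runJob(grepJob);

    // Job 2 (grep-sort): swap (match, count) to (count, match) and sort counts descending.
    JobConf sortJob = new JobConf(GrepSketch.class);
    sortJob.setJobName("grep-sort");
    FileInputFormat.setInputPaths(sortJob, tmp);
    sortJob.setInputFormat(SequenceFileInputFormat.class);
    sortJob.setMapperClass(InverseMapper.class);      // (k, v) -> (v, k)
    sortJob.setNumReduceTasks(1);                     // a single reducer yields one sorted file
    FileOutputFormat.setOutputPath(sortJob, new Path("output"));
    sortJob.setOutputKeyClass(LongWritable.class);
    sortJob.setOutputValueClass(Text.class);
    sortJob.setOutputKeyComparatorClass(LongWritable.DecreasingComparator.class);
    JobClient.runJob(sortJob);

    FileSystem.get(grepJob).delete(tmp, true);        // clean up the intermediate directory
  }
}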

2. View the results:
root@ubuntu:/usr/hadoop# bin/hadoop fs -get output output   # copy the output files from the distributed file system to the local file system
root@ubuntu:/usr/hadoop# cat output/*
cat: output/_logs: Is a directory
3    dfs.class
2    dfs.period
2    dfs.replication
1    dfs.file
1    dfs.servers
1    dfsadmin
1    dfsmetrics.log
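
The cat message about output/_logs is harmless: _logs is the job-history directory the framework writes inside the job's output directory. Copying to the local file system is also optional; the same results can be read directly from HDFS, for example:

root@ubuntu:/usr/hadoop# bin/hadoop fs -cat output/*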
