Running hadoop jar

1. Copy a file to HDFS (note the port number; the second form omits hdfs://localhost:9000 because it is the default filesystem set in core-site.xml):
hadoop jar first-hadoop-0.0.1-SNAPSHOT.jar ch03.FileCopyWithProgress Hello.class hdfs://localhost:9000/user/a.txt

hadoop jar first-hadoop-0.0.1-SNAPSHOT.jar ch03.FileCopyWithProgress Hello.class /user/a.txt
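
The post does not include the source of ch03.FileCopyWithProgress. A minimal sketch, assuming it follows the usual "copy a local file to HDFS while printing progress dots" pattern (only the class and package names are taken from the command above):

package ch03;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public class FileCopyWithProgress {
    public static void main(String[] args) throws Exception {
        String localSrc = args[0]; // e.g. Hello.class
        String dst = args[1];      // e.g. hdfs://localhost:9000/user/a.txt or /user/a.txt

        InputStream in = new BufferedInputStream(new FileInputStream(localSrc));

        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        // create() accepts a Progressable; progress() is called as data is
        // written to the datanode pipeline, so the dots show copy progress.
        OutputStream out = fs.create(new Path(dst), new Progressable() {
            public void progress() {
                System.out.print(".");
            }
        });

        IOUtils.copyBytes(in, out, 4096, true); // 4 KB buffer, close streams when done
    }
}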


2. Run the Hello class inside the jar:
hadoop jar first-hadoop-0.0.1-SNAPSHOT.jar hello.Hello
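
The source of hello.Hello is not shown either. Judging from the mapred.JobClient and mapred.FileInputFormat lines in the output below, it is a job written against the old org.apache.hadoop.mapred API. The following is a purely hypothetical word-count-style driver, assuming the /user/a.txt input uploaded in step 1 and the /user/c.txt output read back in step 4; the real mapper and reducer logic may differ, so the counters below need not match this sketch:

package hello;

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class Hello {

    // Tokenize each input line and emit (word, 1).
    public static class WordMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output,
                        Reporter reporter) throws IOException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                output.collect(word, ONE);
            }
        }
    }

    // Sum the counts for each word.
    public static class WordReducer extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output,
                           Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(Hello.class);
        conf.setJobName("hello");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(WordMapper.class);
        conf.setReducerClass(WordReducer.class);

        // Paths assumed from this post: a.txt uploaded in step 1,
        // c.txt read back in step 4.
        FileInputFormat.setInputPaths(conf, new Path("/user/a.txt"));
        FileOutputFormat.setOutputPath(conf, new Path("/user/c.txt"));

        JobClient.runJob(conf);
    }
}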


3. Results

12/04/12 23:59:12 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/04/12 23:59:12 INFO mapred.FileInputFormat: Total input paths to process : 1
12/04/12 23:59:12 INFO mapred.JobClient: Running job: job_201204122244_0006
12/04/12 23:59:13 INFO mapred.JobClient: map 0% reduce 0%
12/04/12 23:59:28 INFO mapred.JobClient: map 100% reduce 0%
12/04/12 23:59:40 INFO mapred.JobClient: map 100% reduce 100%
12/04/12 23:59:45 INFO mapred.JobClient: Job complete: job_201204122244_0006
12/04/12 23:59:45 INFO mapred.JobClient: Counters: 30
12/04/12 23:59:45 INFO mapred.JobClient: Job Counters
12/04/12 23:59:45 INFO mapred.JobClient: Launched reduce tasks=1
12/04/12 23:59:45 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=18796
12/04/12 23:59:45 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/04/12 23:59:45 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/04/12 23:59:45 INFO mapred.JobClient: Launched map tasks=2
12/04/12 23:59:45 INFO mapred.JobClient: Data-local map tasks=2
12/04/12 23:59:45 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=10227
12/04/12 23:59:45 INFO mapred.JobClient: File Input Format Counters
12/04/12 23:59:45 INFO mapred.JobClient: Bytes Read=1996
12/04/12 23:59:45 INFO mapred.JobClient: File Output Format Counters
12/04/12 23:59:45 INFO mapred.JobClient: Bytes Written=506
12/04/12 23:59:45 INFO mapred.JobClient: FileSystemCounters
12/04/12 23:59:45 INFO mapred.JobClient: FILE_BYTES_READ=696
12/04/12 23:59:45 INFO mapred.JobClient: HDFS_BYTES_READ=2166
12/04/12 23:59:45 INFO mapred.JobClient: FILE_BYTES_WRITTEN=64709
12/04/12 23:59:45 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=506
12/04/12 23:59:45 INFO mapred.JobClient: Map-Reduce Framework
12/04/12 23:59:45 INFO mapred.JobClient: Map output materialized bytes=702
12/04/12 23:59:45 INFO mapred.JobClient: Map input records=23
12/04/12 23:59:45 INFO mapred.JobClient: Reduce shuffle bytes=702
12/04/12 23:59:45 INFO mapred.JobClient: Spilled Records=92
12/04/12 23:59:45 INFO mapred.JobClient: Map output bytes=598
12/04/12 23:59:45 INFO mapred.JobClient: Total committed heap usage (bytes)=337780736
12/04/12 23:59:45 INFO mapred.JobClient: CPU time spent (ms)=1480
12/04/12 23:59:45 INFO mapred.JobClient: Map input bytes=1330
12/04/12 23:59:45 INFO mapred.JobClient: SPLIT_RAW_BYTES=170
12/04/12 23:59:45 INFO mapred.JobClient: Combine input records=0
12/04/12 23:59:45 INFO mapred.JobClient: Reduce input records=46
12/04/12 23:59:45 INFO mapred.JobClient: Reduce input groups=2
12/04/12 23:59:45 INFO mapred.JobClient: Combine output records=0
12/04/12 23:59:45 INFO mapred.JobClient: Physical memory (bytes) snapshot=324440064
12/04/12 23:59:45 INFO mapred.JobClient: Reduce output records=46
12/04/12 23:59:45 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1121619968
12/04/12 23:59:45 INFO mapred.JobClient: Map output records=46


4. Retrieve the result file
-getmerge <src> <localdst>

Merge all part files under the output directory into a single local file:
hadoop fs -getmerge hdfs://localhost:9000/user/c.txt c.txt
hadoop fs -getmerge /user/c.txt c.txt


Or view a single part file directly:
hadoop fs -cat /user/c.txt/part-00000
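
The same part file can also be read programmatically through the FileSystem API. The CatPart class below is a hypothetical sketch (not part of the post), assuming the /user/c.txt output directory used above:

import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Hypothetical helper: print one job output part file to stdout.
public class CatPart {
    public static void main(String[] args) throws Exception {
        String uri = "hdfs://localhost:9000/user/c.txt/part-00000";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        InputStream in = null;
        try {
            in = fs.open(new Path(uri));
            IOUtils.copyBytes(in, System.out, 4096, false); // don't close System.out
        } finally {
            IOUtils.closeStream(in);
        }
    }
}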


5. Hadoop's local data storage directory (DataNode blocks): /tmp/hadoop-root/dfs/data/current