Hadoop in Action: Running MultiFile (Part 2)

Environment: VMware 8.0 and Ubuntu 11.04

Hadoop in Action: Running MultiFile (Part 2) --- a program that extracts different columns of the input data into separate output files

Step 1: Create a project named HadoopTest. The directory structure is shown in the figure below:

Step 2: Create a start.sh script under /home/tanglg1987. Every time the virtual machine boots, it deletes everything under /tmp and reformats the NameNode. The script is as follows:

sudo rm -rf /tmp/*
rm -rf /home/tanglg1987/hadoop-0.20.2/logs
hadoop namenode -format
start-all.sh
hadoop dfsadmin -safemode leave
hadoop fs -mkdir input

Step 3: Make start.sh executable and start the pseudo-distributed Hadoop cluster:

chmod +x /home/tanglg1987/start.sh
./start.sh 

The execution proceeds as follows:

Step 4: Upload the local file to HDFS

Download the patent data from the NBER site at http://data.nber.org/patents/:

http://data.nber.org/patents/apat63_99.zip

hadoop fs -put /home/tanglg1987/apat63_99.txt input

Step 5: Create a new MultiFile2.java with the following code:

package com.baison.action;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.mapred.lib.MultipleOutputs;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
public class MultiFile2 extends Configured implements Tool {    
    public static class MapClass extends MapReduceBase
        implements Mapper<LongWritable, Text, NullWritable, Text> {        
        private MultipleOutputs mos;
        private OutputCollector<NullWritable, Text> collector;        
        public void configure(JobConf conf) {
            mos = new MultipleOutputs(conf);
        }        
        public void map(LongWritable key, Text value,
                        OutputCollector<NullWritable, Text> output,
                        Reporter reporter) throws IOException {                        
            String[] arr = value.toString().split(",", -1);
            String chrono = arr[0] + "," + arr[1] + "," + arr[2];
            String geo    = arr[0] + "," + arr[4] + "," + arr[5];           
            collector = mos.getCollector("chrono", reporter);
            collector.collect(NullWritable.get(), new Text(chrono));
            collector = mos.getCollector("geo", reporter);
            collector.collect(NullWritable.get(), new Text(geo));
        }
        public void close() throws IOException {
            mos.close();
        }
    }
    public int run(String[] args) throws Exception {
        // Configuration processed by ToolRunner
        Configuration conf = getConf();       
        // Create a JobConf using the processed conf
        JobConf job = new JobConf(conf, MultiFile2.class);       
        // Process custom command-line options
        Path in = new Path(args[0]);
        Path out = new Path(args[1]);
        FileInputFormat.setInputPaths(job, in);
        FileOutputFormat.setOutputPath(job, out);        
        // Specify various job-specific parameters     
        job.setJobName("MultiFile");
        job.setMapperClass(MapClass.class);       
        job.setInputFormat(TextInputFormat.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        job.setNumReduceTasks(0);        
        MultipleOutputs.addNamedOutput(job,
                                       "chrono",
                                       TextOutputFormat.class,
                                       NullWritable.class,
                                       Text.class);
        MultipleOutputs.addNamedOutput(job,
                                       "geo",
                                       TextOutputFormat.class,
                                       NullWritable.class,
                                       Text.class);       
        // Submit the job, then poll for progress until the job is complete
        JobClient.runJob(job);       
        return 0;
    }    
    public static void main(String[] args) throws Exception {
        // Let ToolRunner handle generic command-line options 
        String[] arg = {"hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt", "hdfs://localhost:9100/user/tanglg1987/output"};
        int res = ToolRunner.run(new Configuration(), new MultiFile2(), arg);
        System.exit(res);
    }
}
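The heart of MapClass.map() is plain string manipulation: each comma-separated patent record is split, then columns 0-2 are recombined into a chronological view and columns 0, 4, and 5 into a geographical view. The sketch below exercises that logic outside Hadoop; the ExtractDemo class and the sample record are illustrative additions (the column names follow the NBER apat63_99 schema, where columns 4 and 5 hold the country and state), not part of the original program.

```java
// Standalone sketch of the column extraction done in MapClass.map().
public class ExtractDemo {

    // Chronological view: columns 0, 1, 2 (patent number, grant year, grant date).
    static String chrono(String line) {
        String[] arr = line.split(",", -1);
        return arr[0] + "," + arr[1] + "," + arr[2];
    }

    // Geographical view: columns 0, 4, 5 (patent number, country, state).
    static String geo(String line) {
        String[] arr = line.split(",", -1);
        return arr[0] + "," + arr[4] + "," + arr[5];
    }

    public static void main(String[] args) {
        // Hypothetical record in the apat63_99.txt layout; empty fields are
        // preserved because split() is called with limit -1.
        String line = "3070801,1963,1096,,\"BE\",\"\",,1,,269";
        System.out.println(chrono(line)); // 3070801,1963,1096
        System.out.println(geo(line));    // 3070801,"BE",""
    }
}
```

Note the `-1` limit passed to split(): without it, trailing empty fields would be dropped and records with missing values at the end would throw ArrayIndexOutOfBoundsException.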

Step 6: Run On Hadoop. The run log is as follows:

12/10/22 21:25:59 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/10/22 21:25:59 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/10/22 21:25:59 INFO mapred.FileInputFormat: Total input paths to process : 1
12/10/22 21:25:59 INFO mapred.JobClient: Running job: job_local_0001
12/10/22 21:26:00 INFO mapred.FileInputFormat: Total input paths to process : 1
12/10/22 21:26:00 INFO mapred.MapTask: numReduceTasks: 0
12/10/22 21:26:00 INFO mapred.JobClient:  map 0% reduce 0%
12/10/22 21:26:03 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:26:04 INFO mapred.JobClient:  map 17% reduce 0%
12/10/22 21:26:06 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:26:07 INFO mapred.JobClient:  map 37% reduce 0%
12/10/22 21:26:09 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:26:10 INFO mapred.JobClient:  map 58% reduce 0%
12/10/22 21:26:12 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:26:12 INFO mapred.JobClient:  map 84% reduce 0%
12/10/22 21:26:14 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/10/22 21:26:14 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:26:14 INFO mapred.TaskRunner: Task attempt_local_0001_m_000000_0 is allowed to commit now
12/10/22 21:26:15 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:26:15 INFO mapred.JobClient:  map 100% reduce 0%
12/10/22 21:26:15 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_m_000000_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/22 21:26:15 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:26:15 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.
12/10/22 21:26:15 INFO mapred.MapTask: numReduceTasks: 0
12/10/22 21:26:18 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:67108864+67108864
12/10/22 21:26:19 INFO mapred.JobClient:  map 63% reduce 0%
12/10/22 21:26:21 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:67108864+67108864
12/10/22 21:26:22 INFO mapred.JobClient:  map 80% reduce 0%
12/10/22 21:26:24 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:67108864+67108864
12/10/22 21:26:25 INFO mapred.JobClient:  map 93% reduce 0%
12/10/22 21:26:27 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:67108864+67108864
12/10/22 21:26:28 INFO mapred.JobClient:  map 100% reduce 0%
12/10/22 21:26:28 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
12/10/22 21:26:28 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:67108864+67108864
12/10/22 21:26:28 INFO mapred.TaskRunner: Task attempt_local_0001_m_000001_0 is allowed to commit now
12/10/22 21:26:29 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_m_000001_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/22 21:26:29 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:67108864+67108864
12/10/22 21:26:29 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000001_0' done.
12/10/22 21:26:29 INFO mapred.MapTask: numReduceTasks: 0
12/10/22 21:26:32 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:134217728+67108864
12/10/22 21:26:32 INFO mapred.JobClient:  map 78% reduce 0%
12/10/22 21:26:35 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:134217728+67108864
12/10/22 21:26:35 INFO mapred.JobClient:  map 89% reduce 0%
12/10/22 21:26:38 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:134217728+67108864
12/10/22 21:26:38 INFO mapred.JobClient:  map 99% reduce 0%
12/10/22 21:26:40 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000002_0 is done. And is in the process of commiting
12/10/22 21:26:40 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:134217728+67108864
12/10/22 21:26:40 INFO mapred.TaskRunner: Task attempt_local_0001_m_000002_0 is allowed to commit now
12/10/22 21:26:41 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:134217728+67108864
12/10/22 21:26:41 INFO mapred.JobClient:  map 100% reduce 0%
12/10/22 21:26:41 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_m_000002_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/22 21:26:41 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:134217728+67108864
12/10/22 21:26:41 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000002_0' done.
12/10/22 21:26:41 INFO mapred.MapTask: numReduceTasks: 0
12/10/22 21:26:44 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:201326592+35576587
12/10/22 21:26:45 INFO mapred.JobClient:  map 86% reduce 0%
12/10/22 21:26:47 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:201326592+35576587
12/10/22 21:26:48 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000003_0 is done. And is in the process of commiting
12/10/22 21:26:48 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:201326592+35576587
12/10/22 21:26:48 INFO mapred.TaskRunner: Task attempt_local_0001_m_000003_0 is allowed to commit now
12/10/22 21:26:48 INFO mapred.JobClient:  map 100% reduce 0%
12/10/22 21:26:48 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_m_000003_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/22 21:26:48 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:201326592+35576587
12/10/22 21:26:48 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000003_0' done.
12/10/22 21:26:49 INFO mapred.JobClient: Job complete: job_local_0001
12/10/22 21:26:49 INFO mapred.JobClient: Counters: 8
12/10/22 21:26:49 INFO mapred.JobClient:   FileSystemCounters
12/10/22 21:26:49 INFO mapred.JobClient:     FILE_BYTES_READ=66764
12/10/22 21:26:49 INFO mapred.JobClient:     HDFS_BYTES_READ=639593233
12/10/22 21:26:49 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=137016
12/10/22 21:26:49 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=639556540
12/10/22 21:26:49 INFO mapred.JobClient:   Map-Reduce Framework
12/10/22 21:26:49 INFO mapred.JobClient:     Map input records=2923923
12/10/22 21:26:49 INFO mapred.JobClient:     Spilled Records=0
12/10/22 21:26:49 INFO mapred.JobClient:     Map input bytes=236903179
12/10/22 21:26:49 INFO mapred.JobClient:     Map output records=2923923

Step 7: View the result set. Since the job registers two named outputs and runs with zero reduce tasks, MultipleOutputs writes files such as chrono-m-00000 and geo-m-00000 into the output directory. The results are as follows:

 
