Hadoop in Action: Running MultiFile (Part 1)

Environment: VMware 8.0 and Ubuntu 11.04

Hadoop in Action: Running MultiFile (Part 1) --- splitting patent metadata into multiple directories by country

Step 1: Create a project named HadoopTest.

Step 2: Create a start.sh script under /home/tanglg1987. Each time the virtual machine is started, the script deletes everything under /tmp, reformats the NameNode, and brings the cluster back up. The script is as follows:

#!/bin/bash
# Wipe /tmp and the old logs, reformat the NameNode, then start the pseudo-distributed cluster.
sudo rm -rf /tmp/*
rm -rf /home/tanglg1987/hadoop-0.20.2/logs
hadoop namenode -format
start-all.sh
hadoop dfsadmin -safemode leave
hadoop fs -mkdir input

Step 3: Make start.sh executable and start the Hadoop pseudo-distributed cluster:

chmod +x /home/tanglg1987/start.sh
./start.sh 


Step 4: Upload the local file to HDFS

Download the patent data from the NBER patent data site, http://data.nber.org/patents/:

http://data.nber.org/patents/apat63_99.zip
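
A minimal command-line sketch for fetching and extracting the data (this assumes wget and unzip are installed on the Ubuntu guest; the archive expands to apat63_99.txt, a little over 230 MB uncompressed, which matches the Map input bytes counter in the job log below):

cd /home/tanglg1987
wget http://data.nber.org/patents/apat63_99.zip
unzip apat63_99.zip        # extracts apat63_99.txt

Then upload the extracted file to HDFS: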

hadoop fs -put /home/tanglg1987/apat63_99.txt input
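
It is worth confirming the upload and previewing a few records: the country code sits in the fifth comma-separated column, wrapped in double quotes (e.g. "US"), which is what the arr[4].substring(1, 3) call in Step 5 relies on. A minimal sketch:

hadoop fs -ls input
hadoop fs -cat input/apat63_99.txt | head -n 3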

Step 5: Create a new MultiFile.java with the following code:

package com.baison.action;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
public class MultiFile extends Configured implements Tool {
    // Identity mapper: emit every input line unchanged; PartitionByCountryMTOF
    // decides which output file each record ends up in.
    public static class MapClass extends MapReduceBase
        implements Mapper<LongWritable, Text, NullWritable, Text> {
        public void map(LongWritable key, Text value,
                        OutputCollector<NullWritable, Text> output,
                        Reporter reporter) throws IOException {
            output.collect(NullWritable.get(), value);
        }
    }
    // Routes each record to <country>/<leaf name>, where the third argument is the
    // leaf file name Hadoop would use by default (e.g. part-00000).
    public static class PartitionByCountryMTOF
        extends MultipleTextOutputFormat<NullWritable, Text> {
        @Override
        protected String generateFileNameForKeyValue(NullWritable key,
                                                     Text value,
                                                     String filename) {
            String[] arr = value.toString().split(",", -1);
            // The fifth CSV column holds the country code wrapped in quotes, e.g. "US";
            // substring(1, 3) strips the leading quote and keeps the two-letter code.
            String country = arr[4].substring(1, 3);
            return country + "/" + filename;
        }
    }
    public int run(String[] args) throws Exception {
        // Configuration processed by ToolRunner
        Configuration conf = getConf();        
        // Create a JobConf using the processed conf
        JobConf job = new JobConf(conf, MultiFile.class);       
        // Process custom command-line options
        Path in = new Path(args[0]);
        Path out = new Path(args[1]);
        FileInputFormat.setInputPaths(job, in);
        FileOutputFormat.setOutputPath(job, out);        
        // Specify various job-specific parameters     
        job.setJobName("MultiFile");
        job.setMapperClass(MapClass.class);        
        job.setInputFormat(TextInputFormat.class);
        job.setOutputFormat(PartitionByCountryMTOF.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);        
        job.setNumReduceTasks(0);       
        // Submit the job, then poll for progress until the job is complete
        JobClient.runJob(job);       
        return 0;
    }   
    public static void main(String[] args) throws Exception {
        // Let ToolRunner handle generic command-line options.
        // Input and output paths are hard-coded HDFS URIs; adjust them for your environment.
        String[] arg = { "hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt",
                         "hdfs://localhost:9100/user/tanglg1987/output" };
        int res = ToolRunner.run(new Configuration(), new MultiFile(), arg);
        System.exit(res);
    }
}
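
The next step runs the job from Eclipse ("Run On Hadoop"), which is why the log warns that no job jar was set. As an alternative, the class can be compiled and submitted from the shell; a minimal sketch, assuming the Hadoop 0.20.2 install sits at /home/tanglg1987/hadoop-0.20.2 and MultiFile.java is in the current directory (the jar name multifile.jar is arbitrary):

mkdir -p classes
javac -classpath /home/tanglg1987/hadoop-0.20.2/hadoop-0.20.2-core.jar -d classes MultiFile.java
jar cf multifile.jar -C classes .
hadoop jar multifile.jar com.baison.action.MultiFile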

Step 6: Run On Hadoop. The job log is as follows:

12/10/22 21:17:43 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/10/22 21:17:43 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/10/22 21:17:43 INFO mapred.FileInputFormat: Total input paths to process : 1
12/10/22 21:17:44 INFO mapred.JobClient: Running job: job_local_0001
12/10/22 21:17:44 INFO mapred.FileInputFormat: Total input paths to process : 1
12/10/22 21:17:44 INFO mapred.MapTask: numReduceTasks: 0
12/10/22 21:17:45 INFO mapred.JobClient:  map 0% reduce 0%
12/10/22 21:17:47 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:17:48 INFO mapred.JobClient:  map 19% reduce 0%
12/10/22 21:17:50 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:17:51 INFO mapred.JobClient:  map 40% reduce 0%
12/10/22 21:17:53 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:17:54 INFO mapred.JobClient:  map 62% reduce 0%
12/10/22 21:17:56 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:17:57 INFO mapred.JobClient:  map 89% reduce 0%
12/10/22 21:17:59 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:18:00 INFO mapred.JobClient:  map 100% reduce 0%
12/10/22 21:18:01 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/10/22 21:18:01 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:18:01 INFO mapred.TaskRunner: Task attempt_local_0001_m_000000_0 is allowed to commit now
12/10/22 21:18:01 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_m_000000_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/22 21:18:01 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:0+67108864
12/10/22 21:18:01 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.
12/10/22 21:18:01 INFO mapred.MapTask: numReduceTasks: 0
12/10/22 21:18:04 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:67108864+67108864
12/10/22 21:18:05 INFO mapred.JobClient:  map 64% reduce 0%
12/10/22 21:18:07 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:67108864+67108864
12/10/22 21:18:08 INFO mapred.JobClient:  map 81% reduce 0%
12/10/22 21:18:10 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:67108864+67108864
12/10/22 21:18:11 INFO mapred.JobClient:  map 85% reduce 0%
12/10/22 21:18:13 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:67108864+67108864
12/10/22 21:18:14 INFO mapred.JobClient:  map 99% reduce 0%
12/10/22 21:18:16 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
12/10/22 21:18:16 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:67108864+67108864
12/10/22 21:18:16 INFO mapred.TaskRunner: Task attempt_local_0001_m_000001_0 is allowed to commit now
12/10/22 21:18:16 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_m_000001_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/22 21:18:16 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:67108864+67108864
12/10/22 21:18:16 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000001_0' done.
12/10/22 21:18:16 INFO mapred.MapTask: numReduceTasks: 0
12/10/22 21:18:17 INFO mapred.JobClient:  map 100% reduce 0%
12/10/22 21:18:20 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:134217728+67108864
12/10/22 21:18:20 INFO mapred.JobClient:  map 74% reduce 0%
12/10/22 21:18:23 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:134217728+67108864
12/10/22 21:18:23 INFO mapred.JobClient:  map 82% reduce 0%
12/10/22 21:18:26 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:134217728+67108864
12/10/22 21:18:26 INFO mapred.JobClient:  map 90% reduce 0%
12/10/22 21:18:29 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:134217728+67108864
12/10/22 21:18:29 INFO mapred.JobClient:  map 100% reduce 0%
12/10/22 21:18:29 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000002_0 is done. And is in the process of commiting
12/10/22 21:18:29 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:134217728+67108864
12/10/22 21:18:29 INFO mapred.TaskRunner: Task attempt_local_0001_m_000002_0 is allowed to commit now
12/10/22 21:18:30 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_m_000002_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/22 21:18:30 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:134217728+67108864
12/10/22 21:18:30 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000002_0' done.
12/10/22 21:18:30 INFO mapred.MapTask: numReduceTasks: 0
12/10/22 21:18:33 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:201326592+35576587
12/10/22 21:18:34 INFO mapred.JobClient:  map 85% reduce 0%
12/10/22 21:18:36 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:201326592+35576587
12/10/22 21:18:37 INFO mapred.JobClient:  map 97% reduce 0%
12/10/22 21:18:38 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000003_0 is done. And is in the process of commiting
12/10/22 21:18:38 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:201326592+35576587
12/10/22 21:18:38 INFO mapred.TaskRunner: Task attempt_local_0001_m_000003_0 is allowed to commit now
12/10/22 21:18:38 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_m_000003_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/22 21:18:38 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/apat63_99.txt:201326592+35576587
12/10/22 21:18:38 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000003_0' done.
12/10/22 21:18:39 INFO mapred.JobClient:  map 100% reduce 0%
12/10/22 21:18:39 INFO mapred.JobClient: Job complete: job_local_0001
12/10/22 21:18:39 INFO mapred.JobClient: Counters: 8
12/10/22 21:18:39 INFO mapred.JobClient:   FileSystemCounters
12/10/22 21:18:39 INFO mapred.JobClient:     FILE_BYTES_READ=66764
12/10/22 21:18:39 INFO mapred.JobClient:     HDFS_BYTES_READ=639593233
12/10/22 21:18:39 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=137016
12/10/22 21:18:39 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=639556540
12/10/22 21:18:39 INFO mapred.JobClient:   Map-Reduce Framework
12/10/22 21:18:39 INFO mapred.JobClient:     Map input records=2923923
12/10/22 21:18:39 INFO mapred.JobClient:     Spilled Records=0
12/10/22 21:18:39 INFO mapred.JobClient:     Map input bytes=236903179
12/10/22 21:18:39 INFO mapred.JobClient:     Map output records=2923923

Step 7: View the result set.
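
The output directory should now contain one subdirectory per country code. A minimal sketch for inspecting it (US is just an example country; file names such as part-00000 are the default leaf names that PartitionByCountryMTOF prefixes with the country directory):

hadoop fs -ls output
hadoop fs -ls output/US
hadoop fs -cat output/US/part-00000 | head -n 5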
