HelloMapReduce

Development environment: eclipse-jee-indigo-SR2-linux-gtk + hadoop-0.20.205.0

1. Install Eclipse: simply extract eclipse-jee-indigo-SR2-linux-gtk.tar.gz. If Eclipse complains at startup that it cannot find a JRE or JDK, go into the /eclipse directory and run the following commands:

mkdir jre
cd jre
ln -s $JAVA_HOME/bin    # creates jre/bin -> $JAVA_HOME/bin, so Eclipse can find jre/bin/java

2. Install the Hadoop Eclipse plugin: copy the jar file from the /hadoop-0.20.205.0/contrib/eclipse-plugin/ directory into /eclipse/plugins, then restart Eclipse.
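For example (the file name below is the one shipped with this release; if your distribution names it differently, adjust accordingly):

cp /hadoop-0.20.205.0/contrib/eclipse-plugin/hadoop-eclipse-plugin-0.20.205.0.jar /eclipse/plugins/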

 

3. Configure the Hadoop installation directory: in Eclipse, open Window -> Preferences -> Hadoop Map/Reduce and set it to the Hadoop installation directory.

 

4. Create a Hadoop project

File -> New -> Other, find Map/Reduce Project, click Next, enter a project name such as HelloMapReduce, and click Finish. The project is now created.

5. Write the Java file

   The program finds the maximum value for each year in data.txt; each line of data.txt holds a year and its corresponding value.

 

package com.nexusy.hadoop;

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class HelloMapReduce {

	// Mapper: parses lines of the form "YYYY value" and emits (year, value) pairs
	static class MyMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

		@Override
		protected void map(LongWritable key, Text value, Context context)
				throws IOException, InterruptedException {
			String line = value.toString();
			String year = line.substring(0, 4); // first four characters are the year
			String num = line.substring(5);     // everything after the space is the value
			context.write(new Text(year), new IntWritable(Integer.parseInt(num)));
		}

	}

	// Reducer: receives all values for a given year and keeps the maximum
	static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

		@Override
		protected void reduce(Text key, Iterable<IntWritable> values, Context context)
				throws IOException, InterruptedException {
			int maxValue = Integer.MIN_VALUE;
			for (IntWritable value : values) {
				maxValue = Math.max(maxValue, value.get());
			}
			context.write(key, new IntWritable(maxValue));
		}

	}

	public static void main(String[] args) throws Exception {
		Job job = new Job();
		job.setJarByClass(HelloMapReduce.class);

		// Input file and output directory are relative to the working directory;
		// the output directory must not exist before the job runs.
		FileInputFormat.addInputPath(job, new Path("data.txt"));
		FileOutputFormat.setOutputPath(job, new Path("output"));

		job.setMapperClass(MyMapper.class);
		job.setReducerClass(MyReducer.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(IntWritable.class);

		// Submit the job and wait; exit 0 on success, 1 on failure
		System.exit(job.waitForCompletion(true) ? 0 : 1);
	}

}
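Since taking a maximum is commutative and associative, MyReducer can also be registered as a combiner, so each map task pre-aggregates its own output before the shuffle. A minimal sketch, adding one line to main() before waitForCompletion:

// Safe here: max() is associative, and the combiner's input/output types
// (Text, IntWritable) match the map output types.
job.setCombinerClass(MyReducer.class);

With the combiner enabled, the Combine input/output record counters shown in step 8 would become non-zero.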

 

6. Put a data.txt file in the project root directory with the following contents:

2011 33
2011 600
2011 77
2011 33
2011 665
2011 123
2012 187
2012 25
2012 753
2012 134
2012 234
2012 575
2012 332
2012 100

 

7. Run the .java file created in step 5

Right-click the file and choose Run As -> Run On Hadoop, select Define a new hadoop server location, click Next, enter a Location Name, and click Finish.
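Alternatively, the same class can be packaged into a jar and launched with the hadoop command-line tool (the jar name below is illustrative, assuming the project has been exported as a runnable jar):

hadoop jar HelloMapReduce.jar com.nexusy.hadoop.HelloMapReduce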

 

8. The console output is as follows:

12/03/04 16:46:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/03/04 16:46:30 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/03/04 16:46:30 INFO input.FileInputFormat: Total input paths to process : 1
12/03/04 16:46:30 INFO mapred.JobClient: Running job: job_local_0001
12/03/04 16:46:31 INFO util.ProcessTree: setsid exited with exit code 0
12/03/04 16:46:31 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@138c63
12/03/04 16:46:31 INFO mapred.MapTask: io.sort.mb = 100
12/03/04 16:46:31 INFO mapred.JobClient:  map 0% reduce 0%
12/03/04 16:46:36 INFO mapred.MapTask: data buffer = 79691776/99614720
12/03/04 16:46:36 INFO mapred.MapTask: record buffer = 262144/327680
12/03/04 16:46:36 INFO mapred.MapTask: Starting flush of map output
12/03/04 16:46:36 INFO mapred.MapTask: Finished spill 0
12/03/04 16:46:36 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/03/04 16:46:39 INFO mapred.LocalJobRunner: 
12/03/04 16:46:39 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
12/03/04 16:46:39 INFO mapred.JobClient:  map 100% reduce 0%
12/03/04 16:46:39 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1cef4f7
12/03/04 16:46:39 INFO mapred.LocalJobRunner: 
12/03/04 16:46:39 INFO mapred.Merger: Merging 1 sorted segments
12/03/04 16:46:39 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 156 bytes
12/03/04 16:46:39 INFO mapred.LocalJobRunner: 
12/03/04 16:46:39 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
12/03/04 16:46:39 INFO mapred.LocalJobRunner: 
12/03/04 16:46:39 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
12/03/04 16:46:39 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to output
12/03/04 16:46:42 INFO mapred.LocalJobRunner: reduce > reduce
12/03/04 16:46:42 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
12/03/04 16:46:43 INFO mapred.JobClient:  map 100% reduce 100%
12/03/04 16:46:43 INFO mapred.JobClient: Job complete: job_local_0001
12/03/04 16:46:43 INFO mapred.JobClient: Counters: 20
12/03/04 16:46:43 INFO mapred.JobClient:   File Output Format Counters 
12/03/04 16:46:43 INFO mapred.JobClient:     Bytes Written=30
12/03/04 16:46:43 INFO mapred.JobClient:   FileSystemCounters
12/03/04 16:46:43 INFO mapred.JobClient:     FILE_BYTES_READ=11442
12/03/04 16:46:43 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=78488
12/03/04 16:46:43 INFO mapred.JobClient:   File Input Format Counters 
12/03/04 16:46:43 INFO mapred.JobClient:     Bytes Read=121
12/03/04 16:46:43 INFO mapred.JobClient:   Map-Reduce Framework
12/03/04 16:46:43 INFO mapred.JobClient:     Map output materialized bytes=160
12/03/04 16:46:43 INFO mapred.JobClient:     Map input records=14
12/03/04 16:46:43 INFO mapred.JobClient:     Reduce shuffle bytes=0
12/03/04 16:46:43 INFO mapred.JobClient:     Spilled Records=28
12/03/04 16:46:43 INFO mapred.JobClient:     Map output bytes=126
12/03/04 16:46:43 INFO mapred.JobClient:     Total committed heap usage (bytes)=279183360
12/03/04 16:46:43 INFO mapred.JobClient:     CPU time spent (ms)=0
12/03/04 16:46:43 INFO mapred.JobClient:     SPLIT_RAW_BYTES=113
12/03/04 16:46:43 INFO mapred.JobClient:     Combine input records=0
12/03/04 16:46:43 INFO mapred.JobClient:     Reduce input records=14
12/03/04 16:46:43 INFO mapred.JobClient:     Reduce input groups=2
12/03/04 16:46:43 INFO mapred.JobClient:     Combine output records=0
12/03/04 16:46:43 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
12/03/04 16:46:43 INFO mapred.JobClient:     Reduce output records=2
12/03/04 16:46:43 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
12/03/04 16:46:43 INFO mapred.JobClient:     Map output records=14
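The counters match the data: Map input records=14 corresponds to the 14 lines of data.txt, Reduce input groups=2 to the two distinct years, and Reduce output records=2 to one maximum per year.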

 

9. Finally, an output directory is generated in the project root, containing two files: _SUCCESS and part-r-00000.
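Given the data above, part-r-00000 should contain one line per year, with key and value separated by a tab (TextOutputFormat's default separator):

2011	665
2012	753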
   

 

PS:

1. If the JVM has too little memory, MapReduce tasks cannot run. My virtual machine originally had only 512MB of RAM; with Hadoop and Eclipse started, only a few dozen MB remained free, and the job failed with an OutOfMemoryError. After increasing the VM's memory to 2GB it ran normally.

2. When starting Hadoop in pseudo-distributed mode, run hadoop namenode -format first, and only then start-dfs.sh and start-mapred.sh.
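That is:

hadoop namenode -format
start-dfs.sh
start-mapred.sh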
 
 
