There are several ways to run a computation on Hadoop:
- Package the MapReduce job into a jar, upload it to the server, and launch it from the command line
- Submit the MapReduce job to the Hadoop cluster from Java
- Copy the server's Hadoop configuration to the local machine, point hosts entries at the NameNode and ResourceManager, and run hadoop jar locally
- Schedule the job, calling a shell script on a timer that runs the Java task
- Eclipse's Hadoop plugin
This article covers the fifth approach.
Setting up a Hadoop 2.7.3 development environment in Eclipse
- Download and build hadoop-eclipse-plugin-2.7.3.jar
- Copy hadoop-eclipse-plugin-2.7.3.jar into the plugins directory under the MyEclipse installation directory and restart MyEclipse
- Download Hadoop 2.7.3 and unpack it to c:\tools\hadoop-2.7.3
- Add a hosts entry pointing master.hadoop at the Hadoop cluster
- Configure the Hadoop location in Eclipse
Create a new Hadoop location
Parameter notes:
Map/Reduce(V2) Master
- Host: the Hadoop master host address
- Port: matches the JobTracker address in mapred-site.xml
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>master.hadoop:50020</value>
</property>
DFS Master
- Port: matches the fs.defaultFS port in core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://master.hadoop:9000</value>
</property>
- User name: the Windows user name
Advanced Parameters - set hadoop.tmp.dir to the hadoop.tmp.dir value from core-site.xml
In addition, change the dfs.permissions parameter in hdfs-site.xml to allow the connection (this disables HDFS permission checking, so only do it on a development cluster):
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
A working directory also needs to be created on HDFS:
hdfs dfs -mkdir -p /user/hadoop/input
(Do not pre-create the output directory: MapReduce refuses to start if the output path already exists, so remove it before each run with hdfs dfs -rm -r /user/hadoop/output.)
Save the configuration and restart MyEclipse. If the file structure shown below appears, the setup succeeded.
(The following problem came up during testing and was not resolved; it is left for later and can be ignored for now.)
Copy hadoop.dll and winutils
- Download hadoop.dll and winutils.exe into the Windows hadoop/bin directory
- Also copy hadoop.dll into windows->system32
- Create the environment variable (HADOOP_HOME, pointing at the Hadoop directory); alternatively, set it in code as in the sketch below
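If you prefer not to create the environment variable, the same effect can be had in code, as the commented-out line in WordCount.java below also notes. A minimal sketch, assuming Hadoop was unpacked to c:\tools\hadoop-2.7.3:
// Equivalent to setting the HADOOP_HOME environment variable; must run
// before the first Hadoop class initializes so winutils.exe can be found.
System.setProperty("hadoop.home.dir", "C:\\tools\\hadoop-2.7.3");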
Create a test project
Create a test file on Hadoop
vi test.txt
hello
world
who
are
you
thank you.
You are welcome.
hdfs dfs -mkdir -p /user/hadoop/input
hdfs dfs -put test.txt /user/hadoop/input
hdfs dfs -chmod 777 /user/hadoop/
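Before running the job it can help to confirm that the Windows client can actually reach HDFS. A minimal sketch using the standard org.apache.hadoop.fs API, assuming the fs.defaultFS URI from the core-site.xml shown above:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://master.hadoop:9000");
        // List the test input directory created above; an exception here
        // means the client cannot reach the NameNode.
        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/user/hadoop/input"))) {
            System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
        }
        fs.close();
    }
}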
Create a MapReduce project
Create the following file on the project's classpath (typically under src) so that client-side job logs appear in the Eclipse console:
log4j.properties
log4j.rootLogger=DEBUG,stdout,R
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p - %m%n
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=mapreduce_test.log
log4j.appender.R.MaxFileSize=1MB
log4j.appender.R.MaxBackupIndex=1
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%p %t %c - %m%n
log4j.logger.com.codefutures=INFO
WordCount.java
package com.test.hadoop;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class WordCount {
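// Mapper: splits each input line into tokens and emits (word, 1) for each one.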
public static class TokenizerMapper extends
Mapper<Object, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(Object key, Text value, Context context)
throws IOException, InterruptedException {
StringTokenizer itr = new StringTokenizer(value.toString());
while (itr.hasMoreTokens()) {
word.set(itr.nextToken());
context.write(word, one);
}
}
}
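// Reducer (also used as the combiner): sums the counts emitted for each word.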
public static class IntSumReducer extends
Reducer<Text, IntWritable, Text, IntWritable> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<IntWritable> values,
Context context) throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
result.set(sum);
context.write(key, result);
}
}
public static void main(String[] args) throws Exception {
// System.setProperty("hadoop.home.dir", "C:\\tools\\hadoop-2.7.3"); // with this line the environment variable is not needed
Configuration conf = new Configuration();
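// Split generic Hadoop options (-D, -fs, ...) from the job's own <in> <out> arguments.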
String[] otherArgs = new GenericOptionsParser(conf, args)
.getRemainingArgs();
if (otherArgs.length != 2) {
System.err.println(otherArgs.length);
System.err.println("Usage: wordcount <in> <out>");
System.exit(2);
}
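// Wire up the job: mapper, combiner, reducer, and the output key/value types.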
Job job = Job.getInstance(conf, "word count");
job.setJarByClass(WordCount.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
Run configuration
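The run configuration's program arguments supply the <in> and <out> paths. With the directories used above they might look like this (remember that the output directory must not already exist):
hdfs://master.hadoop:9000/user/hadoop/input hdfs://master.hadoop:9000/user/hadoop/output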
Run
Right-click the testMapReduce project and choose Run As -> Run on Hadoop.
Check the result
However, the submitted job did not appear at http://master.hadoop:8088/cluster/apps.
Copy the configuration files
Copy core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml from the cluster into the project, so that the client picks up the cluster's settings instead of the local defaults, and modify the main function:
Configuration conf = new Configuration();
conf.addResource("core-site.xml");
conf.addResource("hdfs-site.xml");
conf.addResource("mapred-site.xml");
conf.addResource("yarn-site.xml");
Running again hit an error; it was resolved by following
http://blog.csdn.net/xundh/article/details/46572183#t7
The job's progress now shows up under All Applications.
The run then failed with the exception:
Exception message: /bin/bash: line 0: fg: no job control
Fix
Add to the project's core-site.xml:
<property>
<name>mapreduce.app-submission.cross-platform</name>
<value>true</value>
</property>
or set it in the main function:
conf.set("mapreduce.app-submission.cross-platform", "true");
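Putting the two fixes together, the configuration part of main would look something like this (a sketch; it assumes the four cluster XML files sit at the root of the project's classpath):
Configuration conf = new Configuration();
// Load the cluster's own settings so the job is submitted to the real
// ResourceManager rather than executed by the local job runner.
conf.addResource("core-site.xml");
conf.addResource("hdfs-site.xml");
conf.addResource("mapred-site.xml");
conf.addResource("yarn-site.xml");
// Required when submitting from Windows to a Linux cluster; without it the
// container launch script dies with "/bin/bash: line 0: fg: no job control".
conf.set("mapreduce.app-submission.cross-platform", "true");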