1. Create a Maven project in IntelliJ
Click File -> New -> Project, select Maven in the dialog, choose the JDK version you have installed, and click Next.
2. Fill in the Maven GroupId and ArtifactId
Fill these in however you like for your project, then click Next.
This gives you an empty project.
Name the program WordCount. It is the standard example found all over the web: it counts how many times each word appears in a file.
3. Set the program's compiler target
Open IntelliJ's Preferences, go to Build, Execution, Deployment -> Compiler -> Java Compiler,
and change the Target bytecode version for WordCount to your JDK version (mine is 1.8).
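If you prefer, you can also pin the language level in pom.xml so the setting survives Maven re-imports; a minimal sketch for JDK 1.8 (adjust to your version):

<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
</properties>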
4. Configure dependencies
Edit pom.xml to add them.
1) Add the Hadoop dependencies
The basic dependencies are hadoop-common and hadoop-client; if you need to read and write HDFS,
you also need hadoop-hdfs; if you need to read and write HBase, you additionally need hbase-client.
Add the following at the end of the <project> element:
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.3</version>
    </dependency>
</dependencies>
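The snippet above covers HDFS. If you also need to read and write HBase, add hbase-client as well; a sketch (the version here is an assumption, pick one that matches your cluster):

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.2.6</version>
</dependency>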
After you edit pom.xml, IntelliJ shows a prompt in the top-right corner saying Maven projects need to be imported; click Import Changes to update the dependencies, or click Enable Auto-Import.
Finally, my complete pom.xml looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>me.jinkun</groupId>
    <artifactId>mapreducer</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.7.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.7.3</version>
        </dependency>
    </dependencies>
</project>
5. Write the main program
RunJob.java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RunJob {
    public static void main(String[] args) throws Exception {
        //System.setProperty("hadoop.home.dir", "D:\\software\\hadoop-2.7.3");
        Configuration config = new Configuration();
        // HDFS endpoint
        config.set("fs.defaultFS", "hdfs://localhost:9000");
        // ResourceManager host
        config.set("yarn.resourcemanager.hostname", "localhost");
        try {
            FileSystem fs = FileSystem.get(config);
            Job job = Job.getInstance(config);
            job.setJarByClass(RunJob.class);
            job.setJobName("wc");
            job.setMapperClass(WcMapper.class);
            job.setReducerClass(WcReducer.class);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(IntWritable.class);
            // declare the reducer's (final) output types as well
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path("file:/D:/software/hadoop-2.7.3/tmp/input/LICENSE.txt"));
            // delete the output directory if it exists; Hadoop refuses to overwrite it
            Path outpath = new Path("output");
            if (fs.exists(outpath)) {
                fs.delete(outpath, true);
            }
            FileOutputFormat.setOutputPath(job, outpath);
            boolean f = job.waitForCompletion(true);
            if (f) {
                System.out.println("job completed successfully");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
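Note how the two paths behave differently: the input uses an explicit file: URI, so it is read from the local disk, while the relative Path("output") is resolved against fs.defaultFS, so the results land in HDFS (under your HDFS home directory, typically /user/<your-username>/output).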
The map program, WcMapper.java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.util.StringUtils;
import java.io.IOException;

public class WcMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // split the line on spaces and emit (word, 1) for each word
        String[] words = StringUtils.split(value.toString(), ' ');
        for (String w : words) {
            context.write(new Text(w), new IntWritable(1));
        }
    }
}
The reduce program, WcReducer.java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;

public class WcReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // sum the counts collected for this word across all mappers
        int sum = 0;
        for (IntWritable i : values) {
            sum = sum + i.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
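To make the data flow concrete, suppose the input file contains the single line

hello world hello

The mapper emits (hello, 1), (world, 1), (hello, 1); the shuffle groups these into hello -> [1, 1] and world -> [1]; and the reducer writes the final counts:

hello	2
world	1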
6. Configure the input and output folders
1) Add an input folder at the same level as the src directory
Put one or more input source files in the input folder.
My input source is the local file file:/D:/software/hadoop-2.7.3/tmp/input/LICENSE.txt
2) Configure the run parameters
In the IntelliJ menu choose Run -> Edit Configurations, click + in the dialog, and create a new Application configuration. Set Main class to RunJob (you can click ... on the right to pick it),
and set Program arguments to input/ output/, i.e. the input path is the input folder you just created and the output is output.
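If you want RunJob to honor those Program arguments instead of its hard-coded paths, a minimal change (assuming args[0] is the input and args[1] the output) would be:

FileInputFormat.addInputPath(job, new Path(args[0]));
Path outpath = new Path(args[1]);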
In my case the paths are hard-coded in the program as follows, so the results are stored in HDFS:
Path outpath = new Path("output");
FileOutputFormat.setOutputPath(job, outpath);
Because Hadoop refuses to start a job whose output folder already exists, be sure the output folder is deleted before the next run (the fs.delete call in RunJob takes care of this).
That's it. Run the program, and a simple Hadoop job is complete!
Common problems:
1. Failed to locate the winutils binary in the hadoop binary path
Running on Windows requires winutils.exe; without it the program fails with this error. Download a build that matches your Hadoop version, put winutils.exe and libwinutils.lib in Hadoop's bin directory, and copy hadoop.dll into C:\Windows\System32.
Also add the HADOOP_HOME environment variable and put its bin directory on the Path.
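Alternatively, you can point the JVM at your Hadoop directory from code, as in the commented-out line at the top of RunJob:

System.setProperty("hadoop.home.dir", "D:\\software\\hadoop-2.7.3");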
2. File permission problems
Edit hdfs-site.xml and add the configuration below (in Hadoop 2.x this key is deprecated in favor of dfs.permissions.enabled, which has the same effect):
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
3. Run log output
Add a log4j.properties to the classpath (for a Maven project, put it in src/main/resources):
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.Target=System.out
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
log4j.rootLogger=INFO, console
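With this file on the classpath, Hadoop's INFO-level logging appears in the console, including the map/reduce progress that job.waitForCompletion(true) reports.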