This is my first blog post. I'm recording my learning here, partly so I can review more efficiently, and partly to share my notes and takeaways. Comments and discussion are welcome; let's improve together.
1. This post walks through a simple big-data WordCount experiment. Tools: IDEA 2019 and Hadoop 2.7.3. The experiment has three main steps, recorded below with code and comments. (Go easy on the code comments.)
2. First, create the project with Maven, then add the dependencies by editing pom.xml.
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>cn.lg</groupId>
    <artifactId>wordcount</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.7.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-common</artifactId>
            <version>2.7.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.7.3</version>
        </dependency>
    </dependencies>
</project>
3. Next, create and configure log4j.properties at the same level as src (in a standard Maven layout it usually goes under src/main/resources so it ends up on the classpath). Log4j is built from three main components: log levels, output destinations (appenders), and output formats (layouts). The levels, from high to low, are ERROR, WARN, INFO, and DEBUG, and indicate how important a message is; the destination determines whether a message is printed to the console or to a file; the layout controls what each log line displays.
log4j.properties
### root logger: DEBUG level, three appenders ###
log4j.rootLogger = debug,stdout,D,E
### log to the console ###
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target = System.out
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern = [%-5p] %d{yyyy-MM-dd HH:mm:ss,SSS} method:%l%n%m%n
### log DEBUG and above to E://logs/log.log ###
log4j.appender.D = org.apache.log4j.DailyRollingFileAppender
log4j.appender.D.File = E://logs/log.log
log4j.appender.D.Append = true
log4j.appender.D.Threshold = DEBUG
log4j.appender.D.layout = org.apache.log4j.PatternLayout
log4j.appender.D.layout.ConversionPattern = %-d{yyyy-MM-dd HH:mm:ss} [ %t:%r ] - [ %p ] %m%n
### log ERROR and above to E://logs/error.log ###
log4j.appender.E = org.apache.log4j.DailyRollingFileAppender
log4j.appender.E.File =E://logs/error.log
log4j.appender.E.Append = true
log4j.appender.E.Threshold = ERROR
log4j.appender.E.layout = org.apache.log4j.PatternLayout
log4j.appender.E.layout.ConversionPattern = %-d{yyyy-MM-dd HH:mm:ss} [ %t:%r ] - [ %p ] %m%n
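To verify the configuration before wiring up MapReduce, here is a minimal sketch (the class name LogDemo is only an illustration, not part of the experiment) that obtains a Log4j 1.x logger and writes at two levels:
import org.apache.log4j.Logger;

public class LogDemo {
    // Logger.getLogger reads log4j.properties from the classpath automatically
    private static final Logger LOG = Logger.getLogger(LogDemo.class);

    public static void main(String[] args) {
        LOG.debug("DEBUG goes to the console and E://logs/log.log");
        LOG.error("ERROR additionally lands in E://logs/error.log");
    }
}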
4. Finally, build the WordCount MapReduce job. It has three parts: WcMapper.java, WcReducer.java, and WcDriver.java. The code follows:
WcMapper.java
package com.wordcount;
// Add the dependencies first; missing dependencies cause compile errors (add log4j and edit pom.xml).
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class WcMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    // Reusable output objects; they must be initialized, or map() throws a NullPointerException
    private Text word = new Text();
    private IntWritable one = new IntWritable(1);
    /* Input key LongWritable: the byte offset of the line in the file
       Input value Text: the line's content (the Writable counterpart of String)
       Output key Text: a single word
       Output value IntWritable: an integer count
    */
    // Ctrl+O in IDEA overrides a parent-class method
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Grab this line of input
        String line = value.toString();
        // Split the line on spaces
        String[] words = line.split(" ");
        // Walk the array and hand each word to the framework as (word, 1)
        for (String word : words) {
            // enhanced for loop over the array; context.write emits a key/value pair
            this.word.set(word);
            context.write(this.word, this.one);
        }
    }
}
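As a quick sanity check: given the input line "hello world hello", this map() emits (hello,1), (world,1), (hello,1); the framework then shuffles and groups these pairs by key before they reach the reducer.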
WcReducer.java
package com.wordcount;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class WcReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable total = new IntWritable();

    // Ctrl+O in IDEA overrides a parent-class method
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // Accumulate the counts for this word
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get(); // value.get() unwraps the IntWritable
        }
        // Wrap the result and emit it
        total.set(sum);
        context.write(key, total);
    }
}
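Continuing the example from the mapper: reduce() is called once per distinct word, e.g. with key hello and values [1, 1], and emits (hello, 2).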
WcDriver.java
package com.wordcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class WcDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException { // job setup
        // 1. Get a Job instance
        Job job = Job.getInstance(new Configuration());
        // 2. Set the classpath for our job jar
        job.setJarByClass(WcDriver.class);
        // 3. Set our Mapper and Reducer (our WcReducer, not the framework's base Reducer class)
        job.setMapperClass(WcMapper.class);
        job.setReducerClass(WcReducer.class);
        // 4. Set the Mapper and Reducer output types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // 5. Set the input and output paths from the command-line arguments
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // 6. Submit our job; exit with 0 on success, 1 on failure
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
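Once the three classes compile, package the project (e.g. with mvn package) and submit it to Hadoop. A sketch of the command, where the jar name follows from the artifactId/version in pom.xml and the input/output paths are placeholders to adapt (the output directory must not exist yet):
hadoop jar wordcount-1.0-SNAPSHOT.jar com.wordcount.WcDriver /input /output
When running inside IDEA instead, pass the same two paths as Program arguments in the run configuration.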