Hadoop Example: Finding Common Friends
Problem
Each input line has the format user: that user's friend list
A:B,C,D,F,E,O
B:A,C,E,K
C:F,A,D,I
D:A,E,F,L
E:B,C,D,M,L
F:A,B,C,D,E,O,M
G:A,C,D,E,F
H:A,C,D,E,O
I:A,O
J:B,O
K:A,C,D
L:D,E,F
M:E,F,G
O:A,H,I,J
Goal: find every pair of users that has common friends, and list who those common friends are.
Analysis: solve it with two chained MapReduce jobs.
First MapReduce job
Read a line of input:
A:B,C,D,F,E,O
Split the line on the colon,
then split the friend list on commas:
String[] split = line.split(":");          // ["A", "B,C,D,F,E,O"]
String[] friendList = split[1].split(","); // ["B", "C", "D", "F", "E", "O"]
Every input line is handled the same way, e.g. E:B,C,D,M,L.
The map stage emits the data as (friend as key, user as value):
key2 | value2 |
---|---|
B | A |
C | A |
D | A |
F | A |
E | A |
O | A |
B | E |
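Before the Hadoop code, the emission logic above can be checked in plain Java. This is a minimal sketch with no Hadoop dependency; the class and method names are illustrative, not part of the original code:

```java
import java.util.ArrayList;
import java.util.List;

public class MapEmitDemo {
    // Mirrors the map logic above: for one "user:friendList" line,
    // emit one (friend, user) pair per friend.
    public static List<String[]> emit(String line) {
        String[] split = line.split(":");
        List<String[]> pairs = new ArrayList<>();
        for (String friend : split[1].split(",")) {
            pairs.add(new String[]{friend, split[0]});
        }
        return pairs;
    }

    public static void main(String[] args) {
        // Line "A:B,C,D,F,E,O" produces (B,A), (C,A), (D,A), (F,A), (E,A), (O,A)
        for (String[] kv : emit("A:B,C,D,F,E,O")) {
            System.out.println(kv[0] + "\t" + kv[1]);
        }
    }
}
```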
The grouped data arrives at the reducer:
B [A, E]
meaning A and E have a common friend, B:
A-E B
The first job's output takes this form (the users who share a friend, then the friend):
A-B-E-F-G-H-K C
A-B E
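The reducer side of step 1 can likewise be sketched in plain Java, assuming the framework has already grouped the users by shared friend. Note the trailing "-" left by the append loop, which is why the step-2 input table below shows keys like F-D-O-I-H-B-K-G-C-. Names here are illustrative:

```java
import java.util.Arrays;
import java.util.List;

public class ReduceJoinDemo {
    // Mirrors the step-1 reducer: join the grouped users with "-"
    // and append the shared friend after a tab.
    public static String join(String friend, List<String> users) {
        StringBuilder sb = new StringBuilder();
        for (String user : users) {
            sb.append(user).append("-");
        }
        return sb + "\t" + friend;
    }

    public static void main(String[] args) {
        // friend B is listed by users A and E -> "A-E-\tB"
        System.out.println(join("B", Arrays.asList("A", "E")));
    }
}
```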
Implementation
Mapper
```java
package demo1.step1;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class Step1Mapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Input line: "A:B,C,D,F,E,O" -> user A, friends B,C,D,F,E,O
        String line = value.toString();
        String[] split = line.split(":");
        String[] friendList = split[1].split(",");
        // Emit (friend, user) so the reducer can group users by shared friend
        for (String friend : friendList) {
            context.write(new Text(friend), new Text(split[0]));
        }
    }
}
```
Reducer
```java
package demo1.step1;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class Step1Reducer extends Reducer<Text, Text, Text, Text> {
    // The reducer receives data like: B [A, E]
    // key B is the shared friend; the values are the users who list B
    // Output the joined user string, e.g.: A-B-E-F-G-H-K- C
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        StringBuffer sb = new StringBuffer();
        for (Text value : values) {
            sb.append(value.toString()).append("-");
        }
        context.write(new Text(sb.toString()), key);
    }
}
```
Main
```java
package demo1.step1;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class Step1Main extends Configured implements Tool {
    @Override
    public int run(String[] strings) throws Exception {
        Job job = Job.getInstance(super.getConf(), "step1");
        // Step 1: read the input files
        job.setInputFormatClass(TextInputFormat.class);
        TextInputFormat.addInputPath(job, new Path("file:///C:\\Users\\Yichun\\Desktop\\hadoop\\hadoop_05\\共同好友\\input"));
        // Step 2: set the mapper and its output types
        job.setMapperClass(Step1Mapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        // Step 3: set the reducer and its output types
        job.setReducerClass(Step1Reducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        // Step 4: write the results
        job.setOutputFormatClass(TextOutputFormat.class);
        TextOutputFormat.setOutputPath(job, new Path("file:///C:\\Users\\Yichun\\Desktop\\hadoop\\hadoop_05\\共同好友\\output1"));
        boolean b = job.waitForCompletion(true);
        return b ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int run = ToolRunner.run(new Configuration(), new Step1Main(), args);
        System.exit(run);
    }
}
```
Second MapReduce job
Input data (the output of the first job):
user list | friend |
---|---|
F-D-O-I-H-B-K-G-C- | A |
E-A-J-F- | B |
K-A-B-E-F-G-H- | C |
G-K-C-A-E-L-F-H- | D |
G-F-M-B-H-A-L-D- | E |
M-D-L-A-C-G- | F |
M- | G |
O- | H |
C-O- | I |
O- | J |
B- | K |
E-D- | L |
F-E- | M |
J-I-H-A-F- | O |
In the second mapper, split the incoming user list and use two nested for loops to emit every pairwise combination of users as the key.
Without care, the same pair would appear under two different keys:
E-A B
A-E D
So the user list must be sorted first; then both records land on a single key:
A-E B,D
At the reduce stage the values are grouped per pair:
A-B [C, E]
and joined for output:
A-B C-E
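The sorted pairwise key generation described above can be sketched in plain Java. This is a minimal sketch with illustrative names, not the original post's code; note that the trailing "-" in step-1 output is harmless because split drops trailing empty strings:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PairKeyDemo {
    // Sort the users first so a pair always produces the same key
    // (A-E, never E-A), then emit every two-user combination.
    public static List<String> pairKeys(String userList) {
        String[] users = userList.split("-"); // trailing "-" yields no empty element
        Arrays.sort(users);
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < users.length - 1; i++) {
            for (int j = i + 1; j < users.length; j++) {
                keys.add(users[i] + "-" + users[j]);
            }
        }
        return keys;
    }

    public static void main(String[] args) {
        System.out.println(pairKeys("E-A-"));   // [A-E]
        System.out.println(pairKeys("K-A-B-")); // [A-B, A-K, B-K]
    }
}
```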
Implementation
Mapper
package demo1.step2;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
import java.util.Arrays;
public class Step2Mapper extends Mapper<LongWritable, Text,Text,Text> {
@Override
protected void map(LongWritable key, Text value, Context context) throws