Using the Java version of the WordCount code as an example, let's look at the form in which user logs are emitted at each stage of execution:
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public final class JavaWordCount {
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.err.println("Usage: JavaWordCount <file>");
            System.exit(1);
        }

        SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);
        JavaRDD<String> lines = ctx.textFile(args[0], 1);

        // Printed on the executor running the flatMap task; shows up in that executor's stdout.
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterable<String> call(String s) {
                String[] tempList = SPACE.split(s);
                StringBuilder sb = new StringBuilder();
                for (String string : tempList) {
                    sb.append(string).append(".......");
                }
                System.out.println(sb.toString());
                return Arrays.asList(tempList);
            }
        });

        // Printed on the executor running the mapToPair task; shows up in that executor's stdout.
        JavaPairRDD<String, Integer> ones = words.mapToPair(new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String s) {
                System.out.println(s + " " + 1);
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // Printed on the executor running the reduceByKey task; shows up in that executor's stdout.
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer i1, Integer i2) {
                System.out.println(i1 + " + " + i2 + " = " + (i1 + i2));
                return i1 + i2;
            }
        });

        // Printed on the driver: collect() brings the results back, so this appears in the terminal.
        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<?, ?> tuple : output) {
            System.out.println(tuple._1() + ": " + tuple._2());
        }

        ctx.stop();
    }
}
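To actually watch where each message lands, one would package the class and submit it to the cluster. A minimal sketch of the submission follows; the jar name, input path, and master choice are placeholders, not part of the original example:

# JavaWordCount.jar and the input path are hypothetical placeholders.
spark-submit \
  --class JavaWordCount \
  --master yarn \
  JavaWordCount.jar \
  hdfs:///path/to/input.txt

The collect() results then print in the submitting terminal, while the per-task println output has to be read from each executor's own stdout (for example via the Executors tab of the Spark web UI).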
To sum up, compare this with the MapReduce computing model on the Hadoop platform: logs printed at job-submission time behave the same way as in Spark and appear only in the terminal, but once execution enters a map task or reduce task, user logs are written on the node that is running that task. Likewise in Spark, operations such as flatMap, mapToPair, and reduceByKey run as tasks distributed across different nodes, so user-defined log output appears in the stdout of the corresponding executor, while the framework's own execution information is written to stderr.
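As a side note, if the user log lines should carry timestamps and log levels instead of being bare text in stdout, a logger can be used inside the task. Below is a minimal sketch assuming log4j 1.x (the logging library bundled with Spark of this era); the class name TaskLoggingSketch and the logger name are hypothetical, and note that Spark's default log4j configuration sends the console appender to the executor's stderr rather than stdout:

import org.apache.log4j.Logger;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;

public final class TaskLoggingSketch { // hypothetical helper class
    public static JavaRDD<String> logLines(JavaRDD<String> lines) {
        return lines.map(new Function<String, String>() {
            @Override
            public String call(String s) {
                // Obtain the logger inside call() so the closure Spark ships
                // to the executors captures nothing non-serializable.
                Logger log = Logger.getLogger("user-task-log"); // hypothetical logger name
                // Written by the executor's log4j appender (stderr under the
                // default Spark configuration), not to the driver's terminal.
                log.info("processing line: " + s);
                return s;
            }
        });
    }
}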