For Chukwa configuration and a running example, see:
http://my.oschina.net/xiangchen/blog/100424
Chukwa writes the data it collects into HDFS as Sink Files; if no Archive or Demux jobs are run, they are stored under hdfs:///chukwa/logs by default.
A Sink File is a Hadoop Sequence File containing key-value pairs,
where the key type is org.apache.hadoop.chukwa.ChukwaArchiveKey
and the value type is org.apache.hadoop.chukwa.ChunkImpl.
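Before writing any MapReduce job, you can inspect a sink file directly with SequenceFile.Reader. Below is a minimal sketch, assuming chukwa-0.5.0.jar and the Hadoop client libraries are on the classpath; SinkFileDumper is a name chosen here purely for illustration, not a Chukwa class:

import org.apache.hadoop.chukwa.ChukwaArchiveKey;
import org.apache.hadoop.chukwa.ChunkImpl;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

// Hypothetical helper for illustration; not part of Chukwa itself.
public class SinkFileDumper {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path(args[0]); // e.g. a .done file under hdfs:///chukwa/logs
        FileSystem fs = path.getFileSystem(conf);
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
        try {
            ChukwaArchiveKey key = new ChukwaArchiveKey();
            ChunkImpl chunk = ChunkImpl.getBlankChunk();
            // Each record in the sink file is one chunk of collected log data.
            while (reader.next(key, chunk)) {
                System.out.println("dataType=" + chunk.getDataType()
                        + ", source=" + chunk.getSource()
                        + ", bytes=" + chunk.getData().length);
            }
        } finally {
            reader.close();
        }
    }
}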
A complete MapReduce pass over these files looks as follows. Assume Sequence Files have already been written under hdfs:///chukwa/logs, and we want to run a basic Word Count over them:
Tip: reading a Sequence File requires declaring job.setInputFormatClass(SequenceFileInputFormat.class).
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.chukwa.ChukwaArchiveKey;
import org.apache.hadoop.chukwa.ChunkImpl;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount_SequenceFile {

    // Mapper: the input key/value types match what Chukwa writes into the sink file.
    public static class TokenizerMapper extends
            Mapper<ChukwaArchiveKey, ChunkImpl, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(ChukwaArchiveKey key, ChunkImpl value, Context context)
                throws IOException, InterruptedException {
            // ChunkImpl.getData() returns the raw payload bytes of the chunk.
            StringTokenizer itr = new StringTokenizer(new String(value.getData()));
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sums the counts for each word; also reused as the combiner.
    public static class IntSumReducer extends
            Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // GenericOptionsParser strips generic options such as -libjars
        // before the application sees its own arguments.
        String[] otherArgs = new GenericOptionsParser(conf, args)
                .getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        @SuppressWarnings("deprecation")
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount_SequenceFile.class);
        // Required for reading Sequence Files; the default is TextInputFormat.
        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
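In practice a sink file mixes chunks from many sources and data types, so a mapper will often want to filter on the chunk's metadata before counting. Below is a minimal sketch of such a variant; the data type name "SysLog" is only an example value, and FilteringMapper is a name invented here:

    // Hypothetical variant: count words only in chunks of one Chukwa data type.
    public static class FilteringMapper extends
            Mapper<ChukwaArchiveKey, ChunkImpl, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(ChukwaArchiveKey key, ChunkImpl value, Context context)
                throws IOException, InterruptedException {
            // getDataType() reports which data type produced this chunk;
            // "SysLog" is just a placeholder for whatever type you collect.
            if (!"SysLog".equals(value.getDataType())) {
                return;
            }
            StringTokenizer itr = new StringTokenizer(new String(value.getData()));
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

To use it, replace the mapper registration in main with job.setMapperClass(FilteringMapper.class).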
For how to compile a MapReduce program, see:
http://my.oschina.net/xiangchen/blog/102091
At run time the chukwa-0.5.0.jar library must be added via -libjars (this generic option takes effect because the driver parses its arguments with GenericOptionsParser, as in the code above):
hadoop jar MapReduce.jar demo.mapreduce.WordCount_SequenceFile
-libjars MapReduce.jar,chukwa-0.5.0.jar /chukwa/logs/xx.done /output_file