Understanding WordCount (Part 1)


0. Preparation before running

0.1 Copy the executable JARs from the Hadoop binary distribution

Copy the executable JAR files from the Hadoop binary distribution into a newly created lib folder in the project, then right-click each of them and choose [Build Path] -> [Add to Build Path].

0.2 Set up the log configuration file

Copy the file hadoop-2.7.3\etc\hadoop\log4j.properties from the Hadoop binary distribution into src; without it, no log messages are shown while the job runs.
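
If the file is not at hand, a minimal log4j.properties along the following lines is enough to get console output (a sketch based on the pattern Hadoop uses by default, not the exact file shipped with 2.7.3):

log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n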

1. Local submission

1.1 Test data

1.1.1 word1.txt

hadoop
hive spark
zookeeper pig flink
hadoop hbase
spark
hdfs
namenode
hadoop sqoop hive hbase

1.1.2 word2.txt

datanode namenode pig hive
spark hbase hadoop
sqoop
zookeeper
mahout
flume
yarn mapreduce
impala storm kafka

1.1.3 word3.txt

hadoop spark
hive
kafka
flume pig flink
zookeeper

1.2 The WordCount program

1.2.1 Mapper
package wordcount;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMap extends Mapper<LongWritable, Text, Text, IntWritable>{
	// Writable objects are reused across map() calls instead of being created per record
	private Text word = new Text();
	private IntWritable one = new IntWritable(1);
	@Override
	protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, Text, IntWritable>.Context context)
			throws IOException, InterruptedException {
		// key: byte offset of this line in the file; value: the text of the line
		String line = value.toString();
		String[] words = line.split(" ");
		for(String str : words) {
			word.set(str);
			context.write(word, one);	// emit <word, 1>
		}
	}
}

The default input format is TextInputFormat (it can be set explicitly with job.setInputFormatClass(TextInputFormat.class)); the key is the byte offset at which each line starts in the file, and the value is the text of that line.

  1. Convert the line of text from a Text object to a String
  2. Split the line on spaces, since the words in the input are separated by spaces
  3. Iterate over the resulting String array and pair each word with the count 1, forming <word, 1> tuples (see the small illustration after this list)
  4. These intermediate pairs are passed on to the Reducer
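
As a quick illustration of steps 2-3 (a standalone sketch, not part of the job), applying the same split-and-emit logic to one line of word1.txt yields one <word, 1> pair per token:

public class MapEmitDemo {
	public static void main(String[] args) {
		String line = "hadoop sqoop hive hbase";	// one line of word1.txt
		for (String str : line.split(" ")) {
			// WordCountMap would call context.write(word, one) here
			System.out.println("<" + str + ", 1>");
		}
	}
}
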
1.2.2 Reducer
package wordcount;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReduce extends Reducer<Text, IntWritable, Text, IntWritable>{
	// reused output value object
	private IntWritable _sum = new IntWritable();
	@Override
	protected void reduce(Text key, Iterable<IntWritable> values,
			Reducer<Text, IntWritable, Text, IntWritable>.Context context) throws IOException, InterruptedException {
		int sum = 0;
		// values contains every 1 emitted for this key by the map tasks
		for (IntWritable value : values) {
			sum += value.get();
		}
		_sum.set(sum);
		context.write(key, _sum);	// emit <word, total count>
	}
}

After the Mapper phase finishes, the Reducer receives, for each key, the key together with the collection of all values emitted for it, i.e. a <key, Iterable<IntWritable>> pair.

  1. Iterate over the IntWritable collection passed in from the Mappers, summing the values that share the same key into a total (see the illustration after this list)
  2. Output the key together with the total as the pair <key, _sum>
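
For example (a standalone sketch, not part of the job), the key hadoop reaches the Reducer with the value list [1, 1, 1, 1, 1] collected from all three input files, and the loop sums it to 5 exactly as WordCountReduce does:

import java.util.Arrays;

public class ReduceSumDemo {
	public static void main(String[] args) {
		Iterable<Integer> values = Arrays.asList(1, 1, 1, 1, 1);	// every <hadoop, 1> pair from the map phase
		int sum = 0;
		for (int value : values) {
			sum += value;
		}
		System.out.println("<hadoop, " + sum + ">");	// prints <hadoop, 5>
	}
}
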
1.2.3 Driver
package wordcount;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;


public class WordCountDriver {
	public static void main(String[] args) throws ClassNotFoundException, InterruptedException, IOException {
		Configuration conf = new Configuration();
		conf.set("mapreduce.framework.name", "local");
		Path outfile = new Path("file:///D:/out");
		FileSystem fs = outfile.getFileSystem(conf);
		if(fs.exists(outfile)){
			fs.delete(outfile,true);
		}
		Job job = Job.getInstance(conf);
		job.setMapperClass(WordCountMap.class);
		job.setReducerClass(WordCountReduce.class);
		job.setJobName("wordcount");
		job.setJarByClass(WordCountDriver.class);
		job.setMapOutputKeyClass(Text.class);
		job.setMapOutputValueClass(IntWritable.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(IntWritable.class);
		FileInputFormat.addInputPath(job, new Path("file:///D:/wordcount"));
		FileOutputFormat.setOutputPath(job, outfile);
		System.exit(job.waitForCompletion(true)?0:1);
	}
}
  1. First obtain a Configuration object
  2. Configure the job to run locally (mapreduce.framework.name = local)
  3. Create the output path, deleting it first if it already exists
  4. Obtain a Job instance from the Configuration object
  5. Set the job's Mapper class with setMapperClass
  6. Set the job's Reducer class with setReducerClass
  7. Set the job name with setJobName
  8. Set the class used to locate the job jar with setJarByClass
  9. Set the Mapper output key type with setMapOutputKeyClass
  10. Set the Mapper output value type with setMapOutputValueClass
  11. Set the Reducer output key type with setOutputKeyClass
  12. Set the Reducer output value type with setOutputValueClass
  13. Set the input path with FileInputFormat.addInputPath
  14. Set the output path with FileOutputFormat.setOutputPath
  15. Call waitForCompletion to submit the job and wait for it to finish (required; without it the job never runs). The run log below also warns that the Tool interface and ToolRunner should be used; a sketch of that variant follows this list.
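
A hedged sketch of the ToolRunner-based variant suggested by the warning in the run log ("Implement the Tool interface and execute your application with ToolRunner"). The class name WordCountTool is an assumption; the configuration mirrors the driver above:

package wordcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountTool extends Configured implements Tool {
	@Override
	public int run(String[] args) throws Exception {
		Configuration conf = getConf();	// populated by ToolRunner, so -D options are parsed
		conf.set("mapreduce.framework.name", "local");
		Path outfile = new Path("file:///D:/out");
		FileSystem fs = outfile.getFileSystem(conf);
		if (fs.exists(outfile)) {
			fs.delete(outfile, true);
		}
		Job job = Job.getInstance(conf, "wordcount");
		job.setJarByClass(WordCountTool.class);
		job.setMapperClass(WordCountMap.class);
		job.setReducerClass(WordCountReduce.class);
		job.setMapOutputKeyClass(Text.class);
		job.setMapOutputValueClass(IntWritable.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(IntWritable.class);
		FileInputFormat.addInputPath(job, new Path("file:///D:/wordcount"));
		FileOutputFormat.setOutputPath(job, outfile);
		return job.waitForCompletion(true) ? 0 : 1;
	}

	public static void main(String[] args) throws Exception {
		System.exit(ToolRunner.run(new Configuration(), new WordCountTool(), args));
	}
}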

1.3 Running the program

19/09/21 09:19:05 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
19/09/21 09:19:05 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
19/09/21 09:19:06 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
19/09/21 09:19:06 WARN mapreduce.JobResourceUploader: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
19/09/21 09:19:07 INFO input.FileInputFormat: Total input paths to process : 3
19/09/21 09:19:07 INFO mapreduce.JobSubmitter: number of splits:3
19/09/21 09:19:08 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local291050806_0001
19/09/21 09:19:08 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
19/09/21 09:19:08 INFO mapreduce.Job: Running job: job_local291050806_0001
19/09/21 09:19:08 INFO mapred.LocalJobRunner: OutputCommitter set in config null
19/09/21 09:19:08 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/09/21 09:19:08 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
19/09/21 09:19:08 INFO mapred.LocalJobRunner: Waiting for map tasks
19/09/21 09:19:08 INFO mapred.LocalJobRunner: Starting task: attempt_local291050806_0001_m_000000_0
19/09/21 09:19:08 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/09/21 09:19:08 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
19/09/21 09:19:08 INFO mapred.Task: Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@1e9ffb4f
19/09/21 09:19:08 INFO mapred.MapTask: Processing split: file:/D:/wordcount/word2.txt:0+115
19/09/21 09:19:08 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/09/21 09:19:08 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/09/21 09:19:08 INFO mapred.MapTask: soft limit at 83886080
19/09/21 09:19:08 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/09/21 09:19:08 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/09/21 09:19:08 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/09/21 09:19:08 INFO mapred.LocalJobRunner:
19/09/21 09:19:08 INFO mapred.MapTask: Starting flush of map output
19/09/21 09:19:08 INFO mapred.MapTask: Spilling map output
19/09/21 09:19:08 INFO mapred.MapTask: bufstart = 0; bufend = 155; bufvoid = 104857600
19/09/21 09:19:08 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214340(104857360); length = 57/6553600
19/09/21 09:19:09 INFO mapred.MapTask: Finished spill 0
19/09/21 09:19:09 INFO mapred.Task: Task:attempt_local291050806_0001_m_000001_0 is done. And is in the process of committing
19/09/21 09:19:09 INFO mapred.LocalJobRunner: map
19/09/21 09:19:09 INFO mapred.Task: Task 'attempt_local291050806_0001_m_000001_0' done.
19/09/21 09:19:09 INFO mapred.LocalJobRunner: Finishing task: attempt_local291050806_0001_m_000001_0
19/09/21 09:19:09 INFO mapred.LocalJobRunner: Starting task: attempt_local291050806_0001_m_000002_0
19/09/21 09:19:09 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/09/21 09:19:09 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
19/09/21 09:19:09 INFO mapreduce.Job: Job job_local291050806_0001 running in uber mode : false
19/09/21 09:19:09 INFO mapreduce.Job: map 100% reduce 0%
19/09/21 09:19:09 INFO mapred.Task: Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@5a3854ff
19/09/21 09:19:09 INFO mapred.MapTask: Processing split: file:/D:/wordcount/word3.txt:0+54
19/09/21 09:19:09 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/09/21 09:19:09 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/09/21 09:19:09 INFO mapred.MapTask: soft limit at 83886080
19/09/21 09:19:09 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/09/21 09:19:09 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/09/21 09:19:09 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/09/21 09:19:09 INFO mapred.LocalJobRunner:
19/09/21 09:19:09 INFO mapred.MapTask: Starting flush of map output
19/09/21 09:19:09 INFO mapred.MapTask: Spilling map output
19/09/21 09:19:09 INFO mapred.MapTask: bufstart = 0; bufend = 82; bufvoid = 104857600
19/09/21 09:19:09 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214368(104857472); length = 29/6553600
19/09/21 09:19:09 INFO mapred.MapTask: Finished spill 0
19/09/21 09:19:09 INFO mapred.Task: Task:attempt_local291050806_0001_m_000002_0 is done. And is in the process of committing
19/09/21 09:19:09 INFO mapred.LocalJobRunner: map
19/09/21 09:19:09 INFO mapred.Task: Task 'attempt_local291050806_0001_m_000002_0' done.
19/09/21 09:19:09 INFO mapred.LocalJobRunner: Finishing task: attempt_local291050806_0001_m_000002_0
19/09/21 09:19:09 INFO mapred.LocalJobRunner: map task executor complete.
19/09/21 09:19:09 INFO mapred.LocalJobRunner: Waiting for reduce tasks
19/09/21 09:19:09 INFO mapred.LocalJobRunner: Starting task: attempt_local291050806_0001_r_000000_0
19/09/21 09:19:09 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/09/21 09:19:09 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
19/09/21 09:19:09 INFO mapred.Task: Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@3fb1d90f
19/09/21 09:19:09 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@453cd5e7
19/09/21 09:19:09 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=1323407744, maxSingleShuffleLimit=330851936, mergeThreshold=873449152, ioSortFactor=10, memToMemMergeOutputsThreshold=10
19/09/21 09:19:09 INFO reduce.EventFetcher: attempt_local291050806_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
19/09/21 09:19:09 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local291050806_0001_m_000002_0 decomp: 100 len: 104 to MEMORY
19/09/21 09:19:09 INFO reduce.InMemoryMapOutput: Read 100 bytes from map-output for attempt_local291050806_0001_m_000002_0
19/09/21 09:19:09 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 100, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->100
19/09/21 09:19:09 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local291050806_0001_m_000001_0 decomp: 187 len: 191 to MEMORY
19/09/21 09:19:09 INFO reduce.InMemoryMapOutput: Read 187 bytes from map-output for attempt_local291050806_0001_m_000001_0
19/09/21 09:19:09 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 187, inMemoryMapOutputs.size() -> 2, commitMemory -> 100, usedMemory ->287
19/09/21 09:19:09 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local291050806_0001_m_000000_0 decomp: 207 len: 211 to MEMORY
19/09/21 09:19:09 INFO reduce.InMemoryMapOutput: Read 207 bytes from map-output for attempt_local291050806_0001_m_000000_0
19/09/21 09:19:09 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 207, inMemoryMapOutputs.size() -> 3, commitMemory -> 287, usedMemory ->494
19/09/21 09:19:09 INFO reduce.EventFetcher: EventFetcher is interrupted… Returning
19/09/21 09:19:09 INFO mapred.LocalJobRunner: 3 / 3 copied.
19/09/21 09:19:09 INFO reduce.MergeManagerImpl: finalMerge called with 3 in-memory map-outputs and 0 on-disk map-outputs
19/09/21 09:19:09 INFO mapred.Merger: Merging 3 sorted segments
19/09/21 09:19:09 INFO mapred.Merger: Down to the last merge-pass, with 3 segments left of total size: 467 bytes
19/09/21 09:19:09 INFO reduce.MergeManagerImpl: Merged 3 segments, 494 bytes to disk to satisfy reduce memory limit
19/09/21 09:19:09 INFO reduce.MergeManagerImpl: Merging 1 files, 494 bytes from disk
19/09/21 09:19:09 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
19/09/21 09:19:09 INFO mapred.Merger: Merging 1 sorted segments
19/09/21 09:19:09 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 479 bytes
19/09/21 09:19:09 INFO mapred.LocalJobRunner: 3 / 3 copied.
19/09/21 09:19:09 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
19/09/21 09:19:09 INFO mapred.Task: Task:attempt_local291050806_0001_r_000000_0 is done. And is in the process of committing
19/09/21 09:19:09 INFO mapred.LocalJobRunner: 3 / 3 copied.
19/09/21 09:19:09 INFO mapred.Task: Task attempt_local291050806_0001_r_000000_0 is allowed to commit now
19/09/21 09:19:09 INFO output.FileOutputCommitter: Saved output of task 'attempt_local291050806_0001_r_000000_0' to file:/D:/out/_temporary/0/task_local291050806_0001_r_000000
19/09/21 09:19:09 INFO mapred.LocalJobRunner: reduce > reduce
19/09/21 09:19:09 INFO mapred.Task: Task 'attempt_local291050806_0001_r_000000_0' done.
19/09/21 09:19:09 INFO mapred.LocalJobRunner: Finishing task: attempt_local291050806_0001_r_000000_0
19/09/21 09:19:09 INFO mapred.LocalJobRunner: reduce task executor complete.
19/09/21 09:19:10 INFO mapreduce.Job: map 100% reduce 100%
19/09/21 09:19:10 INFO mapreduce.Job: Job job_local291050806_0001 completed successfully
19/09/21 09:19:10 INFO mapreduce.Job: Counters: 35
File System Counters
FILE: Number of bytes read=4897
FILE: Number of bytes written=1165783
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=0
HDFS: Number of bytes written=0
HDFS: Number of read operations=0
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Map-Reduce Framework
Map input records=21
Map output records=39
Map output bytes=410
Map output materialized bytes=506
Input split bytes=279
Combine input records=0
Combine output records=0
Reduce input groups=18
Reduce shuffle bytes=506
Reduce input records=39
Reduce output records=18
Spilled Records=78
Shuffled Maps =3
Failed Shuffles=0
Merged Map outputs=3
GC time elapsed (ms)=0
Total committed heap usage (bytes)=1460142080
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=270
File Output Format Counters
Bytes Written=168

These counters line up with the test data: Map input records=21 is the total number of lines in the three input files, Map output records=39 is the total number of words the Mappers emitted, and Reduce input groups=18 / Reduce output records=18 match the 18 distinct words in the final result.

1.4 Execution results

datanode 1
flink 2
flume 2
hadoop 5
hbase 3
hdfs 1
hive 4
impala 1
kafka 2
mahout 1
mapreduce 1
namenode 2
pig 3
spark 4
sqoop 2
storm 1
yarn 1
zookeeper 3
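
These counts are read from the reduce output file. With the default TextOutputFormat, the job writes tab-separated word/count lines to file:///D:/out/part-r-00000 (plus an empty _SUCCESS marker). A small sketch for printing it (illustrative only; the path is taken from the driver above):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class PrintWordCountOutput {
	public static void main(String[] args) throws IOException {
		// part-r-00000 is the default output file name of the single reduce task
		try (Stream<String> lines = Files.lines(Paths.get("D:/out/part-r-00000"))) {
			lines.forEach(System.out::println);	// each line: word<TAB>count
		}
	}
}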
