Hadoop Local (Standalone) Mode: Word Count Example

View the contents of the Hadoop installation directory:
[atguigu@hadoop102 hadoop-3.3.4]$ ll
total 92
drwxr-xr-x. 2 atguigu atguigu   203 Jul 29  2022 bin
drwxr-xr-x. 3 atguigu atguigu    20 Jul 29  2022 etc
drwxr-xr-x. 2 atguigu atguigu   106 Jul 29  2022 include
drwxr-xr-x. 3 atguigu atguigu    20 Jul 29  2022 lib
drwxr-xr-x. 4 atguigu atguigu   288 Jul 29  2022 libexec
-rw-rw-r--. 1 atguigu atguigu 24707 Jul 29  2022 LICENSE-binary
drwxr-xr-x. 2 atguigu atguigu  4096 Jul 29  2022 licenses-binary
-rw-rw-r--. 1 atguigu atguigu 15217 Jul 17  2022 LICENSE.txt
-rw-rw-r--. 1 atguigu atguigu 29473 Jul 17  2022 NOTICE-binary
-rw-rw-r--. 1 atguigu atguigu  1541 Apr 22  2022 NOTICE.txt
-rw-rw-r--. 1 atguigu atguigu   175 Apr 22  2022 README.txt
drwxr-xr-x. 3 atguigu atguigu  4096 Jul 29  2022 sbin
drwxr-xr-x. 4 atguigu atguigu    31 Jul 29  2022 share
Create a wcinput directory under the Hadoop directory:
[atguigu@hadoop102 hadoop-3.3.4]$ mkdir wcinput
[atguigu@hadoop102 hadoop-3.3.4]$ cd wcinput/
Create word.txt to hold the raw input data:
[atguigu@hadoop102 wcinput]$ vim word.txt
[atguigu@hadoop102 wcinput]$ cd ../
[atguigu@hadoop102 hadoop-3.3.4]$ pwd
/opt/module/hadoop-3.3.4
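
The vim step above is interactive; if you prefer, the same word.txt (containing the sample words used later in this walkthrough) can be created non-interactively with a heredoc:

```shell
# Non-interactive alternative to "vim word.txt": write the sample data
# used in this walkthrough with a heredoc.
mkdir -p wcinput
cat > wcinput/word.txt <<'EOF'
6854 90650
90 8788 hha
hahah bibib
bibib heiehi
hahah 90 6854
EOF
```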
Run the bundled MapReduce example program to perform the word count:
[atguigu@hadoop102 hadoop-3.3.4]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.4.jar wordcount wcinput/ wcoutput
2024-03-25 23:52:58,766 INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2024-03-25 23:52:58,826 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2024-03-25 23:52:58,826 INFO impl.MetricsSystemImpl: JobTracker metrics system started
2024-03-25 23:52:59,129 INFO input.FileInputFormat: Total input files to process : 1
2024-03-25 23:52:59,146 INFO mapreduce.JobSubmitter: number of splits:1
2024-03-25 23:52:59,247 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local17770842_0001
2024-03-25 23:52:59,247 INFO mapreduce.JobSubmitter: Executing with tokens: []
2024-03-25 23:52:59,442 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
2024-03-25 23:52:59,443 INFO mapreduce.Job: Running job: job_local17770842_0001
2024-03-25 23:52:59,459 INFO mapred.LocalJobRunner: OutputCommitter set in config null
2024-03-25 23:52:59,480 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2


After the job finishes, view the results:
[atguigu@hadoop102 hadoop-3.3.4]$ cd wcoutput/
[atguigu@hadoop102 wcoutput]$ ll
total 4
-rw-r--r--. 1 atguigu atguigu 58 Mar 25 23:53 part-r-00000
-rw-r--r--. 1 atguigu atguigu  0 Mar 25 23:53 _SUCCESS
part-r-00000 contains the word counts (_SUCCESS is just an empty marker file indicating the job completed successfully):
[atguigu@hadoop102 wcoutput]$ cat part-r-00000 
6854	2
8788	1
90	2
90650	1
bibib	2
hahah	2
heiehi	1
hha	1
[atguigu@hadoop102 wcoutput]$ cd ../
[atguigu@hadoop102 hadoop-3.3.4]$ pwd
/opt/module/hadoop-3.3.4
[atguigu@hadoop102 hadoop-3.3.4]$ cd wcinput/
[atguigu@hadoop102 wcinput]$ pwd
/opt/module/hadoop-3.3.4/wcinput
The original input content is as follows:
[atguigu@hadoop102 wcinput]$ cat word.txt 
6854 90650
90 8788 hha 
hahah bibib 
bibib heiehi
hahah 90 6854
[atguigu@hadoop102 wcinput]$ 
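
As a quick sanity check, the same counts can be reproduced with plain shell tools (the snippet recreates the sample data in /tmp so it is self-contained):

```shell
# Recreate the sample input in /tmp, then split on whitespace, drop empty
# tokens, and count each distinct word. The output matches Hadoop's
# part-r-00000 (word, then count, sorted by word).
printf '%s\n' '6854 90650' '90 8788 hha' 'hahah bibib' 'bibib heiehi' 'hahah 90 6854' > /tmp/word.txt
tr -s ' ' '\n' < /tmp/word.txt | sed '/^$/d' | sort | uniq -c | awk '{printf "%s\t%s\n", $2, $1}'
```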


Below is a simple Hadoop word count example that counts how many times each word appears in a text file.

Mapper class:
```
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split each input line on whitespace and emit a (word, 1) pair
        // for every token.
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }
}
```
Reducer class:
```
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum all the 1s emitted for this word.
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```
Driver class:
```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        // The reducer doubles as a combiner, since summing is associative.
        job.setCombinerClass(WordCountReducer.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```
At run time, pass the input path and the output directory to the program as arguments, for example:
```
$ hadoop jar WordCount.jar WordCountDriver /input /output
```
where `/input` is the directory containing the input files and `/output` is the output directory.
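
The run command above assumes a pre-built jar. If you compile the three classes yourself, a minimal build sketch (assuming the `hadoop` command is on the PATH; the `classes` directory and jar name are illustrative) looks like:

```shell
# Sketch only: compile against the local Hadoop classpath, then package
# the classes into a jar that "hadoop jar" can run.
mkdir -p classes
javac -classpath "$(hadoop classpath)" -d classes \
    WordCountMapper.java WordCountReducer.java WordCountDriver.java
jar cf WordCount.jar -C classes .
# Note: the job fails if the output directory already exists, so delete
# any previous run's output before re-running.
```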
