Implementing WordCount in Python

The input file to count:
$ cat input.txt
foo foo quux iio oo pp pp oo
see you you again welcome test
test ddd gggg ggg
acc aaa dddd
bbb ddd ccc
ddd ccc aaa
wo ni ta
who am i
-----------------------------
mapper.py code:
$ cat mapper.py
#!/usr/bin/env python
import sys

# Read from standard input (stdin)
for line in sys.stdin:
    # Strip leading and trailing whitespace
    line = line.strip()
    # Split the line into words on whitespace (the default separator)
    words = line.split()
    for word in words:
        # Emit each word as "word<TAB>1" to serve as input for the reducer
        print '%s\t%s' % (word, 1)
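Note that the print statement above is Python 2 syntax. If python on your cluster nodes resolves to Python 3 (an assumption about your environment, not part of the original), print must be called as a function; a minimal Python 3 sketch of the same mapper:

#!/usr/bin/env python
import sys

for line in sys.stdin:
    # Same logic as above, but using Python 3's print() function
    for word in line.split():
        print('%s\t%s' % (word, 1))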
--------------------------
reducer.py code:
$ cat reducer.py
#!/usr/bin/env python
import sys

current_word = None
current_count = 0
word = None

# Read from standard input, i.e. the output of mapper.py
for line in sys.stdin:
    line = line.strip()
    # Parse the mapper output; fields are separated by a tab
    word, count = line.split('\t', 1)
    # Convert count from a string to an integer
    try:
        count = int(count)
    except ValueError:
        # Skip the line if count is not a number
        continue
    # The mapper output must be sorted so that identical words arrive
    # consecutively and can be compared here
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # Emit the tally for the previous word to standard output
            print '%s\t%s' % (current_word, current_count)
        current_count = count
        current_word = word

# Emit the tally for the last word
if current_word == word:
    print '%s\t%s' % (current_word, current_count)
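For comparison, the same sorted-stream aggregation can be written with itertools.groupby from the standard library. This is a minimal alternative sketch, not the reducer the job below uses; it relies on the same precondition that the input is sorted by word:

#!/usr/bin/env python
import sys
from itertools import groupby
from operator import itemgetter

def parse(stream):
    # Yield (word, count) pairs from tab-separated mapper output
    for line in stream:
        word, _, count = line.strip().partition('\t')
        yield word, count

# groupby batches consecutive lines that share the same word
for word, group in groupby(parse(sys.stdin), key=itemgetter(0)):
    try:
        total = sum(int(count) for _, count in group)
    except ValueError:
        # Skip groups whose counts are not numbers
        continue
    print '%s\t%s' % (word, total)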
------------------------------
# basic test
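The test below invokes the scripts directly as ./mapper.py and ./reducer.py, so they must be executable first:

$ chmod +x mapper.py reducer.py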
hduser@test:~$ echo "foo foo quux labs foo bar quux" | ./mapper.py
foo 1
foo 1
quux 1
labs 1
foo 1
bar 1
quux 1
hduser@test:~$ echo "foo foo quux labs foo bar quux" | ./mapper.py | sort -k1,1 | ./reducer.py
bar 1
foo 3
labs 1
quux 2
hduser@test:~$ cat /tmp/test.txt | ./mapper.py
The 1
Project 1
Gutenberg 1
EBook 1
of 1
[...]
(you get the idea)
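The whole job can also be simulated locally against input.txt before submitting it to the cluster; the counts should match the HDFS output shown at the end:

$ cat input.txt | ./mapper.py | sort -k1,1 | ./reducer.py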
----------------------------------
$ cat word_job.sh
#!/bin/bash
# Remove the output directory first (Hadoop refuses to write to an existing one)
hdfs dfs -rmr /user/hdfs/wordcount/output
# Note: the paths and version numbers below differ from machine to machine
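# With stream.num.map.output.key.fields=2 the whole "word<TAB>1" pair is
# treated as the key; num.key.fields.for.partition=1 makes the
# KeyFieldBasedPartitioner hash only the first field, so all records for
# one word reach the same reducer.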
hadoop jar /usr/hdp/2.3.0.0-2557/hadoop-mapreduce/hadoop-streaming.jar \
-D mapred.map.tasks=5 \
-D mapred.reduce.tasks=5 \
-D mapred.job.map.capacity=5 \
-D mapred.job.reduce.capacity=5 \
-D mapred.job.name="wordcount" \
-D num.key.fields.for.partition=1 \
-D stream.num.map.output.key.fields=2 \
-D stream.non.zero.exit.is.failure=false \
-partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner \
-input "/user/hdfs/wordcount/input.txt" \
-output "/user/hdfs/wordcount/output" \
-mapper "python mapper.py" \
-reducer "python reducer.py" \
-file "mapper.py" \
-file "reducer.py"
Note:
Location of hadoop-streaming.jar on Ambari (HDP):
/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/hadoop-streaming.jar
Location of hadoop-streaming.jar on CDH:
/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/hadoop-mapreduce/hadoop-streaming.jar
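If your distribution or version differs, one generic way to locate the jar (a sketch, assuming it lives under /usr or /opt):

$ find /usr /opt -name 'hadoop-streaming*.jar' 2>/dev/null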
$ hdfs dfs -cat /user/hdfs/wordcount/output/*
acc 1
ddd 3
dddd 1
i 1
ni 1
wo 1
aaa 2
am 1
iio 1
again 1
ccc 2
gggg 1
oo 2
quux 1
see 1
ta 1
test 2
welcome 1
bbb 1
foo 2
ggg 1
pp 2
who 1
you 2
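Because the job runs five reduce tasks, the output above is sorted only within each part file. To get one globally sorted listing (optional):

$ hdfs dfs -cat /user/hdfs/wordcount/output/* | sort -k1,1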