This is adapted from the book 《Python自动化运维》 (Python Automated Operations), with a few changes for version differences.
The goal is to count how often each word appears in the file words.txt. The steps are: 1. upload words.txt to HDFS; 2. run the MapReduce job; 3. view the output.
Before running anything on Hadoop, we first test locally:
1. Local test
The file to process is words.txt. Hadoop sorts the map output automatically, so in my local test I simulate that sorting myself (with the sort command below); the actual uploaded file can have its words in any order:
hehe lala
pig hehe nihao
nihao
hehe hehe hehe
hehe pig pig
hehe
zhu nihao pig
pig nihao zhu
cat
zhu
pig pig pig pig zhu zhu zhu zhu
zhu tian tian tain
Besides writing MapReduce jobs in Java, Hadoop offers an API for other languages: Hadoop Streaming. It passes data between map and reduce through standard input and output, which in Python means reading from sys.stdin and writing to sys.stdout (via print). All other business logic is written directly in Python.
mapper.py:
#!/usr/bin/env python
import sys

# Read lines from stdin and emit one "word<TAB>1" pair per word.
for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        print '%s\t%s' % (word, 1)
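A quick check of the mapper on its own (my own example input; the output keeps the input order, one word per line):

$ echo "hehe pig hehe" | python mapper.py
hehe	1
pig	1
hehe	1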
reducer.py:
#!/usr/bin/env python
import sys

current_word = None
current_count = 0
word = None

# Streaming guarantees the input is sorted by key, so all lines
# for the same word arrive consecutively.
for line in sys.stdin:
    line = line.strip()
    word, count = line.split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        # count was not a number: skip this line
        continue
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # the key changed: emit the previous word's total
            print '%s\t%s' % (current_word, current_count)
        current_count = count
        current_word = word

# don't forget the last word
if current_word == word:
    print '%s\t%s' % (current_word, current_count)
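As a sanity check, the same counts can also be produced in a few lines of plain Python. A minimal sketch (local verification only, not part of the Streaming job; the script name check.py is my own):

#!/usr/bin/env python
# check.py: count words with collections.Counter and print them
# sorted, in the same "word<TAB>count" format as the reducer.
import sys
from collections import Counter

counts = Counter()
for line in sys.stdin:
    counts.update(line.split())
for word in sorted(counts):
    print '%s\t%s' % (word, counts[word])

Run it as python check.py < words.txt and compare against the pipeline below.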
Once both scripts are written, test them with cat words.txt | python mapper.py | sort | python reducer.py and check that the result is correct before moving on to Hadoop.
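With the sample words.txt above, the pipeline should print:

cat	1
hehe	7
lala	1
nihao	4
pig	9
tain	1
tian	2
zhu	8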
2. Hadoop test
First create a word directory to hold words.txt:
hadoop fs -mkdir /word/
Upload words.txt to HDFS:
hadoop fs -put words.txt /word/words.txt
List the uploaded file:
hadoop fs -ls /word/
Then run the job with Hadoop Streaming:
hadoop jar hadoop-streaming-2.7.1.jar -mapper ./mapper.py -reducer ./reducer.py -input /word/words.txt -output /hehe/
Here mapper.py and reducer.py are both placed in the Hadoop home directory.
hadoop-streaming-2.7.1.jar lives in share/hadoop/tools/lib/ under the Hadoop directory; I copied it to the Hadoop home directory to make it easier to run.
When the job finishes, look in /hehe/ for the results; /hehe/part-00000 is the output file.
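It can be viewed straight from HDFS:

hadoop fs -cat /hehe/part-00000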
Summary of the commands used so far:
Format HDFS: hadoop namenode -format
Start Hadoop: sbin/start-all.sh (quick and dirty; the system recommends starting the daemons one by one)
Stop Hadoop: sbin/stop-all.sh (likewise, stopping them one by one is recommended)
Leave safe mode: hadoop dfsadmin -safemode leave
fs commands, for working with files and directories:
Create a directory: hadoop fs -mkdir [-p] <paths>
-p: create parent directories along the path as needed
List a directory: hadoop fs -ls [-d] [-h] [-R] [<path> ...]
-d: directories are listed as plain files.
-h: format file sizes in a human-readable fashion (e.g. 64.0m instead of 67108864).
-R: recursively list subdirectories encountered.
Delete a file or directory: hadoop fs -rmr [-skipTrash] <args>
The official docs recommend hadoop fs -rm -r instead (rmr is deprecated).
-skipTrash bypasses the trash and deletes the files immediately.
Upload files: hadoop fs -put <localsrc> ... <dst>
View file contents: hadoop fs -cat filename and hadoop fs -text filename (-text has extra abilities, such as decoding compressed files)
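Since the whole point of the book is Python automation, these fs commands can also be driven from Python. A minimal sketch using subprocess (the hadoop_fs helper is my own, for illustration):

#!/usr/bin/env python
# Thin wrapper around the hadoop fs CLI (illustrative only).
import subprocess

def hadoop_fs(*args):
    # Run 'hadoop fs <args>' and return its stdout as a string.
    return subprocess.check_output(['hadoop', 'fs'] + list(args))

# Example: redo the upload steps from part 2.
hadoop_fs('-mkdir', '-p', '/word/')
hadoop_fs('-put', 'words.txt', '/word/words.txt')
print hadoop_fs('-ls', '/word/')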
jar command, for running jar files:
Here it is mainly used with Hadoop Streaming, i.e. hadoop jar hadoop-streaming-2.7.1.jar. For example, the following template works:
hadoop jar hadoop-streaming-2.7.1.jar \
    -input myInputDirs \
    -output myOutputDir \
    -mapper /bin/cat \
    -reducer /usr/bin/wc
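One note from my side (not in the book): if mapper.py and reducer.py are not already present on every node, Hadoop Streaming can ship them along with the job via the -file option, e.g.:

hadoop jar hadoop-streaming-2.7.1.jar \
    -file ./mapper.py -mapper ./mapper.py \
    -file ./reducer.py -reducer ./reducer.py \
    -input /word/words.txt \
    -output /hehe/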