With data volumes growing at a crazy pace, experiments and competitions nowadays have no choice but to use parallel algorithms, and Hadoop's map/reduce framework is one of the most widely used of the many parallel frameworks. Below is a summary of several ways to use Python with Hadoop:
1. Hadoop Streaming
Hadoop gives us both a computing platform and a parallel computing framework. The API provided by Hadoop Streaming lets users write the map/reduce functions in any scripting language, because it uses Unix standard streams as the interface between the program and Hadoop: any program that can read from standard input and write to standard output can implement MapReduce. A minimal sketch follows the links below.
For a concrete worked example, see this blog post: http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
A Chinese translation is available here: http://blog.c114.net/html/71/482871-63885.html
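As a minimal sketch of how this works, here is the classic streaming word count, along the lines of the tutorial linked above (the file names mapper.py and reducer.py are my own choice):

    #!/usr/bin/env python
    # mapper.py: read lines from stdin, emit one "word<TAB>1" pair per word
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print('%s\t%d' % (word, 1))

    #!/usr/bin/env python
    # reducer.py: sum the counts for each word; Hadoop delivers the mapper
    # output sorted by key, so identical words arrive on consecutive lines
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.strip().split('\t', 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print('%s\t%d' % (current_word, current_count))
            current_word, current_count = word, int(count)
    if current_word is not None:
        print('%s\t%d' % (current_word, current_count))

Because the interface is just stdin/stdout, you can test the whole pipeline locally with cat input.txt | python mapper.py | sort | python reducer.py before submitting it to the cluster with the hadoop-streaming jar (whose exact path depends on your Hadoop version).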
2. dumbo (see https://github.com/klbostee/dumbo/wiki)
--Dumbo is a project that allows you to easily write and run Hadoop programs in Python (it's named after Disney's flying circus elephant, since the logo of Hadoop is an elephant and Python was named after the BBC series "Monty Python's Flying Circus"). More generally, Dumbo can be considered to be a convenient Python API for writing MapReduce programs.
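To make the comparison concrete, the word count from Dumbo's short tutorial looks roughly like this, submitted with something like dumbo start wordcount.py -input ... -output ... -hadoop /path/to/hadoop:

    # wordcount.py: word count in dumbo's function-based API
    def mapper(key, value):
        # key is the byte offset of the line, value is the line of text
        for word in value.split():
            yield word, 1

    def reducer(key, values):
        # values is an iterator over all counts emitted for this word
        yield key, sum(values)

    if __name__ == "__main__":
        import dumbo
        dumbo.run(mapper, reducer, combiner=reducer)

Note how the framework, not your code, handles the stream parsing and key grouping that the raw streaming reducer above had to do by hand.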
3. hadoopy (see https://github.com/bwhite/hadoopy)
--hadoopy is another Streaming wrapper that is compatible with dumbo. Similarly, it focuses on typedbytes serialization of data, and directly writes typedbytes to HDFS.
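A hadoopy job reads almost the same as the dumbo one; the sketch below follows hadoopy's own word-count example (how you launch it, e.g. with helpers like hadoopy.launch_frozen, depends on your deployment):

    """Hadoopy word count."""
    import hadoopy

    def mapper(key, value):
        # values arrive already deserialized from typedbytes,
        # so mapper and reducer work with plain Python objects
        for word in value.split():
            yield word, 1

    def reducer(key, values):
        yield key, sum(values)

    if __name__ == "__main__":
        hadoopy.run(mapper, reducer, doc=__doc__)

The typedbytes focus also shows up in helpers such as hadoopy.writetb and hadoopy.readtb for writing and reading key/value sequences directly on HDFS.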
4. pydoop (see pydoop.sourceforge.net)
--In contrast to the other frameworks, pydoop wraps Hadoop Pipes, which is a C++ API into Hadoop. The project claims that it can provide a richer interface with Hadoop and HDFS because of this, as well as better performance, but this is not clear to me. However, one advantage is the ability to implement a Python Partitioner, RecordReader, and RecordWriter. All input/output must be strings.
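Since Pipes is class-based, a pydoop job subclasses Mapper and Reducer instead of writing plain functions; a rough word-count sketch against the older pydoop.pipes API looks like this (note the string-only I/O mentioned above):

    # wordcount.py: word count against pydoop's Pipes-based API
    import pydoop.pipes as pp

    class Mapper(pp.Mapper):
        def map(self, context):
            # keys and values cross the Pipes boundary as strings
            for word in context.getInputValue().split():
                context.emit(word, "1")

    class Reducer(pp.Reducer):
        def reduce(self, context):
            total = 0
            while context.nextValue():
                total += int(context.getInputValue())
            context.emit(context.getInputKey(), str(total))

    if __name__ == "__main__":
        pp.runTask(pp.Factory(Mapper, Reducer))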
Each of these ways of implementing or combining Python with Hadoop has its own pros and cons, but they share one benefit: parallel algorithms that used to take a lot of C or C++ code can now be implemented in a few short lines of Python, and together with HDFS this makes it easy to process all kinds of big data.
Reposted from http://somemory.com/myblog/?post=56