
Using Python with Hadoop

Author: 阿俊  Published: 2013-3-31 21:48 Sunday

Category: python

With data volumes growing explosively, experiments and competitions alike increasingly have to rely on parallel algorithms, and the map/reduce framework in Hadoop is one of the most widely used parallel frameworks. Below is a summary of several ways to combine Python with Hadoop:

1. Hadoop Streaming

Hadoop provides both a computing platform and a parallel-computation framework. The API offered by Hadoop Streaming lets users write map/reduce functions in any scripting language: since it uses Unix standard streams as the interface between the program and Hadoop, any program that can read records from standard input and write results to standard output can implement MapReduce.
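As a minimal sketch of this stdin/stdout contract, here is the classic word count (the same example the linked tutorial walks through), with the mapper and reducer written as plain Python functions over line iterators so the logic is easy to see:

```python
# mapper.py / reducer.py -- word count over Hadoop Streaming (sketch).
# Each phase is an ordinary script: it reads records from stdin and
# writes tab-separated key/value lines to stdout.

def mapper(lines):
    """Emit one "word<TAB>1" line per word in the input."""
    for line in lines:
        for word in line.split():
            yield "%s\t1" % word

def reducer(lines):
    """Sum the counts for each word; relies on Hadoop's shuffle phase
    having sorted the mapper output by key."""
    current, total = None, 0
    for line in lines:
        word, count = line.rsplit("\t", 1)
        if word == current:
            total += int(count)
            continue
        if current is not None:
            yield "%s\t%d" % (current, total)
        current, total = word, int(count)
    if current is not None:
        yield "%s\t%d" % (current, total)

# In a real job, each script's body pipes sys.stdin through one of the
# functions above and prints the results, and the job is submitted with
# something like:
#   hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py \
#       -input input/ -output output/
```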

For a concrete implementation example, see this blog post: http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/

A Chinese translation is available here: http://blog.c114.net/html/71/482871-63885.html

2. Dumbo (see https://github.com/klbostee/dumbo/wiki)

--Dumbo is a project that allows you to easily write and run Hadoop programs in Python (it’s named after Disney’s flying circus elephant, since the logo of Hadoop is an elephant and Python was named after the BBC series “Monty Python’s Flying Circus”). More generally, Dumbo can be considered to be a convenient Python API for writing MapReduce programs.
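In Dumbo the same word count becomes a pair of generator functions, following the examples on the Dumbo wiki. The sketch below keeps the `dumbo.run` call in a comment so the functions stand alone; the mapper receives (byte offset, line) pairs:

```python
# wordcount.py -- word count written in Dumbo style (sketch).

def mapper(key, value):
    # Dumbo calls the mapper with (byte offset, line) pairs;
    # yield a (word, count) pair per word.
    for word in value.split():
        yield word, 1

def reducer(key, values):
    # values is an iterator over every count emitted for this word.
    yield key, sum(values)

# To run under Dumbo (requires the dumbo package), the script ends with:
#   import dumbo
#   dumbo.run(mapper, reducer)
# and is submitted with something like:
#   dumbo start wordcount.py -hadoop /path/to/hadoop \
#       -input input.txt -output output
```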

3. hadoopy (see https://github.com/bwhite/hadoopy)

--hadoopy is another Streaming wrapper that is compatible with dumbo. Similarly, it focuses on typedbytes serialization of data, and directly writes typedbytes to HDFS.
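The compatibility with dumbo shows in the code: a hadoopy mapper/reducer has the same (key, value) generator shape. What hadoopy adds on top is launching jobs from Python and reading typedbytes back from HDFS; those calls are shown in comments here as they require a Hadoop cluster:

```python
# wc.py -- the same word-count logic written for hadoopy (sketch).
# The function signatures are the same as Dumbo's, which is why the
# two frameworks are largely compatible.

def mapper(key, value):
    for word in value.split():
        yield word, 1

def reducer(key, values):
    yield key, sum(values)

# With the hadoopy package installed, the script ends with:
#   if __name__ == "__main__":
#       import hadoopy
#       hadoopy.run(mapper, reducer, doc=__doc__)
# A job can then be launched from Python and its typedbytes output
# read back directly from HDFS:
#   hadoopy.launch_frozen(in_path, out_path, 'wc.py')
#   counts = dict(hadoopy.readtb(out_path))
```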

4. pydoop (see pydoop.sourceforge.net)

--In contrast to the other frameworks, pydoop wraps Hadoop Pipes, which is a C++ API into Hadoop. The project claims that they can provide a richer interface with Hadoop and HDFS because of this, as well as better performance, but this is not clear to me. However, one advantage is the ability to implement a Python Partitioner, RecordReader, and RecordWriter. All input/output must be strings.
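A Pydoop program is class-based rather than function-based, reflecting the Pipes API underneath. The sketch below assumes the `pydoop.pipes` interface; in a real job the classes would subclass `pydoop.pipes.Mapper` / `pydoop.pipes.Reducer` and the script would hand them to the Pipes runner. Note how every key and value crossing the Pipes boundary is a string:

```python
# wordcount.py -- word count sketched against the Pydoop Pipes API.
# In a real job these would subclass pydoop.pipes.Mapper / Reducer,
# and the script would end with:
#   pydoop.pipes.runTask(
#       pydoop.pipes.Factory(WordCountMapper, WordCountReducer))

class WordCountMapper(object):   # subclasses pydoop.pipes.Mapper in a real job
    def map(self, context):
        for word in context.getInputValue().split():
            context.emit(word, "1")          # values must be strings

class WordCountReducer(object):  # subclasses pydoop.pipes.Reducer in a real job
    def reduce(self, context):
        total = 0
        while context.nextValue():           # iterate this key's values
            total += int(context.getInputValue())
        context.emit(context.getInputKey(), str(total))
```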

Each of these ways of implementing or integrating with Hadoop from Python has its pros and cons, but they share one benefit: parallel algorithms that used to require a lot of C or C++ code can now be written in just a few lines of Python, and combined with HDFS this makes it easy to process all kinds of big data.
