Hadoop in Practice: Streaming (Part 9)

Hello everyone. Today I'd like to introduce aggregate, a package that ships with Hadoop.

1. Overview of aggregate
aggregate is a package provided by Hadoop for common counting and aggregation computations.
Generally speaking, in order to implement an application using the Map/Reduce model, the developer needs to implement Map and Reduce functions (and possibly a Combine function). However, for a lot of applications related to counting and statistics computing, these functions have very similar characteristics. The aggregate package implements those patterns. In particular, the package provides a generic mapper class, a reducer class and a combiner class, and a set of built-in value aggregators. It also provides a generic utility class, ValueAggregatorJob, that offers a static function that creates map/reduce jobs.
In Streaming, the aggregate package is typically used as the reducer to perform aggregation.
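
To make the semantics concrete, here is a minimal local sketch, in plain Python rather than Hadoop code, of what the aggregate reducer effectively computes for records carrying the LongValueSum prefix (the prefix handling shown illustrates the semantics, not the actual Hadoop implementation):

#!/usr/bin/env python
# Illustrative only: locally simulates LongValueSum aggregation over
# "LongValueSum:key\tvalue" lines read from stdin.
import sys
from collections import defaultdict

sums = defaultdict(int)
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t", 1)
    # Strip the function prefix the mapper attached to the key.
    if key.startswith("LongValueSum:"):
        key = key[len("LongValueSum:"):]
    sums[key] += int(value)
for key in sorted(sums):
    print("%s\t%s" % (key, sums[key]))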

2. Summary of the aggregate classes

DoubleValueSum: implements a value aggregator that sums up a sequence of double values.
LongValueMax: implements a value aggregator that maintains the maximum of a sequence of long values.
LongValueMin: implements a value aggregator that maintains the minimum of a sequence of long values.
LongValueSum: implements a value aggregator that sums up a sequence of long values.
StringValueMax: implements a value aggregator that maintains the biggest of a sequence of strings.
StringValueMin: implements a value aggregator that maintains the smallest of a sequence of strings.
UniqValueCount: implements a value aggregator that dedupes a sequence of objects.
UserDefinedValueAggregatorDescriptor: a wrapper for a user-defined value aggregator descriptor.
ValueAggregatorBaseDescriptor: implements the common functionality of the subclasses of ValueAggregatorDescriptor.
ValueAggregatorCombiner<K1 extends WritableComparable, V1 extends Writable>: the generic combiner of Aggregate.
ValueAggregatorJob: the main class for creating a map/reduce job using the Aggregate framework.
ValueAggregatorJobBase<K1 extends WritableComparable, V1 extends Writable>: an abstract class implementing common functionality of the generic mapper, reducer and combiner classes of Aggregate.
ValueAggregatorMapper<K1 extends WritableComparable, V1 extends Writable>: the generic mapper of Aggregate.
ValueAggregatorReducer<K1 extends WritableComparable, V1 extends Writable>: the generic reducer of Aggregate.
ValueHistogram: implements a value aggregator that computes the histogram of a sequence of strings.

3. Using aggregate in Streaming
Add a control prefix to each key in the mapper's output, in the form:
function:key\tvalue
For example:
LongValueSum:key\tvalue
In addition, set -reducer aggregate. The reducer then applies the named aggregator class from the aggregate package to the values of each key; for example, with the function set to LongValueSum, the values for each key are summed.
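
As a concrete example (the values shown are illustrative), if the mappers emit the lines:

LongValueSum:1963	1
LongValueSum:1963	1
LongValueSum:1964	1

then the aggregate reducer strips the prefix, sums the values for each key, and outputs:

1963	2
1964	1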

Environment: VMware 8.0 and Ubuntu 11.04

Hadoop in Practice: Streaming (Part 9) --- Using Streaming with the Aggregate package

Step 1: First create a start.sh script in the /home/tanglg1987 directory. Each time the VM starts, all files under /tmp must be deleted and the namenode reformatted. The code is as follows:
#!/bin/bash
# Wiping /tmp clears both the namenode and datanode storage
# (their default directories live under /tmp), so a fresh format is safe
# and no separate datanode format step is needed.
sudo rm -rf /tmp/*
rm -rf /home/tanglg1987/hadoop-0.20.2/logs
hadoop namenode -format
start-all.sh
# Leave safe mode before touching the namespace, then create the input dir.
hadoop dfsadmin -safemode leave
hadoop fs -mkdir input

Step 2: Make start.sh executable and start the Hadoop pseudo-distributed cluster. The code is as follows:

chmod 777 /home/tanglg1987/start.sh
./start.sh 


Step 3: Upload the local file to HDFS

Download the patent data from the NBER site at http://data.nber.org/patents/:

http://data.nber.org/patents/apat63_99.zip

hadoop fs -put /home/tanglg1987/apat63_99.txt input
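
apat63_99.txt is a CSV file. As I understand it (worth verifying against the NBER documentation), field 0 is the patent number and field 1 is GYEAR, the grant year, so passing index 1 to the mapper below counts patents per grant year. You can peek at the header before uploading:

head -n 1 /home/tanglg1987/apat63_99.txt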

Step 4: Create a Python file named AttributeCount.py

The code is as follows:

#!/usr/bin/env python
# Mapper for Hadoop Streaming: for each input line, emit
# "LongValueSum:<field>\t1" so the aggregate reducer counts
# occurrences of the chosen field's value.
import sys

index = int(sys.argv[1])  # index of the comma-separated field to count
for line in sys.stdin:
    fields = line.strip().split(",")
    print "LongValueSum:" + fields[index] + "\t" + "1"

Step 5: Create a Python file named test.py

This fixes the problem on Linux where running a Python script prints ": No such file or directory".
I suspect quite a few people have run into something similar:
a Python script written on Windows runs fine there,
but once moved to a Linux system it can only be run by explicitly putting the python interpreter in front of it on the command line.
The shebang comment at the top of the script names the interpreter's path, and chmod has granted execute permission, yet the script still cannot be run directly.
Take this script, for example:
#!/usr/bin/env python
#-*- coding=utf-8 -*-
def main():
    print('This is just a test!\r\n')
if __name__ == '__main__':
    main()
By all rights it should work, so why can't it be executed directly?
It turns out the problem lies in the line endings...
On Windows, a line break is the two characters \r\n together, while on *nix it is \n alone.
So on Linux my first line was actually interpreted as:
#!/usr/bin/env python\r
Naturally, the system has no idea what "python\r" is...
Knowing this, the fix is obvious. I wrote a script that automatically replaces the line endings:

#!/usr/bin/env python
#-*- coding=utf-8 -*-
import sys

def replace_linesep(file_name):
    if type(file_name) != str:
        raise ValueError
    new_lines = []

    # Open the file in read mode
    try:
        fobj_original = open(file_name, 'r')
    except IOError:
        print('Cannot read file %s!' % file_name)
        return False
    # Read the original script line by line, converting \r\n to \n
    print('Reading file %s' % file_name)
    line = fobj_original.readline()
    while line:
        if line[-2:] == '\r\n':
            new_lines.append(line[:-2] + '\n')
        else:
            new_lines.append(line)
        line = fobj_original.readline()
    fobj_original.close()

    # Reopen the file in write mode
    try:
        fobj_new = open(file_name, 'w')
    except IOError:
        print('Cannot write file %s!' % file_name)
        return False
    # Write the converted lines back out
    print('Writing file %s' % file_name)
    for new_line in new_lines:
        fobj_new.write(new_line)
    fobj_new.close()
    return True

def main():
    args = sys.argv
    if len(args) < 2:
        print('Please pass the file names as arguments to this script.')
        sys.exit(1)  # non-zero exit code signals an error
    else:
        file_names = args[1:]
        for file_name in file_names:
            if replace_linesep(file_name):
                print('Replace for %s succeeded!' % file_name)
            else:
                print('Replace for %s failed!' % file_name)
    sys.exit(0)  # zero exit code signals success

if __name__ == '__main__':
    main()
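
As an aside, if they are installed on your system, the standard tools dos2unix or sed can do the same conversion in one line; a typical invocation (assuming GNU sed) is:

sed -i 's/\r$//' AttributeCount.py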

Step 6: Create a shell file named replace.sh containing:

/home/tanglg1987/test/streaming/test.py *.py
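
Make it executable and run it from the directory containing the .py files, since replace.sh passes a relative *.py glob to test.py (the paths here mirror the ones used above):

chmod 777 /home/tanglg1987/test/streaming/replace.sh
cd /home/tanglg1987/test/streaming
./replace.sh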


Step 7: Write a shell script named list-4-9.sh

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-0.20.2-streaming.jar \
  -D mapred.reduce.tasks=1 \
  -input input/apat63_99.txt \
  -output output \
  -file /home/tanglg1987/test/streaming/AttributeCount.py \
  -mapper 'AttributeCount.py 1' \
  -reducer aggregate

Note that the generic option -D mapred.reduce.tasks=1 needs the leading dash and must come before the streaming-specific options.

Step 8: Make list-4-9.sh executable and run the script. The code is as follows:

chmod 777 /home/tanglg1987/list-4-9.sh
./list-4-9.sh

Step 9: Observe the job's progress in the console output.


Step 10: View the result set.
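
A quick way to inspect the output (standard HDFS commands; with mapred.reduce.tasks=1 there is a single part file):

hadoop fs -cat output/part-00000 | head

Each output line is a key (the value of the selected field) followed by its count.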

