I have a huge CSV file I would like to process using Hadoop MapReduce on Amazon EMR (Python).
The file has 7 fields; however, I only need the date and quantity fields.
"date" "receiptId" "productId" "quantity" "price" "posId" "cashierId"
First, here is my mapper.py:
import sys

def main(argv):
    line = sys.stdin.readline()
    try:
        while line:
            fields = line.split('\t')
            # If the hour (characters 11-13 of the date field) is 17-19,
            # add the quantity to the express key
            if 17 <= int(fields[0][11:13]) <= 19:
                print '%s\t%s' % ("Express", int(fields[3]))
            # Else, add the quantity to the non-express key
            else:
                print '%s\t%s' % ("Non-express", int(fields[3]))
            line = sys.stdin.readline()
    except EOFError:
        return None

if __name__ == "__main__":
    main(sys.argv)
For the reducer, I will be using the streaming command: aggregate.
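The streaming job is launched with something like the following sketch (the jar path and S3 bucket names are placeholders; adjust them for your cluster):

hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming.jar \
    -input s3://my-bucket/input/ \
    -output s3://my-bucket/output/ \
    -mapper mapper.py \
    -reducer aggregate \
    -file mapper.py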
Questions:
Is my code right? I ran it on Amazon EMR but got an empty output.
My end result should be: express, XXX and non-express, YYY. Can I have it do a division before returning the result, i.e. just XXX/YYY? Where should I put this code? In a reducer?
Also, this is a huge CSV file, so will the map phase break it up into several splits automatically, or do I need to explicitly call a FileSplit? If so, how do I do that?
Solution
Answering my own question here!
The code is wrong. If you're using the aggregate library to reduce, your output does not follow the usual key-value pair format. It requires a "prefix" on the key.
if 17 <= int(fields[0][11:13]) <= 19:
    # This is the correct way to print for the aggregate library:
    # emit everything as one string, with the aggregator name as a prefix.
    print "LongValueSum:" + "Express" + "\t" + fields[3]
The other available "prefixes" are DoubleValueSum, LongValueMax, LongValueMin, StringValueMax, StringValueMin, UniqValueCount, and ValueHistogram. For more info, see http://hadoop.apache.org/common/docs/r0.15.2/api/org/apache/hadoop/mapred/lib/aggregate/package-summary.html.
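To make the effect concrete: with the LongValueSum prefix, the aggregate reducer strips the prefix and sums every value emitted under the same key, so mapper output such as

LongValueSum:Express	5
LongValueSum:Express	2
LongValueSum:Non-express	3

comes out of the reduce step as

Express	7
Non-express	3

(the quantities here are just example values).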
Yes, if you want to do more than just the basic sum, min, max or count, you need to write your own reducer.
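For the division question, here is a minimal sketch of such a custom reducer (untested on EMR; it assumes the mapper emits the plain "Express"/"Non-express" keys from the original mapper, without the aggregate prefix, and that the job runs with a single reducer so both keys arrive in one place, e.g. -D mapred.reduce.tasks=1):

import sys

def main():
    totals = {}
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        # Each input line is "key<TAB>quantity" as emitted by the mapper
        key, value = line.split('\t', 1)
        totals[key] = totals.get(key, 0) + int(value)
    express = totals.get("Express", 0)
    non_express = totals.get("Non-express", 0)
    print '%s\t%d' % ("Express", express)
    print '%s\t%d' % ("Non-express", non_express)
    # Guard against division by zero before emitting the ratio
    if non_express:
        print '%s\t%f' % ("Ratio", float(express) / non_express)

if __name__ == "__main__":
    main()

You would then pass -reducer reducer.py -file reducer.py instead of -reducer aggregate. Note that with more than one reducer, "Express" and "Non-express" can be routed to different reducer instances, so the ratio must be computed either in a single reducer or in a follow-up step.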
As for the input-splitting question, I do not yet have the answer.