How do we estimate relative frequencies from counts?
f(B|A): "Stripes"
Computing relative frequencies with the stripes approach is straightforward.
a -> {b1: 3, b2: 12, b3: 7, b4: 1, …}
- One pass over the stripe computes the marginal (a, *)
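As a minimal single-process sketch of the stripes idea (plain Python dictionaries stand in for the mapper and reducer, and the event data is invented for illustration):

```python
from collections import Counter, defaultdict

# Hypothetical co-occurrence events (a, b); made up for illustration.
events = [("a", "b1"), ("a", "b2"), ("a", "b1"), ("c", "b1")]

# Map side: build one stripe per key a, i.e. a -> {b: count}.
stripes = defaultdict(Counter)
for a, b in events:
    stripes[a][b] += 1

# Reduce side: the marginal (a, *) is just the sum of the stripe,
# so one pass over the stripe yields all relative frequencies f(b|a).
def relative_frequencies(stripe):
    marginal = sum(stripe.values())  # this is (a, *)
    return {b: n / marginal for b, n in stripe.items()}
```

Because each stripe already contains every count for its key a, the reducer never needs to see data for a before it can normalize.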
f(B|A): "Pairs" - "Order Inversion"
- What’s the issue?
- Computing relative frequencies requires marginal counts
- But the marginal cannot be computed until you see all counts
- Buffering is a bad idea!
- Solution
- Count (a, *) locally in the mapper, then emit it so it reaches the reducer as the first record for key a
- Example
- For this to work
- Must emit an extra (a, *) for every bn in the mapper
- Must make sure all a's get sent to the same reducer (use a partitioner)
- Must make sure (a, *) comes first (define the sort order)
- Must hold state in the reducer across different key-value pairs
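The four requirements above can be sketched in one process, with `sorted()` standing in for the shuffle's sort order and a plain loop for the stateful reducer (the event data and the `"*"` sentinel are illustrative, not Hadoop API):

```python
from collections import defaultdict

# Hypothetical co-occurrence events; "*" is the marginal sentinel.
events = [("a", "b1"), ("a", "b2"), ("a", "b1")]

# Map: for each pair, emit ((a, b), 1) plus the extra ((a, "*"), 1).
emitted = []
for a, b in events:
    emitted.append(((a, b), 1))
    emitted.append(((a, "*"), 1))

# Combine/shuffle: sum counts per key. A partitioner would route every
# key sharing the same a to the same reducer.
counts = defaultdict(int)
for key, n in emitted:
    counts[key] += n

# Sort order: "*" sorts before "b1", so (a, *) reaches the reducer first.
marginal = 0  # state held in the reducer across key-value pairs
freqs = {}
for (a, b), n in sorted(counts.items()):
    if b == "*":
        marginal = n  # remember the marginal for this a
    else:
        freqs[(a, b)] = n / marginal
```

The trick is entirely in the ordering: because `(a, "*")` arrives before any `(a, bn)`, the reducer can normalize on the fly without buffering.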
Synchronization: Pairs vs. Stripes
- Pairs: turn synchronization into an ordering problem
- Sort keys into correct order of computation
- (a, *) is held in the reducer's memory
- Stripes: construct data structures that bring partial results together
- Each reducer receives all the data it needs to complete the computation
Secondary Sorting: Solutions
However, what if in addition to sorting by key, we also need to sort by value?
Example:
There are m sensors, each taking readings on a continuous basis.
Suppose we wish to reconstruct the activity at each individual sensor over time.
However, since MapReduce makes no guarantees about the ordering of values associated with the same key, the sensor readings will not likely be in temporal order.
Solution 1: “buffer and in-memory sort”
- Data format: m1 -> (t1, r80521); the sensor id m1 is the key
- Buffer values in memory, then sort.
- It's a bad idea: any in-memory buffering of data introduces a potential scalability bottleneck.
Solution 2: “value-to-key conversion”
- Data format: (m1, t1) -> (r80521); the sensor id m1 and the timestamp t1 together form a composite key
- Basic idea: move part of the value into the intermediate key to form a composite key, and let the MapReduce execution framework handle the sorting.
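A small sketch of value-to-key conversion, with `sort` playing the role of the framework's shuffle sort (the readings are invented; in Hadoop the grouping-by-sensor step would be a custom grouping comparator rather than `groupby`):

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical sensor readings: (sensor_id, timestamp, reading).
readings = [("m1", 3, "r3"), ("m2", 1, "rA"), ("m1", 1, "r1"), ("m1", 2, "r2")]

# Map: move the timestamp from the value into the key,
# forming the composite key (sensor, t).
pairs = [((m, t), r) for m, t, r in readings]

# Shuffle/sort: the framework sorts by the full composite key, which
# groups each sensor and orders its readings by time.
pairs.sort(key=itemgetter(0))

# Reduce: group on the sensor part of the key only; the values now
# arrive in temporal order, with no in-memory buffering and sorting.
by_sensor = {}
for sensor, group in groupby(pairs, key=lambda kv: kv[0][0]):
    by_sensor[sensor] = [reading for (_m, _t), reading in group]
```

The scalability win is that sorting is pushed into the framework's shuffle, which is built to sort at scale, instead of into reducer memory.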