Presentation Transcript

AAAI 2011 Tutorial

Large-Scale Data Processing with MapReduce

Jimmy Lin

University of Maryland

Sunday, August 7, 2011

These slides are available on my homepage at http://www.umiacs.umd.edu/~jimmylin/

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details

first things first
First things first…
  • About me
  • Course history
  • Audience survey
agenda
Agenda
  • Setting the stage: Why large data? Why is this different?
  • Introduction to MapReduce
  • MapReduce algorithm design
  • Text retrieval
  • Managing relational data
  • Graph algorithms
  • Beyond MapReduce
expectations
Expectations
  • Focus on “thinking at scale”
  • Deconstruction into “design patterns”
  • Basic intuitions, not fancy math
  • Mapping well-known algorithms to MapReduce
  • Not a tutorial on programming Hadoop
  • Entry point to book
setting the stage why large data

Setting the stage

Introduction to MapReduce

MapReduce algorithm design

Text retrieval

Managing relational data

Graph algorithms

Beyond MapReduce

Setting the Stage:Why large data?
how much data
How much data?

processes 20 PB a day (2008)

Wayback Machine: 3 PB + 100 TB/month (3/2009)

6.5 PB of user data + 50 TB/day (5/2009)

LHC: 15 PB a year (any day now)

36 PB of user data + 80-90 TB/day (6/2010)

640K ought to be enough for anybody.

LSST: 6-10 PB a year (~2015)

no data like more data

(Banko and Brill, ACL 2001)

(Brants et al., EMNLP 2007)

No data like more data!

s/knowledge/data/g;

How do we get here if we’re not Google?

slide9

cheap commodity clusters

+ simple, distributed programming models

= data-intensive computing for the masses!

setting the stage why is this different

Setting the stage

Introduction to MapReduce

MapReduce algorithm design

Text retrieval

Managing relational data

Graph algorithms

Beyond MapReduce

Setting the Stage:Why is this different?
parallel computing is hard
Parallel computing is hard!

Fundamental issues: scheduling, data distribution, synchronization, inter-process communication, robustness, fault tolerance, …

Different programming models: message passing vs. shared memory

Architectural issues: Flynn’s taxonomy (SIMD, MIMD, etc.), network topology, bisection bandwidth, UMA vs. NUMA, cache coherence

Different programming constructs: mutexes, condition variables, barriers, …; masters/slaves, producers/consumers, work queues, …

Common problems: livelock, deadlock, data starvation, priority inversion… (dining philosophers, sleeping barbers, cigarette smokers, …)

The reality: the programmer shoulders the burden of managing concurrency…

(I want my students developing new algorithms, not debugging race conditions)

where the rubber meets the road
Where the rubber meets the road
  • Concurrency is difficult to reason about
    • At the scale of datacenters (even across datacenters)
    • In the presence of failures
    • In terms of multiple interacting services
  • The reality:
    • Lots of one-off solutions, custom code
    • Write your own dedicated library, then program with it
    • Burden on the programmer to explicitly manage everything
slide14

I think there is a world market for about five computers.

The datacenter is the computer!

Source: NY Times (6/14/2006)

what s the point
What’s the point?
  • It’s all about the right level of abstraction
  • Hide system-level details from the developers
    • No more race conditions, lock contention, etc.
  • Separating the what from how
    • Developer specifies the computation that needs to be performed
    • Execution framework (“runtime”) handles actual execution

The datacenter is the computer!

big ideas
“Big Ideas”
  • Scale “out”, not “up”
    • Limits of SMP and large shared-memory machines
  • Move processing to the data
    • Clusters have limited bandwidth
  • Process data sequentially, avoid random access
    • Seeks are expensive, disk throughput is reasonable
  • Seamless scalability
    • From the mythical man-month to the tradable machine-hour
introduction to mapreduce

Setting the stage

Introduction to MapReduce

MapReduce algorithm design

Text retrieval

Managing relational data

Graph algorithms

Beyond MapReduce

Introduction to MapReduce
typical large data problem
Typical Large-Data Problem
  • Iterate over a large number of records
  • Extract something of interest from each
  • Shuffle and sort intermediate results
  • Aggregate intermediate results
  • Generate final output

Map

Reduce

Key idea: provide a functional abstraction for these two operations

(Dean and Ghemawat, OSDI 2004)

mapreduce
MapReduce
  • Programmers specify two functions:

map (k, v) → <k’, v’>*

reduce (k’, v’) → <k’, v’>*

    • All values with the same key are sent to the same reducer
  • The execution framework handles everything else…
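To make the abstraction concrete, here is a minimal, single-process simulation of the programming model in Python; this is an illustration only, not Hadoop code, and names such as run_mapreduce are made up for the sketch.

    # Toy, single-process simulation of the MapReduce programming model.
    from itertools import groupby
    from operator import itemgetter

    def run_mapreduce(records, mapper, reducer):
        # Map phase: apply the mapper to every input (k, v) pair.
        intermediate = []
        for k, v in records:
            intermediate.extend(mapper(k, v))
        # Shuffle and sort: group all values that share an intermediate key.
        intermediate.sort(key=itemgetter(0))
        output = []
        for key, group in groupby(intermediate, key=itemgetter(0)):
            output.extend(reducer(key, [v for _, v in group]))
        return output

    # Example: word count expressed as map and reduce functions.
    def wc_map(docid, text):
        return [(word, 1) for word in text.split()]

    def wc_reduce(word, counts):
        return [(word, sum(counts))]

    docs = [(1, "one fish two fish"), (2, "red fish blue fish")]
    print(run_mapreduce(docs, wc_map, wc_reduce))
    # [('blue', 1), ('fish', 4), ('one', 1), ('red', 1), ('two', 1)]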
slide21

[Figure: the canonical MapReduce data flow. Four mappers process input pairs (k1, v1) … (k6, v6) and emit intermediate pairs such as (a, 1), (b, 2), (c, 3), (c, 6), (a, 5), (c, 2), (b, 7), (c, 8). Shuffle and sort aggregates values by key, giving a → [1, 5], b → [2, 7], c → [2, 3, 6, 8]; three reducers then produce the final outputs (r1, s1), (r2, s2), (r3, s3).]
mapreduce1
MapReduce
  • Programmers specify two functions:

map (k, v) → <k’, v’>*

reduce (k’, v’) → <k’, v’>*

    • All values with the same key are sent to the same reducer
  • The execution framework handles everything else…

What’s “everything else”?

mapreduce runtime
MapReduce “Runtime”
  • Handles scheduling
    • Assigns workers to map and reduce tasks
  • Handles “data distribution”
    • Moves processes to data
  • Handles synchronization
    • Gathers, sorts, and shuffles intermediate data
  • Handles errors and faults
    • Detects worker failures and restarts
  • Everything happens on top of a distributed FS
mapreduce2
MapReduce
  • Programmers specify two functions:

map (k, v) → <k’, v’>*

reduce (k’, v’) → <k’, v’>*

    • All values with the same key are reduced together
  • The execution framework handles everything else…
  • Not quite…usually, programmers also specify:

partition (k’, number of partitions) → partition for k’

    • Often a simple hash of the key, e.g., hash(k’) mod n
    • Divides up key space for parallel reduce operations

combine (k’, v’) → <k’, v’>*

    • Mini-reducers that run in memory after the map phase
    • Used as an optimization to reduce network traffic
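A rough sketch of what those two optional hooks look like, continuing the toy model above (assumed code, not Hadoop's actual API):

    # Default partitioner: a simple hash of the key, modulo the number of reducers.
    def default_partition(key, num_partitions):
        return hash(key) % num_partitions

    # Combiner for word count: a "mini-reducer" applied to a single mapper's
    # output before it crosses the network. Here it happens to have the same
    # logic as the reducer, but that is not always safe (see the mean example later).
    def wc_combine(word, counts):
        return [(word, sum(counts))]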
slide25

[Figure: the complete MapReduce data flow with combiners and partitioners. Mappers emit intermediate pairs as before; each mapper's combiner locally aggregates its output (for example, c:3 and c:6 from one mapper become c:9); partitioners assign each key to a reducer; shuffle and sort then aggregates values by key, and the reducers produce (r1, s1), (r2, s2), (r3, s3).]

two more details
Two more details…
  • Barrier between map and reduce phases
    • But we can begin copying intermediate data earlier
  • Keys arrive at each reducer in sorted order
    • No enforced ordering across reducers
mapreduce can refer to
MapReduce can refer to…
  • The programming model
  • The execution framework (aka “runtime”)
  • The specific implementation

Usage is usually clear from context!

mapreduce implementations
MapReduce Implementations
  • Google has a proprietary implementation in C++
    • Bindings in Java, Python
  • Hadoop is an open-source implementation in Java
    • Original development led by Yahoo
    • Now an Apache open source project
    • Emerging as the de facto big data stack
    • Rapidly expanding software ecosystem
  • Lots of custom research implementations
    • For GPUs, cell processors, etc.
    • Includes variations of the basic programming model

Most of these slides are focused on Hadoop

slide30

[Figure: MapReduce execution overview. (1) The user program submits the job to the master, which (2) schedules map and reduce tasks onto workers. Map workers (3) read their input splits (split 0 … split 4), apply the map function, and (4) write intermediate data to local disk. Reduce workers (5) remotely read the intermediate files, sort and reduce them, and (6) write the output files.]

Adapted from (Dean and Ghemawat, OSDI 2004)

how do we get data to the workers
How do we get data to the workers?

[Figure: a traditional cluster in which compute nodes read data from shared storage (SAN or NAS) over the network.]

What’s the problem here?

distributed file system
Distributed File System
  • Don’t move data to workers… move workers to the data!
    • Store data on the local disks of nodes in the cluster
    • Start up the workers on the node that has the data local
  • A distributed file system is the answer
    • GFS (Google File System) for Google’s MapReduce
    • HDFS (Hadoop Distributed File System) for Hadoop
gfs assumptions
GFS: Assumptions
  • Commodity hardware over “exotic” hardware
    • Scale “out”, not “up”
  • High component failure rates
    • Inexpensive commodity components fail all the time
  • “Modest” number of huge files
    • Multi-gigabyte files are common, if not encouraged
  • Files are write-once, mostly appended to
    • Perhaps concurrently
  • Large streaming reads over random access
    • High sustained throughput over low latency

GFS slides adapted from material by (Ghemawat et al., SOSP 2003)

gfs design decisions
GFS: Design Decisions
  • Files stored as chunks
    • Fixed size (64MB)
  • Reliability through replication
    • Each chunk replicated across 3+ chunkservers
  • Single master to coordinate access, keep metadata
    • Simple centralized management
  • No data caching
    • Little benefit due to large datasets, streaming reads
  • Simplify the API
    • Push some of the issues onto the client (e.g., data layout)

HDFS = GFS clone (same basic ideas)

from gfs to hdfs
From GFS to HDFS
  • Terminology differences:
    • GFS master = Hadoop namenode
    • GFS chunkservers = Hadoop datanodes
  • Functional differences:
    • File appends in HDFS are relatively new
    • HDFS performance is (likely) slower

For the most part, we’ll use the Hadoop terminology…

hdfs architecture
HDFS Architecture

[Figure: HDFS architecture. An application talks to the HDFS client, which asks the namenode to resolve (file name, block id) into (block id, block location), then requests (block id, byte range) directly from the appropriate datanode and receives the block data; data never flows through the namenode. The namenode holds the file namespace (e.g., /foo/bar → block 3df2) and sends instructions to the datanodes, which report their state back. Each datanode stores blocks on its local Linux file system.]

Adapted from (Ghemawat et al., SOSP 2003)

namenode responsibilities
Namenode Responsibilities
  • Managing the file system namespace:
    • Holds file/directory structure, metadata, file-to-block mapping, access permissions, etc.
  • Coordinating file operations:
    • Directs clients to datanodes for reads and writes
    • No data is moved through the namenode
  • Maintaining overall health:
    • Periodic communication with the datanodes
    • Block re-replication and rebalancing
    • Garbage collection
putting everything together
Putting everything together…

[Figure: a complete Hadoop cluster. The namenode machine runs the namenode daemon; the job submission node runs the jobtracker. Each slave node runs a tasktracker and a datanode daemon on top of its local Linux file system.]

mapreduce algorithm design

Setting the stage

Introduction to MapReduce

MapReduce algorithm design

Text retrieval

Managing relational data

Graph algorithms

Beyond MapReduce

MapReduce Algorithm Design
mapreduce recap
MapReduce: Recap
  • Programmers must specify:

map (k, v) → <k’, v’>*

reduce (k’, v’) → <k’, v’>*

    • All values with the same key are reduced together
  • Optionally, also:

partition (k’, number of partitions) → partition for k’

    • Often a simple hash of the key, e.g., hash(k’) mod n
    • Divides up key space for parallel reduce operations

combine (k’, v’) → <k’, v’>*

    • Mini-reducers that run in memory after the map phase
    • Used as an optimization to reduce network traffic
  • The execution framework handles everything else…
slide41

[Figure: the complete data flow once more: mappers, combiners (c:3 + c:6 → c:9), partitioners, shuffle and sort (a → [1, 5], b → [2, 7], c → [2, 9, 8]), and reducers producing (r1, s1), (r2, s2), (r3, s3).]

everything else
“Everything Else”
  • The execution framework handles everything else…
    • Scheduling: assigns workers to map and reduce tasks
    • “Data distribution”: moves processes to data
    • Synchronization: gathers, sorts, and shuffles intermediate data
    • Errors and faults: detects worker failures and restarts
  • Limited control over data and execution flow
    • All algorithms must be expressed in m, r, c, p
  • You don’t know:
    • Where mappers and reducers run
    • When a mapper or reducer begins or finishes
    • Which input a particular mapper is processing
    • Which intermediate key a particular reducer is processing
tools for synchronization
Tools for Synchronization
  • Cleverly-constructed data structures
    • Bring partial results together
  • Sort order of intermediate keys
    • Control order in which reducers process keys
  • Partitioner
    • Control which reducer processes which keys
  • Preserving state in mappers and reducers
    • Capture dependencies across multiple keys and values
preserving state
Preserving State

[Figure: one mapper object and one reducer object per task, each carrying its own internal state. configure is the API initialization hook; map is called once per input key-value pair, reduce once per intermediate key; close is the API cleanup hook.]

scalable hadoop algorithms themes
Scalable Hadoop Algorithms: Themes
  • Avoid object creation
    • Inherently costly operation
    • Garbage collection
  • Avoid buffering
    • Limited heap size
    • Works for small datasets, but won’t scale!
importance of local aggregation
Importance of Local Aggregation
  • Ideal scaling characteristics:
    • Twice the data, twice the running time
    • Twice the resources, half the running time
  • Why can’t we achieve this?
    • Synchronization requires communication
    • Communication kills performance
  • Thus… avoid communication!
    • Reduce intermediate data via local aggregation
    • Combiners can help
shuffle and sort
Shuffle and Sort

[Figure: shuffle and sort in Hadoop. Each mapper writes output into a circular buffer in memory; spills to disk pass through the combiner, and spills are merged (again through the combiner) into intermediate files on disk, partitioned for the different reducers. Each reducer merges the spills it fetches from all mappers before running the reduce function.]

word count baseline
Word Count: Baseline

What’s the impact of combiners?

word count version 1
Word Count: Version 1

Are combiners still needed?

word count version 2
Word Count: Version 2

Key: preserve state acrossinput key-value pairs!

Are combiners still needed?
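The code listings for the word count versions are not in the transcript, so here is a minimal sketch of the Version 2 idea (assumed structure, not the original listing): the mapper buffers counts in a dictionary across map() calls and emits them only at cleanup.

    class InMapperCombiningWordCount:
        # Mirrors the configure / map / close lifecycle described earlier.
        def configure(self):
            self.counts = {}                      # state preserved across map() calls
        def map(self, docid, text):
            for word in text.split():
                self.counts[word] = self.counts.get(word, 0) + 1
        def close(self, emit):
            for word, total in self.counts.items():
                emit(word, total)                 # flush buffered partial counts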

design pattern for local aggregation
Design Pattern for Local Aggregation
  • “In-mapper combining”
    • Fold the functionality of the combiner into the mapper by preserving state across multiple map calls
  • Advantages
    • Speed
    • Why is this faster than actual combiners?
  • Disadvantages
    • Explicit memory management required
    • Potential for order-dependent bugs
combiner design
Combiner Design
  • Combiners and reducers share same method signature
    • Sometimes, reducers can serve as combiners
    • Often, not…
  • Remember: combiners are optional optimizations
    • Should not affect algorithm correctness
    • May be run 0, 1, or multiple times
  • Example: find average of all integers associated with the same key
computing the mean version 1
Computing the Mean: Version 1

Why can’t we use reducer as combiner?

computing the mean version 2
Computing the Mean: Version 2

Why doesn’t this work?

computing the mean version 4
Computing the Mean: Version 4

Are combiners still needed?
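Since the listings for the four versions are also absent from the transcript, here is a sketch of the combiner-safe formulation (assumed code): emit (sum, count) pairs so that partial aggregation can run zero or more times without changing the result.

    def mean_map(key, value):
        return [(key, (value, 1))]

    def mean_combine(key, pairs):                 # may run 0, 1, or many times
        s = sum(p[0] for p in pairs)
        c = sum(p[1] for p in pairs)
        return [(key, (s, c))]

    def mean_reduce(key, pairs):
        s = sum(p[0] for p in pairs)
        c = sum(p[1] for p in pairs)
        return [(key, s / c)]                     # the mean is computed only here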

count and normalize
“Count and Normalize”
  • Many algorithms reduce to estimating relative frequencies:
    • In the case of EM, pseudo-counts instead of actual counts
  • For a large class of algorithms: intuition is the same, just varying complexity in terms of bookkeeping
  • Let’s start with the intuition…
algorithm design running example
Algorithm Design: Running Example
  • Term co-occurrence matrix for a text collection
    • M = N x N matrix (N = vocabulary size)
    • Mij: number of times i and j co-occur in some context (for concreteness, let’s say context = sentence)
  • Why?
    • Distributional profiles as a way of measuring semantic distance
    • Semantic distance useful for many language processing tasks
mapreduce large counting problems
MapReduce: Large Counting Problems
  • Term co-occurrence matrix for a text collection= specific instance of a large counting problem
    • A large event space (number of terms)
    • A large number of observations (the collection itself)
    • Goal: keep track of interesting statistics about the events
  • Basic approach
    • Mappers generate partial counts
    • Reducers aggregate partial counts

How do we aggregate partial counts efficiently?

first try pairs
First Try: “Pairs”
  • Each mapper takes a sentence:
    • Generate all co-occurring term pairs
    • For all pairs, emit (a, b) → count
  • Reducers sum up counts associated with these pairs
  • Use combiners!
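A minimal sketch of the "pairs" approach (assumed code; it treats co-occurrence as ordered pairs within a sentence):

    from itertools import permutations

    def pairs_map(docid, sentence):
        terms = set(sentence.split())
        # Emit every ordered co-occurring pair with a count of 1.
        return [((a, b), 1) for a, b in permutations(terms, 2)]

    def pairs_reduce(pair, counts):
        return [(pair, sum(counts))]   # combiners can use the same logic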
pairs analysis
“Pairs” Analysis
  • Advantages
    • Easy to implement, easy to understand
  • Disadvantages
    • Lots of pairs to sort and shuffle around (upper bound?)
    • Not many opportunities for combiners to work
another try stripes
Another Try: “Stripes”
  • Idea: group together pairs into an associative array
  • Each mapper takes a sentence:
    • Generate all co-occurring term pairs
    • For each term, emit a → { b: countb, c: countc, d: countd … }
  • Reducers perform element-wise sum of associative arrays

Pairs: (a, b) → 1; (a, c) → 2; (a, d) → 5; (a, e) → 3; (a, f) → 2
…become a single stripe: a → { b: 1, c: 2, d: 5, e: 3, f: 2 }

Element-wise sum in the reducer:
a → { b: 1, d: 5, e: 3 } + a → { b: 1, c: 2, d: 2, f: 2 } = a → { b: 2, c: 2, d: 7, e: 3, f: 2 }

Key: a cleverly-constructed data structure brings together partial results
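A matching sketch of the "stripes" approach (assumed code): each mapper emits one associative array per term, and the reducer performs the element-wise sum shown above.

    from collections import Counter

    def stripes_map(docid, sentence):
        terms = sentence.split()
        out = []
        for a in set(terms):
            # One stripe per term: counts of every other co-occurring term.
            stripe = Counter(b for b in terms if b != a)
            out.append((a, stripe))
        return out

    def stripes_reduce(term, stripes):
        total = Counter()
        for stripe in stripes:
            total.update(stripe)          # element-wise sum of stripes
        return [(term, dict(total))]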

stripes analysis
“Stripes” Analysis
  • Advantages
    • Far less sorting and shuffling of key-value pairs
    • Can make better use of combiners
  • Disadvantages
    • More difficult to implement
    • Underlying object more heavyweight
    • Fundamental limitation in terms of size of event space
slide66

Cluster size: 38 cores

Data Source: Associated Press Worldstream (APW) of the English Gigaword Corpus (v3), which contains 2.27 million documents (1.8 GB compressed, 5.7 GB uncompressed)

relative frequencies
Relative Frequencies
  • How do we estimate relative frequencies from counts?
  • Why do we want to do this?
  • How do we do this with MapReduce?
f b a stripes
f(B|A): “Stripes”
  • Easy!
    • One pass to compute (a, *)
    • Another pass to directly compute f(B|A)

a → {b1:3, b2 :12, b3 :7, b4 :1, … }

f b a pairs
f(B|A): “Pairs”
  • For this to work:
    • Must emit extra (a, *) for every bn in mapper
    • Must make sure all a’s get sent to same reducer (use partitioner)
    • Must make sure (a, *) comes first (define sort order)
    • Must hold state in reducer across different key-value pairs

(a, *) → 32

Reducer holds this value in memory

(a, b1) → 3

(a, b2) → 12

(a, b3) → 7

(a, b4) → 1

(a, b1) → 3 / 32

(a, b2) → 12 / 32

(a, b3) → 7 / 32

(a, b4) → 1 / 32

order inversion
“Order Inversion”
  • Common design pattern
    • Computing relative frequencies requires marginal counts
    • But marginal cannot be computed until you see all counts
    • Buffering is a bad idea!
    • Trick: getting the marginal counts to arrive at the reducer before the joint counts
  • Optimizations
    • Apply in-memory combining pattern to accumulate marginal counts
    • Should we apply combiners?
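A sketch of the reducer side of the "pairs" relative-frequency computation with order inversion (assumed code): the special key (a, '*') carries the marginal and, thanks to the custom sort order and partitioner, arrives before any (a, b) key.

    class RelativeFrequencyReducer:
        def configure(self):
            self.marginal = 0
        def reduce(self, key, counts, emit):
            a, b = key
            if b == '*':
                self.marginal = sum(counts)                # marginal count for a
            else:
                emit((a, b), sum(counts) / self.marginal)  # f(b|a) = count(a,b) / count(a)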
synchronization pairs vs stripes
Synchronization: Pairs vs. Stripes
  • Approach 1: turn synchronization into an ordering problem
    • Sort keys into correct order of computation
    • Partition key space so that each reducer gets the appropriate set of partial results
    • Hold state in reducer across multiple key-value pairs to perform computation
    • Illustrated by the “pairs” approach
  • Approach 2: construct data structures that bring partial results together
    • Each reducer receives all the data it needs to complete the computation
    • Illustrated by the “stripes” approach
secondary sorting
Secondary Sorting
  • MapReduce sorts input to reducers by key
    • Values may be arbitrarily ordered
  • What if want to sort value also?
    • E.g., k → (v1, r), (v3, r), (v4, r), (v8, r)…
secondary sorting solutions
Secondary Sorting: Solutions
  • Solution 1:
    • Buffer values in memory, then sort
    • Why is this a bad idea?
  • Solution 2:
    • “Value-to-key conversion” design pattern: form composite intermediate key, (k, v1)
    • Let execution framework do the sorting
    • Preserve state across multiple key-value pairs to handle processing
    • Anything else we need to do?
recap tools for synchronization
Recap: Tools for Synchronization
  • Cleverly-constructed data structures
    • Bring data together
  • Sort order of intermediate keys
    • Control order in which reducers process keys
  • Partitioner
    • Control which reducer processes which keys
  • Preserving state in mappers and reducers
    • Capture dependencies across multiple keys and values
issues and tradeoffs
Issues and Tradeoffs
  • Number of key-value pairs
    • Object creation overhead
    • Time for sorting and shuffling pairs across the network
  • Size of each key-value pair
    • De/serialization overhead
  • Local aggregation
    • Opportunities to perform local aggregation varies
    • Combiners make a big difference
    • Combiners vs. in-mapper combining
    • RAM vs. disk vs. network
text retrieval

Setting the stage

Introduction to MapReduce

MapReduce algorithm design

Text retrieval

Managing relational data

Graph algorithms

Beyond MapReduce

Text Retrieval
abstract ir architecture
Abstract IR Architecture

Documents

Query

document acquisition(e.g., web crawling)

online

offline

Representation

Function

Representation

Function

Query Representation

Document Representation

Index

Comparison

Function

Hits

bag of words
“Bag of Words”
  • Term weights are computed as functions of:
    • Term frequency
    • Collection frequency
    • Document frequency
    • Average document length
  • Well-known weighting functions
    • TF.IDF
    • BM25
    • Dirichlet scores (LM framework)
  • Similarity boils down to inner products of feature vectors:
inverted index
Inverted Index

[Figure: a toy inverted index over four documents: Doc 1 "one fish, two fish", Doc 2 "red fish, blue fish", Doc 3 "cat in the hat", Doc 4 "green eggs and ham". Each term maps to a postings list of (docno, tf) pairs, e.g., fish → (1, 2), (2, 2); blue → (2, 1); cat → (3, 1); egg → (4, 1); and so on, with each term's df equal to the length of its postings list.]

inverted index positional information
Inverted Index: Positional Information

[Figure: the same toy inverted index augmented with positional information. Each posting becomes (docno, tf, [positions]), e.g., fish → (1, 2, [2,4]), (2, 2, [2,4]); blue → (2, 1, [3]); and so on for the other terms.]

retrieval in a nutshell
Retrieval in a Nutshell
  • Look up postings lists corresponding to query terms
  • Traverse postings for each query term
  • Store partial query-document scores in accumulators
  • Select top k results to return
retrieval document at a time
Retrieval: Document-at-a-Time
  • Evaluate documents one at a time (score all query terms)
  • Tradeoffs
    • Small memory footprint (good)
    • Must read through all postings (bad), but skipping possible
    • More disk seeks (bad), but blocking possible

[Figure: document-at-a-time scoring over two postings lists: blue → (9, 2), (21, 1), (35, 1) and fish → (1, 2), (9, 1), (21, 3), (34, 1), (35, 2), (80, 3). For each document, ask: is its score in the top k? Yes: insert the document score into the accumulators (e.g., a priority queue) and extract-min if the queue grows too large. No: do nothing.]

retrieval query at a time
Retrieval: Query-at-a-Time
  • Evaluate documents one query term at a time
    • Usually, starting from most rare term (often with tf-sorted postings)
  • Tradeoffs
    • Early termination heuristics (good)
    • Large memory footprint (bad), but filtering heuristics possible

[Figure: query-at-a-time scoring. The postings for blue, (9, 2), (21, 1), (35, 1), are processed first, accumulating Score{q=blue}(doc n) = s in the accumulators (e.g., a hash table); then the postings for fish, (1, 2), (9, 1), (21, 3), (34, 1), (35, 2), (80, 3), update the same accumulators.]

mapreduce it
MapReduce it?
  • The indexing problem
    • Scalability is critical
    • Must be relatively fast, but need not be real time
    • Fundamentally a batch operation
    • Incremental updates may or may not be important
    • For the web, crawling is a challenge in itself
  • The retrieval problem
    • Must have sub-second response time
    • For the web, only need relatively few results

Perfect for MapReduce!

Uh… not so good…

indexing performance analysis
Indexing: Performance Analysis
  • Fundamentally, a large sorting problem
    • Terms usually fit in memory
    • Postings usually don’t
  • How is it done on a single machine?
  • How can it be done with MapReduce?
  • First, let’s characterize the problem size:
    • Size of vocabulary
    • Size of postings
vocabulary size heaps law
Vocabulary Size: Heaps’ Law
  • Heaps’ Law: M = kT^b (linear in log-log space)
  • Vocabulary size grows unbounded!

M is the vocabulary size

T is the collection size (number of documents)

k and b are constants

Typically, k is between 30 and 100, b is between 0.4 and 0.6

heaps law for rcv1
Heaps’ Law for RCV1

k = 44

b = 0.49

For the first 1,000,020 tokens: predicted vocabulary size = 38,323 terms; actual = 38,365 terms

Reuters-RCV1 collection: 806,791 newswire documents (Aug 20, 1996-August 19, 1997)

Manning, Raghavan, Schütze, Introduction to Information Retrieval (2008)

postings size zipf s law
Postings Size: Zipf’s Law
  • Zipf’s Law: cf_i = c / i, i.e., (also) linear in log-log space
    • Specific case of Power Law distributions
  • In other words:
    • A few elements occur very frequently
    • Many elements occur very infrequently

cf_i is the collection frequency of the i-th most common term

c is a constant

zipf s law for rcv1
Zipf’s Law for RCV1

Fit isn’t that good… but good enough!

Reuters-RCV1 collection: 806,791 newswire documents (Aug 20, 1996-August 19, 1997)

Manning, Raghavan, Schütze, Introduction to Information Retrieval (2008)

mapreduce index construction
MapReduce: Index Construction
  • Map over all documents
    • Emit term as key, (docno, tf) as value
    • Emit other information as necessary (e.g., term position)
  • Sort/shuffle: group postings by term
  • Reduce
    • Gather and sort the postings (e.g., by docno or tf)
    • Write postings to disk
  • MapReduce does all the heavy lifting!
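A minimal sketch of this baseline indexer (assumed code, not the book's listing):

    from collections import Counter

    def index_map(docno, text):
        tf = Counter(text.split())
        # Emit term -> (docno, term frequency); positions could be added here too.
        return [(term, (docno, count)) for term, count in tf.items()]

    def index_reduce(term, postings):
        # Buffering and sorting all postings in memory is exactly the
        # scalability bottleneck discussed a few slides later.
        return [(term, sorted(postings))]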
inverted indexing with mapreduce
Inverted Indexing with MapReduce

[Figure: inverted indexing with MapReduce on the toy collection. Map: Doc 1 "one fish, two fish" emits one → (1, 1), two → (1, 1), fish → (1, 2); Doc 2 "red fish, blue fish" emits red → (2, 1), blue → (2, 1), fish → (2, 2); Doc 3 "cat in the hat" emits cat → (3, 1), hat → (3, 1). Shuffle and sort aggregates values by key. Reduce writes the postings lists: fish → (1, 2), (2, 2); blue → (2, 1); cat → (3, 1); hat → (3, 1); one → (1, 1); two → (1, 1); red → (2, 1).]

positional indexes
Positional Indexes

[Figure: the same indexing job with positional information. Map now emits postings such as one → (1, 1, [1]), two → (1, 1, [3]), fish → (1, 2, [2,4]) for Doc 1, and similarly for Docs 2 and 3; after shuffle and sort, reducers write postings such as fish → (1, 2, [2,4]), (2, 2, [2,4]).]

inverted indexing pseudo code1
Inverted Indexing: Pseudo-Code

What’s the problem?

scalability bottleneck
Scalability Bottleneck
  • Initial implementation: terms as keys, postings as values
    • Reducers must buffer all postings associated with key (to sort)
    • What if we run out of memory to buffer postings?
  • Uh oh!
another try
Another Try…

[Figure: reorganizing the intermediate data. Before: a single key fish with values (1, 2, [2,4]), (34, 1, [23]), (21, 3, [1,8,22]), (35, 2, [8,41]), (80, 3, [2,9,76]), (9, 1, [9]) arriving in arbitrary order. After: composite keys (fish, 1), (fish, 9), (fish, 21), (fish, 34), (fish, 35), (fish, 80) with values [2,4], [9], [1,8,22], [23], [8,41], [2,9,76]; the postings now arrive at the reducer already sorted by docno.]

How is this different?

  • Let the framework do the sorting
  • Term frequency implicitly stored
  • Directly write compressed postings

Where have we seen this before?
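A sketch of this variant (assumed code): push the docno into the key so the framework sorts postings for us, and use a custom partitioner so that all keys for a term still reach the same reducer.

    from collections import Counter

    def index_map_v2(docno, text):
        tf = Counter(text.split())
        # Composite key (term, docno): "value-to-key conversion".
        return [((term, docno), count) for term, count in tf.items()]

    def index_partition(key, num_reducers):
        term, _docno = key
        return hash(term) % num_reducers   # partition on the term only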

postings encoding
Postings Encoding

Conceptually: fish → (1, 2), (9, 1), (21, 3), (34, 1), (35, 2), (80, 3)

In Practice:

  • Don’t encode docnos, encode gaps (or d-gaps): fish → (1, 2), (8, 1), (12, 3), (13, 1), (1, 2), (45, 3)
  • But it’s not obvious that this saves space…
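A sketch of d-gap encoding (assumed code); applied to the postings above, it produces exactly the gaps shown.

    def to_gaps(postings):
        # postings must be sorted by docno; store differences between successive docnos.
        gaps, prev = [], 0
        for docno, tf in postings:
            gaps.append((docno - prev, tf))
            prev = docno
        return gaps

    # to_gaps([(1, 2), (9, 1), (21, 3), (34, 1), (35, 2), (80, 3)])
    # -> [(1, 2), (8, 1), (12, 3), (13, 1), (1, 2), (45, 3)]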

overview of index compression
Overview of Index Compression
  • Byte-aligned vs. bit-aligned
  • Non-parameterized bit-aligned
    • Unary codes
    • γ (gamma) codes
    • δ (delta) codes
  • Parameterized bit-aligned
    • Golomb codes (local Bernoulli model)
  • Block-based methods
    • Simple-9
    • PForDelta

Want more detail? Start with Managing Gigabytes by Witten, Moffat, and Bell!

index compression performance
Index Compression: Performance

[Table: comparison of index size, in bits per pointer, on the Bible and TREC collections for the coding schemes above; Golomb coding is one common approach.]

Bible: King James version of the Bible; 31,101 verses (4.3 MB)

TREC: TREC disks 1+2; 741,856 docs (2070 MB)

Issue: For Golomb compression, optimal b ~ 0.69 (N/df)

Which means different b for every term!

Witten, Moffat, Bell, Managing Gigabytes (1999)

chicken and egg
Chicken and Egg?

[Figure: the (fish, docno) → [positions] pairs, (fish, 1) → [2,4], (fish, 9) → [9], (fish, 21) → [1,8,22], (fish, 34) → [23], (fish, 35) → [8,41], (fish, 80) → [2,9,76], are ready to be written directly to disk as compressed postings. But wait! How do we set the Golomb parameter b? Optimal b ~ 0.69 (N/df), so we need the df to set b; but we don’t know the df until we’ve seen all the postings!]

Sound familiar?

getting the df
Getting the df
  • In the mapper:
    • Emit “special” key-value pairs to keep track of df
  • In the reducer:
    • Make sure “special” key-value pairs come first: process them to determine df
  • Remember: proper partitioning!
getting the df modified mapper
Getting the df: Modified Mapper

[Figure: modified mapper. For the input document Doc 1, "one fish, two fish", emit the normal key-value pairs (fish, 1) → [2,4], (one, 1) → [1], (two, 1) → [3], plus "special" key-value pairs for fish, one, and two, each carrying [1], to keep track of df.]

getting the df modified reducer
Getting the df: Modified Reducer

[Figure: modified reducer. For key fish, the "special" key-value pairs arrive first, e.g., [63], [82], [27]; summing their contributions gives the df, from which the Golomb parameter b is computed. Then the regular postings (fish, 1) → [2,4], (fish, 9) → [9], (fish, 21) → [1,8,22], … are written out as compressed postings. Important: properly define the sort order to make sure the "special" key-value pairs come first!]

Where have we seen this before?

mapreduce it1
MapReduce it?
  • The indexing problem
    • Scalability is paramount
    • Must be relatively fast, but need not be real time
    • Fundamentally a batch operation
    • Incremental updates may or may not be important
    • For the web, crawling is a challenge in itself
  • The retrieval problem
    • Must have sub-second response time
    • For the web, only need relatively few results
retrieval with mapreduce
Retrieval with MapReduce?
  • MapReduce is fundamentally batch-oriented
    • Optimized for throughput, not latency
    • Startup of mappers and reducers is expensive
  • MapReduce is not suitable for real-time queries!
    • Use separate infrastructure for retrieval…
important ideas
Important Ideas
  • Partitioning (for scalability)
  • Replication (for redundancy)
  • Caching (for speed)
  • Routing (for load balancing)

The rest is just details!

term vs document partitioning
Term vs. Document Partitioning

[Figure: a term-document matrix can be split two ways. Term partitioning slices it into T1, T2, T3: each server holds the complete postings for a subset of terms over all documents D. Document partitioning slices it into D1, D2, D3: each server holds a full (smaller) index over all terms T for a subset of documents.]

typical search architecture
Typical Search Architecture

[Figure: a typical search architecture: brokers route queries to the index partitions, and each partition is replicated.]

managing relational data

Setting the stage

Introduction to MapReduce

MapReduce algorithm design

Text retrieval

Managing relational data

Graph algorithms

Beyond MapReduce

Managing Relational Data
managing relational data1
Managing Relational Data
  • In the “good old days”, organizations used relational databases to manage big data
  • Then along came Hadoop…
  • Where does MapReduce fit in?

BTW, Hadoop is “hot” in the SIGMOD community…

relational databases vs mapreduce
Relational Databases vs. MapReduce
  • Relational databases:
    • Multipurpose: analysis and transactions; batch and interactive
    • Data integrity via ACID transactions
    • Lots of tools in software ecosystem (for ingesting, reporting, etc.)
    • Supports SQL (and SQL integration, e.g., JDBC)
    • Automatic SQL query optimization
  • MapReduce (Hadoop):
    • Designed for large clusters, fault tolerant
    • Data is accessed in “native format”
    • Supports many query languages
    • Programmers retain control over performance
    • Open source

Source: O’Reilly Blog post by Joseph Hellerstein (11/19/2008)

database workloads
Database Workloads
  • OLTP (online transaction processing)
    • Typical applications: e-commerce, banking, airline reservations
    • User facing: real-time, low latency, highly-concurrent
    • Tasks: relatively small set of “standard” transactional queries
    • Data access pattern: random reads, updates, writes (involving relatively small amounts of data)
  • OLAP (online analytical processing)
    • Typical applications: business intelligence, data mining
    • Back-end processing: batch workloads, less concurrency
    • Tasks: complex analytical queries, often ad hoc
    • Data access pattern: table scans, large amounts of data involved per query
one database or two
One Database or Two?
  • Downsides of co-existing OLTP and OLAP workloads
    • Poor memory management
    • Conflicting data access patterns
    • Variable latency
  • Solution: separate databases
    • User-facing OLTP database for high-volume transactions
    • Data warehouse for OLAP workloads
    • How do we connect the two?
oltp olap architecture
OLTP/OLAP Architecture

[Figure: an OLTP database feeding an OLAP data warehouse through ETL (Extract, Transform, and Load).]

oltp olap integration
OLTP/OLAP Integration
  • OLTP database for user-facing transactions
    • Retain records of all activity
    • Periodic ETL (e.g., nightly)
  • Extract-Transform-Load (ETL)
    • Extract records from source
    • Transform: clean data, check integrity, aggregate, etc.
    • Load into OLAP database
  • OLAP database for data warehousing
    • Business intelligence: reporting, ad hoc queries, data mining, etc.
    • Feedback to improve OLTP services
business intelligence
Business Intelligence
  • Premise: more data leads to better business decisions
    • Periodic reporting as well as ad hoc queries
    • Analysts, not programmers (importance of tools and dashboards)
  • Examples:
    • Slicing-and-dicing activity by different dimensions to better understand the marketplace
    • Analyzing log data to improve OLTP experience
    • Analyzing log data to better optimize ad placement
    • Analyzing purchasing trends for better supply-chain management
    • Mining for correlations between otherwise unrelated activities
oltp olap architecture hadoop
OLTP/OLAP Architecture: Hadoop?

[Figure: the same OLTP → ETL → OLAP pipeline, with two questions: does Hadoop belong here, in the ETL stage, or there, in place of the OLAP warehouse?]

oltp olap hadoop architecture
OLTP/OLAP/Hadoop Architecture

[Figure: OLTP database → Hadoop (performing ETL: Extract, Transform, and Load) → OLAP data warehouse.]

Why does this make sense?

etl bottleneck
ETL Bottleneck
  • Reporting is often a nightly task:
    • ETL is often slow: why?
    • What happens if processing 24 hours of data takes longer than 24 hours?
  • Hadoop is perfect:
    • Most likely, you already have some data warehousing solution
    • Ingest is limited by speed of HDFS
    • Scales out with more nodes
    • Massively parallel
    • Ability to use any processing tool
    • Much cheaper than parallel databases
    • ETL is a batch process anyway!
working scenario
Working Scenario
  • Two tables:
    • User demographics (gender, age, income, etc.)
    • User page visits (URL, time spent, etc.)
  • Analyses we might want to perform:
    • Statistics on demographic characteristics
    • Statistics on page visits
    • Statistics on page visits by URL
    • Statistics on page visits by demographic characteristic

How to perform common relational operations in MapReduce…

Except, don’t! (later)

relational algebra
Relational Algebra
  • Primitives
    • Projection (π)
    • Selection (σ)
    • Cartesian product (×)
    • Set union (∪)
    • Set difference (−)
    • Rename (ρ)
  • Other operations
    • Join (⋈)
    • Group by… aggregation
projection
Projection
  • [Figure: projection keeps every tuple R1 … R5 but retains only the selected attributes, mapping each tuple to a narrower version of itself.]
projection in mapreduce
Projection in MapReduce
  • Easy!
    • Map over tuples, emit new tuples with appropriate attributes
    • No reducers, unless for regrouping or resorting tuples
    • Alternatively: perform in reducer, after some other processing
  • Basically limited by HDFS streaming speeds
    • Speed of encoding/decoding tuples becomes important
    • Relational databases take advantage of compression
    • Semistructured data? No problem!
selection
Selection
  • [Figure: selection keeps only the tuples that satisfy the predicate; of R1 … R5, only R1 and R3 survive.]
selection in mapreduce
Selection in MapReduce
  • Easy!
    • Map over tuples, emit only tuples that meet criteria
    • No reducers, unless for regrouping or resorting tuples
    • Alternatively: perform in reducer, after some other processing
  • Basically limited by HDFS streaming speeds
    • Speed of encoding/decoding tuples becomes important
    • Relational databases take advantage of compression
    • Semistructured data? No problem!
group by aggregation
Group by… Aggregation
  • Example: What is the average time spent per URL?
  • In SQL:
    • SELECT url, AVG(time) FROM visits GROUP BY url
  • In MapReduce:
    • Map over tuples, emit time, keyed by url
    • Framework automatically groups values by keys
    • Compute average in reducer
    • Optimize with combiners
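A sketch of that MapReduce job (assumed code; the visit tuple layout is made up for illustration):

    def visits_map(_, visit):
        user, url, time_spent = visit               # assumed tuple layout
        return [(url, time_spent)]

    def visits_reduce(url, times):
        return [(url, sum(times) / len(times))]     # AVG(time) per url
    # A combiner would need to emit (sum, count) pairs, as in the mean example earlier.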
relational joins
Relational Joins
  • [Figure: a join pairs each tuple R1 … R4 with the tuple(s) S1 … S4 that share its join key.]
types of relationships
Types of Relationships

One-to-Many

One-to-One

Many-to-Many

join algorithms in mapreduce
Join Algorithms in MapReduce
  • Reduce-side join
  • Map-side join
  • In-memory join
    • Striped variant
    • Memcached variant
reduce side join
Reduce-side Join
  • Basic idea: group by join key
    • Map over both sets of tuples
    • Emit tuple as value with join key as the intermediate key
    • Execution framework brings together tuples sharing the same key
    • Perform actual join in reducer
    • Similar to a “sort-merge join” in database terminology
  • Two variants
    • 1-to-1 joins
    • 1-to-many and many-to-many joins
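A sketch of the reduce-side join (assumed code; tuples are assumed to carry their join key in the first field):

    def join_map_r(_, r_tuple):
        return [(r_tuple[0], ('R', r_tuple))]       # tag tuples with their relation

    def join_map_s(_, s_tuple):
        return [(s_tuple[0], ('S', s_tuple))]

    def join_reduce(join_key, tagged):
        r_side = [t for tag, t in tagged if tag == 'R']
        s_side = [t for tag, t in tagged if tag == 'S']
        # Cross the two sides; for 1-to-many and many-to-many joins this is
        # where buffering in memory can become a problem.
        return [(join_key, (r, s)) for r in r_side for s in s_side]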
reduce side join 1 to 1
Reduce-side Join: 1-to-1
  • [Figure: 1-to-1 reduce-side join. Map emits tuples R1, R4, S2, S3, each keyed by its join key. Reduce: tuples with the same key arrive together; one key brings together R1 and S2, another brings together R4 and S3.]

Note: no guarantee if R is going to come first or S

reduce side join 1 to many
Reduce-side Join: 1-to-many
  • Map
  • R1
  • S2
  • S3
  • S9
  • keys
  • values
  • R1
  • S2
  • S3
  • S9
  • Reduce
  • keys
  • values
  • R1
  • S2
  • S3

What’s the problem?

reduce side join v to k conversion
Reduce-side Join: V-to-K Conversion
  • In the reducer, with value-to-key conversion: when a new key’s R tuple (R1) is encountered, hold it in memory and cross it with the S records that follow (S2, S3, S9); when the next key’s R4 arrives, do the same with S3 and S7.
reduce side join many to many
Reduce-side Join: many-to-many
  • In the reducer: all the R tuples for a key (R1, R5, R8) must be held in memory, then crossed with the S records that follow (S2, S3, S9).

What’s the problem?

map side join basic idea
Map-side Join: Basic Idea

Assume two datasets are sorted by the join key:

  • R1
  • R2
  • R3
  • R4
  • S1
  • S2
  • S3
  • S4

A sequential scan through both datasets to join(called a “merge join” in database terminology)

map side join parallel scans
Map-side Join: Parallel Scans
  • If datasets are sorted by join key, join can be accomplished by a scan over both datasets
  • How can we accomplish this in parallel?
    • Partition and sort both datasets in the same manner
  • In MapReduce:
    • Map over one dataset, read from other corresponding partition
    • No reducers necessary (unless to repartition or resort)
  • Consistently partitioned datasets: realistic to expect?
in memory join
In-Memory Join
  • Basic idea: load one dataset into memory, stream over other dataset
    • Works if R << S and R fits into memory
    • Called a “hash join” in database terminology
  • MapReduce implementation
    • Distribute R to all nodes
    • Map over S, each mapper loads R in memory, hashed by join key
    • For every tuple in S, look up join key in R
    • No reducers, unless for regrouping or resorting tuples
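A sketch of the map-side lookup (assumed code): the smaller relation R has already been distributed to every node and loaded into a hash table keyed by the join key.

    R_BY_KEY = {}   # populated once per mapper (e.g., in configure) from the local copy of R

    def hash_join_map(_, s_tuple):
        r_tuple = R_BY_KEY.get(s_tuple[0])          # look up S's join key in R
        if r_tuple is None:
            return []                               # no match: emit nothing
        return [(s_tuple[0], (r_tuple, s_tuple))]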
in memory join variants
In-Memory Join: Variants
  • Striped variant:
    • R too big to fit into memory?
    • Divide R into R1, R2, R3, … s.t. each Rn fits into memory
    • For each n, perform the in-memory join Rn ⋈ S
    • Take the union of all join results
  • Memcached join:
    • Load R into memcached
    • Replace in-memory hash lookup with memcached lookup
which join to use
Which join to use?
  • In-memory join > map-side join > reduce-side join
    • Why?
  • Limitations of each?
    • In-memory join: memory
    • Map-side join: sort order and partitioning
    • Reduce-side join: general purpose
key features in databases
Key Features in Databases
  • Common optimizations in relational databases
    • Reducing the amount of data to read
    • Reducing the amount of tuples to decode
    • Data placement
    • Query planning and cost estimation
  • Same ideas can be applied to MapReduce
    • For example, column stores in Google Dremel
    • A few commercialized products
    • Many research prototypes
one size does not fit all
One size does not fit all…
  • Databases when:
    • You know what the question is: query optimizers work well
    • Well-specified schema, clean data
  • MapReduce when:
    • You don’t necessarily know what the question is: go brute force
    • Exploratory data analysis
    • Semi-structured, noisy, diverse data
    • ETL is the insight-generation process
graph algorithms

Setting the stage

Introduction to MapReduce

MapReduce algorithm design

Text retrieval

Managing relational data

Graph algorithms

Beyond MapReduce

Graph Algorithms
what s a graph
What’s a graph?
  • G = (V,E), where
    • V represents the set of vertices (nodes)
    • E represents the set of edges (links)
    • Both vertices and edges may contain additional information
  • Different types of graphs:
    • Directed vs. undirected edges
    • Presence or absence of cycles
  • Graphs are everywhere:
    • Hyperlink structure of the Web
    • Physical structure of computers on the Internet
    • Interstate highway system
    • Social networks
some graph problems
Some Graph Problems
  • Finding shortest paths
    • Routing Internet traffic and UPS trucks
  • Finding minimum spanning trees
    • Telco laying down fiber
  • Finding Max Flow
    • Airline scheduling
  • Identify “special” nodes and communities
    • Breaking up terrorist cells, spread of avian flu
  • Bipartite matching
    • Monster.com, Match.com
  • And of course... PageRank
graphs and mapreduce
Graphs and MapReduce
  • Graph algorithms typically involve:
    • Performing computations at each node: based on node features, edge features, and local link structure
    • Propagating computations: “traversing” the graph
  • Key questions:
    • How do you represent graph data in MapReduce?
    • How do you traverse a graph in MapReduce?
representing graphs
Representing Graphs
  • G = (V, E)
  • Two common representations
    • Adjacency matrix
    • Adjacency list
adjacency matrices
Adjacency Matrices

Represent a graph as an n x n square matrix M

  • n = |V|
  • Mij = 1 means a link from node i to j

[Figure: an example directed graph on nodes 1, 2, 3, 4; its adjacency matrix has Mij = 1 exactly where the adjacency list two slides later has an entry.]

adjacency matrices critique
Adjacency Matrices: Critique
  • Advantages:
    • Amenable to mathematical manipulation
    • Iteration over rows and columns corresponds to computations on outlinks and inlinks
  • Disadvantages:
    • Lots of zeros for sparse matrices
    • Lots of wasted space
adjacency lists
Adjacency Lists

Take adjacency matrices… and throw away all the zeros

1: 2, 4

2: 1, 3, 4

3: 1

4: 1, 3

adjacency lists critique
Adjacency Lists: Critique
  • Advantages:
    • Much more compact representation
    • Easy to compute over outlinks
  • Disadvantages:
    • Much more difficult to compute over inlinks
single source shortest path
Single Source Shortest Path
  • Problem: find shortest path from a source node to one or more target nodes
    • Shortest might also mean lowest weight or cost
  • First, a refresher: Dijkstra’s Algorithm
dijkstra s algorithm example
Dijkstra’s Algorithm Example

[Figure: a worked example of Dijkstra’s algorithm on a small weighted directed graph (example from CLR), shown over six steps. Starting with the source at distance 0 and all other nodes at ∞, each step settles the closest unsettled node and relaxes its outgoing edges; the tentative distances shrink (10 and 5; then 8, 14, 5, 7; then 8, 13, 5, 7) until they converge to the final shortest-path distances 8, 9, 5, 7.]

Example from CLR

single source shortest path1
Single Source Shortest Path
  • Problem: find shortest path from a source node to one or more target nodes
    • Shortest might also mean lowest weight or cost
  • Single processor machine: Dijkstra’s Algorithm
  • MapReduce: parallel Breadth-First Search (BFS)
finding the shortest path
Finding the Shortest Path
  • Consider simple case of equal edge weights
  • Solution to the problem can be defined inductively
  • Here’s the intuition:
    • Define: b is reachable from a if b is on adjacency list of a
    • DistanceTo(s) = 0
    • For all nodes p reachable from s, DistanceTo(p) = 1
    • For all nodes n reachable from some other set of nodes M, DistanceTo(n) = 1 + min_{m ∈ M} DistanceTo(m)

[Figure: the source s reaches intermediate nodes m1, m2, m3 at distances d1, d2, d3; node n lies one more hop beyond them.]
visualizing parallel bfs
Visualizing Parallel BFS

[Figure: parallel BFS visualized on a ten-node graph (n0 … n9); each iteration expands the search frontier by one hop from the source.]

from intuition to algorithm
From Intuition to Algorithm
  • Data representation:
    • Key: node n
    • Value: d (distance from start), adjacency list (list of nodes reachable from n)
    • Initialization: for all nodes except the start node, d = ∞
  • Mapper:
    • For each m in the adjacency list: emit (m, d + 1)
  • Sort/Shuffle
    • Groups distances by reachable nodes
  • Reducer:
    • Selects minimum distance path for each reachable node
    • Additional bookkeeping needed to keep track of actual path
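A sketch of one BFS iteration (assumed code): node values carry (distance, adjacency list); mappers emit tentative distances, and reducers keep the minimum while re-emitting the graph structure so it is not lost.

    INF = float('inf')

    def bfs_map(node_id, value):
        d, adjacency = value
        out = [(node_id, ('GRAPH', adjacency)),   # pass the structure along
               (node_id, ('DIST', d))]            # keep the node's current distance
        if d < INF:
            for m in adjacency:
                out.append((m, ('DIST', d + 1)))  # d + w for the weighted case
        return out

    def bfs_reduce(node_id, values):
        adjacency, best = [], INF
        for tag, v in values:
            if tag == 'GRAPH':
                adjacency = v
            else:
                best = min(best, v)               # select the minimum distance
        return [(node_id, (best, adjacency))]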
multiple iterations needed
Multiple Iterations Needed
  • Each MapReduce iteration advances the “known frontier” by one hop
    • Subsequent iterations include more and more reachable nodes as frontier expands
    • Multiple iterations are needed to explore entire graph
  • Preserving graph structure:
    • Problem: Where did the adjacency list go?
    • Solution: mapper emits (n, adjacency list) as well
stopping criterion
Stopping Criterion
  • How many iterations are needed in parallel BFS (equal edge weight case)?
  • When a node is first “discovered”, we’re guaranteed to have found the shortest path
comparison to dijkstra
Comparison to Dijkstra
  • Dijkstra’s algorithm is more efficient
    • At any step it only pursues edges from the minimum-cost path inside the frontier
  • MapReduce explores all paths in parallel
    • Lots of “waste”
    • Useful work is only done at the “frontier”
  • Why can’t we do better using MapReduce?
weighted edges
Weighted Edges
  • Now add positive weights to the edges
  • Simple change: adjacency list now includes a weight w for each edge
    • In the mapper, emit (m, d + w) instead of (m, d + 1) for each node m in the adjacency list, where w is the weight of the edge to m
  • That’s it?
stopping criterion1
Stopping Criterion
  • How many iterations are needed in parallel BFS (positive edge weight case)?
  • When a node is first “discovered”, we’re guaranteed to have found the shortest path

Not true!

additional complexities
Additional Complexities

[Figure: the search frontier in a weighted graph. From the source s, a single heavy edge of weight 10 reaches node r directly, but a longer chain of weight-1 edges through p and q reaches r more cheaply; so the distance first "discovered" for a node need not be its shortest, and the frontier must keep advancing until no distance improves.]

stopping criterion2
Stopping Criterion
  • How many iterations are needed in parallel BFS (positive edge weight case)?
  • Practicalities of implementation in MapReduce
graphs and mapreduce1
Graphs and MapReduce
  • Graph algorithms typically involve:
    • Performing computations at each node: based on node features, edge features, and local link structure
    • Propagating computations: “traversing” the graph
  • Generic recipe:
    • Represent graphs as adjacency lists
    • Perform local computations in mapper
    • Pass along partial results via outlinks, keyed by destination node
    • Perform aggregation in reducer on inlinks to a node
    • Iterate until convergence: controlled by external “driver”
    • Don’t forget to pass the graph structure between iterations
random walks over the web
Random Walks Over the Web
  • Random surfer model:
    • User starts at a random Web page
    • User randomly clicks on links, surfing from page to page
  • PageRank
    • Characterizes the amount of time spent on any given page
    • Mathematically, a probability distribution over pages
  • PageRank captures notions of page importance
    • Correspondence to human intuition?
    • One of thousands of features used in web search
    • Note: query-independent
pagerank defined
PageRank: Defined

Given page x with inlinks t1 … tn, the PageRank of x is

PR(x) = α (1/N) + (1 - α) Σ_{i=1..n} PR(t_i) / C(t_i), where

  • C(t) is the out-degree of t
  • α is the probability of a random jump
  • N is the total number of nodes in the graph

[Figure: pages t1, t2, …, tn each linking to page x.]

computing pagerank
Computing PageRank
  • Properties of PageRank
    • Can be computed iteratively
    • Effects at each iteration are local
  • Sketch of algorithm:
    • Start with seed PRi values
    • Each page distributes PRi “credit” to all pages it links to
    • Each target page adds up “credit” from multiple in-bound links to compute PRi+1
    • Iterate until values converge
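A sketch of one simplified iteration (assumed code; no random jump and no dangling-node handling yet):

    def pagerank_map(node_id, value):
        pr, adjacency = value
        out = [(node_id, ('GRAPH', adjacency))]   # preserve the graph structure
        if adjacency:
            share = pr / len(adjacency)           # divide mass among out-links
            for m in adjacency:
                out.append((m, ('MASS', share)))
        return out

    def pagerank_reduce(node_id, values):
        adjacency, pr = [], 0.0
        for tag, v in values:
            if tag == 'GRAPH':
                adjacency = v
            else:
                pr += v                           # sum incoming mass
        return [(node_id, (pr, adjacency))]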
simplified pagerank
Simplified PageRank
  • First, tackle the simple case:
    • No random jump factor
    • No dangling links
  • Then, factor in these complexities…
    • Why do we need the random jump?
    • Where do dangling links come from?
sample pagerank iteration 1
Sample PageRank Iteration (1)

[Figure: Iteration 1 on a five-node graph with uniform initial values n1 … n5 = 0.2. Each node sends its mass along its out-links (e.g., 0.1 per link from n1 and n2, 0.066 per link from nodes with three out-links, 0.2 from n3 and n4); after summing incoming mass the values become n1 = 0.066, n2 = 0.166, n3 = 0.166, n4 = 0.3, n5 = 0.3.]
sample pagerank iteration 2
Sample PageRank Iteration (2)

[Figure: Iteration 2. Starting from n1 = 0.066, n2 = 0.166, n3 = 0.166, n4 = 0.3, n5 = 0.3, another round of distributing and summing mass yields n1 = 0.1, n2 = 0.133, n3 = 0.183, n4 = 0.2, n5 = 0.383.]
pagerank in mapreduce
PageRank in MapReduce

[Figure: PageRank in MapReduce on the same five-node graph. In the map phase each node n1 … n5 emits shares of its PageRank mass keyed by its out-neighbors; the shuffle groups the messages by destination node; the reduce phase sums each node’s incoming mass to produce its updated value.]

complete pagerank
Complete PageRank
  • Two additional complexities
    • What is the proper treatment of dangling nodes?
    • How do we factor in the random jump factor?
  • Solution:
    • Second pass to redistribute “missing PageRank mass” and account for random jumps: p' = α (1/|G|) + (1 - α) (m/|G| + p)
    • p is the PageRank value from before, p' is the updated PageRank value
    • |G| is the number of nodes in the graph
    • m is the missing PageRank mass
pagerank convergence
PageRank Convergence
  • Alternative convergence criteria
    • Iterate until PageRank values don’t change
    • Iterate until PageRank rankings don’t change
    • Fixed number of iterations
  • Convergence for web graphs?
beyond pagerank
Beyond PageRank
  • Link structure is important for web search
    • PageRank is one of many link-based features: HITS, SALSA, etc.
    • One of many thousands of features used in ranking…
  • Adversarial nature of web search
    • Link spamming
    • Spider traps
    • Keyword stuffing
efficient graph algorithms tricks
Efficient Graph Algorithms: Tricks
  • In-mapper combining: efficient local aggregation
  • Smarter partitioning: create more opportunities for local aggregation
  • Schimmy: avoid shuffling the graph

Jimmy Lin and Michael Schatz. Design Patterns for Efficient Graph Algorithms in MapReduce. Proceedings of the Eighth Workshop on Mining and Learning with Graphs Workshop (MLG-2010), pages 78-85, July 2010, Washington, D.C.

in mapper combining
In-Mapper Combining
  • Use combiners
    • Perform local aggregation on map output
    • Downside: intermediate data is still materialized
  • Better: in-mapper combining
    • Preserve state across multiple map calls, aggregate messages in buffer, emit buffer contents at end
    • Downside: requires memory management

buffer

configure

map

close

better partitioning
Better Partitioning
  • Default: hash partitioning
    • Randomly assign nodes to partitions
  • Observation: many graphs exhibit local structure
    • E.g., communities in social networks
    • Better partitioning creates more opportunities for local aggregation
  • Unfortunately, partitioning is hard!
    • Sometimes, chicken-and-egg…
    • But cheap heuristics sometimes available
    • For webgraphs: range partition on domain-sorted URLs
schimmy design pattern
Schimmy Design Pattern
  • Basic implementation contains two dataflows:
    • Messages (actual computations)
    • Graph structure (“bookkeeping”)
  • Schimmy: separate the two data flows, shuffle only the messages
    • Basic idea: merge join between graph structure and messages

[Figure: relations S and T are both sorted by the join key; when consistently partitioned into S1/T1, S2/T2, S3/T3, each pair of partitions remains sorted by the join key and can be merge-joined independently.]

do the schimmy
Do the Schimmy!
  • Schimmy = reduce side parallel merge join between graph structure and messages
    • Consistent partitioning between input and intermediate data
    • Mappers emit only messages (actual computation)
    • Reducers read graph structure directly from HDFS

[Figure: each reducer merge-joins its partition of the graph structure, read directly from HDFS (S1, S2, S3), with the corresponding partition of intermediate message data from the shuffle (T1, T2, T3).]

experiments
Experiments
  • Cluster setup:
    • 10 workers, each 2 cores (3.2 GHz Xeon), 4GB RAM, 367 GB disk
    • Hadoop 0.20.0 on RHELS 5.3
  • Dataset:
    • First English segment of ClueWeb09 collection
    • 50.2m web pages (1.53 TB uncompressed, 247 GB compressed)
    • Extracted webgraph: 1.4 billion links, 7.0 GB
    • Dataset arranged in crawl order
  • Setup:
    • Measured per-iteration running time (5 iterations)
    • 100 partitions
results
Results

[Figure: a sequence of bar charts comparing per-iteration PageRank running time against the “best practices” baseline. The annotations read +18% (1.4 billion vs. 674 million intermediate key-value pairs), then -15%, then -60% (86 million pairs), then -69%, as the optimizations described above (in-mapper combining, smarter partitioning, Schimmy) are progressively applied.]

beyond mapreduce

Setting the stage

Introduction to MapReduce

MapReduce algorithm design

Text retrieval

Managing relational data

Graph algorithms

Beyond MapReduce

Beyond MapReduce
from gfs to bigtable
From GFS to Bigtable
  • Google’s GFS is a distributed file system
  • Bigtable is a storage system for structured data
    • Built on top of GFS
    • Solves many GFS issues: real-time access, short files, short reads
    • Serves as a source and a sink for MapReduce jobs
bigtable data model
Bigtable: Data Model
  • A table is a sparse, distributed, persistent multidimensional sorted map
  • Map indexed by a row key, column key, and a timestamp
    • (row:string, column:string, time:int64) → uninterpreted byte array
  • Supports lookups, inserts, deletes
    • Single row transactions only

Image Source: Chang et al., OSDI 2006

hbase
HBase

Image Source: http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html

slide201

The datacenter is the computer!

It’s all about the right level of abstraction

Source: NY Times (6/14/2006)

need for high level languages
Need for High-Level Languages
  • Hadoop is great for large-data processing!
    • But writing Java programs for everything is verbose and slow
    • Analysts don’t want to (or can’t) write Java
  • Solution: develop higher-level data processing languages
    • Hive: HQL is like SQL
    • Pig: Pig Latin is a dataflow language
hive and pig
Hive and Pig
  • Hive: data warehousing application in Hadoop
    • Query language is HQL, variant of SQL
    • Tables stored on HDFS as flat files
    • Developed by Facebook, now open source
  • Pig: large-scale data processing system
    • Scripts are written in Pig Latin, a dataflow language
    • Developed by Yahoo!, now open source
    • Roughly 1/3 of all Yahoo! internal jobs
  • Common idea:
    • Provide higher-level language to facilitate large-data processing
    • Higher-level language “compiles down” to Hadoop jobs
hive example
Hive: Example
  • Hive looks similar to an SQL database
  • Relational join on two tables:
    • Table of word counts from Shakespeare collection
    • Table of word counts from the bible

SELECT s.word, s.freq, k.freq FROM shakespeare s

JOIN bible k ON (s.word = k.word) WHERE s.freq >= 1 AND k.freq >= 1 ORDER BY s.freq DESC LIMIT 10;

the 25848 62394

I 23031 8854

and 19671 38985

to 18038 13526

of 16700 34654

a 14170 8057

you 12702 2720

my 11297 4135

in 10797 12445

is 8882 6884

Source: Material drawn from Cloudera training VM

hive behind the scenes
Hive: Behind the Scenes

SELECT s.word, s.freq, k.freq FROM shakespeare s

JOIN bible k ON (s.word = k.word) WHERE s.freq >= 1 AND k.freq >= 1 ORDER BY s.freq DESC LIMIT 10;

(Abstract Syntax Tree)

(TOK_QUERY (TOK_FROM (TOK_JOIN (TOK_TABREF shakespeare s) (TOK_TABREF bible k) (= (. (TOK_TABLE_OR_COL s) word) (. (TOK_TABLE_OR_COL k) word)))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) word)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) freq)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL k) freq))) (TOK_WHERE (AND (>= (. (TOK_TABLE_OR_COL s) freq) 1) (>= (. (TOK_TABLE_OR_COL k) freq) 1))) (TOK_ORDERBY (TOK_TABSORTCOLNAMEDESC (. (TOK_TABLE_OR_COL s) freq))) (TOK_LIMIT 10)))

(one or more MapReduce jobs)

hive behind the scenes1
Hive: Behind the Scenes

STAGE DEPENDENCIES:

Stage-1 is a root stage

Stage-2 depends on stages: Stage-1

Stage-0 is a root stage

STAGE PLANS:

Stage: Stage-1

Map Reduce

Alias -> Map Operator Tree:

s

TableScan

alias: s

Filter Operator

predicate:

expr: (freq >= 1)

type: boolean

Reduce Output Operator

key expressions:

expr: word

type: string

sort order: +

Map-reduce partition columns:

expr: word

type: string

tag: 0

value expressions:

expr: freq

type: int

expr: word

type: string

k

TableScan

alias: k

Filter Operator

predicate:

expr: (freq >= 1)

type: boolean

Reduce Output Operator

key expressions:

expr: word

type: string

sort order: +

Map-reduce partition columns:

expr: word

type: string

tag: 1

value expressions:

expr: freq

type: int

Stage: Stage-2

Map Reduce

Alias -> Map Operator Tree:

hdfs://localhost:8022/tmp/hive-training/364214370/10002

Reduce Output Operator

key expressions:

expr: _col1

type: int

sort order: -

tag: -1

value expressions:

expr: _col0

type: string

expr: _col1

type: int

expr: _col2

type: int

Reduce Operator Tree:

Extract

Limit

File Output Operator

compressed: false

GlobalTableId: 0

table:

input format: org.apache.hadoop.mapred.TextInputFormat

output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat

Stage: Stage-0

Fetch Operator

limit: 10

Reduce Operator Tree:

Join Operator

condition map:

Inner Join 0 to 1

condition expressions:

0 {VALUE._col0} {VALUE._col1}

1 {VALUE._col0}

outputColumnNames: _col0, _col1, _col2

Filter Operator

predicate:

expr: ((_col0 >= 1) and (_col2 >= 1))

type: boolean

Select Operator

expressions:

expr: _col1

type: string

expr: _col0

type: int

expr: _col2

type: int

outputColumnNames: _col0, _col1, _col2

File Output Operator

compressed: false

GlobalTableId: 0

table:

input format: org.apache.hadoop.mapred.SequenceFileInputFormat

output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat

pig example
Pig: Example

Task: Find the top 10 most visited pages in each category

[Tables: Visits (user, url, time) and Url Info (url, category, pRank)]

Pig Slides adapted from Olston et al. (SIGMOD 2008)

pig query plan
Pig Query Plan

Load Visits

Group by url

Foreach url, generate count

Load Url Info

Join on url

Group by category

Foreach category, generate top10(urls)

Pig Slides adapted from Olston et al. (SIGMOD 2008)

pig script
Pig Script

visits      = load '/data/visits' as (user, url, time);
gVisits     = group visits by url;
visitCounts = foreach gVisits generate url, count(visits);

urlInfo     = load '/data/urlInfo' as (url, category, pRank);
visitCounts = join visitCounts by url, urlInfo by url;

gCategories = group visitCounts by category;
topUrls     = foreach gCategories generate top(visitCounts, 10);

store topUrls into '/data/topUrls';

Pig Slides adapted from Olston et al. (SIGMOD 2008)

pig script in hadoop
Pig Script in Hadoop

Map1

Load Visits

Group by url

Reduce1

Map2

Foreach url, generate count

Load Url Info

Join on url

Reduce2

Map3

Group by category

Reduce3

Foreach category, generate top10(urls)

Pig Slides adapted from Olston et al. (SIGMOD 2008)

different programming models
Different Programming Models
  • Multitude of MapReduce hybrids, variants, etc.
    • Mostly research prototypes
    • A few commercial companies
  • Dryad/DryadLINQ (Microsoft)
emerging themes
Emerging Themes
  • Continuing quest for alternative programming models
    • Batch vs. real-time data processing
  • Continuing quest for better implementations
  • MapReduce as yet another tool
  • Growth of the Hadoop ecosystem
  • Evolving role of MapReduce and parallel databases