Schism: Graph Partitioning for OLTP Databases in a Relational Cloud
Implications for the design of GraphLab

Samuel Madden

MIT CSAIL
Director, Intel ISTC in Big Data

GraphLab Workshop 2012

The Problem with Databases
  • Tend to proliferate inside organizations
    • Many applications use DBs
  • Tend to be given dedicated hardware
    • Often not heavily utilized
  • Don’t virtualize well
  • Difficult to scale

This is expensive & wasteful

    • Servers, administrators, software licenses, network ports, racks, etc …
Relational Cloud Vision
  • Goal: A database service that exposes self-serve usage model
    • Rapid provisioning: users don’t worry about DBMS & storage configurations


  • User specifies type and size of DB and SLA (“100 txns/sec, replicated in US and Europe”)
  • User given a JDBC/ODBC URL
  • System figures out how & where to run user’s DB & queries
Before: Database Silos and Sprawl

[Diagram: Applications #1–#4, each tied to its own Database #1–#4]

Must deal with many one-off database configurations

And provision each for its peak load

After: A Single Scalable Service

[Diagram: Apps #1–#4 sharing a single scalable database service]

  • Reduces server hardware by aggressive workload-aware multiplexing
  • Automatically partitions databases across multiple HW resources
  • Reduces operational costs by automating service management tasks

What about virtualization?

[Chart: max throughput at 20:1 consolidation, us vs. VMware ESXi; one case with all DBs equally loaded, one with a single DB 10x loaded]

  • Could run each DB in a separate VM
  • Existing database services (Amazon RDS) do this
    • Focus is on simplified management, not performance
  • Doesn’t provide scalability across multiple nodes
  • Very inefficient
Key Ideas in this Talk: Schism
  • How to automatically partition transactional (OLTP) databases in a database service
  • Some implications for GraphLab
System Overview


  • Not going to talk about:
    • Database migration
    • Security
    • Placement of data

This is your OLTP Database

Curino et al., VLDB 2010


New graph-based approach to automatically partition OLTP workloads across many machines

Input: trace of transactions and the DB

Output: partitioning plan

Results: as good as or better than the best manual partitioning

Static partitioning – not automatic repartitioning.

Challenge: Partitioning

Goal: Linear performance improvement when adding machines

Requirement: independence and balance

Simple approaches (sketched after this list):

  • Total replication
  • Hash partitioning
  • Range partitioning
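For concreteness, here is a minimal sketch of the two non-replicating approaches in Python; the partition count and range boundaries are made up for illustration, not from the talk.

```python
# Illustrative sketch only: the partition count and range boundaries below are
# assumed values, not from the talk.
N_PARTITIONS = 4

def hash_partition(key: int) -> int:
    """Spread keys uniformly; tuples accessed together often land on different nodes."""
    return hash(key) % N_PARTITIONS

RANGE_UPPER_BOUNDS = [1_000, 10_000, 100_000]  # partitions 0..2; anything larger -> 3

def range_partition(key: int) -> int:
    """Keep contiguous key ranges together; simple, but prone to load skew."""
    for partition, upper in enumerate(RANGE_UPPER_BOUNDS):
        if key < upper:
            return partition
    return len(RANGE_UPPER_BOUNDS)
```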
Partitioning Challenges

  • Transactions access multiple records?
    • Distributed transactions
    • Replicated data
  • Workload skew?
    • Unbalanced load on individual servers
  • Many-to-many relations?
    • Unclear how to partition effectively

Distributed Txn Disadvantages

  • Require more communication
    • At least 1 extra message; maybe more
  • Hold locks for a longer time
    • Increases the chance of contention
  • Reduced availability
    • Failure if any participant is down

Example: each transaction writes two different tuples. With a single partition, both tuples are on 1 machine; distributed, the 2 tuples are on 2 machines. The same issue would arise in distributed GraphLab.

Schism Overview
  • Build a graph from a workload trace (see the sketch below)
    • Nodes: Tuples accessed by the trace
    • Edges: Connect tuples accessed in the same txn
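A minimal sketch of this graph construction, assuming the trace is simply a list of transactions, each a list of the tuple identifiers it accessed; the names here (build_workload_graph, trace) are illustrative, not Schism's actual interface.

```python
# Sketch of the graph-building step under the assumptions stated above.
from collections import defaultdict
from itertools import combinations

def build_workload_graph(trace):
    """Nodes are tuples; an edge's weight counts how many txns access both endpoints."""
    node_weights = defaultdict(int)   # accesses per tuple (used for load balancing)
    edge_weights = defaultdict(int)   # co-access counts (cutting one => distributed txn)
    for txn in trace:
        accessed = sorted(set(txn))
        for t in accessed:
            node_weights[t] += 1
        for a, b in combinations(accessed, 2):
            edge_weights[(a, b)] += 1
    return node_weights, edge_weights

# Two toy transactions that both touch customer:1
nodes, edges = build_workload_graph([["customer:1", "order:7"],
                                     ["customer:1", "order:9"]])
```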
Schism Overview
  • Build a graph from a workload trace
  • Partition to minimize distributed txns

Idea: min-cut minimizes distributed txns

Schism Overview
  • Build a graph from a workload trace
  • Partition to minimize distributed txns
  • “Explain” partitioning in terms of the DB

Use the METIS graph partitioner: min-cut partitioning with a balance constraint

Node weight:

  • # of accesses → balance workload
  • data size → balance data size

Output: Assignment of nodes to partitions
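A minimal sketch of this step using the pymetis binding to METIS; the toy adjacency list is made up, and the node and edge weights Schism supplies (accesses or data size, and co-access counts) are omitted for brevity.

```python
# Min-cut partitioning sketch with pymetis, under the assumptions stated above.
import pymetis

adjacency = [
    [1, 2],        # neighbors of node 0
    [0, 2],
    [0, 1, 3],     # single cross edge joining the two clusters
    [2, 4, 5],
    [3, 5],
    [3, 4],
]

edge_cuts, membership = pymetis.part_graph(2, adjacency=adjacency)
print(edge_cuts)    # co-access edges cut, i.e. distributed txns in the trace
print(membership)   # partition id per node, e.g. [0, 0, 0, 1, 1, 1]
```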

Graph Size Reduction Heuristics

  • Coalescing: tuples always accessed together → single node (lossless)
  • Blanket Statement Filtering: remove statements that access many tuples
  • Sampling: use a subset of tuples or transactions
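As an illustration of the coalescing heuristic, a short sketch that merges tuples appearing in exactly the same set of transactions into one super-node; the trace format is the same assumed list-of-transactions form used above.

```python
# Coalescing sketch: tuples with identical transaction sets collapse to one node.
from collections import defaultdict

def coalesce(trace):
    """Return groups of tuples; each group can become a single graph node (lossless)."""
    txns_per_tuple = defaultdict(set)
    for txn_id, txn in enumerate(trace):
        for t in txn:
            txns_per_tuple[t].add(txn_id)
    groups = defaultdict(list)
    for t, ids in txns_per_tuple.items():
        groups[frozenset(ids)].append(t)
    return list(groups.values())
```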

Explanation Phase


Compact rules to represent partitioning

Classification problem: tuple attributes → partition mappings



Decision Trees

  • Machine learning tool for classification
  • Candidate attributes: attributes used in WHERE clauses
  • Output: predicates that approximate the partitioning

Example predicate: IF (Salary > $12,000)
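A minimal sketch of this classification step, assuming scikit-learn's DecisionTreeClassifier; the salary values and partition labels are a toy example, not data from the paper, though in Schism the features come from WHERE-clause attributes and the labels from the partitioner's output.

```python
# Explanation-phase sketch with scikit-learn, under the assumptions stated above.
from sklearn.tree import DecisionTreeClassifier, export_text

salaries = [[5000], [8000], [11000], [13000], [20000], [40000]]  # tuple attribute
partitions = [0, 0, 0, 1, 1, 1]                                  # partitioner output

tree = DecisionTreeClassifier(max_depth=2).fit(salaries, partitions)
print(export_text(tree, feature_names=["salary"]))
# Produces a range predicate close to "salary <= 12000 -> partition 0, else 1",
# which a router can evaluate against each statement's WHERE clause.
```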




Evaluation: Partitioning Strategies

  • Schism: plan produced by our tool
  • Manual: best plan found by experts
  • Replication: replicate all tables
  • Hashing: hash partition all tables

Benchmark Results: Simple

[Chart: % distributed transactions for each partitioning strategy]

Benchmark Results: TPC

[Chart: % distributed transactions for each partitioning strategy]

Benchmark Results: Complex

[Chart: % distributed transactions for each partitioning strategy]

Implications for GraphLab (1)
  • Shared architectural components for placement, migration, security, etc.
  • Would be great to look at building a database-like store as a backing engine for GraphLab
Implications for GraphLab (2)
  • Data driven partitioning
    • Can co-locate data that is accessed together
      • Edge weights can encode frequency of read/writes from adjacent nodes
    • Adaptively choose between replication and distribution depending on read/write frequency (see the sketch after this list)
    • Requires a workload trace and periodic repartitioning
    • If accesses are random, will not be a win
    • Requires heuristics to deal with massive graphs, e.g., ideas from GraphBuilder
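As a sketch of the adaptive choice mentioned above, here is a hypothetical rule that replicates read-mostly vertices and keeps write-heavy ones in a single partition; the function name and the 10:1 threshold are assumptions, not from the talk.

```python
# Hypothetical replicate-vs-partition rule driven by trace-observed read/write counts.
def placement_for(reads: int, writes: int, read_heavy_ratio: float = 10.0) -> str:
    """Replicate read-mostly data; keep write-heavy data on a single partition."""
    if writes == 0 or reads / writes >= read_heavy_ratio:
        return "replicate"        # reads stay local; rare writes go to all copies
    return "single-partition"     # avoid paying the all-copies write cost
```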
Implications for GraphLab (3)
  • Transactions and 2PC for serializability
    • Acquire locks as data is accessed, rather than acquiring read/write locks on all neighbors in advance (see the sketch after this list)
    • Introduces deadlock possibility
    • Likely a win if adjacent updates are infrequent, or not all neighbors accessed on each iteration
    • Could also be implemented using optimistic concurrency control schemes
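A minimal sketch of the lock-as-you-go idea above, with a timeout standing in for deadlock detection; this is an illustration only, not GraphLab's or Schism's implementation.

```python
# Lock-as-you-go sketch: acquire per-tuple locks on first access, abort on timeout.
import threading

class LockTimeout(Exception):
    """Raised when a lock cannot be acquired; the caller releases everything and retries."""

def access(held, lock_table, key, timeout=0.05):
    """Acquire the per-tuple lock the first time `key` is touched inside a txn."""
    if key in held:
        return
    if not lock_table[key].acquire(timeout=timeout):
        raise LockTimeout(key)    # possible deadlock: abort and retry the txn
    held.add(key)

def release_all(held, lock_table):
    for key in held:
        lock_table[key].release()
    held.clear()

# Toy usage: two vertices guarded by locks
lock_table = {k: threading.Lock() for k in ("v1", "v2")}
held = set()
access(held, lock_table, "v1")
access(held, lock_table, "v2")
release_all(held, lock_table)
```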

Schism automatically partitions OLTP databases as well as or better than experts

Graph partitioning combined with decision trees finds good partitioning plans for many applications

Suggests some interesting directions for distributed GraphLab; would be fun to explore!

Collecting a Trace

Need a trace of statements and transaction ids (e.g., MySQL general_log)

Extract read/write sets by rewriting statements into SELECTs

Can be applied offline: Some data lost
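A rough sketch of the rewriting step, handling only simple single-table UPDATE/DELETE statements with a regex; a real implementation would use a SQL parser.

```python
# Rewrite logged write statements into SELECTs so the affected tuples (write sets)
# can be recovered offline. Regex-based and illustrative only.
import re

def to_select(stmt: str) -> str:
    m = re.match(r"\s*UPDATE\s+(\w+)\s+SET\s+.+?\s+WHERE\s+(.+)", stmt, re.I | re.S)
    if m:
        return f"SELECT * FROM {m.group(1)} WHERE {m.group(2)}"
    m = re.match(r"\s*DELETE\s+FROM\s+(\w+)\s+WHERE\s+(.+)", stmt, re.I | re.S)
    if m:
        return f"SELECT * FROM {m.group(1)} WHERE {m.group(2)}"
    return stmt  # SELECTs (read sets) pass through unchanged

print(to_select("UPDATE emp SET salary = salary * 1.1 WHERE dept = 7"))
# -> SELECT * FROM emp WHERE dept = 7
```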

Replicated Data

Read: access the local copy

Write: write all copies (distributed txn)

  • Add n + 1 nodes for each tuple, where n = number of transactions accessing the tuple
  • Connected as a star with edge weight = # of writes
  • Cutting a replication edge: cost = # of writes
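A small sketch of this star encoding for one replicated tuple; the function name and node-naming scheme are made up for illustration. Cutting any star edge in the min-cut then costs the tuple's write count, modelling the extra messages needed to keep a remote replica up to date.

```python
# Star encoding sketch for a single replicated tuple, per the bullets above.
def replication_star(tuple_id, accessing_txns, write_count):
    """Return (nodes, weighted_edges) for one tuple replicated per accessing txn."""
    replicas = [f"{tuple_id}#r{i}" for i, _ in enumerate(accessing_txns)]
    nodes = [tuple_id] + replicas                         # n + 1 nodes in total
    edges = [(tuple_id, r, write_count) for r in replicas]
    return nodes, edges
```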

Partitioning Advantages


  • Scale across multiple machines
  • More performance per dollar
  • Scale incrementally


  • Partial failure
  • Rolling upgrades
  • Partial migrations