
Towards a Science of Parallel Programming

Keshav Pingali

University of Texas, Austin

- Community has worked on parallel programming for more than 30 years
- programming models
- machine models
- programming languages
- ….

- However, parallel programming is still a research problem
- matrix computations, stencil computations, FFTs etc. are well-understood
- each new application is a “new phenomenon”
- few insights for irregular applications

- Thesis: we need a science of parallel programming
- analysis: framework for thinking about parallelism in application
- synthesis: produce an efficient parallel implementation of application

[Figure: “The Alchemist”, Cornelius Bega (1663). A science moves from seemingly unrelated phenomena, through unifying abstractions, to specialized models that exploit structure.]

- Seemingly unrelated parallel algorithms and data structures
- Stencil codes
- Delaunay mesh refinement
- Event-driven simulation
- Graph reduction of functional languages
- …

- Unifying abstractions
- Operator formulation of algorithms
- Amorphous data-parallelism
- Galois programming model
- Baseline parallel implementation

- Specialized implementations that exploit structure
- Structure of algorithms
- Optimized compiler and runtime system support for different kinds of structure

- Ongoing work

Some parallel algorithms

[Figure: 5-point stencil: point (i,j) and its four neighbors (i-1,j), (i+1,j), (i,j-1), (i,j+1)]

- Finite-difference method for solving PDEs
- discrete representation of domain: grid

- Values at interior points are updated using values at neighbors
- values at boundary points are fixed

- Data structure:
- dense arrays

- Parallelism:
- values at all interior points can be computed simultaneously
- parallelism is not dependent on input values

- Compiler can find the parallelism
- spatial loops are DO-ALL loops
  // Jacobi iteration with 5-point stencil
  // initialize array A
  for time = 1, nsteps
    for <i,j> in [2,n-1] x [2,n-1]
      temp(i,j) = 0.25 * (A(i-1,j) + A(i+1,j) + A(i,j-1) + A(i,j+1))
    for <i,j> in [2,n-1] x [2,n-1]
      A(i,j) = temp(i,j)
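Since the spatial loops are DO-ALL loops, each sweep parallelizes directly. A minimal sketch in plain Java (not from the talk; arrays are assumed to be (n+1) x (n+1) so the pseudocode's 1-based indices carry over):

  import java.util.stream.IntStream;

  class Jacobi {
      // One Jacobi time step: both spatial loops are DO-ALL, so rows can be
      // processed by parallel streams with no synchronization.
      static void step(double[][] a, double[][] temp, int n) {
          IntStream.rangeClosed(2, n - 1).parallel().forEach(i -> {
              for (int j = 2; j <= n - 1; j++)
                  temp[i][j] = 0.25 * (a[i-1][j] + a[i+1][j] + a[i][j-1] + a[i][j+1]);
          });
          IntStream.rangeClosed(2, n - 1).parallel().forEach(i -> {
              for (int j = 2; j <= n - 1; j++)
                  a[i][j] = temp[i][j];
          });
      }
  }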

Delaunay mesh refinement (DMR)

  Mesh m = /* read in mesh */
  WorkList wl;
  wl.add(m.badTriangles());
  while (!wl.empty()) {
    Element e = wl.get();
    if (e no longer in mesh) continue;
    Cavity c = new Cavity(e); // determine new cavity
    c.expand();
    c.retriangulate();        // re-triangulate region
    m.update(c);              // update mesh
    wl.add(c.badTriangles());
  }

- Iterative refinement to remove badly shaped triangles:
  while there are bad triangles do {
    pick a bad triangle;
    find its cavity;
    retriangulate cavity; // may create new bad triangles
  }

- Don’t-care non-determinism:
- final mesh depends on order in which bad triangles are processed
- applications do not care which mesh is produced

- Data structure:
- graph in which nodes represent triangles and edges represent triangle adjacencies

- Parallelism:
- bad triangles with cavities that do not overlap can be processed in parallel
- parallelism is very “input-dependent”
- compilers cannot determine this parallelism

- (Miller et al.) at runtime, repeatedly build interference graph and find maximal independent sets for parallel execution

Event-driven simulation

- Stations communicate by sending messages with time-stamps on FIFO channels
- Stations have internal state that is updated when a message is processed
- Messages must be processed in time-order at each station
- Data structure:
- Messages in event-queue, sorted in time-order

- Parallelism:
- conservative: Chandy-Misra-Bryant
- station fires when it has messages on all incoming edges and processes earliest message
- requires null messages to avoid deadlock

- optimistic: Jefferson time-warp
- station can fire when it has an incoming message on any edge
- requires roll-back if speculative conflict is detected

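To make the conservative rule concrete, here is a minimal Java sketch (Message, Station, and the channel layout are illustrative assumptions, not code from the talk):

  import java.util.*;

  class Message {
      final int timestamp;
      Message(int timestamp) { this.timestamp = timestamp; }
  }

  class Station {
      final List<Deque<Message>> inputs = new ArrayList<>(); // FIFO channels

      // Conservative (Chandy-Misra-Bryant) condition: fire only when every
      // incoming channel has a pending message (null messages fill idle channels).
      boolean canFire() {
          for (Deque<Message> ch : inputs)
              if (ch.isEmpty()) return false;
          return true;
      }

      void fire() {
          // process the earliest message across all channel heads
          Deque<Message> earliest = Collections.min(inputs,
              Comparator.comparingInt((Deque<Message> ch) -> ch.peek().timestamp));
          process(earliest.poll()); // update internal state in time-order
      }

      void process(Message m) { /* station-specific state update */ }
  }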

[Figure: simulation network with stations A and B and time-stamped messages (2, 4, 5, 6) on their input channels]

- Diverse algorithms and data structures
- Exploiting parallelism in irregular algorithms is very complex
- Miller et al. DMR implementation: interference graph + maximal independent sets
- Jefferson Timewarp algorithm for event-driven simulation

- Algorithms:
- parallelism can be very input-dependent
- DMR, event-driven simulation, graph reduction,….

- don’t-care non-determinism
- has nothing to do with concurrency
- DMR, graph reduction

- activities created dynamically may interfere with existing activities
- event-driven simulation…

- Data structures:
- relatively few algorithms use dense arrays
- more common: graphs, trees, lists, priority queues,…

- Seemingly unrelated parallel algorithms and data structures
- Stencil codes
- Delaunay mesh refinement
- Event-driven simulation
- Graph reduction of functional languages
- ………

- Unifying abstractions
- Amorphous data-parallelism
- Baseline parallel implementation for exploiting amorphous data-parallelism

- Specialized implementations that exploit structure
- Structure of algorithms
- Optimized compiler and runtime system support for different kinds of structure

- Ongoing work

- Provide a model of parallelism in irregular algorithms
- Unified treatment of parallelism in regular and irregular algorithms
- parallelism in regular algorithms must emerge as a special case of general model
- (cf.) correspondence principles in Physics

- Abstractions should be effective
- should be possible to write an interpreter to execute algorithms in parallel

- Computation graph
- nodes are computations
- edges are dependences

- Parallelism
- width of the computation graph

- Effective parallel computation graph model
- dataflow model of Dennis, Arvind et al.

- Inadequate for irregular applications
- dependences between computations are a function of input data
- don’t-care non-determinism
- conflicting work may be created dynamically
- …

- Data structures play almost no role in this abstraction
- in most programs, parallelism comes from data-parallelism (concurrent operations on data structure elements)

- New abstraction
- data-centric: data structures play a central role
- we will use graph ADT to illustrate concepts


- Algorithm = repeated application of operator to graph
- active element:
  - node or edge where operator is applied
  - Jacobi: interior nodes of mesh
  - DMR: nodes representing bad triangles
  - Event-driven simulation: station with incoming message
- neighborhood:
  - set of nodes and edges read/written to perform computation
  - usually distinct from the element's neighbors in the graph
  - Jacobi: nodes in stencil
  - DMR: cavity of bad triangle
  - Event-driven simulation: station
- ordering:
  - order in which active elements must be executed in a sequential implementation
  - any order (Jacobi, DMR, graph reduction)
  - some problem-dependent order (event-driven simulation)
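These three concepts fit a small interface. The sketch below uses hypothetical names (it is not the Galois API):

  import java.util.Collection;

  interface Graph<N> { }
  interface Neighborhood<N> { }

  // Operator formulation: applying the operator at an active element touches
  // a neighborhood of the graph and may create new active elements.
  interface Operator<N> {
      Neighborhood<N> neighborhood(Graph<N> g, N active); // elements read/written
      Collection<N> apply(Graph<N> g, N active);          // returns new active elements
  }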

[Figure: graph with active nodes i1-i5; legend: dot = active node, shaded region = neighborhood]

- Amorphous data-parallelism:
  - parallelism in processing active nodes, subject to
    - neighborhood constraints
    - ordering constraints
- Computations at two active elements are independent if
- Neighborhoods do not overlap
- More generally, neither of them writes to an element in the intersection of the neighborhoods

- Unordered active elements
- In principle, independent active elements can be processed in parallel
- How do we find independent active elements?

- Ordered active elements
- Independence is not enough since elements can become active dynamically (see example)
- How do we determine what is safe to execute in parallel?

- How do we make this model effective?
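A minimal sketch of this independence test, with hypothetical Element and Neighborhood types:

  import java.util.Set;

  interface Element { }

  interface Neighborhood {
      Set<Element> elements();   // nodes and edges read or written
      boolean writes(Element e); // true if the activity writes e
  }

  class Independence {
      // Two activities are independent unless one of them writes an element
      // in the intersection of their neighborhoods.
      static boolean independent(Neighborhood a, Neighborhood b) {
          for (Element e : a.elements())
              if (b.elements().contains(e) && (a.writes(e) || b.writes(e)))
                  return false;
          return true;
      }
  }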

[Figure: graph with active nodes i1-i5; ordered example: simulation network with stations A, B, C and messages time-stamped 2, 3, 4, 5, 6]

- Program written in terms of abstractions in model
- Programming model: sequential, OO
- Graph class: provided by Galois library
- specialized versions to exploit structure (see later)

- Galois set iterators: for iterating over unordered and ordered sets of active elements
- for each e in Set S do B(e)
- evaluate B(e) for each element in set S
- no a priori order on iterations
- set S may get new elements during execution

- for each e in OrderedSet S do B(e)
- evaluate B(e) for each element in set S
- perform iterations in order specified by OrderedSet
- set S may get new elements during execution


  Mesh m = /* read in mesh */
  Set ws;
  ws.add(m.badTriangles()); // initialize ws
  for each tr in Set ws do { // unordered Set iterator
    if (tr no longer in mesh) continue;
    Cavity c = new Cavity(tr);
    c.expand();
    c.retriangulate();
    m.update(c);
    ws.add(c.badTriangles()); // add new bad triangles
  }

DMR using Galois iterators

Galois parallel execution model

- Parallel execution model:
- shared-memory
- optimistic execution of Galois iterators

- Implementation:
- master thread begins execution of program
- when it encounters iterator, worker threads help by executing iterations concurrently
- barrier synchronization at end of iterator

- Independence of neighborhoods:
  - enforced with logical locks on nodes and edges
  - a mechanism in the style of software transactional memory

- Ordering constraints for ordered set iterator:
- execute iterations out of order but commit in order
- cf. out-of-order CPUs
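A minimal sketch of how one iteration might be executed optimistically under this model (all classes are illustrative stand-ins, not the Galois runtime):

  import java.util.List;

  class Node { }

  class LogicalLock {
      private Object owner;
      synchronized boolean tryAcquire(Object it) { // abort-on-conflict, no waiting
          if (owner != null && owner != it) return false;
          owner = it;
          return true;
      }
      synchronized void release() { owner = null; }
  }

  class Element { final LogicalLock lock = new LogicalLock(); }

  class Iteration {
      // Execute one iteration optimistically: acquire logical locks on the
      // neighborhood; on conflict, roll back and let the runtime retry later.
      boolean execute(Node active) {
          for (Element e : neighborhoodOf(active)) {
              if (!e.lock.tryAcquire(this)) {
                  rollback();            // undo buffered updates, release locks
                  return false;          // re-schedule this active element
              }
          }
          applyOperator(active);         // updates are buffered until commit
          commit();                      // publish updates, release locks
          return true;
      }

      List<Element> neighborhoodOf(Node n) { return List.of(); } // graph-specific
      void applyOperator(Node n) { }     // operator body
      void rollback() { }
      void commit() { }
  }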

[Figure: master thread runs main(); at a Galois set iterator, worker threads execute iterations i1-i5 concurrently over shared memory]

- Idealized execution model:
- unbounded number of processors
- applying operator at an active node takes one time step
- execute a maximal set of active nodes, subject to neighborhood and ordering constraints

- Measures amorphous data-parallelism in irregular program execution
- Useful as an analysis tool
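A minimal sketch of this idealized execution (illustrative stand-ins for the graph and operator; this mirrors the greedy strategy described above, not the Parameter tool's actual code):

  import java.util.*;

  class Node { }

  class Parameter {
      // Idealized execution: each time step executes a greedily chosen maximal
      // set of active elements with pairwise non-overlapping neighborhoods.
      static List<Integer> parallelismProfile(List<Node> active) {
          List<Integer> profile = new ArrayList<>();
          while (!active.isEmpty()) {
              List<Node> step = new ArrayList<>();     // runs in this time step
              List<Node> deferred = new ArrayList<>(); // conflicts: next step
              for (Node n : active) {
                  boolean conflicts = false;
                  for (Node s : step)
                      if (overlaps(n, s)) { conflicts = true; break; }
                  if (conflicts) deferred.add(n); else step.add(n);
              }
              for (Node n : step) deferred.addAll(apply(n)); // may create new work
              profile.add(step.size());                // parallelism at this step
              active = deferred;
          }
          return profile;
      }

      static boolean overlaps(Node a, Node b) { /* neighborhood intersection */ return false; }
      static List<Node> apply(Node n) { /* operator */ return List.of(); }
  }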

- Input mesh:
- Produced by Triangle (Shewchuk)
- 550K triangles
- Roughly half are badly shaped

- Available parallelism:
- How many non-conflicting triangles can be expanded at each time step?

- Parallelism intensity:
- What fraction of the total number of bad triangles can be expanded at each step?

[Parallelism profiles: Boruvka on a 10K-node graph with average degree 5; AC on 20K random points in 2D]

- Boruvka MST algorithm
- Builds MST bottom-up
- Unordered active elements

- Agglomerative clustering (AC)
- Data-mining algorithm
- Ordered active elements

- Similarity in parallelism profiles arises from similarity in algorithmic structure


- Old abstraction: computation graphs
- New abstraction: operator formulation of algorithms
- active elements
- neighborhoods
- ordering of active elements

- Amorphous data-parallelism
- generalizes conventional data-parallelism

- Baseline execution model
  - Galois programming model
    - sequential, OO
    - uses new abstractions
  - optimistic parallel execution
- Parameter tool
  - provides estimates of amorphous data-parallelism in programs written using Galois programming model

- Seemingly unrelated parallel algorithms and data structures
- Stencil codes
- Delaunay mesh refinement
- Event-driven simulation
- Graph reduction of functional languages
- ………

- Unifying abstractions
- Operator formulation of algorithms
- Amorphous data-parallelism
- Galois programming model
- Baseline parallel implementation

- Specialized implementations that exploit structure
- Structure of algorithms
- Optimized compiler and runtime system support for different kinds of structure

- Ongoing work

- Baseline implementation is general but usually inefficient
- (e.g.) dynamic scheduling of iterations is not needed for Jacobi since grid structure is known at compile-time
- (e.g.) hand-written parallel implementations of DMR and Jacobi do not buffer updates to neighborhood until commit point

- Efficient execution requires exploiting structure in algorithms and data structures
- How do we talk about structure in algorithms?
- Previous approaches: like descriptive biology
- Mattson et al. book
- Parallel programming patterns (PPP): Snir et al.
- Berkeley dwarfs
- …

- Our approach: like molecular biology
- based on amorphous data-parallelism framework


Iterative algorithms can be classified along three dimensions:

- topology: general graph, grid, tree
- operator: morph (modifies structure of graph), local computation (only updates values on nodes/edges), reader (does not modify graph in any way)
- ordering: unordered, ordered

Jacobi: topology: grid, operator: local computation, ordering: unordered

DMR, graph reduction: topology: graph, operator: morph, ordering: unordered

Event-driven simulation: topology: graph, operator: local computation, ordering: ordered

[Figure: edge contraction merges nodes u and v into uv; node elimination removes a node and connects its neighbors n, a, m]

Morph operators can be classified further:

- refinement: DMR, Prim MST, Barnes-Hut tree building
- coarsening:
  - edge contraction: Metis, Kruskal MST, Boruvka MST, AC
  - node elimination: sparse Cholesky factorization
  - sub-graph elimination: elimination-based dataflow analysis
- general: graph reduction

Reducing Overheads of Optimistic Parallel Execution


- Algorithm structure:
- general graph/grid + unordered active elements

- Optimization I:
- partition the graph/grid and work-set between cores
- data-centric work assignment: core gets active elements from its own partition

- Pros and cons:
- eliminates contention for worklist
- improves locality and can dramatically reduce conflicts
- dynamic load-balancing may be needed

- Optimization II:
- lock coarsening: associate logical locks with partitions, not graph elements
- reduces overhead of lock management

- Over-decomposition may improve core utilization
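A minimal sketch of Optimization I (illustrative interfaces; real implementations also need termination detection and dynamic load balancing):

  import java.util.*;
  import java.util.concurrent.*;

  interface Graph {
      List<Node> initialActive();
      int partitionOf(Node n);
      List<Node> apply(Node n); // operator; assumed conflict-free across partitions here
  }
  class Node { }

  class PartitionedRun {
      // Data-centric work assignment: each core owns one partition and a local
      // worklist; new active elements go to the worklist of their partition.
      static void run(Graph g, int numCores) throws InterruptedException {
          List<ConcurrentLinkedDeque<Node>> worklists = new ArrayList<>();
          for (int i = 0; i < numCores; i++) worklists.add(new ConcurrentLinkedDeque<>());
          for (Node n : g.initialActive()) worklists.get(g.partitionOf(n)).add(n);

          ExecutorService pool = Executors.newFixedThreadPool(numCores);
          for (int core = 0; core < numCores; core++) {
              final int c = core;
              pool.submit(() -> {
                  Deque<Node> wl = worklists.get(c);
                  Node n;
                  while ((n = wl.poll()) != null)
                      for (Node created : g.apply(n)) // most new work stays local,
                          worklists.get(g.partitionOf(created)).add(created); // improving locality
              });
          }
          pool.shutdown();
          pool.awaitTermination(1, TimeUnit.MINUTES);
      }
  }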

- Cautious operator:
- reads all the elements in its neighborhood before modifying any of them
- (e.g.) Delaunay mesh refinement

- Algorithm structure:
- cautious operator + unordered active elements

- Optimization: optimistic execution w/o buffering updates
- grab locks on elements during read phase
- conflict: someone else has lock, so release your locks

- once update phase begins, no new locks will be acquired
- update in-place w/o making copies

- note: this is not two-phase locking

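A minimal sketch of this optimization (illustrative types): all locks are taken during the read phase, so the update phase can write in place with no undo log:

  import java.util.*;
  import java.util.concurrent.locks.ReentrantLock;

  class Node { }
  class Element { final ReentrantLock lock = new ReentrantLock(); }

  class CautiousExecution {
      // Read phase: visit (and lock) the whole neighborhood without writing.
      // Since a cautious operator acquires no new locks once updates begin,
      // the update phase can modify elements in place (zero-buffering).
      static boolean execute(Node active) {
          List<Element> acquired = new ArrayList<>();
          for (Element e : readPhaseNeighborhood(active)) {
              if (!e.lock.tryLock()) { // conflict: back off and retry later
                  for (Element a : acquired) a.lock.unlock();
                  return false;
              }
              acquired.add(e);
          }
          updateInPlace(active, acquired); // no copies, no rollback needed
          for (Element a : acquired) a.lock.unlock();
          return true;
      }

      static List<Element> readPhaseNeighborhood(Node n) { return List.of(); }
      static void updateInPlace(Node n, List<Element> nbhd) { }
  }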

- Algorithm structure:
- general graph/grid + cautious operator + unordered active elements

- Optimizations:
- partitioning + lock-coarsening + zero-buffering
- very efficient implementations possible

- Maverick@TACC
- 128-core Sun Fire E25K 1.05 GHz
- 64 dual-core processors
- Sun Solaris

- Speed-up of 20 on 32 cores for refinement
- Mesh partitioning is still sequential
- time for mesh partitioning starts to dominate after 8 processors (32 partitions)
- Need parallel mesh partitioning

- SP is a heuristic for solving difficult SAT problems
- SP: general graph + cautious operator + unordered elements
- Implementation:
- partitioning
- lock coarsening
- zero-buffering

[Figure: survey propagation speed-up on Maverick (roughly 1500 clauses, 250-500 variables)]

Eliminating the Need for Optimistic Parallel Execution

- Baseline implementation
- autonomous scheduling: no coordination between execution of different active elements

- Global coordination possible for some algorithms
- Run-time scheduling: cautious operator + unordered active elements (see the sketch after this list)
- execute all activities partially to determine neighborhoods
- create interference graph and find independent set of activities
- execute independent set of activities in parallel w/o synchronization
- used in Gary Miller’s implementation of DMR

- Just-in-time scheduling: local computation + structure-driven + cautious, unordered (e.g., sparse MVM)
  - inspector-executor approach

- Compile-time scheduling: previous case + graph is known at compile time (e.g., Jacobi)
  - make all scheduling decisions at compile time
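A minimal sketch of one round of the run-time scheduling scheme (illustrative types; the inspector discovers neighborhoods, a greedy pass builds a maximal independent set, and the executor runs it in parallel):

  import java.util.*;

  class Node { }
  class Element { }

  class RuntimeScheduler {
      // One scheduling round: partially execute each activity (reads only) to
      // discover its neighborhood, then run a maximal independent set of
      // activities in parallel with no synchronization.
      static void round(List<Node> active) {
          Map<Node, Set<Element>> nbhd = new HashMap<>();
          for (Node n : active) nbhd.put(n, readPhase(n)); // inspector: no writes

          List<Node> mis = new ArrayList<>(); // greedy maximal independent set
          for (Node n : active) {
              boolean clash = false;
              for (Node m : mis)
                  if (!Collections.disjoint(nbhd.get(n), nbhd.get(m))) { clash = true; break; }
              if (!clash) mis.add(n);
          }
          mis.parallelStream().forEach(RuntimeScheduler::updatePhase); // conflict-free
      }

      static Set<Element> readPhase(Node n) { return Set.of(); }
      static void updatePhase(Node n) { }
  }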


- Algorithm studies:
- divide-and-conquer algorithms
- transforming ordered algorithms into unordered algorithms
- intra-operator parallelism
- important for some algorithms on dense graphs

- locality

- Language/programming model:
- incorporating scheduling information into Galois
- program refinements?

- Compiler analysis
- analyze and optimize code for operators

- Runtime system
- adaptive control system for managing threads

- Application studies
- Case studies of hand-optimized codes
- understand hand optimizations
- figure out how to incorporate them into system

- Lonestar benchmark suite for irregular programs
- joint work with Calin Cascaval’s group at IBM Yorktown Heights


Collaborators

Kavita Bala (Cornell)

Martin Burtscher (UT Austin)

Patrick Carribault (UT Austin)

Calin Cascaval (IBM)

Paul Chew (Cornell)

Amber Hassaan (UT Austin)

Tony Ingraffea (Cornell)

Milind Kulkarni (UT Austin)

Mario Mendez (UT Austin)

Rajasekhar Inkulu (UT Austin)

Donald Nguyen (UT Austin)

Dimitrios Prountzos (UT Austin)

Ganesh Ramanarayanan (Microsoft)

Xin Sui (UT Austin)

Bruce Walter (Cornell)

Zifei Zhong (UT Austin)


[Closing figure: seemingly unrelated algorithms → unifying abstractions → specialized models that exploit structure]