
Combinatorial Scientific Computing: Experiences, Directions, and Challenges

John R. Gilbert

University of California, Santa Barbara

DOE CSCAPES Workshop

June 11, 2008

Support: DOE Office of Science, DARPA, SGI, MIT Lincoln Labs

Combinatorial Scientific Computing

“I observed that most of the coefficients in our matrices were zero; i.e., the nonzeros were ‘sparse’ in the matrix, and that typically the triangular matrices associated with the forward and back solution provided by Gaussian elimination would remain sparse if pivot elements were chosen with care”

- Harry Markowitz, describing the 1950s work on portfolio theory that won the 1990 Nobel Prize for Economics

Combinatorial Scientific Computing

“The emphasis on mathematical methods seems to be shifted more towards combinatorics and set theory – and away from the algorithm of differential equations which dominates mathematical physics.”

- John von Neumann & Oskar Morgenstern, 1944

Combinatorial Scientific Computing

“Combinatorial problems generated by challenges in data mining and related topics are now central to computational science. Finally, there’s the Internet itself, probably the largest graph-theory problem ever confronted.”

- Isabel Beichl & Francis Sullivan, 2008

A few directions in CSC

- Hybrid discrete & continuous computations
- Multiscale combinatorial computation
- Analysis, management, and propagation of uncertainty
- Economic & game-theoretic considerations
- Computational biology & bioinformatics
- Computational ecology
- Knowledge discovery & machine learning
- Relationship analysis
- Web search and information retrieval
- Sparse matrix methods
- Geometric modeling
- . . .

#1: The Parallelism Challenge

Two Nvidia 8800 GPUs

> 1 TFLOPS

LANL / IBM Roadrunner

> 1 PFLOPS

Intel 80-core chip

> 1 TFLOPS

- Different programming models
- Different levels of fit to irregular problems & graph algorithms

#2: The Architecture Challenge

- The memory wall: Most of memory is hundreds or thousands of cycles away from the processor that wants it.
- Computations that follow the edges of irregular graphs are unavoidably latency-limited.
- Speed of light: “You can buy more bandwidth, but you can’t buy less latency.”
- Some help from encapsulating coarse-grained primitives in carefully-tuned library routines. . .
- . . . but the difficulty is intrinsic to most graph computations, hence can likely only be addressed by machine architecture.

An architectural approach: Cray MTA / XMT

- Hide latency by massive multithreading
- Per-tick context switching
- Slower clock rate
- Uniform (sort of) memory access time
- But the economic case is less than completely clear….

#3: The Algorithms Challenge

- Efficient sequential algorithms for combinatorial problems often follow long sequential dependencies.
- Example: Assefaw’s talk on graph coloring
- Several parallelization strategies exist, but no silver bullet:
- Partitioning (e.g. for coloring)
- Pointer-jumping (e.g. for connected components)
- Sometimes it just depends on the graph . . . .
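To make the pointer-jumping strategy concrete, here is a small sequential simulation in Python. This is our own illustrative sketch (the name `connected_components` and the hook/jump structure are ours, not from the talk); a real PRAM-style implementation would perform the hook and jump steps concurrently across processors.

```python
def connected_components(n, edges):
    """Connected components by hook-and-jump: a sequential simulation
    of the classic PRAM-style pointer-jumping approach."""
    parent = list(range(n))
    changed = True
    while changed:
        changed = False
        # Hook: for each edge whose endpoints lie in different flat trees,
        # point the larger-labeled root at the smaller one.
        for u, v in edges:
            ru, rv = parent[u], parent[v]
            if parent[ru] == ru and parent[rv] == rv and ru != rv:
                if ru < rv:
                    ru, rv = rv, ru
                parent[ru] = rv
                changed = True
        # Jump: repeatedly replace each parent by its grandparent until
        # every tree is flat (this is the pointer-jumping step).
        while any(parent[parent[v]] != parent[v] for v in range(n)):
            for v in range(n):
                parent[v] = parent[parent[v]]
    return parent
```

Each outer pass does O(1) "parallel steps" of hooking plus a logarithmic number of jumps, which is where the parallel speedup would come from.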

Sample kernel: Sort logically triangular matrix

- Used in sparse linear solvers (e.g. Matlab’s)
- Simple kernel, abstracts many other graph operations (see next)
- Sequential: linear time; greedy topological sort; no locality
- Parallel: very unbalanced; one DAG level per step; possible long sequential dependencies
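As a concrete sketch of the sequential kernel, here is the greedy peeling idea in Python (an illustrative `sort_triangular` of our own, not code from the talk). For simplicity this version is quadratic; the linear-time variant mentioned above would keep a queue of rows whose remaining nonzero count is one, and the per-level structure of that queue is exactly the one-DAG-level-per-step parallelism noted above.

```python
def sort_triangular(rows):
    """Given a sparse matrix that is a row+column permutation of a
    lower triangular matrix (rows[i] = set of column indices of the
    nonzeros in row i), recover orderings p, q such that taking rows
    in order p and columns in order q is lower triangular.
    Greedy topological sort: a row with exactly one remaining nonzero
    must come first; pair it with that column, delete both, repeat."""
    n = len(rows)
    remaining = [set(cols) for cols in rows]
    live_rows = set(range(n))
    p, q = [], []
    while live_rows:
        # A row whose remaining nonzeros number exactly one is next.
        i = next(r for r in live_rows if len(remaining[r]) == 1)
        (j,) = remaining[i]
        p.append(i)
        q.append(j)
        live_rows.discard(i)
        for r in live_rows:
            remaining[r].discard(j)
    return p, q
```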

Permuted to upper triangular form

[Figure: original matrix A (rows/columns 1–5), a perfect matching in its bipartite graph, and the row-permuted matrix PA in upper triangular form]

- Perfect matching: set of edges that hits each vertex exactly once
- Matrix permutation to put nonzeros on diagonal
- Variant: maximum-weight matching
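A minimal Python sketch of the matching step (our own illustration, using the classic augmenting-path algorithm; the function name and data layout are hypothetical, not from the talk):

```python
def perfect_matching(rows):
    """Bipartite matching by augmenting paths. rows[i] is the set of
    columns where row i has a nonzero. Returns match with
    match[j] = row assigned to column j, or None if the matrix is
    structurally singular (no perfect matching exists)."""
    n = len(rows)
    match = [None] * n  # match[j]: row currently assigned to column j

    def augment(i, seen):
        # Try to assign row i, displacing earlier assignments if needed.
        for j in rows[i]:
            if j not in seen:
                seen.add(j)
                if match[j] is None or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    for i in range(n):
        if not augment(i, set()):
            return None
    return match
```

If a matching is found, permuting row `match[j]` to position `j` places a nonzero on every diagonal entry, which is the permutation PA shown in the figure.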

Strongly connected components

[Figure: directed graph G(A) on vertices 1–7 and the symmetrically permuted matrix PAPT in block triangular form]

- Symmetric permutation to block triangular form
- Diagonal blocks are strong Hall (irreducible / strongly connected)
- Sequential: linear time by depth-first search [Tarjan]
- Parallel: divide & conquer algorithm; performance depends on input [Fleischer, Hendrickson, Pinar]
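For illustration, here is an SCC sketch in Python. Note the slide cites Tarjan's one-pass DFS algorithm; this sketch uses the simpler two-pass Kosaraju variant instead (a deliberate substitution for clarity). Components come out in topological order of the condensation, which is exactly the block order needed for the block triangular form.

```python
def strongly_connected_components(adj):
    """Kosaraju's algorithm: record DFS finishing order on G, then DFS
    on the reverse graph in reverse finishing order. adj[v] is the list
    of out-neighbors of v. Returns components in topological order of
    the condensation (sources first)."""
    n = len(adj)
    order, visited = [], [False] * n

    def dfs(v):  # iterative DFS recording finish order
        stack = [(v, iter(adj[v]))]
        visited[v] = True
        while stack:
            u, it = stack[-1]
            for w in it:
                if not visited[w]:
                    visited[w] = True
                    stack.append((w, iter(adj[w])))
                    break
            else:
                order.append(u)
                stack.pop()

    for v in range(n):
        if not visited[v]:
            dfs(v)

    radj = [[] for _ in range(n)]  # reverse graph
    for u in range(n):
        for w in adj[u]:
            radj[w].append(u)

    comp, comps = [None] * n, []
    for v in reversed(order):
        if comp[v] is None:
            c, stack = [], [v]
            comp[v] = len(comps)
            while stack:
                u = stack.pop()
                c.append(u)
                for w in radj[u]:
                    if comp[w] is None:
                        comp[w] = len(comps)
                        stack.append(w)
            comps.append(c)
    return comps
```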

Coloring for parallel nonsymmetric preconditioning [Aggarwal, Gibou, G]

- Level set method for multiphase interface problems in 3D
- Nonsymmetric-structure, second-order-accurate octree discretization (263 million DOF)
- BiCGSTAB preconditioned by parallel triangular solves

#4: The Primitives Challenge

[Figure: fraction of machine peak vs. matrix dimension for BLAS 3 (n-by-n matrix-matrix multiply), BLAS 2 (n-by-n matrix-vector multiply), and BLAS 1 (sum of scaled n-vectors)]

- By analogy to numerical linear algebra, what would the combinatorial BLAS look like?
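One plausible answer, developed later in the talk, is matrix primitives over general semirings. A tiny sketch of the idea in Python (our own illustration; `semiring_spmv` is a hypothetical name, not an actual Combinatorial BLAS API):

```python
def semiring_spmv(A, x, add=lambda a, b: a or b, mul=lambda a, b: a and b):
    """Sparse matrix-vector product over a user-supplied semiring.
    A is a dict mapping (i, j) -> value; x is a dict j -> value.
    The default (or, and) semiring makes this one frontier-expansion
    step of breadth-first search."""
    y = {}
    for (i, j), a in A.items():
        if j in x:
            v = mul(a, x[j])
            y[i] = add(y[i], v) if i in y else v
    return y
```

Swapping in the tropical semiring, `add=min` and `mul=+`, turns the same primitive into a Bellman-Ford relaxation step for shortest paths.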

Primitives for HPC graph programming

- Visitor-based multithreaded – MTGL + XMT
  + search templates natural for many algorithms
  + relatively simple load balancing
  – complex thread interactions, race conditions
  – unclear how applicable to standard architectures
- Array-based data parallel – GAPDT + parallel Matlab / Python
  + relatively simple control structure
  + user-friendly interface
  – some algorithms hard to express naturally
  – load balancing not so easy
- Scan-based vectorized – NESL: something of a wild card
- We don’t really know the right set of primitives yet!

“Graph Algorithms in the Language of Linear Algebra”


- Editors: Kepner (MIT-LL) and Gilbert (UCSB)
- Contributors
- Bader (GA Tech)
- Buluc (UCSB)
- Chakrabarti (CMU)
- Dunlavy (Sandia)
- Faloutsos (CMU)
- Fineman (MIT-LL & MIT)
- Gilbert (UCSB)
- Kahn (MIT-LL & Brown)
- Kegelmeyer (Sandia)
- Kepner (MIT-LL)
- Kleinberg (Cornell)
- Kolda (Sandia)
- Leskovec (CMU)
- Madduri (GA Tech)
- Robinson (MIT-LL & NEU)
- Shah (ISC & UCSB)

Multiple-source breadth-first search

[Figure: graph on vertices 1–7; one step of breadth-first search from multiple sources expressed as the sparse product ATX, where each column of X holds one source's frontier]

- Sparse array representation => space efficient
- Sparse matrix-matrix multiplication => work efficient
- Load balance depends on SpGEMM implementation
- Not a panacea for the memory latency wall!
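A toy Python rendering of the idea (our own illustration: the sparse frontier "matrix" is stored as a dict from vertex to the set of active source ids, and each while-loop iteration plays the role of one sparse product ATX):

```python
def multi_bfs(adj, sources):
    """Breadth-first search from several sources at once, phrased as
    repeated sparse matrix-matrix products. adj[v] lists out-neighbors.
    Returns dist[v][s_id] = BFS level of v from sources[s_id]."""
    n = len(adj)
    INF = float("inf")
    dist = [[INF] * len(sources) for _ in range(n)]
    frontier = {}
    for s_id, s in enumerate(sources):
        dist[s][s_id] = 0
        frontier.setdefault(s, set()).add(s_id)
    level = 0
    while frontier:
        level += 1
        nxt = {}
        # One SpGEMM-like step: expand every (vertex, source) pair in
        # the frontier along out-edges, keeping only unseen entries.
        for v, srcs in frontier.items():
            for w in adj[v]:
                news = {s for s in srcs if dist[w][s] == INF}
                for s in news:
                    dist[w][s] = level
                if news:
                    nxt.setdefault(w, set()).update(news)
        frontier = nxt
    return dist
```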

SpGEMM: Sparse Matrix x Sparse Matrix [Buluc, G]

- Shortest path calculations (APSP)
- Betweenness centrality
- BFS from multiple source vertices
- Subgraph / submatrix indexing
- Graph contraction
- Cycle detection
- Multigrid interpolation & restriction
- Colored intersection searching
- Applying constraints in finite element computations
- Context-free parsing
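The classic sequential formulation behind SpGEMM is Gustavson's row-by-row algorithm: each output row is a sparse accumulation of scaled rows of B. A compact Python sketch (our own illustration, with dict-of-dicts standing in for CSR):

```python
def spgemm(A, B):
    """Sparse matrix-matrix product, Gustavson row-by-row style.
    Matrices are dicts of dicts: A[i][k] = value, missing = zero."""
    C = {}
    for i, row in A.items():
        acc = {}  # sparse accumulator for row i of C
        # For each nonzero A[i][k], add a*B[k][:] into the accumulator.
        for k, a in row.items():
            for j, b in B.get(k, {}).items():
                acc[j] = acc.get(j, 0) + a * b
        if acc:
            C[i] = acc
    return C
```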

Parallel dense case

[Figure: parallel-efficiency formulas for the 1-D and 2-D data layouts on a 3×3 processor grid p(0,0)…p(2,2); the communication overhead term in each formula should be zero for perfect efficiency]

- In the dense case, 2-D scales better with the number of processors
- Turns out to be the same for the sparse case. . . .

Upper bounds on speedup, sparse 1-D & 2-D [ICPP’08]

[Figure: modeled speedup of the 1-D and 2-D algorithms as functions of problem size N and processor count P]

- 1-D algorithms do not scale beyond 40x
- Break-even point is around 50 processors.

2-D example: Sparse SUMMA

[Figure: one Sparse SUMMA stage. Processor (i, j) receives block Aik broadcast along its processor row and block Bkj broadcast along its processor column, and accumulates Cij += Aik * Bkj]

- Based on dense SUMMA
- Generalizes to nonsquare matrices, etc.
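A sequential simulation of the stage loop (our own illustration; the name `summa` and the dict-of-dicts block format are stand-ins, and a real Sparse SUMMA overlaps the broadcasts with local computation):

```python
def summa(Ablocks, Bblocks, s):
    """Sequential simulation of SUMMA on an s-by-s processor grid.
    Ablocks[i][k] and Bblocks[k][j] are sparse blocks stored as dicts
    of dicts (row -> {col -> value}). In stage k, block column k of A
    and block row k of B are 'broadcast', and every processor (i, j)
    does a local multiply-accumulate Cij += Aik * Bkj."""
    def local_mm(C, A, B):
        for r, row in A.items():
            for k, a in row.items():
                for c, b in B.get(k, {}).items():
                    C.setdefault(r, {})
                    C[r][c] = C[r].get(c, 0) + a * b

    Cblocks = [[{} for _ in range(s)] for _ in range(s)]
    for k in range(s):  # one broadcast stage per block column/row
        for i in range(s):
            for j in range(s):
                local_mm(Cblocks[i][j], Ablocks[i][k], Bblocks[k][j])
    return Cblocks
```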

Submatrices are hypersparse (i.e. nnz << n)

With an average of c nonzeros per column, distributing the matrix over p blocks leaves each block only nnz’ = cn/p nonzeros on average, while the CSC column-pointer array still costs O(n/√p) per block, i.e. O(n√p) in total across all blocks.

- A data structure or algorithm that depends on the matrix dimension n (e.g. CSR or CSC) is asymptotically too wasteful for submatrices
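A quick back-of-the-envelope check in Python (our own illustration; the word counts assume one word per index or value, and the function names are hypothetical):

```python
import math

def csc_storage(n_cols, nnz):
    """Words for one CSC matrix: a column-pointer array of length
    n_cols + 1, plus a row index and a value per nonzero."""
    return (n_cols + 1) + 2 * nnz

def blocked_csc_storage(n, nnz, p):
    """Total CSC storage when an n-by-n matrix with nnz nonzeros is
    split onto a sqrt(p) x sqrt(p) grid: p blocks, each with n/sqrt(p)
    columns and nnz/p nonzeros on average."""
    s = int(math.isqrt(p))
    return p * csc_storage(n // s, nnz // p)
```

With n = 10^6, c = 8 (nnz = 8×10^6), and p = 10^4, the blocked CSC layout costs about 1.16×10^8 words, dominated by the pointer arrays rather than the 8×10^6 nonzeros; this is what motivates dimension-independent block formats.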

#5: The Libraries Challenge

- The software version of the primitives challenge!
- What languages, libraries, and environments will support combinatorial scientific computing?
- Zoltan, (P)BGL, MTGL, . . . .

GAPDT: Toolbox for graph analysis and pattern discovery [G, Reinhardt, Shah]

Layer 1: Graph-theoretic tools

- Graph operations
- Global structure of graphs
- Graph partitioning and clustering
- Graph generators
- Visualization and graphics
- Scan and combining operations
- Utilities

Sample application stack:

- Applications: computational ecology, CFD, data exploration
- Preconditioned iterative methods: CG, BiCGStab, etc. + combinatorial preconditioners (AMG, Vaidya)
- Graph analysis & PD toolbox: graph querying & manipulation, connectivity, spanning trees, geometric partitioning, nested dissection, NNMF, . . .
- Distributed sparse matrices: arithmetic, matrix multiplication, indexing, solvers (\, eigs)

Landscape connectivity modeling

- Habitat quality, gene flow, corridor identification, conservation planning
- Pumas in southern California: 12 million nodes, < 1 hour
- Targeting larger problems: Yellowstone-to-Yukon corridor

Figures courtesy of Brad McRae, NCEAS

#6: The Productivity Challenge

“Once we settled down on it, it was sort of like digging the Panama Canal - one shovelful at a time.”

- Ken Appel (& Wolfgang Haken), 1976

Productivity

Raw performance isn’t always the only criterion.

Other factors include:

- Seamless scaling from desktop to HPC
- Interactive response for exploration and visualization
- Rapid prototyping
- Usability by non-experts
- Just plain programmability

Interactive graph viz [Hollerer & Trethewey]

- Nonlinearly-scaled breadth-first search
- Distant vertices stay put, selected vertex moves into place
- Real-time click & drag for moderately large graphs

#7: The Data Size Challenge

“Can we understand anything interesting about our data when we do not even have time to read all of it?”

- Ronitt Rubinfeld

Issues in (many) large graph applications

- Where does the graph live? Disk or memory?
- Often want approximate answers from sampling
- Multiple simultaneous queries to same graph
- Graph may be fixed, or slowly changing
- Throughput and response time both important
- Dynamic subsetting
- User needs to solve problem on “my own” version of the main graph
- E.g. landscape data masked by geographic location, filtered by obstruction type, resolved by species of interest

#8: The Uncertainty Challenge

- “Discrete” quantities may be probability distributions
- May want to manage and quantify uncertainty between multiple levels of modeling
- May want to statistically sample too-large data, or extrapolate probabilistically from incomplete data
- For example, in graph algorithms:
- The graph itself may not be known with certainty
- Vertex / edge labels may be stochastic
- May want analysis of sensitivities or thresholds
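For instance, when the graph itself is uncertain, even simple connectivity becomes a probability to estimate. A Monte Carlo sketch in Python (our own illustration; the function name and data layout are hypothetical):

```python
import random

def connectivity_probability(n, edge_probs, s, t, trials=2000, seed=1):
    """Monte Carlo estimate of P(s and t are connected) when each
    undirected edge exists independently with its own probability.
    edge_probs: dict (u, v) -> probability the edge is present."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Sample one realization of the uncertain graph.
        adj = {v: [] for v in range(n)}
        for (u, v), prob in edge_probs.items():
            if rng.random() < prob:
                adj[u].append(v)
                adj[v].append(u)
        # Check s-t connectivity in the sampled graph by DFS.
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        hits += t in seen
    return hits / trials
```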

Horizontal-vertical decomposition of dynamical systems [Mezic et al.]

Stable and unstable directions at multiple scales?

How to identify functional vs regulatory components?

Model reduction and graph decomposition

A spectral graph decomposition technique, combined with dynamical-systems analysis, deconstructs a possibly unknown network into inputs, outputs, and forward and feedback loops, and allows identification of a minimal functional unit (MFU) of the system.

[Figure: level of output for the minimal functional unit alone vs. with feedback loops; sensitive edges (leading to lack of production) are easily identifiable, allowing identification of the roles of different feedback loops under additional functional requirements]

Approach:

- Decompose networks
- Propagate uncertainty through components
- Iteratively aggregate component uncertainty

[Figure: H-V decomposition of an example network into input (initiator), forward production unit, feedback loops, and output (execution); node 4 and several connections are pruned with no loss of performance: trim the network, preserve dynamics]

Mezic group, UCSB

Parallel modeling of fish interaction [Barbaro, Trethewey, Youssef, Birnir, G]

- Capelin schools in seas around Iceland
- Economic impact and ecological impact
- Collapse of stock in several prominent fishing areas demonstrates the need for careful tracking of fish
- Limitations on modeling
- Group-behavior phenomena missed by lumped models
- Real schools contain billions of fish; thousands of iterations
- Challenges include dynamic load balancing and accurate multiscale modeling

#9: The Education Challenge

- How do you teach this stuff?
- Where do you go to take courses in
- Graph algorithms …
- … on massive data sets …
- … in the presence of uncertainty …
- … analyzed on parallel computers …
- … applied to a domain science?
- This is another whole discussion, but a crucial one.

#10: The Foundations Challenge

“Numerical analysis is the study of algorithms for the problems of continuous mathematics.”

- L. Nick Trefethen

CS and CS&E (Hendrickson 2003)

Computer Science

Computational Science

Algorithmics

- Combinatorial Scientific Computing

- What’s in the intersection?

NIST workshop 2007: “Foundations of Measurement Science for Information Systems”

A few suggested research areas in measurement science for complex networks:

- Measurement of global properties of networks:
- Not just density, diameter, degree distribution, etc.
- Connectivity, robustness
- Spectral properties: Laplacian eigenvectors, Cheeger bounds, …
- Other global measures of complexity?
- Sensitivity analysis of all of the above
- Stochastic settings for all of the above
- Multiscale modeling of complex networks
- Building useful reference data sets and generators
- Fundamentals of high-performance combinatorial computing

Ten Challenges In CSC

1. Parallelism
2. Architecture
3. Algorithms
4. Primitives
5. Libraries
6. Productivity
7. Data size
8. Uncertainty
9. Education
10. Foundations

Morals (from Hendrickson, 2003)

- Things are clearer if you look at them from multiple perspectives
- Combinatorial algorithms are pervasive in scientific computing and will become more so
- Lots of exciting opportunities
- High impact for discrete algorithms work
- Enabling for scientific computing

Conclusion

This is a great time to be doing research in combinatorial scientific computing!

Thanks …

Vikram Aggarwal, David Bader, Alethea Barbaro, Jon Berry, Aydin Buluc, Alan Edelman, Jeremy Fineman, Frederic Gibou, Bruce Hendrickson, Tobias Hollerer, Crystal Kahn, Stefan Karpinski, Jeremy Kepner, Jure Leskovec, Brad McRae, Igor Mezic, Cleve Moler, Steve Reinhardt, Eric Robinson, Rob Schreiber, Viral Shah, Peterson Trethewey, James Watson, Lamia Youssef
