Collective Communication on Architectures that Support Simultaneous Communication over Multiple Links

Ernie Chan

Authors
Ernie Chan

Robert van de Geijn

Department of Computer Sciences

The University of Texas at Austin

William Gropp

Rajeev Thakur

Mathematics and Computer Science Division

Argonne National Laboratory

Testbed Architecture
  • IBM Blue Gene/L
    • 3D torus point-to-point interconnect network
    • One rack
      • 1024 dual-processor nodes
      • Two 8 x 8 x 8 midplanes
    • Special feature to send simultaneously
      • Use multiple calls to MPI_Isend
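
As a rough illustration of that feature from the MPI side, the sketch below posts one nonblocking send per neighbor and then waits for all of them; the neighbor list, datatype, and tag are illustrative assumptions, not the authors' code.

    #include <mpi.h>

    /* Post one nonblocking send per neighbor so the hardware can drive
     * several torus links at once, then wait for all of them to finish.
     * 'neighbors' and 'num_neighbors' are assumed to be set up by the
     * caller (e.g., the up-to-2N torus neighbors of this node). */
    static void send_to_neighbors(const void *buf, int count,
                                  const int *neighbors, int num_neighbors,
                                  MPI_Comm comm)
    {
        MPI_Request reqs[6];          /* at most 2N = 6 links on a 3D torus */
        for (int i = 0; i < num_neighbors; i++)
            MPI_Isend(buf, count, MPI_BYTE, neighbors[i], 0, comm, &reqs[i]);
        MPI_Waitall(num_neighbors, reqs, MPI_STATUSES_IGNORE);
    }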
Outline
  • Testbed Architecture
  • Model of Parallel Computation
  • Sending Simultaneously
  • Collective Communication
  • Generalized Algorithms
  • Performance Results
  • Conclusion
Model of Parallel Computation
  • Target Architectures
    • Distributed-memory parallel architectures
  • Indexing
    • p computational nodes
    • Indexed 0 … p - 1
  • Logically Fully Connected
    • A node can send directly to any other node
Model of Parallel Computation
  • Topology
    • N-dimensional torus

[Diagram: a two-dimensional torus of 16 nodes, numbered 0 through 15]

Model of Parallel Computation
  • Old Model of Communicating Between Nodes
    • Unidirectional sending or receiving
Model of Parallel Computation
  • Old Model of Communicating Between Nodes
    • Simultaneous sending and receiving
Model of Parallel Computation
  • Old Model of Communicating Between Nodes
    • Bidirectional exchange
Model of Parallel Computation
  • Communicating Between Nodes
    • A node can send or receive with 2N other nodes simultaneously along its 2N different links
Model of Parallel Computation
  • Communicating Between Nodes
    • Cannot perform bidirectional exchange on any link while sending or receiving simultaneously with multiple nodes
Model of Parallel Computation
  • Cost of Communication

α + nβ

    • α: startup time (latency)
    • n: number of bytes to communicate
    • β: per-byte transmission time (inverse bandwidth)
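
For illustration of the model (hypothetical numbers, not measured Blue Gene/L constants): with α = 3 μs and β = 0.0064 μs per byte, sending n = 1 MB costs roughly 3 μs + 1,048,576 × 0.0064 μs ≈ 6.7 ms, so the nβ term dominates for long messages while α dominates for short ones.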
Outline
  • Testbed Architecture
  • Model of Parallel Computation
  • Sending Simultaneously
  • Collective Communication
  • Generalized Algorithms
  • Performance Results
  • Conclusion
Sending Simultaneously
  • Old Cost of Communication with Sends to Multiple Nodes
    • Cost to send to m separate nodes

(α + nβ) m

Sending Simultaneously
  • New Cost of Communication with Simultaneous Sends

(α + nβ) m

can be replaced with

(α + nβ) + (α + nβ) (m - 1)

Sending Simultaneously
  • New Cost of Communication with Simultaneous Sends

(α + nβ) m

can be replaced with

(α + nβ) + (α + nβ)(m - 1)τ

    • First term (α + nβ): cost of one send
    • Second term (α + nβ)(m - 1)τ: cost of the extra sends

Sending Simultaneously
  • New Cost of Communication with Simultaneous Sends

(α + nβ) m

can be replaced with

(α + nβ) + (α + nβ)(m - 1)τ,   where 0 ≤ τ ≤ 1

    • First term (α + nβ): cost of one send
    • Second term (α + nβ)(m - 1)τ: cost of the extra sends

Sending Simultaneously
  • Benchmarking Sending Simultaneously
    • Log-log timing graphs
    • Midplane: 512 nodes
    • Sending simultaneously with 1 – 6 neighbors
    • Message sizes from 8 bytes to 4 MB
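
A sketch of how such a benchmark could be structured is shown below; the repetition count and the sender-only view are simplifying assumptions (matching receives would be posted on the neighbors), and this is not the authors' benchmark code.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Time m simultaneous nonblocking sends of n bytes each, averaged
     * over 'reps' repetitions.  Only the sender side is sketched. */
    static double time_simultaneous_sends(char *buf, int n, const int *neighbors,
                                          int m, int reps, MPI_Comm comm)
    {
        MPI_Request reqs[6];
        double start = MPI_Wtime();
        for (int r = 0; r < reps; r++) {
            for (int i = 0; i < m; i++)
                MPI_Isend(buf, n, MPI_BYTE, neighbors[i], 0, comm, &reqs[i]);
            MPI_Waitall(m, reqs, MPI_STATUSES_IGNORE);
        }
        return (MPI_Wtime() - start) / reps;
    }

    /* Sweep 1-6 neighbors and message sizes from 8 bytes to 4 MB,
     * matching the ranges on the slide. */
    static void run_sweep(const int *neighbors, MPI_Comm comm)
    {
        char *buf = malloc(4 << 20);
        for (int m = 1; m <= 6; m++)
            for (int n = 8; n <= (4 << 20); n *= 2)
                printf("m=%d n=%d t=%e s\n", m, n,
                       time_simultaneous_sends(buf, n, neighbors, m, 100, comm));
        free(buf);
    }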
Sending Simultaneously
  • Cost of Communication with Simultaneous Sends

(α + nβ) (1 + (m - 1) τ)
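
To see what the model predicts, the small sketch below evaluates both expressions side by side; the α, β, and τ values are hypothetical placeholders rather than measured constants.

    #include <stdio.h>

    /* Cost model from the slides: m sends done one after the other cost
     * (alpha + n*beta) * m, while m simultaneous sends cost
     * (alpha + n*beta) * (1 + (m - 1) * tau), with 0 <= tau <= 1. */
    static double cost_sequential(double alpha, double beta, double n, int m)
    {
        return (alpha + n * beta) * m;
    }

    static double cost_simultaneous(double alpha, double beta, double tau,
                                    double n, int m)
    {
        return (alpha + n * beta) * (1.0 + (m - 1) * tau);
    }

    int main(void)
    {
        double alpha = 3e-6, beta = 6.4e-9, tau = 0.1;   /* hypothetical values */
        double n = 1 << 20;                              /* 1 MB message */
        for (int m = 1; m <= 6; m++)
            printf("m=%d  sequential=%.3e s  simultaneous=%.3e s\n",
                   m, cost_sequential(alpha, beta, n, m),
                   cost_simultaneous(alpha, beta, tau, n, m));
        return 0;
    }

With τ close to 0 the extra sends are nearly free; with τ = 1 the model degenerates to the old sequential cost.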

Outline
  • Testbed Architecture
  • Model of Parallel Computation
  • Sending Simultaneously
  • Collective Communication
  • Generalized Algorithms
  • Performance Results
  • Conclusion
Collective Communication
  • Broadcast (Bcast)
    • Motivating example

[Diagram: data on the nodes before and after the broadcast]
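
For reference, the collective being discussed is the standard MPI broadcast; a minimal call (with an illustrative choice of root and datatype) looks like this:

    #include <mpi.h>

    /* After this call every rank in 'comm' holds the root's copy of
     * 'data' -- the "after" picture of the motivating example. */
    void broadcast_example(double *data, int count, MPI_Comm comm)
    {
        const int root = 0;        /* illustrative choice of root */
        MPI_Bcast(data, count, MPI_DOUBLE, root, comm);
    }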

Outline
  • Testbed Architecture
  • Model of Parallel Computation
  • Sending Simultaneously
  • Collective Communication
  • Generalized Algorithms
  • Performance Results
  • Conclusion
Generalized Algorithms
  • Short-Vector Algorithms
    • Minimum-Spanning Tree
  • Long-Vector Algorithms
    • Bucket Algorithm
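
The Bucket Algorithm referred to here circulates blocks of the vector around a logical ring. Below is a sketch of the classic single-link formulation for allgather (not the simultaneous-send generalization developed in the talk); the block layout and tag are illustrative.

    #include <mpi.h>

    /* Classic bucket (ring) allgather: in step s each rank forwards the
     * block it received in step s-1 to its right neighbor while receiving
     * a new block from its left neighbor.  After p-1 steps every rank
     * holds all p blocks. */
    void bucket_allgather(char *blocks, int block_bytes, MPI_Comm comm)
    {
        int rank, p;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &p);
        int right = (rank + 1) % p, left = (rank - 1 + p) % p;

        int send_idx = rank;                        /* start with own block */
        for (int s = 0; s < p - 1; s++) {
            int recv_idx = (send_idx - 1 + p) % p;  /* block arriving from the left */
            MPI_Sendrecv(blocks + send_idx * block_bytes, block_bytes, MPI_BYTE, right, 0,
                         blocks + recv_idx * block_bytes, block_bytes, MPI_BYTE, left, 0,
                         comm, MPI_STATUS_IGNORE);
            send_idx = recv_idx;                    /* forward what just arrived */
        }
    }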
Generalized Algorithms
  • Minimum-Spanning Tree
Generalized Algorithms
  • Minimum-Spanning Tree
    • Divide p nodes into N+1 partitions
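
For context, the classic minimum-spanning-tree broadcast splits the ranks into two halves, ships the data across the split once, and recurses; the generalization in the talk splits into N+1 (and later 2N+1) partitions so that the root's extra sends can proceed simultaneously over different links. A sketch of the classic two-partition recursion (illustrative, not the paper's implementation):

    #include <mpi.h>

    /* Minimum-spanning-tree broadcast over ranks [lo, hi], with 'root'
     * inside that range and all ranks calling with the same arguments. */
    void mst_bcast(void *buf, int count, MPI_Datatype dtype,
                   int root, int lo, int hi, MPI_Comm comm)
    {
        if (lo >= hi)
            return;
        int rank;
        MPI_Comm_rank(comm, &rank);

        int mid = (lo + hi) / 2;              /* split into [lo,mid] and [mid+1,hi] */
        int in_low = (root <= mid);
        int partner = in_low ? mid + 1 : lo;  /* becomes the root of the other half */

        if (rank == root)
            MPI_Send(buf, count, dtype, partner, 0, comm);
        else if (rank == partner)
            MPI_Recv(buf, count, dtype, root, 0, comm, MPI_STATUS_IGNORE);

        /* Recurse into whichever half this rank belongs to. */
        if (rank <= mid)
            mst_bcast(buf, count, dtype, in_low ? root : lo, lo, mid, comm);
        else
            mst_bcast(buf, count, dtype, in_low ? mid + 1 : root, mid + 1, hi, comm);
    }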
Generalized Algorithms
  • Minimum-Spanning Tree
    • Disjoint partitions on an N-dimensional mesh

[Diagram: the 16-node mesh (nodes 0-15) divided into disjoint partitions]
Generalized Algorithms
  • Minimum-Spanning Tree
    • Divide dimensions by a decrementing counter from N+1

[Diagram: the 16-node mesh (nodes 0-15) at the next level of the dimension-wise partitioning]
Generalized Algorithms
  • Minimum-Spanning Tree
    • Now divide into 2N+1 partitions

[Diagram: the 16-node mesh (nodes 0-15) divided into 2N+1 partitions]
Outline
  • Testbed Architecture
  • Model of Parallel Computation
  • Sending Simultaneously
  • Collective Communication
  • Generalized Algorithms
  • Performance Results
  • Conclusion
Performance Results

[Graph: single point-to-point communication]

Outline
  • Testbed Architecture
  • Model of Parallel Computation
  • Sending Simultaneously
  • Collective Communication
  • Generalized Algorithms
  • Performance Results
  • Conclusion
Conclusion
  • IBM Blue Gene/L supports sending simultaneously over multiple links
    • Benchmarking, checked against the cost model, verifies this claim
  • New generalized algorithms show clear performance gains
Conclusion
  • Future Directions
    • Room for optimization to reduce implementation overhead
    • What if not using MPI_COMM_WORLD?
    • A possible new variant of the Bucket Algorithm
  • Questions? [email protected]