
Collective Communication on Architectures that Support Simultaneous Communication over Multiple Links

Ernie Chan


Authors

Ernie Chan

Robert van de Geijn

Department of Computer Sciences

The University of Texas at Austin

William Gropp

Rajeev Thakur

Mathematics and Computer Science Division

Argonne National Laboratory


Testbed Architecture

  • IBM Blue Gene/L

    • 3D torus point-to-point interconnect network

    • One rack

      • 1024 dual-processor nodes

      • Two 8 x 8 x 8 midplanes

    • Special feature: a node can send simultaneously over multiple links

      • Accessed by issuing multiple calls to MPI_Isend (sketched below)
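A minimal sketch of how such simultaneous sends could be issued from a single node, assuming hypothetical neighbor ranks in neighbors[] and a message buffer buf; it only illustrates the idea and is not the presentation's actual code.

    #include <mpi.h>

    /* Post non-blocking sends to m neighbors so that several torus links can be
     * driven at once, then wait for all of them to complete.  buf, n, and
     * neighbors[] are placeholders for illustration. */
    void send_to_neighbors(void *buf, int n, const int *neighbors, int m)
    {
        MPI_Request reqs[6];              /* at most 2N = 6 links on a 3D torus */
        for (int i = 0; i < m; i++)
            MPI_Isend(buf, n, MPI_BYTE, neighbors[i], 0 /* tag */,
                      MPI_COMM_WORLD, &reqs[i]);
        MPI_Waitall(m, reqs, MPI_STATUSES_IGNORE);
    }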



Outline

  • Testbed Architecture

  • Model of Parallel Computation

  • Sending Simultaneously

  • Collective Communication

  • Generalized Algorithms

  • Performance Results

  • Conclusion



Model of Parallel Computation

  • Target Architectures

    • Distributed-memory parallel architectures

  • Indexing

    • p computational nodes

    • Indexed 0 … p - 1

  • Logically Fully Connected

    • A node can send directly to any other node



Model of Parallel Computation

  • Topology

    • N-dimensional torus

[Figure: example torus of 16 nodes, numbered 0–15]


Model of Parallel Computation

  • Old Model of Communicating Between Nodes

    • Unidirectional sending or receiving

    • Simultaneous sending and receiving

    • Bidirectional exchange


Model of Parallel Computation

  • Communicating Between Nodes

    • A node can send to or receive from 2N other nodes simultaneously along its 2N different links

    • A node cannot perform a bidirectional exchange on any link while it is sending or receiving simultaneously with multiple nodes



Model of Parallel Computation

  • Cost of Communication

    α + nβ

    • α: startup time, latency

    • n: number of bytes to communicate

    • β: transmission time per byte (inverse of bandwidth)
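As a quick illustration, the model can be expressed as a small helper in C; the function name and parameters are invented here, and α and β would have to be measured for the target machine.

    /* Estimated time (in seconds) to send n bytes over one link under the
     * alpha-beta model: startup latency plus per-byte transmission time. */
    double model_send_time(double alpha, double beta, double n_bytes)
    {
        return alpha + n_bytes * beta;
    }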



Sending Simultaneously

  • Old Cost of Communication with Sends to Multiple Nodes

    • Cost to send to m separate nodes

      (α + nβ) m


Sending Simultaneously

  • New Cost of Communication with Simultaneous Sends

    (α + nβ) m

    can be replaced with

    (α + nβ) + (α + nβ) (m - 1) τ,  where 0 ≤ τ ≤ 1

    • (α + nβ): cost of one send

    • (α + nβ) (m - 1) τ: cost of the extra simultaneous sends



Sending Simultaneously

  • Benchmarking Sending Simultaneously

    • Log-log timing graphs

    • Midplane – 512 nodes

    • Sending simultaneously with 1 – 6 neighbors

    • Message sizes from 8 bytes to 4 MB (benchmark sketch below)
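A rough sketch of how such a benchmark might be written in MPI, with rank 0 timing simultaneous sends to ranks 1 through m; the real experiments used torus neighbors on Blue Gene/L with more careful timing and repetition, so treat this only as an outline. It needs at least 7 ranks.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int n = 8; n <= 4 * 1024 * 1024; n *= 2) {     /* 8 bytes ... 4 MB */
            char *buf = malloc(n);
            for (int m = 1; m <= 6; m++) {                  /* 1 - 6 neighbors  */
                MPI_Barrier(MPI_COMM_WORLD);
                double t = MPI_Wtime();
                if (rank == 0) {                            /* m simultaneous sends */
                    MPI_Request reqs[6];
                    for (int i = 0; i < m; i++)
                        MPI_Isend(buf, n, MPI_BYTE, i + 1, 0,
                                  MPI_COMM_WORLD, &reqs[i]);
                    MPI_Waitall(m, reqs, MPI_STATUSES_IGNORE);
                } else if (rank <= m) {                     /* one of the m receivers */
                    MPI_Recv(buf, n, MPI_BYTE, 0, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                }
                t = MPI_Wtime() - t;
                if (rank == 0)
                    printf("n = %d bytes, m = %d neighbors: %.6f s\n", n, m, t);
            }
            free(buf);
        }
        MPI_Finalize();
        return 0;
    }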


Sending Simultaneously

[Log-log timing graphs: sending simultaneously to 1–6 neighbors]

Sending Simultaneously

  • Cost of Communication with Simultaneous Sends

    (α + nβ) (1 + (m - 1) τ)
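To see what the model predicts, the sequential cost (α + nβ) m can be compared against (α + nβ)(1 + (m - 1) τ) for several values of m; the α, β, and τ values below are arbitrary placeholders for illustration, not measured Blue Gene/L parameters.

    #include <stdio.h>

    /* Alpha-beta model: time for a single send of n bytes. */
    static double send_time(double alpha, double beta, double n)
    {
        return alpha + n * beta;
    }

    int main(void)
    {
        double alpha = 5e-6;      /* startup latency (s) -- placeholder            */
        double beta  = 5e-9;      /* per-byte transmission time (s) -- placeholder */
        double tau   = 0.1;       /* relative cost of each extra simultaneous send */
        double n     = 1 << 20;   /* 1 MB message */

        for (int m = 1; m <= 6; m++) {
            double sequential   = m * send_time(alpha, beta, n);
            double simultaneous = send_time(alpha, beta, n) * (1.0 + (m - 1) * tau);
            printf("m = %d: sequential = %.6f s, simultaneous = %.6f s\n",
                   m, sequential, simultaneous);
        }
        return 0;
    }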



Collective Communication

  • Broadcast (Bcast)

    • Motivating example

      [Figure: data before and after the broadcast]
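For reference, the broadcast studied here is the standard MPI collective; a minimal call looks like the following, with the buffer, byte count, and root chosen arbitrarily for illustration.

    #include <mpi.h>

    /* After MPI_Bcast returns, every rank in the communicator holds a copy of
     * the n bytes that initially resided only on the root (rank 0 here). */
    void bcast_example(void *buf, int n)
    {
        MPI_Bcast(buf, n, MPI_BYTE, 0 /* root */, MPI_COMM_WORLD);
    }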



Generalized Algorithms

  • Short-Vector Algorithms

    • Minimum-Spanning Tree (classical version sketched below)

  • Long-Vector Algorithms

    • Bucket Algorithm
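As background for the generalization that follows, here is a sketch of the classical minimum-spanning-tree (binomial-tree) broadcast by recursive halving, written for root 0; it uses one link at a time and is the short-vector baseline, not the generalized algorithm of this talk.

    #include <mpi.h>

    /* Classical MST broadcast from rank 0: at every step the current root
     * forwards the message to the root of the other half of its partition,
     * and both halves recurse. */
    void mst_bcast(void *buf, int n, MPI_Comm comm)
    {
        int rank, p;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &p);

        int lo = 0, hi = p;                    /* current partition [lo, hi)    */
        while (hi - lo > 1) {
            int mid = lo + (hi - lo + 1) / 2;  /* split the partition in half   */
            if (rank < mid) {
                if (rank == lo)                /* current root sends to ...     */
                    MPI_Send(buf, n, MPI_BYTE, mid, 0, comm);
                hi = mid;                      /* stay in the left half         */
            } else {
                if (rank == mid)               /* ... the right half's new root */
                    MPI_Recv(buf, n, MPI_BYTE, lo, 0, comm, MPI_STATUS_IGNORE);
                lo = mid;                      /* continue in the right half    */
            }
        }
    }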



Generalized Algorithms

  • Minimum-Spanning Tree

    • Divide p nodes into N+1 partitions



Generalized Algorithms

  • Minimum-Spanning Tree

    • Disjoint partitions on an N-dimensional mesh

[Figure: the 16 nodes split into disjoint partitions]



Generalized Algorithms

  • Minimum-Spanning Tree

    • Divide along the dimensions using a counter that decrements from N+1

[Figure: the 16 nodes partitioned along dimensions]



Generalized Algorithms

  • Minimum-Spanning Tree

    • Now divide into 2N+1 partitions (schematic sketch after the figure)

[Figure: the 16 nodes divided into 2N+1 partitions]
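A schematic, non-MPI sketch of the send schedule this partitioning implies: each step splits the remaining nodes into at most 2N + 1 blocks and the root forwards the message to one new root per other block, so that up to 2N sends can overlap on different links. Splitting into contiguous blocks is a simplifying assumption made here for illustration; the actual algorithm partitions along the torus dimensions shown in the figures.

    #include <stdio.h>

    #define LINKS 6   /* 2N links per node on a 3D torus */

    /* Print the send schedule for the block of nodes [lo, hi) rooted at root.
     * The sends issued by one root at a given step can all proceed
     * simultaneously in the model; output appears in recursion order. */
    static void gen_mst_bcast(int lo, int hi, int root, int step)
    {
        int p = hi - lo;
        if (p <= 1)
            return;
        int parts = p < LINKS + 1 ? p : LINKS + 1;   /* at most 2N + 1 partitions */
        int base = p / parts, rem = p % parts;
        int start = lo, my_lo = lo, my_hi = hi;
        for (int k = 0; k < parts; k++) {
            int end = start + base + (k < rem ? 1 : 0);
            if (root >= start && root < end) {
                my_lo = start;                       /* the root's own partition  */
                my_hi = end;
            } else {
                printf("step %d: node %d sends to node %d\n", step, root, start);
                gen_mst_bcast(start, end, start, step + 1);
            }
            start = end;
        }
        gen_mst_bcast(my_lo, my_hi, root, step + 1);
    }

    int main(void)
    {
        gen_mst_bcast(0, 16, 0, 0);   /* 16 nodes, root 0, as in the figures */
        return 0;
    }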



Performance Results

[Graph: single point-to-point communication]

[Graph: my-bcast-MST]



Conclusion

  • The IBM Blue Gene/L supports sending simultaneously over multiple links

    • Benchmarking, checked against the cost model, verifies this claim

  • New generalized algorithms show clear performance gains



Conclusion

  • Future Directions

    • Room for optimization to reduce implementation overhead

    • What if not using MPI_COMM_WORLD?

    • A possible new variant of the Bucket Algorithm

  • [email protected]

