
Communication operations


Presentation Transcript


  1. Communication operations Efficient Parallel Algorithms COMP308

  2. Communication time Communication cost has 3 components: 1. Startup time (ts): • The time required to prepare and handle a message at the sending processor. 2. Per-hop time (th), with l the number of links the message traverses: • After a message leaves a processor, it takes a finite amount of time to reach the next processor on its path. 3. Per-word transfer time (tw), with m the number of words in the message: • If the channel bandwidth is r words per second, then each word takes time tw = 1/r to traverse a link.
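A quick numeric illustration of these three parameters (hypothetical values chosen only for this example; Python used purely as a calculator):

    # Hypothetical parameter values, for illustration only
    ts = 10e-6      # startup time, seconds
    th = 1e-6       # per-hop time, seconds
    r  = 1e8        # channel bandwidth, words per second
    tw = 1.0 / r    # per-word transfer time = 1/r = 10 ns

    m = 1000        # message size in words
    # Time to push an m-word message across a single link:
    print(ts + th + tw * m)   # 2.1e-05 seconds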

  3. There are 2 main communication schemes:

  4. “store and forward” vs “cut-through” • In “store and forward” routing, when a message traverses a path with multiple links, each intermediate node on the path forwards the message to the next node only after it has received and stored the entire message. • In “cut-through” routing, an intermediate node does not wait for the entire message to arrive before forwarding it. • A tracer is first sent from the source to the destination node to establish a connection. • Once a connection has been established, the message is broken into small fixed-size units called flits (flow control digits), which are sent one after the other. All flits follow the same path in a dovetailed fashion. • As soon as a flit is received at an intermediate node, it is passed on to the next node.
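These two schemes lead to the standard cost models ts + l*(th + tw*m) for store-and-forward and ts + l*th + tw*m for cut-through; the sketch below (my own code, function names assumed) compares them for a multi-hop message:

    def store_and_forward_time(ts, th, tw, m, l):
        # Every one of the l links must receive and store the whole
        # m-word message before forwarding it.
        return ts + l * (th + tw * m)

    def cut_through_time(ts, th, tw, m, l):
        # The tracer pays the per-hop cost on every link, but the flits
        # of the message body are pipelined behind it.
        return ts + l * th + tw * m

    # With the hypothetical parameters above and l = 4 links:
    print(store_and_forward_time(10e-6, 1e-6, 1e-8, 1000, 4))  # 5.4e-05 s
    print(cut_through_time(10e-6, 1e-6, 1e-8, 1000, 4))        # 2.4e-05 s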

  5. One to All Broadcast • Initially, only the source processor has the data of size m that needs to be broadcast. At the end of the procedure, there are p copies of the initial data, one residing at each processor.

  6. Broadcast on ring (Store and Forward) If the sender sends the message consecutively to the p-1 other processors, it takes p-1 steps. By sending the message in both directions around the ring simultaneously, we can reduce this to p/2 steps. E.g.: an 8-processor ring requires 4 steps.
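A minimal simulation of this two-directional broadcast (my own sketch; processors numbered 0..p-1):

    def ring_broadcast_steps(p, source=0):
        """Store-and-forward broadcast on a p-processor ring, advancing the
        message one hop clockwise and one hop anti-clockwise per step."""
        informed = {source}
        right = left = source
        steps = 0
        while len(informed) < p:
            right = (right + 1) % p     # next processor clockwise
            left = (left - 1) % p       # next processor anti-clockwise
            informed.update({right, left})
            steps += 1
        return steps

    print(ring_broadcast_steps(8))   # 4 steps for an 8-processor ring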

  7. NS diagram for “broadcast on ring”

  8. Ring network, Cut-Through routing • With cut-through routing, messages can be sent faster to nodes that are multiple hops away in the network. Using this, we send the message first to the farthest node. In general, in a p-processor ring the source processor first sends the data to the processor at distance p/2; then both processors send the message to the processors at distance p/4, in the same direction; then at distance p/8, and so on.
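A sketch of this halving-distance schedule (my own code; assumes p is a power of two and processors are numbered 0..p-1):

    def ring_ct_broadcast_rounds(p, source=0):
        """Cut-through broadcast on a p-processor ring: the informed
        processors send the message p/2, then p/4, ... hops ahead,
        all in the same direction, reaching everyone in log2(p) rounds."""
        informed = {source}
        dist = p // 2
        rounds = 0
        while dist >= 1:
            informed |= {(node + dist) % p for node in informed}
            dist //= 2
            rounds += 1
        assert len(informed) == p
        return rounds

    print(ring_ct_broadcast_rounds(8))   # 3 rounds for an 8-processor ring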

  9. Broadcast on mesh (Store and Forward) Most of the optimised communication algorithms on a mesh are simple extensions of their ring counterparts, obtained by applying the ring algorithm consecutively along each dimension of the mesh.
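For a 2D wraparound mesh, one such extension (sketched below, my own illustration) broadcasts first along the source's row and then along every column in parallel, using the ring step count in each dimension:

    def mesh_broadcast_steps(rows, cols):
        """Store-and-forward broadcast on a rows x cols wraparound mesh:
        phase 1 runs the two-directional ring broadcast along the source's
        row, phase 2 runs it along all columns simultaneously."""
        row_phase = cols // 2    # steps of the ring algorithm along a row
        col_phase = rows // 2    # steps of the ring algorithm along a column
        return row_phase + col_phase

    print(mesh_broadcast_steps(4, 4))   # 4 steps on a 4x4 wraparound mesh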

  10. Hypercube • The regular binary structure of the hypercube plays an important role in optimising communication. • Here, a broadcast is performed by sending the message along a different dimension in each step. This results in log p (= d) steps. • It can be proved easily that log p is the minimal number of steps for a broadcast in any network, since the number of processors holding the message can at most double in each step.
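The dimension-by-dimension schedule can be written down directly (my own sketch; nodes labelled by their binary addresses):

    def hypercube_broadcast(d, source=0):
        """Broadcast on a hypercube of p = 2**d processors: in step i every
        processor already holding the message sends it to its neighbour
        along dimension i (the node whose label differs in bit i)."""
        informed = {source}
        for i in range(d):
            informed |= {node ^ (1 << i) for node in informed}
        assert len(informed) == 2 ** d
        return d                          # d = log2(p) steps

    print(hypercube_broadcast(3))   # 3 steps on an 8-processor hypercube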

  11. Hypercube • Important properties of the network: • Small degree, • Small diameter, • Regular recursive structure, • Easy to embed trees, etc. • Hypercube – two nodes are connected if their labels differ in precisely one bit

  12. Hypercube – two nodes are connected if their labels differ in precisely one bit
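This adjacency rule is easy to check on the binary labels (illustrative code; labels written as Python integers):

    def hypercube_adjacent(a, b):
        """Two hypercube nodes are neighbours iff their labels differ
        in exactly one bit position."""
        return bin(a ^ b).count("1") == 1

    print(hypercube_adjacent(0b0101, 0b0111))   # True: labels differ in one bit
    print(hypercube_adjacent(0b0101, 0b0110))   # False: labels differ in two bits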

  13. [Diagram: hypercubes of dimension 1 to 4, with nodes labelled by the binary strings 0–1, 00–11, 000–111 and 0000–1111]

  14. [Diagram: a 4-dimensional hypercube with nodes labelled by 4-bit binary strings]

  15. Broadcast on hypercube (S&F)

  16. Broadcast on ring (Cut-Through)

  17. Broadcast on mesh (C-T)

  18. Broadcast on binary tree (C-T)

  19. Gossiping (All-to-All Communication)

  20. Gossiping on Ring (Store and Forward)

  21. Gossiping on Mesh (Store and Forward)

  22. Gossiping on Hypercube (S&F)

  23. Gossiping on Ring (and Mesh), Cut-Through Routing • Each processor sends m(p-1) words of data, because it has an m-word packet for every other processor. • The average distance that an m-word packet travels is roughly p/2 links. • Since there are p processors, each performing the same type of communication, the total traffic on the network is about m(p-1) * (p/2) * p words. • The total number of communication channels in the network to share this load is p, so each channel must carry about m(p-1)p/2 words, giving a transfer time of at least tw m(p-1)p/2 regardless of how messages are routed. Hence this procedure cannot be improved by using CT routing.
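The arithmetic of this lower-bound argument, with hypothetical values (my own sketch):

    def ring_all_to_all_lower_bound(p, m, tw):
        """Per-channel load argument: total traffic of m(p-1) * (p/2) * p
        words shared over p channels gives a transfer-time lower bound of
        tw * m * (p-1) * p/2, independent of the routing scheme."""
        total_traffic = m * (p - 1) * (p / 2) * p
        channels = p
        return tw * total_traffic / channels

    print(ring_all_to_all_lower_bound(p=8, m=1000, tw=1e-8))   # 0.00028 s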

  24. Gossiping on Hypercube (CT routing)
