Models of Parallel Computation


Presentation Transcript


  1. Models of Parallel Computation Advanced Algorithms & Data Structures Lecture Theme 12 Prof. Dr. Th. Ottmann Summer Semester 2006

  2. Overview • Models of parallel computation • Characteristics of SIMD models • Design issues for network SIMD models • The mesh and the hypercube architectures • Classification of the PRAM model • Matrix multiplication on the EREW PRAM

  3. Models of parallel computation • Parallel computational models can be broadly classified into two categories: • Single Instruction Multiple Data (SIMD) • Multiple Instruction Multiple Data (MIMD)

  4. Models of parallel computation • SIMD models are used for solving problems which have regular structures. We will mainly study SIMD models in this course. • MIMD models are more general and used for solving problems which lack regular structures.

  5. SIMD models • An N-processor SIMD computer has the following characteristics: • Each processor can store both program and data in its local memory. • Each processor stores an identical copy of the same program in its local memory.

  6. SIMD models • At each clock cycle, each processor executes the same instruction from this program. However, the data are different in different processors. • The processors communicate among themselves either through an interconnection network or through a shared memory.
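
This lockstep behaviour is easy to picture in code. Below is a minimal Python sketch (an illustration, not part of the original slides; the function name is ours): one clock cycle applies a single instruction in every processor, each operating on its own local data.

```python
def simd_step(instruction, local_data):
    """One SIMD clock cycle: every processor executes the same
    instruction, each on the contents of its own local memory."""
    return [instruction(x) for x in local_data]

# Four processors hold different data but execute the same instruction.
data = [1, 2, 3, 4]
data = simd_step(lambda x: x * x, data)
print(data)  # [1, 4, 9, 16]
```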

  7. Design issues for network SIMD models • A network SIMD model is a graph. The nodes of the graph are the processors and the edges are the links between the processors. • Since each processor solves only a small part of the overall problem, the processors must communicate with each other while solving it.

  8. Design issues for network SIMD models • The main design issues for network SIMD models are communication diameter, bisection width, and scalability. • We will discuss the two most popular network models, the mesh and the hypercube, in this lecture.

  9. Communication diameter • Communication diameter is the diameter of the graph that represents the network model. The diameter of a graph is the maximum, over all pairs of nodes, of the shortest-path distance between them. • If the diameter of a model is d, then Ω(d) is a lower bound for any computation that may require communication between arbitrary processors.
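
For illustration, the diameter of a small network can be computed by running a breadth-first search from every node. The sketch below is not from the original lecture; the ring network is just an example graph.

```python
from collections import deque

def diameter(adj):
    """Diameter of a connected graph: the maximum, over all pairs of
    nodes, of the shortest-path distance, via a BFS from every node."""
    best = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

# Example: a ring of 8 processors has diameter 4.
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(diameter(ring))  # 4
```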

  10. Communication diameter • The data can be distributed in such a way that the two furthest nodes may need to communicate.

  11. Communication diameter Communication between the two furthest nodes takes Ω(d) time steps.

  12. Bisection width • The bisection width of a network model is the minimum number of links that must be removed to split the graph into two equal halves. • If the bisection width is large, more information can be exchanged between the two halves of the graph, and hence problems can be solved faster.
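
Computing the exact bisection width is hard for general graphs, but for tiny examples it can be found by brute force over all balanced splits. The following Python sketch is purely illustrative and assumes an undirected graph given as an edge list.

```python
from itertools import combinations

def bisection_width(nodes, edges):
    """Minimum number of links crossing any split of the nodes into
    two equal halves (brute force; feasible only for tiny graphs)."""
    nodes = list(nodes)
    best = None
    for part in combinations(nodes, len(nodes) // 2):
        left = set(part)
        cut = sum(1 for u, v in edges if (u in left) != (v in left))
        best = cut if best is None else min(best, cut)
    return best

# Example: a 2 x 2 mesh (a 4-cycle) has bisection width 2.
edges = [(0, 1), (1, 3), (3, 2), (2, 0)]
print(bisection_width(range(4), edges))  # 2
```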

  13. Bisection width Dividing the graph into two parts.

  14. Scalability • A network model must be scalable so that more processors can be easily added when new resources are available. • The model should be regular so that each processor has a small number of links incident on it.

  15. Scalability • If the number of links per processor is large, it is difficult to add new processors, as too many new links have to be added. • There is a trade-off: if we want to keep the diameter small, we need more links per processor; if we want the model to be scalable, we need fewer links per processor.

  16. Diameter and Scalability • The best model in terms of diameter is the complete graph. The diameter is 1. However, if we need to add a new node to an n-processor machine, we need n new links.

  17. Diameter and Scalability • The best model in terms of scalability is the linear array. We need to add only one link for a new processor. However, the diameter is n - 1 for a machine with n processors.
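
The trade-off of the last two slides can be tabulated in a few lines of Python (an illustrative sketch; the figures follow directly from the two slides above):

```python
def stats(model, n):
    """Diameter of an n-processor machine and the number of new
    links needed to attach one more processor, per model."""
    if model == "complete graph":
        return {"diameter": 1, "links for a new node": n}
    else:  # linear array
        return {"diameter": n - 1, "links for a new node": 1}

for model in ("complete graph", "linear array"):
    print(model, stats(model, 64))
```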

  18. The mesh architecture • Each internal processor of a 2-dimensional mesh is connected to 4 neighbors. • When we combine two different meshes, only the processors on the boundary need extra links. Hence it is highly scalable.

  19. The mesh architecture • Both the diameter and the bisection width of an n-processor, 2-dimensional mesh are O(√n). • A 4 × 4 mesh
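
As a quick sanity check (an illustrative Python sketch, not part of the lecture), one can build a side × side mesh and verify by breadth-first search that its diameter is 2(side - 1), i.e. O(√n) for n = side² processors.

```python
from collections import deque

def mesh_diameter(side):
    """BFS-based diameter of a side x side mesh; for n = side**2
    processors this comes out to 2 * (side - 1), which is O(sqrt(n))."""
    def neighbors(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < side and 0 <= nc < side:
                yield nr, nc

    best = 0
    for src in ((r, c) for r in range(side) for c in range(side)):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in neighbors(*u):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

print(mesh_diameter(4))  # 6 == 2 * (4 - 1) for the 4 x 4 mesh
```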

  20. The hypercube architecture Hypercubes of 0, 1, 2 and 3 dimensions

  21. The hypercube architecture • Each node of a d-dimensional hypercube is numbered using d bits. Hence, there are 2^d processors in a d-dimensional hypercube. • Two nodes are connected by a direct link if their numbers differ in exactly one bit.
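
The bit-flip rule translates directly into code. A minimal Python sketch (illustrative; the function name is not from the slides):

```python
def hypercube_neighbors(node, d):
    """Neighbors of a node in a d-dimensional hypercube: flip each
    of the d bits of its number in turn."""
    return [node ^ (1 << bit) for bit in range(d)]

# In a 3-dimensional hypercube, node 0b101 has exactly 3 neighbors.
print([bin(v) for v in hypercube_neighbors(0b101, 3)])
# ['0b100', '0b111', '0b1']
```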

  22. The hypercube architecture • The diameter of a d-dimensional hypercube is d, as we need to flip at most d bits (traverse d links) to reach one processor from another. • The bisection width of a d-dimensional hypercube is 2^(d-1).
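
Equivalently, the distance between two hypercube nodes is the Hamming distance of their numbers, as this illustrative sketch shows:

```python
def hypercube_distance(u, v):
    """Shortest route between two hypercube nodes: one hop per
    differing bit, i.e. the Hamming distance of their numbers."""
    return bin(u ^ v).count("1")

# The furthest pair in a d-dimensional hypercube differs in all d bits.
d = 4
print(hypercube_distance(0, 2**d - 1))  # 4, the diameter
```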

  23. The hypercube architecture • The hypercube is a highly scalable architecture. Two d-dimensional hypercubes can easily be combined to form a (d+1)-dimensional hypercube. • The hypercube has several variants, such as the butterfly, the shuffle-exchange network, and cube-connected cycles.
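
The combination step can be checked mechanically. In the illustrative Python sketch below (function names are ours), the (d+1)-cube is assembled from two d-cubes by adding one bridging link per node, and the result is compared against a directly constructed (d+1)-cube.

```python
def hypercube_edges(d):
    """Edge list of a d-dimensional hypercube (each edge listed once)."""
    return [(u, u ^ (1 << b)) for u in range(2 ** d)
            for b in range(d) if u < u ^ (1 << b)]

def combine(d):
    """Build a (d+1)-cube from two d-cubes: keep both copies' edges,
    then join each node u in one copy to its twin u + 2**d."""
    half = hypercube_edges(d)
    shifted = [(u + 2 ** d, v + 2 ** d) for u, v in half]
    bridge = [(u, u + 2 ** d) for u in range(2 ** d)]
    return half + shifted + bridge

assert sorted(combine(3)) == sorted(hypercube_edges(4))
print(len(combine(3)))  # 32 edges in the 4-dimensional hypercube
```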

  24. Adding n numbers on the mesh Adding n numbers in √n steps
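
The slide's algorithm can be simulated serially: partial sums first sweep along each row, then the row totals sweep up the first column, for 2(√n - 1) = O(√n) communication steps in total. A minimal Python sketch (illustrative; assumes n is a perfect square):

```python
def mesh_sum(values, side):
    """Simulate adding side * side numbers on a side x side mesh:
    sweep partial sums leftward along each row, then sweep the row
    totals upward along the first column. Each sweep takes side - 1
    communication steps, so O(sqrt(n)) steps overall."""
    grid = [values[r * side:(r + 1) * side] for r in range(side)]
    # Row sweep: in step k, column side-1-k sends its sum one link left.
    for k in range(side - 1):
        for r in range(side):
            grid[r][side - 2 - k] += grid[r][side - 1 - k]
    # Column sweep: row totals (now in column 0) move up to node (0, 0).
    for k in range(side - 1):
        grid[side - 2 - k][0] += grid[side - 1 - k][0]
    return grid[0][0]

print(mesh_sum(list(range(16)), 4))  # 120, after 2 * (4 - 1) = 6 steps
```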

  25. Adding n numbers on the hypercube Adding n numbers in log n steps
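
Again a serial simulation makes the round structure explicit: in round k every node whose k-th bit is 0 adds in the value held by its neighbor across dimension k, so the total reaches node 0 after log₂ n rounds. An illustrative Python sketch (assumes n is a power of two):

```python
def hypercube_sum(values):
    """Simulate adding n = 2**d numbers on a d-dimensional hypercube:
    in round k, each node with bit k clear adds the value held by its
    neighbor across dimension k; node 0 holds the total after d rounds."""
    vals = list(values)
    n = len(vals)
    d = n.bit_length() - 1  # assumes n is a power of two
    for k in range(d):
        for node in range(n):
            if node & (1 << k) == 0:
                vals[node] += vals[node ^ (1 << k)]
    return vals[0]

print(hypercube_sum(range(16)))  # 120, after log2(16) = 4 rounds
```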
