
Complexity Measures for Parallel Computation

This article explores complexity measures for parallel computation, including execution time, parallelism, and communication volume. It discusses the Work/Span Model, the Communication Volume Model, the Latency/Bandwidth Model, and the Cache Memory Model.



Presentation Transcript


  1. Complexity Measures for Parallel Computation

  2. Several possible models!
     • Execution time and parallelism:
       • Work / Span Model
     • Total cost of moving data:
       • Communication Volume Model
     • Detailed models that try to capture time for moving data:
       • Latency / Bandwidth Model (for message-passing)
       • Cache Memory Model (for hierarchical memory)
     • Other detailed models we won’t discuss: LogP, UMH, …

  3. Work / Span Model
     tp = execution time on p processors

  4. Work / Span Model
     tp = execution time on p processors
     t1 = work

  5. Work / Span Model
     tp = execution time on p processors
     t1 = work
     t∞ = span*
     * Also called critical-path length or computational depth.

  6. Work / Span Model
     tp = execution time on p processors
     t1 = work
     t∞ = span*
     • WORK LAW: tp ≥ t1/p
     • SPAN LAW: tp ≥ t∞
     * Also called critical-path length or computational depth.
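  A minimal Python sketch (illustrative, not from the slides) of how the two laws combine into a lower bound on tp, assuming the work t1, span t∞, and processor count p are already known:

    def min_parallel_time(t1, t_inf, p):
        # Lower bound on tp implied by the Work Law (tp >= t1/p)
        # and the Span Law (tp >= t_inf).
        return max(t1 / p, t_inf)

    # Example: work = 1000 units, span = 50 units.
    print(min_parallel_time(1000, 50, 8))    # 125.0  (Work Law dominates)
    print(min_parallel_time(1000, 50, 100))  # 50.0   (Span Law dominates)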

  7. Series Composition (A followed by B)
     Work: t1(A∪B) = t1(A) + t1(B)
     Span: t∞(A∪B) = t∞(A) + t∞(B)

  8. Parallel Composition (A in parallel with B)
     Work: t1(A∪B) = t1(A) + t1(B)
     Span: t∞(A∪B) = max{t∞(A), t∞(B)}
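  The two composition rules apply recursively to a whole computation. A minimal Python sketch, assuming the computation is given as a nested series/parallel expression with numeric leaf costs (an illustration, not code from the slides):

    def work(node):
        # Work adds up under both series and parallel composition.
        if isinstance(node, (int, float)):
            return node
        _kind, children = node
        return sum(work(c) for c in children)

    def span(node):
        # Span adds under series composition, takes the max under parallel composition.
        if isinstance(node, (int, float)):
            return node
        kind, children = node
        parts = [span(c) for c in children]
        return sum(parts) if kind == "series" else max(parts)

    # A (cost 3) in series with (B and C in parallel, costs 4 and 5):
    g = ("series", [3, ("parallel", [4, 5])])
    print(work(g))  # 12
    print(span(g))  # 3 + max(4, 5) = 8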

  9. Speedup
     Def. t1/tp = speedup on p processors.
     • If t1/tp = Θ(p), we have linear speedup;
     • if t1/tp = p, we have perfect linear speedup;
     • if t1/tp > p, we have superlinear speedup (which is not possible in this model, because of the Work Law tp ≥ t1/p).

  10. Parallelism
      Because the Span Law requires tp ≥ t∞, the maximum possible speedup is
      t1/t∞ = (potential) parallelism = the average amount of work per step along the span.

  11. Communication Volume Model
      • Network of p processors, each with local memory
      • Message-passing
      • Communication volume (v): total size (in words) of all messages passed during the computation
      • Broadcasting one word costs volume p (actually, p−1)
      • No explicit accounting for communication time
      • Thus, can’t really model parallel efficiency or speedup; for that, we’d use the latency/bandwidth model (slide 14)
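  To illustrate how volume is counted under this model (a sketch, not code from the slides):

    def broadcast_volume(p, words=1):
        # Broadcasting `words` words from one processor to the other p-1
        # processors contributes (p-1)*words to the communication volume,
        # regardless of how long the messages take.
        return (p - 1) * words

    print(broadcast_volume(16))      # 15
    print(broadcast_volume(16, 100)) # 1500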

  12. Complexity Measures for Parallel Computation
      Problem parameters:
      • n: index of problem size
      • p: number of processors
      Algorithm parameters:
      • tp: running time on p processors
      • t1: time on 1 processor = sequential time = “work”
      • t∞: time on unlimited processors = critical path length = “span”
      • v: total communication volume
      Performance measures:
      • speedup s = t1 / tp
      • efficiency e = t1 / (p*tp) = s / p
      • (potential) parallelism pp = t1 / t∞
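  A small illustrative Python sketch that computes these performance measures from measured t1, tp, and t∞ (function name and example numbers are made up for illustration):

    def performance_measures(t1, tp, t_inf, p):
        s = t1 / tp        # speedup
        e = s / p          # efficiency, same as t1 / (p * tp)
        pp = t1 / t_inf    # (potential) parallelism
        return s, e, pp

    # Example: t1 = 1000, tp = 150 on p = 8 processors, t_inf = 50
    # gives speedup ≈ 6.67, efficiency ≈ 0.83, parallelism = 20.0.
    print(performance_measures(1000, 150, 50, 8))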

  13. Laws of Parallel Complexity
      • Work law: tp ≥ t1 / p
      • Span law: tp ≥ t∞
      • Amdahl’s law: if a fraction f, between 0 and 1, of the work must be done sequentially, then speedup ≤ 1 / f
      • Exercise: prove Amdahl’s law from the span law.
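  An illustrative Python sketch of the Amdahl bound, assuming the parallel fraction 1 − f scales perfectly across p processors (the helper names are made up):

    def amdahl_speedup_bound(f):
        # Upper bound on speedup if a fraction f (0 < f <= 1) of the
        # work must be done sequentially.
        return 1.0 / f

    def amdahl_speedup(f, p):
        # Speedup on p processors when the parallel part scales perfectly:
        # tp = f*t1 + (1 - f)*t1/p, so speedup = 1 / (f + (1 - f)/p).
        return 1.0 / (f + (1.0 - f) / p)

    print(amdahl_speedup(0.1, 16))    # 6.4
    print(amdahl_speedup_bound(0.1))  # 10.0, the limit as p grows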

  14. Detailed complexity measures for data movement I: Latency/Bandwidth Model
      Moving data between processors by message-passing
      • Machine parameters:
        • α or tstartup: startup latency (message startup time, in seconds)
        • β or tdata: inverse bandwidth (in seconds per word)
        • Between nodes of Triton, α ≈ 2.2 × 10⁻⁶ and β ≈ 6.4 × 10⁻⁹
      • Time to send & recv or bcast a message of w words: α + w*β
      • tcomm: total communication time
      • tcomp: total computation time
      • Total parallel time: tp = tcomp + tcomm
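  A minimal sketch of the α + w*β cost accounting, using the Triton numbers from the slide as example constants (illustrative only):

    ALPHA = 2.2e-6   # startup latency per message, seconds (Triton example)
    BETA = 6.4e-9    # inverse bandwidth, seconds per word (Triton example)

    def message_time(words, alpha=ALPHA, beta=BETA):
        # Time to send & recv (or bcast) one message of `words` words.
        return alpha + words * beta

    def parallel_time(t_comp, message_sizes):
        # Total parallel time = computation time + total communication time,
        # where message_sizes lists the size of each message in words.
        t_comm = sum(message_time(w) for w in message_sizes)
        return t_comp + t_comm

    # A 1000-word message costs 2.2e-6 + 6.4e-6 = 8.6e-6 seconds.
    print(message_time(1000))
    print(parallel_time(0.5, [1000] * 100))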

  15. Detailed complexity measures for data movement II: Cache Memory Model
      Moving data between cache and memory on one processor:
      • Assume just two levels in the memory hierarchy, fast and slow
      • All data initially in slow memory
      • m = number of memory elements (words) moved between fast and slow memory
      • tm = time per slow memory operation
      • f = number of arithmetic operations
      • tf = time per arithmetic operation, tf << tm
      • q = f / m = average number of flops per slow memory access
      • Minimum possible time = f * tf, when all data is in fast memory
      • Actual time = f * tf + m * tm = f * tf * (1 + (tm/tf) * (1/q))
      • Larger q means time closer to the minimum f * tf
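  An illustrative Python sketch checking the identity on this slide, with made-up example numbers:

    def actual_time(f, m, tf, tm):
        # Direct accounting: arithmetic time plus slow-memory traffic time.
        return f * tf + m * tm

    def actual_time_via_q(f, m, tf, tm):
        # The same quantity written as f*tf*(1 + (tm/tf)*(1/q)), with q = f/m.
        q = f / m
        return f * tf * (1 + (tm / tf) * (1 / q))

    # Example: f = 1e9 flops, m = 1e8 words moved, tf = 1 ns, tm = 100 ns.
    # Here q = 10, so the run takes ~11x the minimum possible time f*tf.
    print(actual_time(1e9, 1e8, 1e-9, 1e-7))        # 11.0 seconds
    print(actual_time_via_q(1e9, 1e8, 1e-9, 1e-7))  # 11.0 seconds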
