
Analysis of Parallel Algorithms for Energy Conservation in Scalable Multicore Architectures



Presentation Transcript


  1. Analysis of Parallel Algorithms for Energy Conservation in Scalable Multicore Architectures Vijay Anand Reddy and Gul Agha University of Illinois

  2. Overview • Motivation • Problem Definition & Assumptions • Methodology & A case study • Related Work & Conclusion

  3. Energy and Multi-core • 2% of the energy consumed in the US is used by computers. • Efficiency = Performance / Watt; we want to optimize efficiency: • Low-power processors are typically more efficient. • Varying the frequency at which cores run balances performance against energy consumption.

  4. Parallel Programming • Parallel programming involves • Dividing computation into autonomous actors • Specifying interaction (shared memory or message passing) between them.

  5. Parallel Performance • Concurrency index: how many actors may execute at the same time. This depends on the number of available cores and the speed at which they execute. • Communication overhead: how much and when actors need to communicate. Network congestion at memory affects performance. • Performance depends on both the parallel application and the parallel architecture.

  6. Scalable Multicore Architectures • We are interested in (energy) efficiency as the number of cores is scaled up… • Can multicore architectures be scaled up?

  7. Performance vs. Number of Cores • Increasing the number of cores may not benefit parallel applications if shared memory is maintained. • (Figure taken from IEEE Spectrum magazine; Sandia National Laboratories.)

  8. Message Passing, Performance and Energy Consumption • Parallel programming involves message passing between actors. • Increasing the number of cores: • Increases the number of messages communicated between actors. • May reduce performance. • May increase energy consumption. • The net effect depends on the parallel application and on architectural parameters.

  9. Energy versus Performance • For a fixed performance target, increasing the number of cores may decrease the energy consumed for computation: • Cores can be run at a lower frequency. • But increasing cores also increases the energy consumed for communication. • Question: what is the trade-off? • Depends on the parallel application. • Depends on the network architecture. • Depends on the memory structure at each core.

  10. Energy Scalability under Iso-Performance • Given a parallel algorithm, an architecture model, and a performance measure, what is the number of cores that minimizes energy consumption, as a function of input size? • Important for response time in interactive applications.

  11. Simplifying Architectural Assumptions • All cores operate at the same speed. • Speed of cores can be varied by frequency scaling. • Computation time of the cores can be scaled (by controlling the speed), but not communication time between cores. • Communication time between cores is constant. • No memory hierarchy at the cores.

  12. Energy Model • Energy: E = Ec × (number of cycles) × X² = Ec × T × X³, where • Ec is a hardware constant • X is the frequency of the processor • Running time T = (number of cycles) × (1/X)
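
  A minimal sketch of this model in Python (illustrative only; Ec is left as a unit constant, and the cubic power law is taken directly from the slide):

      # Energy model: power scales cubically with frequency X, so
      # E = Ec * T * X**3, which equals Ec * cycles * X**2 since T = cycles / X.
      def running_time(cycles, X):
          return cycles / X

      def energy(cycles, X, Ec=1.0):
          return Ec * running_time(cycles, X) * X ** 3

      # Halving the frequency doubles the running time but quarters the energy:
      # energy(1e9, 1.0) == 4 * energy(1e9, 0.5)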

  13. Constants • Em: energy consumed per message. • F: maximum frequency of a core. • N: input size. • M: number of cores. • Kc: number of cycles, at maximum frequency, taken by a single message communication. • Pidle: static power consumed per unit of time.

  14. Case Study: Adding N Numbers • Example: N numbers, 4 actors (labeled 1-4). • Each actor performs N/4 additions, followed by a communication period. • In the end, actor 1 stores the sum of all N numbers.
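
  A sketch of this case study's data flow, assuming the binary combining tree implied by the slides (each of M actors sums N/M numbers locally, then partial sums are merged over log2(M) rounds until actor 1 holds the total):

      # Pure-Python model of the reduction, not a real parallel implementation.
      # Assumes M is a power of two and divides len(numbers) evenly.
      def tree_sum(numbers, M):
          chunk = len(numbers) // M
          partial = [sum(numbers[i * chunk:(i + 1) * chunk]) for i in range(M)]
          while len(partial) > 1:  # log2(M) communication rounds
              partial = [partial[i] + partial[i + 1] for i in range(0, len(partial), 2)]
          return partial[0]  # actor 1 ends up with the sum of all N numbers

      # tree_sum(list(range(16)), 4) == sum(range(16)) == 120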

  15. Methodology • Step 1: Evaluate the critical path of the parallel algorithm.

  16. Methodology • Step 1: Evaluate the critical path of the parallel algorithm. • Step 2: Partition the critical path into computation and communication steps.

  17. Methodology • Step 3: Scale the computation steps so that the parallel performance matches the sequential performance: F′ = F · (N/M − 1 + log(M)) · β / (N · β − log(M) · Kc), where β is the number of cycles per addition.
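
  Step 3 as a sketch (parameter names follow the slides; log is taken as log2, assuming a binary combining tree):

      import math

      # F' slows computation just enough that parallel time (scaled computation
      # plus unscalable communication) equals the sequential time N*beta/F.
      def scaled_frequency(F, N, M, beta, Kc):
          compute_cycles = (N / M - 1 + math.log2(M)) * beta  # critical-path additions
          return F * compute_cycles / (N * beta - math.log2(M) * Kc)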

  18. Methodology • Step 3: Scale the computation steps so that the parallel performance matches the sequential performance. • Step 4: Evaluate the number of message sends in the parallel algorithm: M − 1 messages.

  19. Methodology • Step 5: Frame an equation for the energy consumption of the parallel application: • Energy for communication: Ecomm = Em · (M − 1) • Energy for computation: Ecomp = Ec · (N − 1) · β · F′² • Energy for idle computation (static power).
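
  Step 5's terms combined into one function (a sketch; F′ is inlined from Step 3, and the idle/static-power term is omitted because the slides do not give its expression):

      import math

      def total_energy(N, M, beta, Kc, F, Ec, Em):
          # F' from Step 3, inlined so this sketch is self-contained.
          Fp = F * (N / M - 1 + math.log2(M)) * beta / (N * beta - math.log2(M) * Kc)
          E_comm = Em * (M - 1)                   # Step 4: M - 1 messages
          E_comp = Ec * (N - 1) * beta * Fp ** 2  # all N - 1 additions at frequency F'
          return E_comm + E_comp                  # idle (Pidle) term omitted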

  20. Methodology • Step 6: Analyze the equation to obtain the number of cores that minimizes energy consumption, as a function of input size: differentiate with respect to the number of cores.
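
  In place of the closed-form derivative, a numerical sketch that scans power-of-two core counts (reusing the total_energy sketch above; the defaults echo the parameters of the plot on the next slide):

      def optimal_cores(N, beta=1.0, Kc=5.0, F=1.0, Ec=1.0, Em=500.0):
          # Em = 500 with Ec = F = 1 encodes Em / (Ec * F**2) = 500.
          candidates = [2 ** k for k in range(1, 12)]  # 2 .. 2048 cores
          return min(candidates, key=lambda M: total_energy(N, M, beta, Kc, F, Ec, Em))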

  21. Plot: Energy vs. N and M • Parameters: β = 1, Kc = 5 units, Em / (Ec · F²) = 500, Ps / F = 1. • Energy-optimal core counts: 270 cores at N = 10^10; 70 cores at N = 10^8.

  22. Sensitivity Analysis (k = Em / (Ec · F²)) • As k increases, the optimal number of cores decreases.

  23. Naïve Quicksort • Assume the input array is on a single core. • A single core partitions the array and sends part of it to another core. • Recursively divide the array until all the cores are used (assume static division). • Merge the numbers.
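
  An illustrative model of this scheme's communication volume (my own sketch, not from the slides; it idealizes every pivot as a perfect median, so each send ships half of the current partition to a fresh core):

      def naive_distribution_cost(N, M):
          moved, parts = 0, [N]
          while len(parts) < M:          # one send per newly recruited core
              part = parts.pop(0)
              sent = part // 2
              moved += sent              # elements shipped to the fresh core
              parts += [part - sent, sent]
          return moved                   # total elements moved before sorting starts

      # naive_distribution_cost(10**6, 4) == 1_000_000: the data crosses the
      # network about log2(M)/2 times over, before any merge traffic.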

  24. Naïve Quicksort Analysis

  25. Case Study: Naïve Quicksort • (Plot: energy vs. input size and number of cores.) • No trade-off: a single core is good enough.

  26. Parallel Quicksort • Data to be sorted is distributed across the cores (assume parallel I/O). • A single pivot is broadcast to all cores. • Each core partitions its own data. • Data is moved so that the smaller elements sit on cores in one region and the larger elements in another. • Recursively quicksort each region.
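
  One round of this algorithm as a sketch (lists stand in for per-core memories; the even split of cores between regions and the round-robin redistribution are simplifying assumptions of mine):

      # Broadcast a pivot, partition locally, then regroup the data so one
      # region of cores holds the smaller elements and the other the larger.
      # Assumes at least two cores.
      def parallel_quicksort_round(per_core_data, pivot):
          smaller = [x for core in per_core_data for x in core if x <= pivot]
          larger = [x for core in per_core_data for x in core if x > pivot]
          half = len(per_core_data) // 2
          spread = lambda data, k: [data[i::k] for i in range(k)]
          return spread(smaller, half) + spread(larger, len(per_core_data) - half)

      # Each region is then quicksorted recursively on its own cores.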

  27. Parallel Quicksort Analysis

  28. Parallel Quicksort Algorithm

  29. Comparing Quicksort Algorithms • Recall: Parallel Quicksort has better scalability characteristics under performance iso-efficiency than Naïve Quicksort (Kumar et al.). • However, both Quicksort algorithms have similarly poor energy scalability under iso-performance.

  30. LU Factorization • Given an N × N matrix A, find a unit lower triangular matrix L and an upper triangular matrix U such that A = LU. • Use the coarse-grain 1-D column-parallel algorithm.
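
  A sequential sketch of the factorization this algorithm computes (no pivoting; in the 1-D column-parallel version, column j would live on core j mod M, the owner of column k would compute and broadcast the multipliers, and each core would update only the columns it owns):

      # Right-looking, in-place LU without pivoting. A is a list of row lists.
      def lu_inplace(A):
          n = len(A)
          for k in range(n - 1):
              for i in range(k + 1, n):
                  A[i][k] /= A[k][k]              # multipliers: entries of L
              for j in range(k + 1, n):           # the columns a core would own
                  for i in range(k + 1, n):
                      A[i][j] -= A[i][k] * A[k][j]
          return A  # L strictly below the diagonal (unit diagonal), U on and above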

  31. LU Factorization Analysis

  32. Case Study: LU Factorization

  33. Related Work • Hardware-simulation-based technique (J. Li and J.F. Martinez): • A runtime (online) adaptation technique. • Goal: find the appropriate frequency and number of cores for power-efficient execution. • Search space: O(L · M), where L is the number of available frequency levels and M is the number of cores. • Prediction-based technique (Matthew et al.): • A performance-prediction model with low runtime overhead that dynamically adjusts L and M. • Statistically analyzes samples of hardware event rates (collected from performance monitors). • Based on profiled data collected from real workloads.

  34. Conclusion and Future Work • A theoretical methodology has been proposed to evaluate the energy-performance trade-offs of parallel applications on multicore architectures as a function of input size. • We plan to analyze various genres of parallel algorithms for energy-performance trade-offs. • We also plan to build on this methodology to consider various memory structures in the energy analysis of parallel applications.

  35. References • [1] Vipin Kumar et al. Introduction to Parallel Computing. • [2] J. Li and J.F. Martinez. Dynamic Power-Performance Adaptation of Parallel Computation on Chip Multiprocessors. 2006. • [3] Matthew et al. Prediction Models for Multi-dimensional Power-Performance Optimization on Many Cores. 2008.
