

  1. What Programming Paradigms and Algorithms for Petascale Scientific Computing? An Attempt at a Hierarchical Programming Methodology Serge G. Petiton June 23rd, 2008 Japan-French Informatic Laboratory (JFIL)

  2. Outline • Introduction • Present Petaflops, on the Road to Future Exaflops • Experiments, toward models and extrapolations • Conclusion

  3. Outline • Introduction • Present Petaflops, on the Road to Future Exaflops • Experiments, toward models and extrapolations • Conclusion

  4. Introduction • The Petaflop frontier was crossed (during the night of May 25-26) – Top500 • Sustained Petaflop performance will soon be available on a large number of computers • As anticipated since the '90s, no large technological gap had to be crossed to reach Petaflop computers • Languages and tools have changed little since the first SMPs • What about languages, tools, and methods for a sustained 10 Petaflops? • Exaflops will probably require new technological advances and new ecosystems • On the road toward Exaflops, we will soon face difficult challenges, and we have to anticipate new problems around the 10 Petaflop frontier

  5. Outline • Introduction • Present Petaflops, on the Road to Future Exaflops • Experiments, toward models and extrapolations • Conclusion

  6. Hyper Large Scale Hierarchical Distributed Parallel Architectures • Many-core processors ask for new programming paradigms, such as data parallelism, • Message passing would remain efficient within a gang of clusters, • Workflow and Grid-like programming may be the solution for the highest-level programming (see the sketch after this slide), • Accelerators, vector computing, • Energy consumption optimization, • Optical networks, • “Inter” and “intra” (chip, cluster, gang, …) communications, • Distributed/shared-memory computer on a chip.
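To make the three levels concrete, here is a minimal, hypothetical Python sketch (not from the talk; all names are illustrative): a top-level workflow splits the work, message passing distributes tasks across processes, and each task is a data-parallel numpy kernel.

    # Hypothetical sketch of the three programming levels named above.
    import numpy as np
    from multiprocessing import Pool

    def block_task(block):
        # Level 3 (data parallel): a vectorized kernel on one block.
        return float(np.sum(block * block))

    if __name__ == "__main__":
        matrix = np.random.rand(2048, 2048)
        blocks = np.array_split(matrix, 8)   # Level 1 (workflow): cut the work into tasks
        with Pool(processes=4) as pool:      # Level 2 (message passing between processes)
            partial = pool.map(block_task, blocks)
        print("Frobenius norm:", np.sqrt(sum(partial)))

Each level could be swapped independently: the task graph to YML, the process pool to MPI ranks, the numpy kernel to an accelerator kernel.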

  7. On the road from Petaflops toward Exaflops • Multiple programming and execution paradigms, • Technological and software challenges: compilers, systems, middleware, schedulers, fault tolerance, … • New applications and numerical methods, • Arithmetic and elementary functions (multiple-precision and hybrid), • Data distributed on networks and grids, • Education challenges: we have to educate scientists

  8. …and the road would be difficult… • Multi-level programming paradigms, • Component technologies, • Mixed data migration and computing, with large instrument control, • We have to use end-users' expertise, • Non-deterministic distributed computing, component dependence graphs, • Middleware and platform independence, • “Time to solution” minimization, new metrics, • We have to allow end-users to propose scheduler assistance and to give advice that anticipates data migration

  9. Outline • Introduction • Present Petaflops, on the Road to Future Exaflops • Experiments, toward models and extrapolations • Conclusion

  10. YML Language Front end: depends only on the applications. Back end: depends on the middleware, e.g. XtremWeb (France), OmniRPC (Japan), and Condor (USA). http://yml.prism.uvsq.fr/

  11. Components/Tasks Graph Dependency The YML-like pseudocode (tache is French for task; // separates parallel branches):

    par
      compute tache1(..); signal(e1);
    //
      compute tache2(..); migrate matrix(..); signal(e2);
    //
      wait(e1 and e2);
      par
        compute tache3(..); signal(e3);
      //
        compute tache4(..); signal(e4);
      //
        compute tache5(..); control robot(..); signal(e5); visualize mesh(…);
      end par
    //
      wait(e3 and e4 and e5);
      compute tache6(..);
      compute tache7(..);
    end par

[Figure: the corresponding dependence graph, with a generic component node, begin and end nodes, graph nodes, and dependence edges leading to the result.]
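As an illustration only (not YML itself), the same signal/wait pattern can be mimicked with Python threads and events; the tacheN bodies are hypothetical stand-ins:

    # Hypothetical illustration of the signal/wait dependencies above.
    import threading

    e1, e2 = threading.Event(), threading.Event()

    def tache1():
        print("tache1 done"); e1.set()                    # signal(e1)

    def tache2():
        print("tache2 done, matrix migrated"); e2.set()   # signal(e2)

    def tache3_4_5():
        e1.wait(); e2.wait()                              # wait(e1 and e2)
        print("tache3, tache4, tache5 may now run in parallel")

    threads = [threading.Thread(target=f) for f in (tache3_4_5, tache1, tache2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("tache6, tache7 run last")                      # after wait(e3 and e4 and e5)

A real YML scheduler does the same bookkeeping at grid scale, choosing when and where each component runs.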

  12. LAKe Library (Nahid Emad, UVSQ)

  13. YML/LAKe

  14. Block Gauss-Jordan, 101-processor cluster, Grid 5000; YML versus YML/OmniRPC (with Maxime Hugues, TOTAL and LIFL) [Figure: time versus number of blocks, block size = 1500] We optimize the “time to solution”; several middleware may be chosen. A serial sketch of the algorithm follows.
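For reference, a minimal serial numpy sketch of block Gauss-Jordan inversion (my reconstruction, not the distributed YML code); in the YML version, each block update below becomes an independent task of the dependence graph:

    import numpy as np

    def block_gauss_jordan_inverse(A, b):
        # Serial block Gauss-Jordan inversion; assumes the order of A is a
        # multiple of b and every pivot block is invertible (no pivoting).
        n = A.shape[0]
        M = np.hstack([A.astype(float), np.eye(n)])   # augmented system [A | I]
        for k in range(0, n, b):
            kk = slice(k, k + b)
            # Normalize the pivot block-row (left-multiply by the pivot block inverse).
            M[kk, :] = np.linalg.solve(M[kk, kk], M[kk, :])
            # Eliminate the pivot block-column from every other block-row;
            # each (i, k) update is an independent task in the YML graph.
            for i in range(0, n, b):
                if i != k:
                    ii = slice(i, i + b)
                    M[ii, :] -= M[ii, kk] @ M[kk, :]
        return M[:, n:]                               # right half now holds the inverse

    A = np.random.rand(6, 6) + 6 * np.eye(6)          # small well-conditioned test
    print(np.allclose(block_gauss_jordan_inverse(A, 2) @ A, np.eye(6)))  # True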

  15. GRID 5000, BGJ, 10, 101 nodes, YML versus YML/OmniRPC [Figure: block size = 1500]

  16. BGJ, YML/OmniRPC versus YML [Figure: block size = 1500]

  17. Asynchronous Restarted Iterative Methods on multi-node computers With Guy Bergère, Zifan Li, and Ye Zhang (LIFL)
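For background, a minimal numpy sketch of one such restarted method, the explicitly restarted Arnoldi method (my illustration, not the experimental code); in the asynchronous multi-node variant, several instances restart from the best Ritz information available, without global synchronization:

    import numpy as np

    def arnoldi(A, v, m):
        # One Arnoldi projection: orthonormal basis V, Hessenberg matrix H.
        n = A.shape[0]
        V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
        V[:, 0] = v / np.linalg.norm(v)
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):                 # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
        return V, H

    def restarted_arnoldi(A, m=10, cycles=30):
        # Explicitly restarted Arnoldi for the dominant eigenpair (serial sketch).
        v = np.random.rand(A.shape[0])
        for _ in range(cycles):
            V, H = arnoldi(A, v, m)
            evals, evecs = np.linalg.eig(H[:m, :m])
            k = np.argmax(np.abs(evals))
            v = (V[:, :m] @ evecs[:, k]).real      # restart with the best Ritz vector
        return evals[k], v

    lam, _ = restarted_arnoldi(np.random.rand(200, 200))
    print("dominant eigenvalue estimate:", lam)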

  18. Convergence on GRID 5000 [Figure: residual norm versus time (seconds)]

  19. One or two distributed sites, same number of processors, communication overlay [Figure: one site versus two sites]

  20. Cell/GPU, CEA/DEN: with Christophe Calvin and Jérôme Dubois (CEA/DEN Saclay) • MINOS/APOLLO3 solver • Neutronic transport problem • Power method to compute the dominant eigenvalue (sketched below) • Slow convergence • Large number of floating point operations • Experiments on: • Bi-Xeon quad-core 2.83 GHz (45 GFlops) • Cell Blade (CINES, Montpellier) (400 GFlops) • GPU Quadro FX 4600 (240 GFlops)
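A minimal numpy sketch of the power method (illustration only, not the MINOS/APOLLO3 code); running the same iteration in float32 and float64 shows the kind of arithmetic accuracy gap measured on these accelerators, which largely compute in single precision:

    import numpy as np

    def power_method(A, tol=1e-6, max_iter=5000):
        # Power method for the dominant eigenvalue: one matrix-vector
        # product per iteration, linear (often slow) convergence.
        v = np.ones(A.shape[0], dtype=A.dtype)
        lam = 0.0
        for _ in range(max_iter):
            w = A @ v
            lam_new = np.linalg.norm(w)
            v = w / lam_new
            if abs(lam_new - lam) < tol:
                break
            lam = lam_new
        return lam

    A64 = np.random.rand(500, 500)
    print("float64:", power_method(A64))
    print("float32:", power_method(A64.astype(np.float32)))  # accuracy difference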

  21. Power method: performances [Figure: performance versus matrix size]

  22. Power method: arithmetic accuracy [Figure: difference between computed results]

  23. Arnoldi projection: performances [Figure: performance versus matrix size]

  24. Arnoldi projection: arithmetic accuracy [Figure: orthogonalization degradation]
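The degradation shown here can be reproduced in miniature: build an Arnoldi basis in float64 and float32 and measure the loss of orthogonality of the computed basis (a hypothetical small-scale illustration, not the CEA data):

    import numpy as np

    def arnoldi_basis(A, m):
        # Arnoldi basis with modified Gram-Schmidt, in the precision of A.
        n = A.shape[0]
        V = np.zeros((n, m + 1), dtype=A.dtype)
        V[:, 0] = 1.0 / np.sqrt(n)                # unit-norm starting vector
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):
                w = w - (V[:, i] @ w) * V[:, i]
            V[:, j + 1] = w / np.linalg.norm(w)
        return V

    A64 = np.random.rand(300, 300)
    for A in (A64, A64.astype(np.float32)):
        V = arnoldi_basis(A, 50)
        # Orthogonality loss of the basis: noticeably larger in float32.
        print(A.dtype, np.linalg.norm(V.T @ V - np.eye(V.shape[1])))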

  25. Outline • Introduction • Present Petaflops, on the Road to Future Exaflops • Experiments, toward models and extrapolations • Conclusion

  26. Conclusion • We plan to extrapolate, from Grid 5000 and our multi-core experiments, some behaviors of the future hierarchical large petascale computers, using YML for the highest level, • We need to propose new high-level languages to program large Petaflop computers, to be able to minimize “time to solution” and energy consumption, with system and middleware independence, • Other important codes would still be carefully “hand-optimized”, • Several programming paradigms, matched to the different levels, have to be mixed; the interfaces have to be well specified; MPI would probably be very difficult to dethrone, • End-users have to be able to give expertise to help middleware management, such as scheduling, and to choose libraries, • New asynchronous hybrid methods have to be introduced
