The PRACE project and the Application Development Programme (WP8-2IP) Claudio Gheller (ETH-CSCS)
PRACE - Partnership for Advanced Computing in Europe • PRACE has the aim of creating a European Research Infrastructure that provides world-class systems and services and coordinates their use throughout Europe.
PRACE RI – PRACE History, an Ongoing Success Story (timeline 2004–2013)
• Milestones: HPC becomes part of the ESFRI Roadmap and a vision involving 15 European countries is created; creation of the Scientific Case; signature of the MoU; creation of the PRACE Research Infrastructure
• Phases: HPCEUR, HET, PRACE Initiative, PRACE Preparatory Phase Project, PRACE-1IP, PRACE-2IP, PRACE-3IP
PRACE-2IP
• 22 partners (21 countries), funding: 18 million €
• Preparation/coordination: FZJ/JSC/PRACE PMO
• Duration: 1.9.2011 – 31.8.2013, extended to 31.8.2014 (only for selected WPs)
• Main objectives:
  • provision of access to HPC resources
  • refactoring and scaling of major user codes
  • Tier-1 integration (DEISA → PRACE)
  • consolidation of the Research Infrastructure
PRACE-3IP
• Funding: 20 million €
• Started: summer 2013
• Objectives:
  • provision of access to HPC resources
  • planned: pre-commercial procurement exercise
  • planned: industry application focus
Access to Tier-0 supercomputers: the peer-review process
• Open Call for Proposals
• Technical Peer Review: technical experts in PRACE systems and software
• Scientific Peer Review: researchers with expertise in the scientific field of the proposal
• Prioritisation + Resource Allocation: Access Committee; the PRACE director decides on the proposal of the Access Committee and the researcher is informed
• Project (~1 year) + Final Report
Indicative durations: ~3 months for the peer review, ~2 months for prioritisation and resource allocation, ~1 year for the project.
PRACE-2IP WP8: Enabling Scientific Codes to the Next Generation of HPC Systems
PRACE-2IP work packages (ETH leading WP8)
• WP1 Management
• WP2 Framework for Resource Interchange
• WP3 Dissemination
• WP4 Training
• WP5 Best Practices for HPC Systems Commissioning
• WP6 European HPC Infrastructure Operation and Evolution
• WP7 Scaling Applications for Tier-0 and Tier-1 Users
• WP8 Community Code Scaling
• WP9 Industrial Application Support
• WP10 Advancing the Operational Infrastructure
• WP11 Prototyping
• WP12 Novel Programming Techniques
WP8 objectives
• Initiate a sustainable programme of application development for the coming generation of supercomputing architectures, with a selection of community codes targeted at problems of high scientific impact that require HPC.
• Refactor community codes in order to optimally map applications onto future supercomputing architectures.
• Integrate and validate these new developments within the existing application communities.
WP8 principles
• The scientific communities, with their high-end research challenges, are the main drivers of software development.
• Synergy between HPC experts and application developers from the communities.
• Supercomputing centres have to recast their service activities in order to support, guide and enable scientific program developers and researchers in refactoring codes and re-engineering algorithms.
• A strong commitment from the scientific community has to be secured.
WP8 workflow
• Task 1 – Scientific Domains and Communities Selection: scientific communities engagement, codes screening, codes performance analysis and modelling, communities build-up, codes and kernels selection
• Task 2 – Codes Refactoring: communities consolidation, prototypes experimentation
• Task 3 – Code Validation and Reintegration
Communities selection (task 1) – criteria:
• the candidate community must have a high impact on science and/or society;
• the candidate community must rely on and leverage high performance computing;
• WP8 can have a high impact on the candidate community;
• the candidate community must be willing to actively invest in software refactoring and algorithm re-engineering.
Selected communities: Astrophysics, Climate, Material Science, Particle Physics, Engineering
Codes and kernels selection methodology (task 1)
• Performance modelling methodology: an objective and quantitative way to select codes and to estimate the possible performance improvements.
• The goal of performance modelling is to gain insight into an application's performance on a given computer system; this is achieved first by measurement and analysis, and then by the synthesis of the application and computing-system characteristics.
• It also serves as a predictive tool, estimating the behaviour on a different computing architecture and identifying the most promising areas for performance improvement.
Codes Refactoring (task 2)
• Still running (in its last few weeks).
• Specific code kernels are being re-designed and re-implemented according to the work plans defined in task 1.
• Each group works independently, with checkpoints at face-to-face workshops and all-hands meetings.
• A dedicated wiki web site was set up to report progress, to collect and exchange information and documents, and to manage and release the implemented code: http://prace2ip-wp8.hpcforge.org
Codes validation and re-introduction (task 3)
• Collaborative work on a daily basis involving code developers and HPC experts
• Dedicated workshops
• Face-to-face meetings
• Participation in and contribution to conferences
In this way, no special re-integration procedure was actually needed.
Case study: RAMSES
• The RAMSES code was developed to study the evolution of the large-scale structure of the universe and the process of galaxy formation.
• It is an adaptive mesh refinement (AMR), multi-species code (baryons, treated with hydrodynamics, plus dark matter, treated as an N-body system).
• Gravity couples the two components and is solved with a multigrid approach.
• Other components are supported (e.g. MHD, radiative transfer), but they are not the subject of our analysis.
Performance analysis example
Parallel profiling, large test (512³ base grid, 9 refinement levels, 250 GB): strong scaling.
For this test, communication becomes the most relevant part, and it is dominated by synchronisations due to the difficulty of load balancing the AMR-multigrid algorithms. Strong improvements can be obtained by tuning the load balance among the computational elements (nodes).
Performance analysis: conclusions
The performance analysis identified the critical kernels of the code:
• Hydro: all the functions needed to solve the hydrodynamic problem, including those that collect from grids at different resolutions the data necessary to update each single cell, those that calculate the fluxes to solve the conservation equations, Riemann solvers, and finite-volume solvers.
• Gravity: the functions needed to calculate the gravitational potential at different resolutions using a multigrid-relaxation approach.
• MPI: all the communication-related MPI calls (data transfer, synchronisation, management).
Performance improvements
Two main objectives:
• hybrid OpenMP+MPI parallelization, to exploit systems with distributed nodes, each consisting of cores that share memory (see the sketch below);
• exploitation of accelerators, in particular GPUs, adopting different paradigms (CUDA, OpenCL, directives).
From the analysis of the performance and of the characteristics of the kernels under investigation we can say that:
• the Hydro kernel is suitable for both approaches; specific care must be taken over memory access;
• the Gravity kernel can benefit from the hybrid implementation; due to its multigrid structure, however, an efficient GPU version can be particularly challenging, so it will be considered only if time and resources permit.
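To make the first objective concrete, here is a minimal, hypothetical sketch of the hybrid pattern (not RAMSES source; the cell update, sizes and names are placeholders): one MPI rank per node owns a slab of cells, OpenMP threads share the per-cell update inside the node, and the ranks synchronise between updates.

```c
/* Minimal hybrid MPI+OpenMP sketch (illustrative only, not RAMSES code).
 * Each MPI rank owns a slab of cells; OpenMP threads update them in shared
 * memory. Build e.g. with: mpicc -fopenmp hybrid_sketch.c -o hybrid_sketch  */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NCELL_LOCAL 1000000          /* cells per rank (hypothetical size) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *u = (double *)malloc(NCELL_LOCAL * sizeof(double));
    for (int i = 0; i < NCELL_LOCAL; i++) u[i] = 1.0;

    /* Intra-node (shared-memory) parallelism: threads split the local cells. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < NCELL_LOCAL; i++)
        u[i] = 0.5 * (u[i] + 1.0);   /* stand-in for the per-cell hydro update */

    /* Inter-node (distributed-memory) parallelism: the ranks would exchange
     * ghost cells here; a barrier stands in for the real halo exchange.      */
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0) printf("hybrid step done on %d cells per rank\n", NCELL_LOCAL);
    free(u);
    MPI_Finalize();
    return 0;
}
```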
Performance modeling
• Hybrid version (trivial model): T_HYBRID = T_MPI · e_MPI,N_TOT / (e_OMP,N_cores · e_MPI,N_nodes), where the e terms denote the parallel efficiencies of the MPI and OpenMP layers at the given core and node counts.
• GPU version: T_TOT = T_CPU + T_CPU-GPU + T_GPU-GPU + T_GPU
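As a worked example, the small host-side snippet below evaluates the two models. It assumes the interpretation given above (T_MPI is the measured pure-MPI run time, the e terms are measured parallel efficiencies); every numeric value is invented for illustration.

```c
/* Worked example of the two performance models; all input values are hypothetical. */
#include <stdio.h>

int main(void)
{
    /* Hybrid model: T_HYBRID = T_MPI * e_MPI,N_TOT / (e_OMP,N_cores * e_MPI,N_nodes) */
    double t_mpi        = 100.0;  /* measured pure-MPI time on N_TOT cores [s]  */
    double e_mpi_ntot   = 0.60;   /* MPI efficiency on all N_TOT cores          */
    double e_omp_ncores = 0.85;   /* OpenMP efficiency on the cores of one node */
    double e_mpi_nnodes = 0.90;   /* MPI efficiency with one rank per node      */
    double t_hybrid = t_mpi * e_mpi_ntot / (e_omp_ncores * e_mpi_nnodes);
    printf("predicted hybrid time: %.1f s\n", t_hybrid);

    /* GPU model: T_TOT = T_CPU + T_CPU-GPU + T_GPU-GPU + T_GPU */
    double t_cpu = 20.0, t_cpu_gpu = 8.0, t_gpu_gpu = 2.0, t_gpu = 15.0;
    printf("predicted GPU time: %.1f s\n", t_cpu + t_cpu_gpu + t_gpu_gpu + t_gpu);
    return 0;
}
```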
GPU implementation – approach 1
• Step 1: copy the hydro data (all cells) from the CPU to the GPU
• Step 2: solve the hydro equations for cell i,j,k
• Step 3: compose the results array (new hydro variables)
• Step 4: copy the results back to the CPU
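The following is a hedged CUDA sketch of this first approach (not the actual RAMSES GPU code; the kernel body, names and sizes are placeholders): the whole state is copied in, one thread updates one cell, and the full result array is copied straight back, which is where the constant transfer overhead discussed below comes from.

```cuda
/* Illustrative sketch of "approach 1" (not the actual RAMSES GPU code):
 * the whole hydro state is copied to the GPU, one thread updates one cell,
 * and the full result array is copied straight back after every step.      */
#include <cuda_runtime.h>

__global__ void hydro_cell_kernel(const double *u_in, double *u_out, int ncell)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < ncell)
        u_out[i] = 0.5 * (u_in[i] + 1.0);   /* stand-in for the per-cell solve */
}

void hydro_step_gpu(const double *h_in, double *h_out, int ncell)
{
    size_t bytes = ncell * sizeof(double);
    double *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);

    /* Step 1: full copy host -> device (the constant transfer overhead). */
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    /* Step 2: solve the hydro equations cell by cell. */
    int threads = 256, blocks = (ncell + threads - 1) / threads;
    hydro_cell_kernel<<<blocks, threads>>>(d_in, d_out, ncell);

    /* Steps 3-4: full copy of the new hydro variables device -> host. */
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    cudaFree(d_in);
    cudaFree(d_out);
}
```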
Results
Sedov blast wave test (hydro only, unigrid): 20 GB transferred in/out (constant overhead).
Performance pitfalls
• Amount of transferred data: overhead increasing linearly with the data size.
• Data structure, irregular data distribution: prevents any asynchronous operation, so there is no overlap of computation and data transfer; memory access is ineffective, since coalesced access is prevented.
• Low flops-per-byte ratio: this is intrinsic to the algorithm…
• Asynchronous operations not permitted: see above…
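To illustrate the coalescing pitfall, here is a hypothetical pair of CUDA kernels (the layouts and names are assumptions, not RAMSES data structures): with an array-of-structures layout neighbouring threads read addresses several doubles apart, while a structure-of-arrays layout lets thread i read element i of a contiguous array.

```cuda
/* Illustrative sketch of the coalescing pitfall (hypothetical data layouts). */
#include <cuda_runtime.h>

#define NVAR 6   /* hypothetical number of hydro variables per cell */

__global__ void strided_read(const double *aos, double *out, int ncell)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < ncell)
        out[i] = aos[i * NVAR];          /* stride NVAR: poor coalescing  */
}

__global__ void coalesced_read(const double *soa_density, double *out, int ncell)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < ncell)
        out[i] = soa_density[i];         /* unit stride: coalesced access */
}
```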
GPU implementation – approach 2
• Step 1: compose data chunks on the CPU from the hydro variables, the gravitational forces and the other quantities held in CPU memory.
• Data chunks are the basic building blocks of the RAMSES AMR hierarchy: OCTs and their refinements.
• Step 2: copy multiple data chunks to the GPU
• Step 3: solve the hydro equations for chunks N, M, …
• Step 4: compose the results array (new hydro variables) and copy it back to the CPU
Data is moved to and from the GPU in chunks, so data transfer and computation can be overlapped (see the sketch below).
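A hedged CUDA sketch of the chunked approach (again not the actual RAMSES code; chunk sizes, names and the kernel body are placeholders): each chunk gets its own stream, so the upload of one chunk, the computation of another and the download of a third can proceed concurrently, which is what hides the transfer overhead.

```cuda
/* Illustrative sketch of "approach 2": per-chunk asynchronous transfers and
 * kernels on separate CUDA streams. Host buffers are assumed to be pinned
 * (allocated with cudaMallocHost) so cudaMemcpyAsync can actually overlap.   */
#include <cuda_runtime.h>

__global__ void hydro_chunk_kernel(const double *in, double *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 0.5 * (in[i] + 1.0);    /* stand-in for the solve */
}

void hydro_step_chunked(double *h_in, double *h_out,   /* pinned host buffers */
                        int ncell, int nchunks)
{
    int chunk = ncell / nchunks;                /* assume ncell % nchunks == 0 */
    size_t bytes = chunk * sizeof(double);

    double *d_in, *d_out;
    cudaMalloc(&d_in,  ncell * sizeof(double));
    cudaMalloc(&d_out, ncell * sizeof(double));

    cudaStream_t *s = new cudaStream_t[nchunks];
    for (int c = 0; c < nchunks; c++) cudaStreamCreate(&s[c]);

    for (int c = 0; c < nchunks; c++) {
        int off = c * chunk;
        /* Step 2: copy this chunk to the GPU (asynchronously). */
        cudaMemcpyAsync(d_in + off, h_in + off, bytes,
                        cudaMemcpyHostToDevice, s[c]);
        /* Step 3: solve the hydro equations for this chunk. */
        int threads = 256, blocks = (chunk + threads - 1) / threads;
        hydro_chunk_kernel<<<blocks, threads, 0, s[c]>>>(d_in + off,
                                                         d_out + off, chunk);
        /* Step 4: copy the new hydro variables back while other chunks run. */
        cudaMemcpyAsync(h_out + off, d_out + off, bytes,
                        cudaMemcpyDeviceToHost, s[c]);
    }
    cudaDeviceSynchronize();

    for (int c = 0; c < nchunks; c++) cudaStreamDestroy(s[c]);
    delete[] s;
    cudaFree(d_in);
    cudaFree(d_out);
}
```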
Advantages over the previous implementation
• Data is regularly distributed within each chunk and its access is efficient, improving the flop-per-byte ratio.
• Effective usage of the GPU computing architecture.
• Data re-organisation is performed on the CPU and its overhead is hidden by asynchronous processing.
• The data transfer overhead is almost completely hidden.
• AMR is naturally supported.
• Drawback: a much more complex implementation.
Conclusions
• PRACE is providing European scientists with top-level HPC services.
• PRACE-2IP WP8 successfully introduced a methodology for code development relying on a close synergy between scientists, community-code developers and HPC experts.
• Many community codes have been re-designed and implemented to exploit novel HPC architectures (see http://prace2ip-wp8.hpcforge.org/ for details).
• Most of the WP8 results are already available to the community.
• WP8 is going to be extended for one more year (there is no similar activity in PRACE-3IP).