
Presentation Transcript


  1. Introduction • Application of parallel programming to the KAMM model • Thesis submitted for the degree of Engineer in Computing at The University of La Serena • Orlando Astudillo Reynoso

  2. Topics of Discussion • KAMM with multiple nesting • Why parallel programming? • Algorithm • Results

  3. KAMM • KAMM is a physically based model that describes the three-dimensional mesoscale atmospheric fields of wind, temperature, pressure and humidity. • The model was developed to run on a vector computer at the Institute of Meteorology and Climatology (IMK), Germany.

  4. Boundary conditions • Upper boundary: climate/weather prediction model. • Lower boundary: given by a soil and vegetation model. • Lateral boundary: “radiation” condition with diagnostic phase velocity (eliminates reflection of gravity waves).

  5. Nesting • Nesting in the KAMM model consists of running simulations over nested regions, so that each inner region uses as boundary conditions the values obtained from the enclosing outer region. This replaces the artificial lateral boundary conditions and yields a more realistic simulation, focused on the area of greatest interest.

  6. Serial execution of the KAMM model • Characteristics of the Sun Fire 280R server: 2 UltraSPARC-III processors; performance 3 GFlops (1.5 GFlops per processor); memory 8 GByte SDRAM. • Simulation grid resolution: 90 (east-west) × 50 (north-south) × 40 height levels, Dx = 2000 m, Dy = 2000 m. • Data sampling: three-dimensional data fields every hour. • CPU time: 26 hours of simulation = 78 CPU hours. • Nesting for 4 regions: over 13 days.

  7. How to improve the performance of the nesting procedure? • Hardware improvements: decrease the time required to execute an instruction (at a cost). • Parallelism: new technologies increase the number of instructions that can be executed simultaneously. • Distributed (grid) computation: access to a network of heterogeneous resources that act as one great virtual computer.

  8. Parallel Nesting

  9. How to distribute the boundary conditions?

  10. MPI programming (Message Passing Interface) • Simultaneous execution of processes, with one instance of the model for each region involved in the nesting procedure. • Sending and receiving of regular messages, in this case the boundary conditions, whose values are stored in non-contiguous memory regions. • Collective operations when generating the time step for the group of simulation instances.

  11. Campus Grid • User applications: life sciences, molecular dynamics, fluid mechanics, meteorology, solid-state physics. • Compute servers: existing hardware + NEC vector computer + InfiniBand cluster with Itanium/Opteron-based nodes. • Collaborating institutes: IFIA, IFP, IMK, INT. • Infrastructure middleware: consistent batch interface, global data storage, user authentication, accounting. • External collaborators: MTU Aero Engines; CEAZA, Chile. • Grid applications: KAMM (IMK) climate simulations; COSMOS (IFIA) molecular mechanics and dynamics using bond polarization theory. • Cooperation with: NEC European Supercomputer Systems; PARTEC AG; Department for Grid Computing and e-Science; Department for Grid Computing Infrastructure and Services.

  12. Conclusions • Exploit parallelism as a tool in model development. • Foster interaction between computing professionals and scientists in the field. • Build distributed computing environments.
