
Towards Multiscale Computing Tools based on GridSpace

Katarzyna Rycerz, Eryk Ciepiela, Daniel Harężlak, Marian Bubak
ACC Cyfronet and Institute of Computer Science, AGH, Krakow, Poland, dice.cyfronet.pl


Presentation Transcript


  1. Towards Multiscale Computing Tools based on GridSpace
  Katarzyna Rycerz, Eryk Ciepiela, Daniel Harężlak, Marian Bubak
  ACC Cyfronet and Institute of Computer Science, AGH, Krakow, Poland, dice.cyfronet.pl
  Work supported by MAPPER (Multiscale Applications on European e-Infrastructures, http://www.mapper-project.eu), an "e-Infrastructures" project. Project Director: Alfons Hoekstra, University of Amsterdam.
  Cracow Grid Workshop 2010

  2. Overview
  • Multiscale simulations: overview
  • MAPPER: motivation and architecture
  • GridSpace: a short reminder from yesterday
  • Preliminary experiment with a multiscale application in GridSpace
  • Demo of the experiment

  3. Multiscale Simulations
  • Consist of modules operating at different scales
  • Examples include modelling of:
  • the virtual physiological human initiative
  • reacting gas flows
  • capillary growth
  • colloidal dynamics
  • stellar systems
  • and many more ...
  [Figure caption: the recurrence of stenosis, a narrowing of a blood vessel, leading to restricted blood flow]

  4. MAPPER architecture
  • Develop computational strategies, software and services for distributed multiscale simulations across disciplines, exploiting existing and evolving European e-infrastructure
  • Deploy a computational science infrastructure
  • Deliver high-quality components aimed at large-scale, heterogeneous, high-performance multi-disciplinary multiscale computing
  • Advance the state of the art in high-performance computing on e-infrastructures
  • Enable distributed execution of multiscale models across e-infrastructures

  5. GridSpace
  • Easy access using a Web browser
  • Experiment Workbench
  • constructing experiment plans from code snippets
  • running experiments interactively
  • Experiment Execution Environment
  • multiple interpreters
  • access to libraries, programs and services (gems)
  • access to computing infrastructure: cluster, grid, cloud
  • Example applications using GridSpace
  • binding sites in proteins
  • analysis of water solutions of amino acids
  • Experience
  • ViroLab project
  • PL-Grid NGI

  6. Preliminary experiment with a multiscale application in GridSpace
  • Multiscale dense stellar system simulation (from MUSE; http://www.muse.li)
  • Two modules with different scales:
  • stellar evolution (macroscale)
  • stellar dynamics: an N-body simulation (mesoscale)
  • Data management:
  • masses of evolving stars are sent from evolution (macroscale) to dynamics (mesoscale)
  • no data is transmitted from dynamics back to evolution
  • dynamics should not outpace evolution
  [Figure: simulation time vs. number of steps; evolution advances in coarse steps (1, 2, ...) while dynamics takes many fine steps (1 ... 1000, 1001 ... 2000), with mass data passed from evolution to dynamics at each coarse step]
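The one-way coupling described above can be sketched as a simple loop (this is illustrative, not MUSE code; the step sizes, mass formula, and variable names are assumptions): evolution advances one coarse step and publishes masses, then dynamics integrates many fine steps but never past the time evolution has reached.

```ruby
# Illustrative sketch of the macroscale/mesoscale coupling (hypothetical
# units and values). Masses flow one way, from evolution to dynamics,
# and dynamics may never advance past evolution's current time.

EVOLUTION_STEP = 1000.0  # macroscale step
DYNAMICS_STEP  = 1.0     # mesoscale step

evolution_time = 0.0
dynamics_time  = 0.0
masses = nil             # latest masses published by evolution
trace  = []

2.times do
  # Evolution advances one coarse step and publishes updated masses.
  evolution_time += EVOLUTION_STEP
  masses = Array.new(3) { |i| 1.0 - 1e-4 * evolution_time * (i + 1) }
  trace << [:evolution, evolution_time]

  # Dynamics integrates many fine steps, but only up to evolution_time.
  while dynamics_time + DYNAMICS_STEP <= evolution_time
    dynamics_time += DYNAMICS_STEP
    # ... N-body integration using the most recent `masses` would go here
  end
  trace << [:dynamics, dynamics_time]
end

puts "evolution at t=#{evolution_time}, dynamics at t=#{dynamics_time}"
```

The inner `while` condition is what enforces "dynamics should not outpace evolution": dynamics blocks at each coarse boundary until evolution has moved on.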

  7. Interactions between components in our experiment
  • We use a special communication bus (called HLA) to synchronize simulation modules with time management
  • Time management:
  • simulation modules are called federates
  • the regulating federate (evolution) regulates the progress of the constrained federate (dynamics)
  • federates exchange data with time stamps
  • the furthest point in time that the constrained federate can reach at a given moment (LBTS) is calculated dynamically, according to the position of the regulating federate on the time axis
  [Figure: Dynamics and Evolution connected through the HLA communication bus]
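The LBTS rule above can be illustrated with a minimal sketch (this is not the RTI API; in a basic single-regulating-federate setting LBTS can be taken as the regulating federate's logical time plus its lookahead, and the numbers below are hypothetical):

```ruby
# Minimal sketch of HLA-style conservative time management.
# The constrained federate may only be granted an advance up to LBTS.

def lbts(regulating_time, lookahead)
  regulating_time + lookahead
end

regulating_time = 1000.0   # evolution's logical time (hypothetical)
lookahead       = 1000.0   # evolution promises no messages earlier than this window

requested = 2500.0         # dynamics asks to advance to this time
granted   = [requested, lbts(regulating_time, lookahead)].min

puts "dynamics granted advance to t=#{granted}"
```

When evolution later advances, LBTS moves forward and dynamics is granted the rest of its requested advance; this is the "calculated dynamically" behaviour the slide mentions.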

  8. Wrapping simulation models as software components
  • To enable users to steer the behaviour of the simulation from outside, we wrap simulation models into software components
  • We use the H2O framework:
  • simulation modules can expose remotely accessible external interfaces
  • implementations of simulation models are wrapped and placed inside pluglets
  • containers for pluglets are called kernels
  • pluglets are deployed into kernels
  [Figure: H2O pluglet implementation of a simulation model inside an H2O kernel on a remote node, exposing operations to start/stop, change time policy, and switch data exchange on/off]
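The external interface such a wrapper exposes might look roughly like the class below. All names here are hypothetical and the real H2O pluglet API differs; the sketch only shows the kind of steering operations the slide lists (start/stop, change time policy, switch data exchange on/off).

```ruby
# Hypothetical sketch of a simulation model wrapped as a remotely
# steerable component (illustrative names, not the H2O API).
class SimulationPluglet
  attr_reader :state, :time_policy, :data_exchange

  def initialize(model_name)
    @model_name    = model_name
    @state         = :deployed   # deployed into a kernel, not yet running
    @time_policy   = :none       # :regulating or :constrained once set
    @data_exchange = false
  end

  def start
    @state = :running
  end

  def stop
    @state = :stopped
  end

  # Change the HLA time policy of the wrapped model, even at runtime.
  def set_time_policy(policy)
    raise ArgumentError unless [:regulating, :constrained, :none].include?(policy)
    @time_policy = policy
  end

  def data_exchange=(on)
    @data_exchange = on
  end
end

dynamics = SimulationPluglet.new("dynamics")
dynamics.set_time_policy(:constrained)
dynamics.data_exchange = true
dynamics.start
```

In the real system these methods would be remote calls into the pluglet deployed in an H2O kernel, rather than local state changes.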

  9. Demo experiment: run a PBS job, allocate nodes, start H2O kernels
  [Figure: the user runs a GridSpace Ruby script (snippet 1), which submits a PBS job that starts an H2O kernel on each of nodes A and B]
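A "snippet 1"-style Ruby script could look roughly like the sketch below. The job name, resource string, and kernel launch command are assumptions, not the real snippet; the point is only that the script builds a PBS job which allocates nodes and starts an H2O kernel on each.

```ruby
# Illustrative sketch of a snippet that prepares a PBS job allocating
# two nodes and launching an H2O kernel on each (hypothetical commands).

nodes      = 2
kernel_cmd = "h2o-kernel --daemon"   # hypothetical kernel launch command

pbs_script = <<~PBS
  #PBS -N gridspace-demo
  #PBS -l nodes=#{nodes}:ppn=1
  pbsdsh #{kernel_cmd}
PBS

# In the real experiment this script would be handed to qsub, e.g.:
#   job_id = `qsub job.pbs`.strip
puts pbs_script
```

`pbsdsh` runs the given command on every node allocated to the job, which matches the picture of one kernel per node.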

  10. Demo experiment (continued): a JRuby script (snippet 2)
  • asks selected components to join the simulation system
  • asks selected components to publish or subscribe to data objects (stars)
  • asks components to set their time policy
  • determines where output/error streams should go
  [Figure: snippet 2 creates the Evolution and Dynamics HLAComponents inside the H2O kernels on nodes A and B; both join the federation, Evolution publishes and Dynamics subscribes to star data, Evolution is set regulating and Dynamics constrained, all over HLA communication]
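The sequence of requests this snippet issues can be modelled in memory as follows (a sketch only; the federation structure, helper names, and object name "Star" are assumptions standing in for the real HLAComponent calls):

```ruby
# Illustrative in-memory model of snippet 2's requests: join the
# federation, publish/subscribe a data object, set time policies.

federation = { members: [], publishers: {}, subscribers: {} }
policies   = {}

join      = ->(fed, name)      { fed[:members] << name }
publish   = ->(fed, name, obj) { (fed[:publishers][obj]  ||= []) << name }
subscribe = ->(fed, name, obj) { (fed[:subscribers][obj] ||= []) << name }

join.call(federation, "evolution")
join.call(federation, "dynamics")
publish.call(federation, "evolution", "Star")    # evolution produces star masses
subscribe.call(federation, "dynamics", "Star")   # dynamics consumes them
policies["evolution"] = :regulating
policies["dynamics"]  = :constrained
```

Note the asymmetry mirrors the application: data objects flow only from the publisher (evolution) to the subscriber (dynamics), while the time policies enforce that dynamics cannot outrun evolution.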

  11. Demo experiment (continued): JRuby scripts (snippets 3 and 4)
  • asks the components to start
  • alters the time policy at runtime (unset regulating/constrained)
  • asks the components to stop
  [Figure: the scripts send start, stop, and time-policy changes to the Evolution and Dynamics HLAComponents on nodes A and B; Star data objects flow over HLA communication, and output/error streams feed the Dynamics and Evolution views]

  12. Demo experiment: delete the PBS job, stop the H2O kernels, release the nodes
  [Figure: the user runs a GridSpace Ruby script (snippet 5), which deletes the PBS job and thereby stops the H2O kernels on nodes A and B]
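The teardown step is the mirror image of the allocation step: deleting the PBS job stops the kernels and frees the nodes. A "snippet 5"-style sketch (the job id and command invocation are hypothetical):

```ruby
# Illustrative teardown sketch: deleting the PBS job stops the H2O
# kernels started inside it and releases the allocated nodes.

job_id = "12345.batch"              # would come from the earlier qsub call
delete_cmd = "qdel #{job_id}"

# In the real experiment: system(delete_cmd)
puts delete_cmd
```

Because the kernels were launched as processes of the PBS job, no separate kernel shutdown call is needed; `qdel` terminates the job and everything it started.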
