
Component Approach to Distributed Multiscale Simulations

Component Approach to Distributed Multiscale Simulations. Katarzyna Rycerz(1,2), Marian Bubak(1,2) (1) Institute of Computer Science AGH, Mickiewicza 30, 30-059 Kraków, Poland (2) ACC Cyfronet AGH, ul. Nawojki 11, 30-950 Kraków, Poland. KU KDM, Zakopane, 18-19.03.2010.


Presentation Transcript


  1. Component Approach to Distributed Multiscale Simulations Katarzyna Rycerz(1,2), Marian Bubak(1,2) (1) Institute of Computer Science AGH, Mickiewicza 30, 30-059 Kraków, Poland (2) ACC Cyfronet AGH, ul. Nawojki 11, 30-950 Kraków, Poland KU KDM, Zakopane, 18-19.03.2010

  2. Outline
  • Requirements of multiscale simulations
  • Motivation for a component model for such simulations
  • HLA-based component model (idea, design challenges, possible solutions)
  • Experiment with the Multiscale Multiphysics Scientific Environment (MUSE)
  • Possible integration with GridSpace VL
  • Summary

  3. Multiscale Simulations
  • Consist of modules of different scale
  • Example application areas – modelling of: reacting gas flows, capillary growth, colloidal dynamics, stellar systems (e.g. the Multiscale Multiphysics Scientific Environment – MUSE, used in this work), and many more

  4. Multiscale Simulations – Requirements
  • Actual connection of two or more models together, obeying the laws of physics (e.g. conservation laws)
    • advanced time management: the ability to connect modules with different time scales and internal time management strategies
    • support for connecting models of different space scales
  • Composability and reusability of existing models of different scale
    • finding the needed existing models and connecting them either together or to new models
    • ease of plugging them into and unplugging them from the running system
    • standardized model connections + many users sharing their models = more chances for general solutions

  5. Motivation
  • To wrap simulations into recombinant components that can be selected and assembled in various combinations to satisfy the requirements of multiscale simulations
  • Need for a special component model that:
    • provides mechanisms specific to distributed multiscale simulations – adaptation of one of the existing solutions for distributed simulations; our choice: High Level Architecture (HLA)
    • supports long-running simulations – setup and steering of components should be possible also at runtime
    • gives the possibility to wrap legacy simulation kernels into components
  • Need for an infrastructure that facilitates cross-domain exchange of components among scientists
    • need to support the component model using Grid solutions (e-infrastructures) for crossing administrative domains

  6. Related work
  • Model Coupling Toolkit (MCT)
    • applies a message-passing (MPI) style of communication between simulation models
    • oriented towards domain data decomposition of the simulated problem
    • provides support for advanced data transformations between different models
    • J. Larson, R. Jacob, E. Ong: The Model Coupling Toolkit: A New Fortran90 Toolkit for Building Multiphysics Parallel Coupled Models. Int. J. High Perf. Comp. App., 19(3), 277-292, 2005
  • Multiscale Multiphysics Scientific Environment (MUSE)
    • a software environment for astrophysical applications
    • a scripting approach (Python) is used to couple models together
    • models include: stellar evolution, hydrodynamics, stellar dynamics and radiative transfer
    • sequential execution
    • S. Portegies Zwart, S. McMillan, et al.: A Multiphysics and Multiscale Software Environment for Modeling Astrophysical Systems. New Astronomy, 14(4), 369-378, 2009
  • The Multiscale Coupling Library and Environment (MUSCLE)
    • provides a software framework to build simulations according to the complex automata theory
    • introduces the concept of kernels that communicate by unidirectional pipelines dedicated to passing a specific kind of data from/to a kernel (asynchronous communication)
    • J. Hegewald, M. Krafczyk, J. Tölke, A. G. Hoekstra, B. Chopard: An agent-based coupling platform for complex automata. ICCS, LNCS 5102, pp. 227-233, Springer, 2008
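The MUSE-style scripting approach mentioned above – Python code coupling a coarse-step and a fine-step module sequentially – can be illustrated with a minimal sketch. All class and method names here are hypothetical stand-ins, not the actual MUSE API; the physics is replaced by toy updates.

```python
# Sketch of MUSE-style sequential coupling (hypothetical names, not the real MUSE API).

class StellarEvolution:
    """Macro-scale module: evolves star masses with a coarse time step."""
    def __init__(self, masses):
        self.masses = list(masses)
        self.time = 0.0

    def evolve(self, dt):
        # Toy mass-loss model standing in for a real evolution code.
        self.masses = [m * 0.99 for m in self.masses]
        self.time += dt


class StellarDynamics:
    """Meso-scale module: N-body integrator with a finer time step."""
    def __init__(self, masses):
        self.masses = list(masses)
        self.time = 0.0

    def step(self, dt):
        self.time += dt  # positions/velocities omitted in this sketch

    def set_masses(self, masses):
        self.masses = list(masses)


def run(evolution, dynamics, t_end, dt_macro, dt_meso):
    """Sequential coupling loop: dynamics takes several steps per evolution step;
    data flows one way, from evolution (macro) to dynamics (meso)."""
    while evolution.time < t_end:
        evolution.evolve(dt_macro)
        while dynamics.time < evolution.time:
            dynamics.step(dt_meso)
        dynamics.set_masses(evolution.masses)
```

In this sequential form the modules cannot overlap in time or run on different machines, which is exactly the limitation the HLA-based component approach addresses.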

  7. Why High Level Architecture (HLA)?
  • Introduces the concept of simulation systems (federations) built from distributed elements (federates)
  • Supports joining models of different time scale – the ability to connect simulations with different internal time management in one system
  • Supports data management (publish/subscribe mechanism)
  • Separates the actual simulation from communication between federates
  • Partial support for interoperability and reusability (Simulation Object Model (SOM), Federation Object Model (FOM), Base Object Model (BOM))
  • Well-known IEEE standard, including the Object Model Template (OMT)
  • Reference implementation – the HLA Runtime Infrastructure (HLA RTI)
  • Open-source implementations available – e.g. CERTI, ohla

  8. HLA Component Model
  • The model differs from common component models (e.g. CCA) – no direct connections, no remote procedure call (RPC)
  • Components run concurrently and communicate using HLA mechanisms
  • Components use HLA facilities (e.g. time and data management)
  • Differs from the original HLA mechanism:
    • interactions can be dynamically changed at runtime by a user
    • a change of state is triggered from outside of any federate
  [Figure: CCA model vs. HLA model]

  9. HLA components – design challenges
  • Transfer of control between many layers: requests from the Grid layer outside the component, the simulation code layer, and the HLA RTI layer
  • The component should be able to efficiently process concurrently:
    • the actual simulation, which communicates with other simulation components via the RTI layer
    • external requests for changing the state of the simulation in the HLA RTI layer
  [Figure: HLA component structure – external requests (start/stop, join/resign, set time policy, publish/subscribe) enter through the Grid platform (H2O); the Simulation Code uses the CompoHLA library on top of the HLA RTI]

  10. Preliminary solution – mechanism of HLA RTI concurrent access control
  • Uses the concurrent-access exception handling available in HLA
  • Transparent to the developer
  • Synchronous mode – requests are processed as they come; the simulation runs in a separate thread
  • Dependent on the implementation of concurrency control in the HLA RTI used
  • Concurrency difficult to handle effectively – e.g. starvation of requests, which causes overhead in simulation execution
  [Figure: external requests reach the Simulation Code and the CompoHLA library through the Grid platform (H2O); the HLA RTI provides concurrent access control]

  11. Advanced solution – use the Active Object pattern
  • Requires calling only a single routine in the simulation loop
  • Asynchronous mode – separates invocation from execution
  • Requests are processed when the scheduler is called from the simulation loop
  • Independent of the behavior of the HLA implementation
  • Concurrency easy to handle
  • JNI used for communication between the Simulation Code, the Scheduler and the CompoHLA library
  [Figure: external requests are placed in a queue; a Scheduler inside the component dispatches them to the CompoHLA library and the HLA RTI]
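The Active Object idea on this slide – external requests are only *enqueued* on arrival and are *executed* later, when the simulation loop itself calls the scheduler – can be sketched in a few lines of Python. This is a simplified illustration of the pattern, not the CompoHLA/JNI implementation; all names are hypothetical.

```python
import queue


class Scheduler:
    """Active Object pattern: invocation is separated from execution.
    External threads enqueue requests; they run only when the simulation
    loop calls process_pending()."""

    def __init__(self):
        self._requests = queue.Queue()  # thread-safe, so external threads may submit

    def submit(self, fn, *args):
        """Invocation: enqueue the request and return immediately."""
        self._requests.put((fn, args))

    def process_pending(self):
        """Execution: drain the queue inside the simulation thread."""
        executed = 0
        while True:
            try:
                fn, args = self._requests.get_nowait()
            except queue.Empty:
                return executed
            fn(*args)
            executed += 1


class SimulationComponent:
    """Toy component: one scheduler call per simulation loop iteration."""

    def __init__(self):
        self.scheduler = Scheduler()
        self.time_policy = None
        self.steps = 0

    def set_time_policy(self, policy):
        self.time_policy = policy

    def simulation_loop(self, n_steps):
        for _ in range(n_steps):
            self.steps += 1                   # the actual simulation work
            self.scheduler.process_pending()  # the single required routine call
```

Because requests run only at this well-defined point, no concurrency control is needed inside the RTI calls – which is precisely why the slide calls this variant "independent of the behavior of the HLA implementation".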

  12. Interactions between components in the example experiment
  • Modules taken from the Multiscale Multiphysics Scientific Environment (MUSE)
  • Multiscale simulation of dense stellar systems
  • Two modules of different time scale:
    • stellar evolution (macro scale)
    • stellar dynamics – N-body simulation (meso scale)
  • Data management:
    • masses of changed stars are sent from evolution (macro scale) to dynamics (meso scale)
    • no data is needed from dynamics to evolution
    • the data flow affects the whole dynamics simulation
  • Dynamics takes more steps than evolution to reach the same point of simulation time
  • Time management – the regulating federate (evolution) regulates the progress in time of the constrained federate (dynamics)
  • The maximal point in time which the constrained federate can reach at a given moment (LBTS) is calculated dynamically according to the position of the regulating federate on the time axis
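The time-management rule described above can be made concrete with a small sketch. In HLA's conservative scheme, a regulating federate promises not to send messages earlier than its current logical time plus its lookahead; the resulting bound (LBTS) limits how far the constrained federate may advance. This is a deliberate simplification: a real RTI computes the LBTS over all regulating federates and in-transit messages, and the function names below are illustrative only.

```python
def lbts(regulating_time, lookahead):
    """Lower Bound on Time Stamp for a single regulating federate:
    it will not send messages earlier than its logical time + lookahead."""
    return regulating_time + lookahead


def advance_constrained(requested_time, regulating_time, lookahead):
    """The constrained federate may only advance up to the LBTS,
    even if it requested a later time."""
    return min(requested_time, lbts(regulating_time, lookahead))
```

For example, with evolution (regulating) at logical time 4.0 and a lookahead of 1.0, dynamics (constrained) asking to advance to 10.0 is granted only 5.0 and must wait until evolution moves forward.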

  13. Usage example – MUSE application
  • The component client asks chosen components to join a simulation system (called a federation in HLA terminology)
  • Asks chosen components to publish or subscribe to certain data objects (e.g. Stars)
  • Asks components to set their time policy
  [Figure: the component user, via the Component Client, sends "join federation", "publish"/"subscribe" and "be regulating"/"be constrained" requests to the Evolution and Dynamics HLAComponents running on H2O kernels at Grid sites A and B, which together form an HLA federation]

  14. Usage example – MUSE application (continued)
  • Asks components to start
  • Alters the publications/subscriptions/time policy during runtime
  [Figure: the Component Client sends "start" and "unpublish" requests to the Evolution and Dynamics HLAComponents; Star data objects flow within the HLA federation across Grid sites A and B]
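The client-side setup and steering shown on slides 13-14 can be sketched as a script driving two component proxies. The stub class below is a hypothetical stand-in mirroring the requests listed on the slides (join federation, publish/subscribe, time policy, start, unpublish); it is not the actual H2O/CompoHLA client interface.

```python
class HLAComponentStub:
    """Hypothetical proxy for a remote HLA component exposed through an
    H2O kernel; it records the requests it receives."""

    def __init__(self, name):
        self.name = name
        self.log = []

    def join_federation(self, federation):
        self.log.append(("join", federation))

    def publish(self, obj):
        self.log.append(("publish", obj))

    def subscribe(self, obj):
        self.log.append(("subscribe", obj))

    def set_time_policy(self, policy):
        self.log.append(("time_policy", policy))

    def start(self):
        self.log.append(("start",))

    def unpublish(self, obj):
        self.log.append(("unpublish", obj))


def setup_and_run(evolution, dynamics):
    """Client script mirroring slides 13-14: compose the federation,
    wire data flow and time policies, then start both components."""
    for component in (evolution, dynamics):
        component.join_federation("muse")
    evolution.publish("Star")            # evolution produces Star updates
    evolution.set_time_policy("regulating")
    dynamics.subscribe("Star")           # dynamics consumes Star updates
    dynamics.set_time_policy("constrained")
    for component in (evolution, dynamics):
        component.start()
```

Because every step is an external request handled by the component's scheduler, the same client could later call e.g. `unpublish("Star")` at runtime, as slide 14 shows.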

  15. Experiment Results
  • Comparison of:
    • concurrent execution, conservative approach – dynamics and evolution as HLA components
    • sequential execution (MUSE)
  • Timing of:
    • request processing (through the Grid and component layers)
    • request realisation (scheduler)
  • H2O v2.1 as the Grid platform and HLA CERTI v3.2.4 (open source)
  • Experiment run on DAS-3 grid nodes in:
    • Delft (MUSE sequential version and the dynamics component)
    • Amsterdam UvA (evolution component)
    • Leiden (component client)
    • Amsterdam VU (RTIexec control process)
  • Each grid node is a cluster of dual 1-GHz Pentium-III nodes connected with an internal Myrinet-2000 network
  • 10 Gb Ethernet used as the external network between grid nodes

  16. Possible Integration with GridSpace VL
  • Modules that can be reused:
    • IDE for Experiment Script
    • Execution Engine
    • Registry
    • Scenario Repository
  • Extensions needed:
    • support for HLA component descriptions that include the events/objects produced/consumed by a component
    • a Component Description Assembler that will guide the user in joining component descriptions into a simulation description (Federation Object Model–like files)

  17. Future work
  • A description language for connecting HLA components:
    • currently the HLA FOM is used – a definition of the structures of data objects and events that need to be passed between HLA components
    • it needs to contain more information, especially related to the modules' scale
    • it needs to support different data types, e.g. arrays, which are often used in legacy implementations of simulation models
  • Interactivity:
    • support for components that are sources of data streams – long-running simulations often produce partial results that should be streamed to the user before the simulation actually stops
    • the ability to interpret commands given to HLA components in an interactive mode

  18. Summary
  • The presented HLA component model enables the user to dynamically compose/decompose distributed simulations from multiscale elements residing on the Grid
  • The architecture of the HLA component supports steering of interactions with other components during simulation runtime
  • The presented approach differs from the original HLA, where all decisions about actual interactions are made by the federates themselves
  • The functionality of the prototype is shown on the example of a multiscale simulation of a dense stellar system – the MUSE environment
  • Experiment results show that the Grid and component layers do not introduce much overhead
  • In the future we plan to fully integrate the HLA components with the GridSpace Virtual Laboratory

  19. References
  • K. Rycerz, M. Bubak, P. M. A. Sloot: Using HLA and Grid for Distributed Multiscale Simulations. In: R. Wyrzykowski, J. Dongarra, K. Karczewski, J. Wasniewski (Eds.), Proceedings of the 7th International Conference PPAM 2007, Gdansk, Poland, September 2007, LNCS 4967, Springer, 2008, pp. 780-787
  • K. Rycerz, M. Bubak, P. M. A. Sloot: Dynamic Interactions in HLA Component Model for Multiscale Simulations. ICCS, LNCS 5102, pp. 217-226, Springer, 2008
  • K. Rycerz, M. Bubak, P. M. A. Sloot: HLA Component Based Environment for Distributed Multiscale Simulations. In: T. Priol, M. Vanneschi (Eds.), From Grids to Service and Pervasive Computing, Springer, 2008, pp. 229-239
  • K. Rycerz, M. Bubak, P. M. A. Sloot: Collaborative Environment for HLA Component-Based Distributed Multiscale Simulations (in preparation)
  • GridSpace webpage: http://gs.cyfronet.pl/
  • PL-Grid Project: http://www.plgrid.pl/en

  20. Interactions between components in the example experiment (repeats slide 12 with the time-management diagram)
  [Figure: time axes of the two federates. Regulating federate (evolution): its current logical time, its effective logical time, and the lookahead interval within which it may not publish messages. Constrained federate (dynamics): starting from t=0, its current logical time and the LBTS – other federates will not send messages before this time; the federate may only advance time within this interval]
