
Journée Présentation de l’ANR


Presentation Transcript


  1. Journée Présentation de l’ANR. In conjunction with Perpi’2006: RenPar'17 / SympA'2006 / CFSE'5 / JC'2006. October 3, 2006.

  2. Teams
  • LIP/INRIA (Projet GRAAL): Anne Benoît, Raphaël Bolze, Yves Caniou, Eddy Caron, Pushpinder Kaur Chouhan, Frédéric Desprez, Jean-Sébastien Gay, Cédric Tedeschi
  • IRISA/INRIA (Projet PARIS): Gabriel Antoniu, Luc Bougé, Hinde Bouziane, Loïc Cudennec, Mathieu Jan, Sébastien Monnet, Christian Perez, Thierry Priol
  • LaBRI/INRIA (Projet RUNTIME): Olivier Aumage, Alexandre Denis
  • ENSEEIHT/IRIT: Michel Daydé, Marc Pantel, Daniel Hagimont
  • CERFACS: Eric Maisonnave
  • ENS-Lyon (CRAL): Hélène Courtois, Julien Devriendt, Romain Teyssier

  3. The Concept
  • LEGO: League for Efficient Grid Operation
  • Target applications: sparse solvers, ocean-atmosphere simulations, cosmological simulations
  • Building bricks: workflow, middleware, scheduling, components, data management, deployment, communications

  4. Advanced Component Model
  • Components and the data-sharing service: composition based on data access (data ports), use of JuxMem
  • Components and the master-worker paradigm: collections + request scheduling, use of DIET
  • Components and workflow: what does a dependency mean in the component model?
  • Components and legacy code: no code rewriting; a mechanism to mediate between the application and the scheduler
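To make "composition based on data access" concrete, here is a minimal sketch of the idea: two components are connected through a data port, so only a reference to data held in the sharing service travels between them. The DataRef and Component types and their methods are illustrative assumptions, not the project's actual component API.

```cpp
#include <iostream>
#include <string>
#include <utility>

// A reference to data stored in a grid data-sharing service (e.g. JuxMem):
// only the identifier travels between components, never the data itself.
struct DataRef {
    std::string id;
};

// A component exposes data ports; connecting two components exchanges a
// DataRef, so composition is based on data access rather than value passing.
class Component {
public:
    explicit Component(std::string name) : name_(std::move(name)) {}
    void provide(const DataRef& ref) { out_ = ref; }   // "provides" data port
    DataRef use() const { return out_; }               // "uses" data port
    void compute(const DataRef& in) const {
        std::cout << name_ << " works on shared data '" << in.id << "'\n";
    }
private:
    std::string name_;
    DataRef out_;
};

int main() {
    Component a("A"), b("B");
    a.provide({"matrix-42"});   // A publishes a reference, not the data
    b.compute(a.use());         // B reaches the same data through the service
}
```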

  5. DIET Architecture (Middleware brick)
  • Hierarchical GridRPC middleware: clients submit requests to a Master Agent (MA), which forwards them through a tree of Local Agents (LA) down to the server front ends
  • Communication over CORBA or JXTA
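The hierarchy can be summarized in a few lines. The sketch below is an illustrative model of how a request descends the agent tree and is mapped to the best candidate server; the types and the selection rule are assumptions, not DIET's API.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Server { std::string name; double estimatedTime; };

struct LocalAgent {
    std::vector<Server> servers;
    // A Local Agent reports the servers under it that offer the service.
    std::vector<Server> candidates(const std::string& /*service*/) const { return servers; }
};

struct MasterAgent {
    std::vector<LocalAgent> children;
    // The Master Agent aggregates candidates and picks the best estimate.
    Server submit(const std::string& service) const {
        std::vector<Server> all;
        for (const auto& la : children)
            for (const auto& s : la.candidates(service)) all.push_back(s);
        return *std::min_element(all.begin(), all.end(),
            [](const Server& a, const Server& b) { return a.estimatedTime < b.estimatedTime; });
    }
};

int main() {
    MasterAgent ma{{ LocalAgent{{{"sed1", 4.2}, {"sed2", 1.7}}},
                     LocalAgent{{{"sed3", 3.1}}} }};
    std::cout << "solve() mapped to " << ma.submit("solve").name << "\n";
}
```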

  6. JuxMem: a Grid Data-Sharing Service (Data management brick)
  • A peer-to-peer architecture for an in-memory data-sharing service
  • Persistence and data coherency mechanisms
  • Transparent data localization
  • Toolbox for the development of P2P applications: a set of protocols; each peer has a unique ID and several communication protocols (TCP, HTTP, ...)
  [Diagram: groups of peers with unique IDs communicating across firewalls over TCP/IP and HTTP]
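A hedged sketch of the access pattern such a data-sharing service offers: allocate a shared block once, then acquire and release it for coherent reads and writes without caring where the data physically lives. The DataSharingService class and its methods are assumptions for illustration, not JuxMem's API.

```cpp
#include <cstring>
#include <iostream>
#include <map>
#include <string>
#include <vector>

class DataSharingService {
public:
    // Allocate a block managed by the service and return its global id.
    std::string alloc(std::size_t size) {
        std::string id = "data-" + std::to_string(store_.size());
        store_[id].resize(size);
        return id;
    }
    // Acquire gives coherent access to the block wherever the caller runs.
    std::vector<char>& acquire(const std::string& id) { return store_.at(id); }
    void release(const std::string& /*id*/) { /* coherence protocol would run here */ }
private:
    std::map<std::string, std::vector<char>> store_;
};

int main() {
    DataSharingService juxmem;
    std::string id = juxmem.alloc(64);

    auto& block = juxmem.acquire(id);                 // writer side
    std::strcpy(block.data(), "shared result");
    juxmem.release(id);

    std::cout << juxmem.acquire(id).data() << "\n";   // reader side, by id only
    juxmem.release(id);
}
```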

  7. Communication Brick (Communications)
  [Diagram: application processes on top of programming environments (service invocation via RPC/RMI, message passing, distributed shared memory), over a generic communication support layer (Madeleine), over the network (Ethernet, Myrinet, SCI, Quadrics, ...)]
  • Communication for multi-paradigm programming models: message passing, remote procedure calls, distributed/shared memory
  • Cluster view: high-speed networks
  • Hardware heterogeneity: Myrinet, Quadrics, Infiniband, SCI, Gigabit Ethernet
  • Software heterogeneity: GM, MX, Elan, Elan4, SISCI, sockets
  • Contribution: the Madeleine library
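The value of a generic communication support layer is that upper layers (message passing, RPC, DSM) program against one interface while drivers hide the hardware and software heterogeneity. The sketch below illustrates that layering; it is not Madeleine's API, and the driver classes are made up for the example.

```cpp
#include <cstddef>
#include <iostream>
#include <memory>
#include <string>

struct NetworkDriver {                       // one driver per interconnect
    virtual ~NetworkDriver() = default;
    virtual void send(const void* buf, std::size_t len) = 0;
};

struct SocketsDriver : NetworkDriver {       // TCP / Gigabit Ethernet fallback
    void send(const void*, std::size_t len) override {
        std::cout << "sockets: sent " << len << " bytes\n";
    }
};

struct MyrinetDriver : NetworkDriver {       // e.g. GM or MX underneath
    void send(const void*, std::size_t len) override {
        std::cout << "myrinet: sent " << len << " bytes (fast path)\n";
    }
};

// Upper layers (message passing, RPC, DSM) call this one interface only.
void post(NetworkDriver& net, const std::string& msg) {
    net.send(msg.data(), msg.size());
}

int main() {
    std::unique_ptr<NetworkDriver> fast = std::make_unique<MyrinetDriver>();
    std::unique_ptr<NetworkDriver> slow = std::make_unique<SocketsDriver>();
    post(*fast, "intra-cluster message");
    post(*slow, "fallback message");
}
```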

  8. Grid Communication: PadicoTM (Communications)
  • Grid communications between sites: wide-area and site-specific communications
  • Connectivity: firewalls, non-routed networks, etc.
  • Performance: high latency, low bandwidth
  • Security: protection, accounting
  • Middleware and application integration: upgrade each middleware for Madeleine? what about existing code?
  • Contribution: a high-performance communication framework for grids, PadicoTM
  • PARIS project (2000-2004) and RUNTIME project (since 2004)

  9. Scheduling Brick: inside DIET (Scheduling)
  • Plug-in scheduler, building on the existing plug-in scheduling facilities
  • Application-specific definition of appropriate performance metrics
  • An extensible measurement system
  • Tunable comparison/aggregation routines for scheduling
  • Composite requirements enable various selection methods: basic resource availability, processor speed, memory, database contention, future requests
  • CoRI (Collector): an easy interface for gathering performance and load information for a specific SeD
  • Two modules currently, CoRI-Easy and FAST; extensible with new modules (Ganglia, Nagios, R-GMA, Hawkeye, INCA, MDS, ...)
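A minimal sketch of the plug-in scheduler idea follows: each server fills an application-specific estimation vector and a tunable comparison routine ranks the candidates. The metric names, the collect function, and the comparison rule are illustrative assumptions, not DIET's or CoRI's actual interfaces.

```cpp
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Estimate = std::map<std::string, double>;   // metric name -> value

struct Candidate { std::string server; Estimate est; };

// Application-specific collector (the role CoRI-Easy or FAST would play).
Estimate collect(const std::string& server) {
    if (server == "sed-a") return {{"cpu_load", 0.2}, {"free_mem_mb", 4096}};
    return {{"cpu_load", 0.7}, {"free_mem_mb", 1024}};
}

// Tunable comparison: prefer low CPU load, break ties on free memory.
bool better(const Candidate& x, const Candidate& y) {
    double lx = x.est.at("cpu_load"), ly = y.est.at("cpu_load");
    if (lx != ly) return lx < ly;
    return x.est.at("free_mem_mb") > y.est.at("free_mem_mb");
}

int main() {
    std::vector<Candidate> c = {{"sed-a", collect("sed-a")},
                                {"sed-b", collect("sed-b")}};
    std::sort(c.begin(), c.end(), better);
    std::cout << "request scheduled on " << c.front().server << "\n";
}
```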

  10. Scheduling Brick: Workflow (Workflow)
  • Workflow management using the component model
  • Workflow and DIET: simple, high-level API for the client; workflow description based on XML; use of different scheduling algorithms (RR, HEFT, etc.); ability for the client to use its own workflow scheduler; automatic rescheduling mechanisms; support for multi-workflow scheduling
  • DIET hierarchy extended with a special agent, the MADAG
  • Two execution modes for the MADAG: complete scheduling provided (task priorities and resource mapping) or partial scheduling provided (task priorities only)
  • Example from the cosmological application
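To show what an algorithm such as HEFT contributes here, the sketch below runs a deliberately simplified HEFT-style list scheduling on a toy diamond workflow: tasks are ranked by their longest path to the exit, then mapped in rank order to the least-loaded resource. The task graph and costs are invented, and dependency wait times are ignored for brevity; this is not the MADAG's implementation.

```cpp
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Task { std::string name; double cost; std::vector<int> succ; };

// Upward rank: cost of the task plus the heaviest rank among its successors.
double rank(const std::vector<Task>& dag, int i, std::map<int, double>& memo) {
    if (memo.count(i)) return memo[i];
    double best = 0;
    for (int s : dag[i].succ) best = std::max(best, rank(dag, s, memo));
    return memo[i] = dag[i].cost + best;
}

int main() {
    // A -> B, A -> C, B -> D, C -> D  (diamond workflow)
    std::vector<Task> dag = {{"A", 2, {1, 2}}, {"B", 3, {3}},
                             {"C", 1, {3}},    {"D", 2, {}}};
    std::map<int, double> memo;
    std::vector<int> order = {0, 1, 2, 3};
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return rank(dag, a, memo) > rank(dag, b, memo); });

    std::vector<double> ready(2, 0.0);            // two resources, earliest free time
    for (int t : order) {
        int r = (ready[0] <= ready[1]) ? 0 : 1;   // pick the least-loaded resource
        ready[r] += dag[t].cost;                  // dependency waits ignored for brevity
        std::cout << dag[t].name << " -> resource " << r
                  << " (finish " << ready[r] << ")\n";
    }
}
```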

  11. Deployment Brick: ADAGE (Deployment)
  • Automatic deployment tool for grid environments: only one command to deploy
  • Three kinds of input information: resource description, application description, control parameters
  • Planning models (random, round-robin, ...)
  • A plug-in for each application type: description converter, configuration of the application
  • Supported: CCM, MPICH-P4, MPICH-G2, JXTA
  • Each plug-in: from 400 to 1200 lines of C++
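A hedged sketch of what the planning step amounts to: given a resource description and an application description, a policy such as round-robin produces a process-to-host mapping that a launcher then executes. ADAGE's real input formats and plug-ins are not reproduced here; the types below are invented for illustration.

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Resource { std::string host; int cores; };
struct ProcessGroup { std::string name; int count; };   // e.g. "worker" x 5

// Round-robin planning policy: cycle over the hosts for every process.
std::vector<std::pair<std::string, std::string>>
plan(const std::vector<Resource>& res, const std::vector<ProcessGroup>& app) {
    std::vector<std::pair<std::string, std::string>> mapping;
    std::size_t next = 0;
    for (const auto& g : app)
        for (int i = 0; i < g.count; ++i)
            mapping.emplace_back(g.name + "-" + std::to_string(i),
                                 res[next++ % res.size()].host);
    return mapping;
}

int main() {
    std::vector<Resource> grid = {{"node1", 4}, {"node2", 4}, {"node3", 2}};
    std::vector<ProcessGroup> app = {{"master", 1}, {"worker", 5}};
    for (const auto& [proc, host] : plan(grid, app))
        std::cout << proc << " -> " << host << "\n";
}
```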

  12. Ocean-Atmosphere Numerical Simulations (Application)
  • Energy transport from the Equator to the Poles; world climate behavior
  • Platform: supercomputer approach for large simulations (1,000 years); grid approach for parameterization design, with independent and simultaneous simulations
  • Code coupling: ARPEGE v4.5 (atmospheric model), OPA v9 + LIM (ocean model), OASIS v3 (code coupler)
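The grid approach described above is essentially a parameter sweep of independent runs that can be submitted concurrently. The sketch below only illustrates that submission pattern; run_simulation, the Params fields, and the returned value are placeholders, not the ARPEGE/OPA/OASIS interfaces.

```cpp
#include <future>
#include <iostream>
#include <vector>

struct Params { double albedo; double mixing; };

// Stand-in for submitting one coupled ocean-atmosphere run to the middleware.
double run_simulation(Params p) {
    return 288.0 + 10 * p.albedo - 5 * p.mixing;   // fake "mean temperature"
}

int main() {
    std::vector<Params> sweep = {{0.30, 1.0}, {0.32, 1.0}, {0.30, 1.2}, {0.32, 1.2}};
    std::vector<std::future<double>> jobs;
    for (const auto& p : sweep)                     // independent, simultaneous runs
        jobs.push_back(std::async(std::launch::async, run_simulation, p));
    for (std::size_t i = 0; i < jobs.size(); ++i)
        std::cout << "run " << i << ": " << jobs[i].get() << " K\n";
}
```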

  13. Cosmological Simulation (Application)
  • Part 1, one submission from the client: generate low-resolution initial conditions (IC), run RAMSES, post-process with GALICS, send the results back to the user
  • Part 2, n submissions from the client: generate high-resolution IC centered on the part of the universe of interest, run RAMSES, post-process with GALICS, send the results back to the user
  • RAMSES: computes the evolution of dark-matter particles starting from the early universe's structure
  • GALICS: performs structure detection (dark-matter halos), builds the evolution tree of the particles, generates galaxies
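A short sketch of the two-phase submission pattern described above: one low-resolution pass, then n zoomed-in high-resolution passes, each followed by post-processing whose results return to the client. The functions are placeholders, not the real RAMSES/GALICS interfaces.

```cpp
#include <iostream>
#include <string>
#include <vector>

std::string ramses(const std::string& ic)   { return "snapshot(" + ic + ")"; }
std::string galics(const std::string& snap) { return "galaxies(" + snap + ")"; }

int main() {
    // Phase 1: one submission on low-resolution initial conditions.
    std::cout << galics(ramses("low-res IC")) << "\n";

    // Phase 2: n submissions, each centered on a region chosen from phase 1.
    std::vector<std::string> regions = {"region-1", "region-2"};
    for (const auto& r : regions)
        std::cout << galics(ramses("high-res IC @ " + r)) << "\n";
}
```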

  14. Sparse Direct Solvers (Application)
  • Sparse direct solvers in a client-server environment (DIET)
  • Provide remote access to the algorithms we develop (e.g. MUMPS): easy to use from a light client; data persistency on the servers is crucial
  • Application: an expertise site for sparse linear algebra, ACI GRID TLSE (coordinated by ENSEEIHT-IRIT, Toulouse)
  • On a user's specific problem, compare execution time / accuracy / memory usage / ... of various solvers: public-domain as well as commercial, sequential as well as parallel
  • Find the best parameter values / reordering heuristics for a given problem
  • Also bibliography, matrix collections, ...
  • All elementary requests executed on the grid through DIET
  • Must evolve easily (new solvers with new parameters, new scenarios)
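To illustrate why data persistency on the servers is crucial here: the matrix is shipped to the server side once, and several solver runs with different parameters then reuse the persistent copy instead of resending it. The Service class and its methods below are assumptions for the sketch, not the DIET/TLSE API.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

class Service {
public:
    // Store the matrix once on the server side; later calls refer to it by id.
    std::string store(std::vector<double> matrix) {
        std::string id = "mat-" + std::to_string(data_.size());
        data_[id] = std::move(matrix);
        return id;
    }
    double solve(const std::string& id, const std::string& ordering) {
        // A real run would call e.g. MUMPS; here we just report a fake cost.
        return data_.at(id).size() * (ordering == "metis" ? 0.8 : 1.0);
    }
private:
    std::map<std::string, std::vector<double>> data_;
};

int main() {
    Service diet;
    std::string id = diet.store(std::vector<double>(1000, 1.0));  // sent once
    for (const std::string& ord : {"amd", "metis"})               // compared runs
        std::cout << ord << ": estimated cost " << diet.solve(id, ord) << "\n";
}
```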

  15. Conclusion
  • Programming model brick: component model
  • Grid middleware brick: GridRPC environment, DIET
  • Data management brick: data-sharing system, JuxMem
  • Communication bricks: intra-cluster, Madeleine; grid communication, PadicoTM
  • Scheduling brick: DIET's plug-in scheduler
  • Workflow bricks: DIET's DAG management; component management
  • Deployment brick: ADAGE
  • Applications brick: ocean-atmosphere numerical simulations, cosmological simulation, sparse direct solvers

  16. Questions? http://graal.ens-lyon.fr/LEGO
