

  1. The Ibis Project: Simplifying Grid Programming & Deployment
  Henri Bal, bal@cs.vu.nl, Vrije Universiteit Amsterdam

  2. The ‘Promise of the Grid’ Efficient and transparent (i.e. easy-to-use) wall-socket computing over a distributed set of resources [Sunderam ICCS’2004, based on Foster/Kesselman]

  3. Parallel computing on grids
  • Mostly limited to:
    • trivially parallel applications: parameter sweeps, master/worker
    • applications that run on one cluster at a time, using the grid to schedule the application on a suitable cluster
  • Our goal: run real parallel applications on a large-scale grid, using co-allocated resources

  4. Efficient wide-area algorithms
  • Latency-tolerant algorithms with asynchronous communication
    • search algorithms (Awari solver [CCGrid’08])
    • model checkers (DiVinE [PDMC’08])
  • Algorithms with hierarchical communication
    • divide-and-conquer
    • broadcast trees
    • …

  5. Reality: ‘Problems of the Grid’
  • Performance & scalability
  • Heterogeneity
  • Low-level & changing programming interfaces
    • writing & deploying grid applications is hard
  • Connectivity issues
  • Fault tolerance
  • Malleability
  [Diagram: the user versus wide-area grid systems]

  6. The Ibis Project
  • Goal: drastically simplify grid programming/deployment
    • write and go!

  7. Approach (1)
  • Write & go: minimal assumptions about the execution environment
  • Virtual machines (Java) deal with heterogeneity
  • Middleware-independent APIs, mapped automatically onto the middleware
  • Different programming abstractions:
    • low-level message passing
    • high-level divide-and-conquer

  8. Approach (2)
  • Designed to run in a dynamic/hostile grid environment
  • Handles fault tolerance and malleability
  • Solves connectivity problems automatically (SmartSockets)
  • Modular and flexible: Ibis components can be replaced by external ones
  • Scheduling: Zorilla P2P system or an external broker

  9. Global picture

  10. [Architecture diagram: applications on top of Satin (divide & conquer) and the communication layer (IPL), with SmartSockets, Zorilla P2P, and JavaGAT underneath — the rest of the talk follows this picture]

  11. Outline
  • Grid programming
    • IPL
    • Satin
    • SmartSockets
  • Grid deployment
    • JavaGAT
    • Zorilla
  • Applications and experiments

  12. Ibis Design

  13. Ibis Portability Layer (IPL)
  • Java-centric “run-anywhere” library, sent along with the application (JAR files)
  • Point-to-point, multicast, streaming, …
  • Efficient communication
    • configured at startup, based on capabilities (multicast, ordering, reliability, callbacks)
    • bytecode rewriter avoids serialization overhead
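
Because the IPL is configured through capabilities and port types, a small example helps. The sketch below follows the style of the published IPL tutorial examples; the exact capability constants and factory signature are quoted from memory and may differ between Ibis releases, so treat the names as illustrative.

```java
import ibis.ipl.*;

public class HelloIpl {
    public static void main(String[] args) throws Exception {
        // Ask only for the capabilities we need; the IPL picks an
        // implementation that satisfies them at startup.
        PortType portType = new PortType(
                PortType.COMMUNICATION_RELIABLE,
                PortType.SERIALIZATION_DATA,
                PortType.RECEIVE_EXPLICIT,
                PortType.CONNECTION_ONE_TO_ONE);
        IbisCapabilities capabilities =
                new IbisCapabilities(IbisCapabilities.ELECTIONS_STRICT);

        Ibis ibis = IbisFactory.createIbis(capabilities, null, portType);

        // Elect a server among all joined instances (a JEL election).
        IbisIdentifier server = ibis.registry().elect("Server");

        if (server.equals(ibis.identifier())) {
            ReceivePort rp = ibis.createReceivePort(portType, "hello");
            rp.enableConnections();
            ReadMessage m = rp.receive();          // explicit receive
            System.out.println("received: " + m.readString());
            m.finish();
            rp.close();
        } else {
            SendPort sp = ibis.createSendPort(portType);
            sp.connect(server, "hello");
            WriteMessage m = sp.newMessage();      // point-to-point message
            m.writeString("Hello from " + ibis.identifier());
            m.finish();
            sp.close();
        }
        ibis.end();
    }
}
```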

  14. Serialization
  • Based on bytecode rewriting
  • Adds (de)serialization code to serializable types
  • Prevents reflection overhead at runtime
  [Diagram: source → Java compiler → bytecode → bytecode rewriter → rewritten bytecode → JVMs]
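
To see what the rewriter buys, compare a plain serializable class with the kind of code the rewriter conceptually adds. The generated_* method names below are hypothetical and only illustrate the idea: explicit field-by-field code replaces reflective field lookups at runtime.

```java
// A user's class, written normally:
class Particle implements java.io.Serializable {
    double x, y, z;
    Particle next;
}

// Conceptually, the bytecode rewriter turns it into something like this
// (method names are hypothetical, for illustration only):
class ParticleRewritten implements java.io.Serializable {
    double x, y, z;
    ParticleRewritten next;

    // Field-by-field writes: no reflection needed at runtime.
    void generated_writeObject(java.io.ObjectOutputStream out)
            throws java.io.IOException {
        out.writeDouble(x);
        out.writeDouble(y);
        out.writeDouble(z);
        out.writeObject(next); // cycles handled by the stream's object table
    }

    void generated_readObject(java.io.ObjectInputStream in)
            throws java.io.IOException, ClassNotFoundException {
        x = in.readDouble();
        y = in.readDouble();
        z = in.readDouble();
        next = (ParticleRewritten) in.readObject();
    }
}
```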

  15. Membership model
  • JEL (Join-Elect-Leave) model
  • Simple model for tracking resources; supports malleability & fault tolerance
    • notifications of nodes joining or leaving
    • elections
  • Supports all common programming models
  • Centralized and distributed implementations
    • broadcast trees, gossiping
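
The shape of the JEL callbacks can be sketched as follows. This is a hypothetical handler, not the exact IPL interface (the real one is ibis.ipl.RegistryEventHandler, whose precise method set may differ); it shows only how join/leave/died notifications and election results reach the application.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a JEL-style membership handler.
class MembershipTracker {
    private final Set<String> members = ConcurrentHashMap.newKeySet();
    private volatile String master; // filled in by an election

    void joined(String node) { members.add(node); }    // node entered the pool
    void left(String node)   { members.remove(node); } // graceful leave (malleability)
    void died(String node)   { members.remove(node); } // crash (fault tolerance)

    // Result of an election, e.g. electing one "master" among all members.
    void electionResult(String electionName, String winner) {
        if (electionName.equals("master")) {
            master = winner;
        }
    }
}
```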

  16. Programming models
  • Remote Method Invocation (RMI)
  • Group Method Invocation (GMI)
  • MPJ (the MPI Java ‘standard’)
  • Satin (divide & conquer)

  17. Satin: divide-and-conquer
  • Divide-and-conquer is inherently hierarchical
  • More general than master/worker
  • Cilk-like primitives (spawn/sync) in Java
  • Supports malleability and fault tolerance
  • Supports data sharing between different branches through Shared Objects
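
The spawn/sync style is easiest to see in the Fibonacci example used throughout the Satin papers; the sketch below follows that example, assuming the ibis.satin.Spawnable / SatinObject API from those papers. Methods declared in an interface that extends Spawnable are spawned when called; sync() waits until all spawned results are available.

```java
// Canonical divide-and-conquer example in the style of the Satin papers.
interface FibInterface extends ibis.satin.Spawnable {
    long fib(long n);
}

public class Fib extends ibis.satin.SatinObject implements FibInterface {
    public long fib(long n) {
        if (n < 2) return n;
        long x = fib(n - 1); // spawned: may be stolen by another machine
        long y = fib(n - 2); // spawned
        sync();              // wait until x and y have been computed
        return x + y;
    }

    public static void main(String[] args) {
        Fib f = new Fib();
        long result = f.fib(30); // spawned
        f.sync();                // wait for the result before using it
        System.out.println("fib(30) = " + result);
    }
}
```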

  18. Satin implementation
  • Load balancing is done automatically
    • Cluster-aware Random Stealing (CRS): combines Cilk’s random stealing with asynchronous wide-area steals
  • Self-adaptive malleability and fault tolerance
    • add/remove machines on the fly
    • survive crashes through efficient recomputation/checkpointing

  19. Self-adaptation with Satin
  • Adapt the number of CPUs to the level of parallelism
  • Migrate work from overloaded to idle CPUs
  • Remove CPUs with poor network connectivity
  • Add CPUs dynamically when:
    • the level of parallelism increases
    • CPUs were removed or crashed
  • Can also remove/add entire clusters, e.g. in case of network problems
  [Wrzesinska et al., PPoPP’07]

  20. Approach
  • Weighted Average Efficiency (WAE):
    WAE = (1 / #CPUs) * Σ_i speed_i * (1 − overhead_i)
    • overhead_i = fraction of time CPU_i spends idle or communicating
    • speed_i = relative speed of CPU_i (measured periodically)
  • General idea: keep WAE between E_min (30%) and E_max (50%)
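
In code form, the rule on this slide might look as follows. This is a minimal sketch of the formula and thresholds as stated above, not Satin's actual implementation.

```java
public class WaeMonitor {
    static final double E_MIN = 0.30, E_MAX = 0.50; // thresholds from the slide

    // speeds[i]: relative speed of CPU i (measured periodically);
    // overheads[i]: fraction of time CPU i spends idle or communicating.
    static double wae(double[] speeds, double[] overheads) {
        double sum = 0.0;
        for (int i = 0; i < speeds.length; i++) {
            sum += speeds[i] * (1.0 - overheads[i]);
        }
        return sum / speeds.length; // (1 / #CPUs) * sum
    }

    static String decide(double[] speeds, double[] overheads) {
        double e = wae(speeds, overheads);
        if (e < E_MIN) return "too much overhead: remove CPUs";
        if (e > E_MAX) return "spare parallelism: add CPUs";
        return "keep current allocation";
    }
}
```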

  21. Overloaded network link
  [Graph: iteration duration per iteration]
  • Uplink of one cluster reduced to 100 KB/s
  • Remove the badly connected cluster, get a new one

  22. Connectivity problems
  • Firewalls & Network Address Translation (NAT) restrict incoming traffic
  • Addressing problems
    • machines with more than one network interface (IP address)
    • machines on a private network (e.g., behind NAT)
  • No direct communication allowed
    • e.g., between compute nodes and the external world

  23. SmartSockets library
  • Detects connectivity problems and tries to solve them automatically, with as little help from the user as possible
  • Integrates existing and several new solutions: reverse connection setup, STUN, TCP splicing, SSH tunneling, smart addressing, etc.
  • Uses a network of hubs as a side channel
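
As one concrete illustration, reverse connection setup can be sketched as follows. This is hypothetical code, not the SmartSockets API: the HubChannel interface is a stand-in for the hub network used as a side channel. The idea is that if a direct connection to a firewalled peer fails, we ask the peer, via the hubs, to open the connection in the opposite direction.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

class ReverseConnect {
    interface HubChannel { // stand-in for the hub side channel (assumption)
        // Ask 'peer', via the hubs, to connect back to 'addr'.
        void requestReverseConnect(String peer, InetSocketAddress addr)
                throws IOException;
    }

    static Socket connect(String peer, InetSocketAddress addr, HubChannel hub)
            throws IOException {
        try {
            return new Socket(addr.getAddress(), addr.getPort()); // direct attempt
        } catch (IOException blocked) {
            // Direct setup failed (firewall/NAT): listen locally and ask the
            // peer, through the hub network, to connect to us instead.
            try (ServerSocket listener = new ServerSocket(0)) {
                hub.requestReverseConnect(peer, new InetSocketAddress(
                        InetAddress.getLocalHost(), listener.getLocalPort()));
                return listener.accept(); // the peer connects back to us
            }
        }
    }
}
```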

  24. Example

  25. Example [Maassen et al., HPDC’07]

  26. Overview
  [Architecture diagram, highlighting Zorilla P2P and JavaGAT]

  27. JavaGAT
  • GAT: Grid Application Toolkit
  • Makes grid applications independent of the underlying grid infrastructure
  • Used by applications to access grid services: file copying, resource discovery, job submission & monitoring, user authentication
  • The API is currently being standardized (SAGA); SAGA is implemented on top of JavaGAT

  28. Grid applications with GAT
  [Diagram: a grid application calls File.copy(...) and submitJob(...) on the GAT API (remote files, monitoring, info service, resource management); the GAT engine dispatches intelligently to middleware adaptors: GridLab, Globus, Unicore, SSH, P2P, local (e.g., Globus GridFTP)]
  [van Nieuwpoort et al., SC’07]
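
In application code, the two calls named in the diagram look roughly like this. The sketch follows the style of published JavaGAT examples; the URIs are made up, and the exact convenience signatures may differ between GAT versions.

```java
import org.gridlab.gat.GAT;
import org.gridlab.gat.URI;
import org.gridlab.gat.io.File;
import org.gridlab.gat.resources.Job;
import org.gridlab.gat.resources.JobDescription;
import org.gridlab.gat.resources.ResourceBroker;
import org.gridlab.gat.resources.SoftwareDescription;

public class GatExample {
    public static void main(String[] args) throws Exception {
        // File copy: the GAT engine dispatches to a suitable adaptor
        // (GridFTP, SSH, local, ...) chosen at runtime.
        File file = GAT.createFile(new URI("any://example.host.org//home/user/input.dat"));
        file.copy(new URI("file:///tmp/input.dat"));

        // Job submission through whatever middleware is available.
        SoftwareDescription sd = new SoftwareDescription();
        sd.setExecutable("/bin/hostname");
        JobDescription jd = new JobDescription(sd);
        ResourceBroker broker =
                GAT.createResourceBroker(new URI("any://example.host.org"));
        Job job = broker.submitJob(jd);
        System.out.println("job state: " + job.getState());
        GAT.end();
    }
}
```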

  29. Zorilla: Java P2P supercomputing middleware

  30. Zorilla components
  • Job management: handling malleability and crashes
  • Robust random gossiping
    • periodic information exchange between nodes
    • robust against firewalls, NATs, and failing nodes
  • Clustering: nearest neighbor
  • Flood scheduling: incrementally search for resources at more and more distant nodes
  [Drost et al., HPDC’07]
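
Flood scheduling can be sketched as follows. This is hypothetical code, not Zorilla's implementation: the Overlay interface stands in for the gossip-built P2P overlay, and the point is the incrementally widening search radius described in the last bullet.

```java
import java.util.List;

class FloodScheduler {
    interface Overlay { // stand-in for the P2P overlay (assumption)
        // Flood a resource request to all nodes within 'maxHops' hops
        // and return the identifiers of nodes willing to run the job.
        List<String> floodRequest(String job, int maxHops);
    }

    static List<String> schedule(Overlay overlay, String job, int needed) {
        // Look nearby first, then widen the search radius incrementally.
        for (int radius = 1; radius <= 8; radius *= 2) {
            List<String> found = overlay.floodRequest(job, radius);
            if (found.size() >= needed) {
                return found.subList(0, needed);
            }
        }
        throw new IllegalStateException("not enough resources within max radius");
    }
}
```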

  31. Overview

  32. Ibis applications
  • e-Science (VL-e)
    • brain MEG imaging
    • mass spectrometry
    • multimedia content analysis
  • Various parallel applications: SAT solver, N-body, grammar learning, …
  • Other programming systems: workflow engine for astronomy (D-Grid), grid file system, ProActive, Jylab, …

  33. Overview of experiments
  • DAS-3: Dutch Computer Science grid
  • Satin applications on DAS-3
  • Zorilla desktop grid experiment
  • Multimedia content analysis
  • High-resolution video processing

  34. DAS-3
  • 272 nodes (AMD Opterons), 792 cores, 1 TB total memory
  • LAN: Myrinet 10G, Gigabit Ethernet
  • WAN (StarPlane): 20-40 Gb/s OPN
  • Heterogeneous: 2.2-2.6 GHz, single/dual-core; Delft has no Myrinet

  35. Gene sequence comparison in Satin (on DAS-3)
  [Graphs: speedup on 1 cluster; run times on 5 clusters]
  • Divide & conquer scales much better than master/worker
  • 78% efficiency on 5 clusters (with 1462 WAN messages/sec)

  36. Barnes-Hut (Satin) on DAS-3
  [Graphs: speedup on 1 cluster; run times on 5 clusters]
  • The shared-object extension to the D&C model improves scalability
  • 57% efficiency on 5 clusters (with 1371 WAN messages/sec)

  37. Zorilla desktop grid experiment
  • Small experimental desktop grid setup: student PCs running Zorilla overnight
    • PCs with 1 CPU, 1 GB memory, 1 Gb/s Ethernet
  • Experiment: gene sequence application
    • 16 cores of DAS-3 with Globus
    • 16-core desktop grid with Zorilla
    • the combination, using Ibis-Deploy

  38. Ibis-Deploy deployment tool
  [Chart: run times of 877, 1099, and 3574 sec for the three configurations]
  • Easy deployment with Zorilla, JavaGAT & Ibis-Deploy

  39. Multimedia content analysis
  • Analyzes video streams to recognize objects
  • Extracts feature vectors from images
    • describe properties (color, shape)
  • Data-parallel task implemented with C++/MPI
  • Compute on consecutive images: task parallelism on a grid

  40. MMCA application
  [Diagram: a Java client on a local desktop machine connects via Ibis (Java) to a Java broker (on any machine world-wide) and to parallel Horus servers (C++) running on the grid]

  41. MMCA with Ibis
  • Initial implementation with TCP was unstable
  • Ibis simplifies communication and fault tolerance
  • SmartSockets solves connectivity problems
  • Clickable deployment interface
  • Demonstrated at many conferences (SC’07): 20 clusters on 3 continents, 500-800 cores
  • Frame rate increased from 1/30 to 15 frames/sec
  [Seinstra et al., IEEE Multimedia’07]

  42. MMCA
  • ‘Most Visionary Research’ award at AAAI 2007 (Frank Seinstra et al.)

  43. High-resolution video processing
  • Real-time processing of CineGrid movie data
  • 3840x2160 (4x HD) @ 30 fps = 1424 MB/sec (this rate corresponds to 6 bytes per pixel: 3840 × 2160 × 6 B × 30 fps ≈ 1424 MiB/s)
  • Multi-cluster processing pipeline
  • Using DAS-3, StarPlane and Ibis

  44. CineGrid with Ibis
  • Use of StarPlane requires no configuration
    • StarPlane is connected to the local Myrinet network
    • detected & used automatically by SmartSockets
  • Easy setup of the application pipeline
    • the application’s connection administration is simplified by the IPL election mechanism
  • Simple multi-cluster deployment (Ibis-Deploy)
  • Uses Ibis serialization for high throughput

  45. Summary
  • Goal: simplify grid programming/deployment
  • Key ideas in Ibis:
    • virtual machines (JVM) deal with heterogeneity
    • high-level programming abstractions (Satin)
    • fault tolerance, malleability, and connectivity problems handled automatically (Satin, SmartSockets)
    • middleware-independent APIs (JavaGAT)
    • modular design

  46. Acknowledgements
  • Current members: Rob van Nieuwpoort, Jason Maassen, Thilo Kielmann, Frank Seinstra, Niels Drost, Ceriel Jacobs, Kees Verstoep, Roelof Kemp, Kees van Reeuwijk
  • Past members: John Romein, Gosia Wrzesinska, Rutger Hofman, Maik Nijhuis, Olivier Aumage, Fabrice Huet, Alexandre Denis

  47. More information
  • Ibis can be downloaded from http://www.cs.vu.nl/ibis
  • Papers: Satin [PPoPP’07], SmartSockets [HPDC’07], Gossiping [HPDC’07], JavaGAT [SC’07], MMCA [IEEE Multimedia’07]
  • Ibis tutorials: next one at CCGrid 2008 (19 May, Lyon)
