The Ibis Project: Simplifying Grid Programming & Deployment
Henri Bal
[email protected]
Vrije Universiteit Amsterdam

The ‘Promise of the Grid’

Efficient and transparent (i.e. easy-to-use) wall-socket computing over a distributed set of resources [Sunderam ICCS’2004, based on Foster/Kesselman]

Parallel computing on grids
  • Mostly limited to
    • trivially parallel applications
      • parameter sweeps, master/worker
    • applications that run on one cluster at a time
      • use grid to schedule application on a suitable cluster
  • Our goal: run real parallel applications on a large-scale grid, using co-allocated resources
Efficient wide-area algorithms
  • Latency-tolerant algorithms with asynchronous communication
    • Search algorithms (Awari-solver [CCGrid’08])
    • Model checkers (DiVinE [PDMC’08])
  • Algorithms with hierarchical communication
    • Divide-and-conquer
    • Broadcast trees
  • …..
Reality: ‘Problems of the Grid’
  • Performance & scalability
  • Heterogeneous
  • Low-level & changing programming interfaces
  • Writing & deploying grid applications is hard
  • Connectivity issues
  • Fault tolerance
  • Malleability

[Figure: the user facing wide-area grid systems]

The Ibis Project
  • Goal:
    • drastically simplify grid programming/deployment
    • write and go!
Approach (1)
  • Write & go: minimal assumptions about execution environment
    • Virtual Machines (Java) deal with heterogeneity
  • Use middleware-independent APIs
    • Mapped automatically onto middleware
  • Different programming abstractions
    • Low-level message passing
    • High-level divide-and-conquer
Approach (2)
  • Designed to run in dynamic/hostile grid environment
    • Handle fault-tolerance and malleability
    • Solve connectivity problems automatically (SmartSockets)
  • Modular and flexible: can replace Ibis components by external ones
    • Scheduling: Zorilla P2P system or external broker
Rest of talk

  • Applications
  • Satin: divide & conquer
  • Communication layer (IPL)
  • SmartSockets
  • Zorilla P2P
  • JavaGAT
Outline
  • Grid programming
    • IPL
    • Satin
    • SmartSockets
  • Grid deployment
    • JavaGAT
    • Zorilla
  • Applications and experiments
Ibis Portability Layer (IPL)
  • Java-centric “run-anywhere” library
    • Sent along with the application (jar-files)
    • Point-to-point, multicast, streaming, ….
  • Efficient communication
    • Configured at startup, based on capabilities (multicast, ordering, reliability, callbacks)
    • Bytecode rewriter avoids serialization overhead
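As a rough sketch of how the IPL is used (port capabilities declared up front, send/receive ports created from a port type, one member elected as server), the fragment below follows the style of the IPL example programs; the capability constants and method signatures are quoted from memory and should be treated as assumptions rather than the exact ibis.ipl API.

    import ibis.ipl.*;

    // Minimal IPL-style sketch: the pool elects a "Server", which receives one
    // message from another member. Constants and signatures are assumptions.
    public class IplSketch {
        static final PortType PORT_TYPE = new PortType(
                PortType.COMMUNICATION_RELIABLE, PortType.SERIALIZATION_OBJECT,
                PortType.RECEIVE_EXPLICIT, PortType.CONNECTION_ONE_TO_ONE);
        static final IbisCapabilities CAPABILITIES =
                new IbisCapabilities(IbisCapabilities.ELECTIONS_STRICT);

        public static void main(String[] args) throws Exception {
            Ibis ibis = IbisFactory.createIbis(CAPABILITIES, null, PORT_TYPE);
            IbisIdentifier server = ibis.registry().elect("Server");

            if (server.equals(ibis.identifier())) {
                // Elected side: explicitly receive a single message.
                ReceivePort rp = ibis.createReceivePort(PORT_TYPE, "server");
                rp.enableConnections();
                ReadMessage rm = rp.receive();
                System.out.println("got: " + rm.readString());
                rm.finish();
                rp.close();
            } else {
                // Other side: connect to the elected server and send a string.
                SendPort sp = ibis.createSendPort(PORT_TYPE);
                sp.connect(server, "server");
                WriteMessage wm = sp.newMessage();
                wm.writeString("hello from " + ibis.identifier());
                wm.finish();
                sp.close();
            }
            ibis.end();
        }
    }

The capability declaration is what lets the IPL choose, at startup, the cheapest available implementation that still provides what the application asked for.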
Serialization
  • Based on bytecode-rewriting
    • Adds (de)serialization code to serializable types
    • Prevents reflection overhead at runtime

[Diagram: Java source is compiled to bytecode, the bytecode rewriter adds the (de)serialization code, and the rewritten bytecode runs on the JVMs]
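To illustrate what the rewriter buys, the sketch below shows the kind of reflection-free (de)serialization code that gets added to a serializable type at build time; the class and methods here are invented for illustration and are not the code Ibis actually generates.

    import java.io.*;

    // Hand-written equivalent of what a serialization rewriter generates:
    // fields are streamed directly, so no reflective field lookups are needed
    // at runtime. Class and method names are illustrative only.
    class Particle implements Serializable {
        double x, y, z;
        int id;

        // Generated-style writer: each field is written explicitly.
        void writeFields(DataOutputStream out) throws IOException {
            out.writeDouble(x);
            out.writeDouble(y);
            out.writeDouble(z);
            out.writeInt(id);
        }

        // Generated-style reader: a fresh object is filled field by field.
        static Particle readFields(DataInputStream in) throws IOException {
            Particle p = new Particle();
            p.x = in.readDouble();
            p.y = in.readDouble();
            p.z = in.readDouble();
            p.id = in.readInt();
            return p;
        }
    }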

Membership Model
  • JEL (Join-Elect-Leave) model
  • Simple model for tracking resources, supports malleability & fault-tolerance
    • Notifications of nodes joining or leaving
    • Elections
  • Supports all common programming models
  • Centralized and distributed implementations
    • Broadcast trees, gossiping
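A conceptual model of JEL in plain Java is sketched below: join/leave/died notifications keep a live view of the pool, and an election picks a coordinator deterministically so every node agrees. The interface and class names are invented for illustration; they are not the actual Ibis registry API.

    import java.util.Set;
    import java.util.concurrent.ConcurrentSkipListSet;

    // Conceptual model of JEL (Join-Elect-Leave); the real Ibis registry offers
    // similar notifications and elections through its own classes.
    interface JelEvents {
        void joined(String node);   // a node entered the pool
        void left(String node);     // a node left gracefully (malleability)
        void died(String node);     // a node crashed (fault tolerance)
    }

    // Example listener: keeps a live view of the pool and re-elects a
    // coordinator whenever the current one disappears.
    class PoolView implements JelEvents {
        private final Set<String> members = new ConcurrentSkipListSet<>();
        private volatile String coordinator;

        public void joined(String node) {
            members.add(node);
            if (coordinator == null) coordinator = elect();
        }

        public void left(String node) { remove(node); }

        public void died(String node) { remove(node); }

        private void remove(String node) {
            members.remove(node);
            if (node.equals(coordinator)) coordinator = elect();
        }

        // Deterministic "election": every node picks the smallest member name.
        private String elect() {
            return members.isEmpty() ? null : members.iterator().next();
        }
    }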
Programming models
  • Remote Method Invocation (RMI)
  • Group Method Invocation (GMI)
  • MPJ (MPI Java 'standard')
  • Satin (Divide & Conquer)
Satin: divide-and-conquer
  • Divide-and-conquer is inherently hierarchical
  • More general than master/worker
  • Cilk-like primitives (spawn/sync) in Java
  • Supports malleability and fault-tolerance
  • Supports data-sharing between different branches through Shared Objects
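The classic example from the Satin papers is divide-and-conquer Fibonacci: spawnable methods are listed in a marker interface and sync() waits for the spawned results, as sketched below. Package names follow the Satin papers; the Satin bytecode rewriter turns the marked invocations into spawns.

    // Divide-and-conquer Fibonacci in Satin style, after the examples in the
    // Satin papers. Methods listed in the Spawnable interface are executed as
    // spawned tasks; sync() blocks until their results are available.
    interface FibInterface extends ibis.satin.Spawnable {
        long fib(int n);
    }

    public class Fib extends ibis.satin.SatinObject implements FibInterface {
        public long fib(int n) {
            if (n < 2) return n;
            long x = fib(n - 1);   // spawned
            long y = fib(n - 2);   // spawned
            sync();                // wait until x and y have been computed
            return x + y;
        }

        public static void main(String[] args) {
            Fib f = new Fib();
            long result = f.fib(Integer.parseInt(args[0]));
            f.sync();
            System.out.println("fib = " + result);
        }
    }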
Satin implementation
  • Load-balancing is done automatically
    • Cluster-aware Random Stealing (CRS)
    • Combines Cilk’s Random Stealing with asynchronous wide-area steals
  • Self-adaptive malleability and fault-tolerance
    • Add/remove machines on the fly
    • Survive crashes by efficient recomputations/checkpointing
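The CRS idea can be sketched as follows: an idle node keeps stealing synchronously from random nodes in its own cluster, while at most one steal request is outstanding, asynchronously, to a random remote cluster so the WAN latency is hidden. The classes below are an illustrative model, not Satin's scheduler code.

    import java.util.Random;
    import java.util.concurrent.atomic.AtomicBoolean;

    // Conceptual Cluster-aware Random Stealing loop; an illustrative model only.
    class CrsScheduler {
        interface Victim { Runnable trySteal(); }   // returns null if it has no work

        private final Victim[] localNodes;          // nodes in our own cluster
        private final Victim[] remoteNodes;         // nodes in other clusters
        private final Random rnd = new Random();
        private final AtomicBoolean wanStealPending = new AtomicBoolean(false);

        CrsScheduler(Victim[] localNodes, Victim[] remoteNodes) {
            this.localNodes = localNodes;
            this.remoteNodes = remoteNodes;
        }

        // Called whenever the local work queue runs empty.
        Runnable stealWork() {
            // Start at most one asynchronous wide-area steal, so the WAN latency
            // overlaps with cheap local steal attempts instead of blocking them.
            if (remoteNodes.length > 0 && wanStealPending.compareAndSet(false, true)) {
                Victim remote = remoteNodes[rnd.nextInt(remoteNodes.length)];
                new Thread(() -> {
                    Runnable stolen = remote.trySteal();
                    if (stolen != null) stolen.run();   // run remote work when it arrives
                    wanStealPending.set(false);
                }).start();
            }
            // Meanwhile, keep stealing synchronously inside the cluster.
            Victim local = localNodes[rnd.nextInt(localNodes.length)];
            return local.trySteal();                    // may be null: caller simply retries
        }
    }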
Self-adaptation with Satin
  • Adapt #CPUs to level of parallelism
  • Migrate work from overloaded to idle CPUs
  • Remove CPUs with poor network connectivity
  • Add CPUs dynamically when
    • Level of parallelism increases
    • CPUs were removed or crashed
  • Can also remove/add entire clusters
    • E.g., for network problems

[Wrzesinska et al., PPoPP’07]

Approach
  • Weighted Average Efficiency (WAE):

    WAE = (1 / #CPUs) * Σᵢ speedᵢ * (1 - overheadᵢ)

    where overheadᵢ is the fraction of idle + communication time, and speedᵢ is the relative speed of CPUᵢ (measured periodically)

  • General idea: keep WAE between Emin (30%) and Emax (50%)
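The adaptation rule can be written down directly from the formula: compute the WAE from per-CPU speeds and overheads, and add or remove CPUs when it leaves the [Emin, Emax] band. The sketch below uses illustrative names only.

    // Sketch of the WAE-based adaptation decision; names are illustrative only.
    class WaeMonitor {
        static final double E_MIN = 0.30;   // below this: too inefficient, shrink
        static final double E_MAX = 0.50;   // above this: parallelism left, grow

        // WAE = (1 / #CPUs) * sum_i speed_i * (1 - overhead_i)
        static double weightedAverageEfficiency(double[] speed, double[] overhead) {
            double sum = 0.0;
            for (int i = 0; i < speed.length; i++) {
                sum += speed[i] * (1.0 - overhead[i]);
            }
            return sum / speed.length;
        }

        enum Action { REMOVE_CPUS, KEEP, ADD_CPUS }

        static Action decide(double[] speed, double[] overhead) {
            double wae = weightedAverageEfficiency(speed, overhead);
            if (wae < E_MIN) return Action.REMOVE_CPUS;   // overhead dominates
            if (wae > E_MAX) return Action.ADD_CPUS;      // room for more CPUs
            return Action.KEEP;
        }
    }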

Overloaded network link

[Graph: iteration duration per iteration]

  • Uplink of 1 cluster reduced to 100 KB/s
  • Remove badly connected cluster, get new one
Connectivity Problems
  • Firewalls & Network Address Translation (NAT) restrict incoming traffic
  • Addressing problems
    • Machines with >1 network interface (IP address)
    • Machine on a private network (e.g., NAT)
  • No direct communication allowed
    • E.g., between compute nodes and external world
SmartSockets library
  • Detects connectivity problems
  • Tries to solve them automatically
    • With as little help from the user as possible
  • Integrates existing and several new solutions
    • Reverse connection setup, STUN, TCP splicing, SSH tunneling, smart addressing, etc.
  • Uses network of hubs as a side channel
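One of the problems behind 'smart addressing' is that a machine may expose several interfaces or only a private (NAT) address, so a single IP is not enough to reach it. The plain-Java sketch below illustrates that one ingredient only: advertise all local addresses and let the connecting side try the candidates in turn. It is a simplified illustration, not SmartSockets code, which adds reverse setup, splicing, tunneling and the hub network on top.

    import java.io.IOException;
    import java.net.*;
    import java.util.*;

    // Illustration of the "smart addressing" ingredient only: advertise every
    // local address and let the peer try candidates until one works.
    public class MultiAddressConnect {

        // Collect the addresses of all local interfaces (public and private).
        static List<InetAddress> localAddresses() throws SocketException {
            List<InetAddress> result = new ArrayList<>();
            for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
                result.addAll(Collections.list(nic.getInetAddresses()));
            }
            return result;
        }

        // Try each advertised address until a connection succeeds.
        static Socket connectToAny(List<InetAddress> candidates, int port, int timeoutMs)
                throws IOException {
            IOException last = null;
            for (InetAddress addr : candidates) {
                Socket s = new Socket();
                try {
                    s.connect(new InetSocketAddress(addr, port), timeoutMs);
                    return s;                     // first reachable address wins
                } catch (IOException e) {
                    last = e;                     // unreachable from here, try the next one
                    try { s.close(); } catch (IOException ignored) { }
                }
            }
            throw last != null ? last : new IOException("no candidate addresses");
        }
    }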
Example

[Maassen et al., HPDC’07]

JavaGAT
  • GAT: Grid Application Toolkit
    • Makes grid applications independent of the underlying grid infrastructure
  • Used by applications to access grid services
    • File copying, resource discovery, job submission & monitoring, user authentication
  • The API is currently being standardized (SAGA)
    • SAGA is implemented on top of JavaGAT
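In JavaGAT, the application codes against the GAT API and the engine dispatches each call to whatever middleware adaptor is available. The fragment below follows the shape of the JavaGAT examples (GAT.createFile, GAT.createResourceBroker, submitJob); exact signatures differ between JavaGAT versions and the URIs are placeholders.

    // JavaGAT-style file copy and job submission, written after the JavaGAT
    // examples; signatures are version-dependent and the URIs are placeholders.
    import org.gridlab.gat.GAT;
    import org.gridlab.gat.GATContext;
    import org.gridlab.gat.URI;
    import org.gridlab.gat.io.File;
    import org.gridlab.gat.resources.Job;
    import org.gridlab.gat.resources.JobDescription;
    import org.gridlab.gat.resources.ResourceBroker;
    import org.gridlab.gat.resources.SoftwareDescription;

    public class GatSketch {
        public static void main(String[] args) throws Exception {
            GATContext context = new GATContext();

            // Copy a file; the engine picks an adaptor (GridFTP, SSH, local, ...).
            File input = GAT.createFile(context, new URI("any:///home/user/input.dat"));
            input.copy(new URI("any://remote.example.org//tmp/input.dat"));

            // Describe and submit a job through whatever broker is reachable.
            SoftwareDescription sd = new SoftwareDescription();
            sd.setExecutable("/bin/hostname");
            JobDescription jd = new JobDescription(sd);
            ResourceBroker broker =
                    GAT.createResourceBroker(context, new URI("any://remote.example.org"));
            Job job = broker.submitJob(jd);
            System.out.println("job state: " + job.getState());

            GAT.end();
        }
    }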
Grid Applications with GAT

[Diagram: a grid application calls GAT (File.copy(...), submitJob(...)); the GAT engine intelligently dispatches each call to adaptors for remote files, monitoring, info services, and resource management, implemented on top of GridLab, Globus (e.g. GridFTP), Unicore, SSH, P2P, or local middleware]

[van Nieuwpoort et al., SC’07]

Zorilla components
  • Job management
    • Handling malleability and crashes
  • Robust Random Gossiping
    • Periodic information exchange between nodes
    • Robust against Firewalls, NATs, failing nodes
  • Clustering: nearest neighbor
  • Flood scheduling
    • Incrementally search for resources at more and more distant nodes

[Drost et al., HPDC’07]
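Flood scheduling can be pictured as flooding a resource request through the overlay with a hop limit that is raised until enough nodes respond; combined with nearest-neighbour clustering, this finds nearby resources first. The sketch below is a conceptual model with invented types, not Zorilla's implementation.

    import java.util.*;

    // Conceptual flood scheduling: re-flood the request with a growing hop
    // limit until enough willing nodes are found. Invented types only.
    class FloodScheduler {
        interface Node {
            List<Node> neighbours();      // overlay neighbours (nearest first)
            boolean offersResources();    // is this node willing to run a task?
        }

        // Breadth-first flood from 'origin', limited to 'maxHops' hops.
        static Set<Node> flood(Node origin, int maxHops) {
            Set<Node> visited = new LinkedHashSet<>();
            Deque<Node> frontier = new ArrayDeque<>(List.of(origin));
            for (int hop = 0; hop <= maxHops && !frontier.isEmpty(); hop++) {
                Deque<Node> next = new ArrayDeque<>();
                for (Node n : frontier) {
                    if (visited.add(n)) next.addAll(n.neighbours());
                }
                frontier = next;
            }
            return visited;
        }

        // Incrementally widen the flood until 'needed' nodes offer resources.
        static List<Node> findResources(Node origin, int needed, int maxRadius) {
            for (int radius = 1; radius <= maxRadius; radius++) {
                List<Node> found = new ArrayList<>();
                for (Node n : flood(origin, radius)) {
                    if (n.offersResources()) found.add(n);
                }
                if (found.size() >= needed) return found;   // enough nodes nearby
            }
            return List.of();   // not enough resources within maxRadius
        }
    }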

Ibis applications
  • e-Science (VL-e)
    • Brain MEG-imaging
    • Mass spectroscopy
  • Multimedia content analysis
  • Various parallel applications
    • SAT-solver, N-body, grammar learning, …
  • Other programming systems
    • Workflow engine for astronomy (D-grid), grid file system, ProActive, Jylab, …
Overview experiments
  • DAS-3: Dutch Computer Science grid
  • Satin applications on DAS-3
  • Zorilla desktop grid experiment
  • Multimedia content analysis
  • High resolution video processing
DAS-3

  • 272 nodes (AMD Opterons), 792 cores, 1 TB memory
  • LAN: Myrinet 10G and Gigabit Ethernet
  • WAN (StarPlane): 20-40 Gb/s OPN
  • Heterogeneous: 2.2-2.6 GHz, single/dual-core, Delft has no Myrinet

Gene sequence comparison in Satin (on DAS-3)

[Charts: speedup on 1 cluster; run times on 5 clusters]

  • Divide&conquer scales much better than master-worker
  • 78% efficiency on 5 clusters (with 1462 WAN-msgs/sec)
Barnes-Hut (Satin) on DAS-3

[Charts: speedup on 1 cluster; run times on 5 clusters]

  • Shared object extension to D&C model improves scalability
  • 57% efficiency on 5 clusters (with 1371 WAN-msgs/sec)
Zorilla Desktop Grid Experiment
  • Small experimental desktop grid setup
    • Student PCs running Zorilla overnight
    • PCs with 1 CPU, 1GB memory, 1Gb/s Ethernet
  • Experiment: gene sequence application
    • 16 cores of DAS-3 with Globus
    • 16 core desktop grid with Zorilla
    • Combination, using Ibis-Deploy
Ibis-Deploy deployment tool

[Chart: run times of 877 sec, 3574 sec, and 1099 sec]
  • Easy deployment with Zorilla, JavaGAT & Ibis-Deploy
Multimedia content analysis
  • Analyzes video streams to recognize objects
  • Extract feature vectors from images
    • Describe properties (color, shape)
    • Data-parallel task implemented with C++/MPI
  • Compute on consecutive images
    • Task-parallelism on a grid
MMCA application

[Diagram: MMCA setup: a Java client on the local desktop machine and a Java broker (any machine world-wide) connect through Ibis (Java) to parallel Horus servers (C++) running on the grid]

MMCA with Ibis
  • Initial implementation with TCP was unstable
  • Ibis simplifies communication, fault tolerance
  • SmartSockets solves connectivity problems
  • Clickable deployment interface
  • Demonstrated at many conferences (SC’07)
  • 20 clusters on 3 continents, 500-800 cores
    • Frame rate increased from 1/30 to 15 frames/sec

[Seinstra et al., IEEE Multimedia’07]

High Resolution Video Processing

  • Realtime processing of CineGrid movie data
    • 3840x2160 (4xHD) @ 30 fps = 1424 MB/sec
  • Multi-cluster processing pipeline
    • Using DAS-3, StarPlane and Ibis

CineGrid with Ibis

  • Use of StarPlane requires no configuration
    • StarPlane is connected to the local Myrinet network
    • Detected & used automatically by SmartSockets
  • Easy setup of application pipeline
    • Connection administration of the application is simplified by the IPL election mechanism
  • Simple multi-cluster deployment (Ibis-Deploy)
  • Uses Ibis serialization for high throughput

Summary
  • Goal: Simplify grid programming/deployment
  • Key ideas in Ibis
    • Virtual machines (JVM) deal with heterogeneity
    • High-level programming abstractions (Satin)
    • Handle fault-tolerance, malleability, connectivity problems automatically (Satin, SmartSockets)
    • Middleware-independent APIs (JavaGAT)
    • Modular
Acknowledgements

  • Past members
    • John Romein
    • Gosia Wrzesinska
    • Rutger Hofman
    • Maik Nijhuis
    • Olivier Aumage
    • Fabrice Huet
    • Alexandre Denis
  • Current members
    • Rob van Nieuwpoort
    • Jason Maassen
    • Thilo Kielmann
    • Frank Seinstra
    • Niels Drost
    • Ceriel Jacobs
    • Kees Verstoep
    • Roelof Kemp
    • Kees van Reeuwijk
More information
  • Ibis can be downloaded from
    • http://www.cs.vu.nl/ibis
  • Papers:
    • Satin [PPoPP’07], SmartSockets [HPDC’07], Gossiping [HPDC’07], JavaGAT [SC’07], MMCA [IEEE Multimedia’07]
  • Ibis tutorials
    • Next one at CCGrid 2008 (19 May, Lyon)