LPPN – Technology Choices


LPPN – Technology Choices

Distribution Efforts in Kepler / PTII

  • C++ for core libraries

    • Actor, Port, Token as C++ classes

  • Parallel Virtual Machine (PVM) for parallelization

    • Thin layer on top of machine clusters (pool of hosts)

    • Message passing

    • Implemented simple RPC on top of this

  • SWIG for adding higher-level languages on top of the core

    • Perl/Python interfaces for writing actors

    • Perl interfaces for composing and starting workflows

    • Java interface for composing, starting, monitoring workflows

  • Remote execution of a complete workflow

    • Hydrant (Tristan King)

    • Web service for remote execution (Jianwu Wang)

    • Parameter sweeps with Nimrod/K (Colin Enticott, David Abramson, Ilkay Altintas)

  • Distribution within actors

    • “Plumbing Workflows” with ad-hoc SSH control (Norbert Podhorszki)

    • Globus actors in Kepler: GlobusJob, GlobusProxy, GridFTP, GridJob.

    • gLite actors available through ITER

    • Web-service execution by actors

  • Distribution of some or all actors

    • Distributed SDF Director (Daniel Cuadrado)

    • Pegasus Director (Daniel Cuadrado and Yang Zhao)

    • Master-Slave Distributed Execution (Chad Berkley and Lucas Gilbert) with DistributedCompositeActor

    • PPN Director (Daniel Zinn and Xuan Li)

Thanks to Jianwu for help with this overview
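The Actor, Port, and Token classes of the core libraries can be sketched as follows (in Python rather than the C++ of the actual core; the `Doubler` actor and the `connect` helper are illustrative inventions, and an in-process FIFO stands in for PVM message passing):

```python
from queue import Queue

class Token:
    """A unit of data flowing between actors."""
    def __init__(self, value):
        self.value = value

class Port:
    """A named endpoint; connected ports share a FIFO channel."""
    def __init__(self, name):
        self.name = name
        self.channel = None  # set by connect()

    def send(self, token):
        self.channel.put(token)

    def receive(self):
        return self.channel.get()

def connect(out_port, in_port):
    """Wire an output port to an input port via a shared FIFO."""
    q = Queue()
    out_port.channel = q
    in_port.channel = q

class Actor:
    """Base class: concrete actors override fire()."""
    def __init__(self, name):
        self.name = name
        self.ports = {}

    def add_port(self, name):
        self.ports[name] = Port(name)
        return self.ports[name]

    def fire(self):
        raise NotImplementedError

class Doubler(Actor):
    """Illustrative actor: reads one token, emits its value doubled."""
    def fire(self):
        t = self.ports["in"].receive()
        self.ports["out"].send(Token(t.value * 2))
```

In the real engine each actor would run in its own PVM process and the channels would be message-passing links between hosts; the port abstraction hides that difference from actor code.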

Lightweight Parallel PN Engine (LPPN)

  • Motivation

    • PN as an inherently parallel MoC (model of computation)

    • Build a simple, efficient, distributed PN engine

  • Design Requirements

    • KISS

    • Avoid centralization as much as possible

    • Provide Actor and Port abstractions

    • Allow actors to be written in different languages

    • “Experimentation Platform” for scheduling, data routing, …

  • Design Principles

    • One actor = one process

    • Data communication directly between actors

    • Central component only for setup, termination detection, …
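A minimal sketch of these principles, using Python's multiprocessing in place of PVM (all names are illustrative): each actor runs as its own OS process, tokens flow over a direct channel, and the central code only sets up, feeds, and tears down the pipeline.

```python
from multiprocessing import Process, Queue

def actor_process(recv_q, send_q):
    """One actor = one process. The actor loops over incoming tokens;
    communication is direct message passing, with no central broker
    on the data path."""
    while True:
        token = recv_q.get()
        if token is None:          # termination token from the coordinator
            send_q.put(None)
            break
        send_q.put(token * 2)      # this actor's firing logic

def run_pipeline(inputs):
    # The central component only launches the actor, injects tokens,
    # and detects termination -- mirroring the "setup only" principle.
    a_in, a_out = Queue(), Queue()
    p = Process(target=actor_process, args=(a_in, a_out))
    p.start()
    for x in inputs:
        a_in.put(x)
    a_in.put(None)                 # signal end-of-stream
    results = []
    while (t := a_out.get()) is not None:
        results.append(t)
    p.join()
    return results
```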

PPN Director – Architecture Overview

PPN Director – Design Decisions

  • Proxy-Actors in Kepler represent Actors in LPPN

    • Repository of available LPPN Actors in XML file

      • Actor-name

      • Parameters

      • Ports

    • Generic PPN-Actor is configured using this information

    • Monitor actor state

    • Send data from Kepler Actors to LPPN actors and vice versa

  • PPN Director

    • Start Actors with parameters, deployment info

    • Connect Actors according to Kepler workflow

    • Start and stop workflow execution
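A sketch of how the proxy-actor configuration could work (the XML schema, actor name, and class names here are illustrative guesses, not the actual Kepler/LPPN repository format):

```python
import xml.etree.ElementTree as ET

# Hypothetical repository entry: actor name, parameters, and ports,
# as the slide describes for the XML actor repository.
REPOSITORY_XML = """
<actors>
  <actor name="WordCount">
    <parameter name="caseSensitive" default="false"/>
    <port name="in"  direction="input"/>
    <port name="out" direction="output"/>
  </actor>
</actors>
"""

class ProxyActor:
    """Generic Kepler-side stand-in for an LPPN actor, configured
    from its repository entry: same name, parameters, and ports."""
    def __init__(self, entry):
        self.name = entry.get("name")
        self.parameters = {p.get("name"): p.get("default")
                           for p in entry.findall("parameter")}
        self.ports = {p.get("name"): p.get("direction")
                      for p in entry.findall("port")}

def load_repository(xml_text):
    """Parse the repository and build one generic proxy per actor."""
    root = ET.fromstring(xml_text)
    return {a.get("name"): ProxyActor(a) for a in root.findall("actor")}
```

At workflow start, the director would use the same metadata to launch the matching LPPN process with its parameters and to wire its ports according to the Kepler graph.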


Future Directions

Kepler PPN Director

Communication with Regular PN Actors

  • Adding black-box (Java) actors as LPPN actors

  • Detailed measurements of where actors spend their time

  • Automatic migration of actors under CPU congestion (using a spring/mass model)

  • Automatic data parallelism (actor cloning and scatter+gather)

  • Overhaul of LPPN, perhaps using Java, RMI, and JNI

  • Better resource management

  • Idea: Use Kepler as a sophisticated GUI

    • Create, run and monitor LPPN workflows

  • Marrying LPPN and Kepler – The PPN Director

    • Drag’n’drop workflow creation (1:1 mapping for actors)

    • Parameter support

    • Hints for deployment from user

    • Monitor token sending and receiving

    • Monitor actor status
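The spring/mass placement idea listed under future directions could look roughly like this (an entirely illustrative 1-D sketch, not the actual algorithm): treat each communication channel as a spring whose stiffness grows with token traffic, and relax actor positions toward a low-energy placement. A real version would add repulsive terms for CPU load so actors spread across hosts instead of collapsing onto one.

```python
def relax(positions, springs, steps=200, k=0.1):
    """positions: actor -> 1-D coordinate (e.g. along a host axis).
    springs: (a, b, weight) triples, weighted by token traffic.
    Iteratively applies Hooke's-law forces so heavily communicating
    actors drift toward each other."""
    pos = dict(positions)
    for _ in range(steps):
        force = {a: 0.0 for a in pos}
        for a, b, w in springs:
            d = pos[b] - pos[a]
            force[a] += k * w * d   # a pulled toward b
            force[b] -= k * w * d   # b pulled toward a
        for a in pos:
            pos[a] += force[a]
    return pos
```

Mapping the relaxed coordinates back onto discrete hosts (and deciding when migration is worth its cost) is the hard part this sketch leaves out.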

Monitoring Support

  • PPN Actors periodically probe LPPN actors for info

    • Number of tokens sent and received

    • Current actor state:

      • Working

      • Blocked on receive

      • Blocked on write

      • Sending BLOB tokens

  • Displayed on actor while workflow is running
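The probed counters and states could be modeled like this (the state names follow the slide; the probe API is an invented illustration of what an LPPN actor might expose to its Kepler proxy):

```python
from enum import Enum

class ActorState(Enum):
    WORKING = "working"
    BLOCKED_ON_RECEIVE = "blocked on receive"
    BLOCKED_ON_WRITE = "blocked on write"
    SENDING_BLOB = "sending BLOB tokens"

class LPPNActorStats:
    """Counters an LPPN actor could expose for periodic probing
    by its PPN proxy actor in Kepler."""
    def __init__(self):
        self.tokens_sent = 0
        self.tokens_received = 0
        self.state = ActorState.BLOCKED_ON_RECEIVE

    def probe(self):
        """Snapshot returned to the proxy, for display on the actor
        icon while the workflow is running."""
        return {"sent": self.tokens_sent,
                "received": self.tokens_received,
                "state": self.state.value}
```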


  • Sending data from regular Kepler actors to LPPN actors and vice versa

Parallel Virtual Machines in Kepler
Daniel Zinn, Xuan Li, Bertram Ludäscher
University of California at Davis