
The Grid-Occam Project




  1. The Grid-Occam Project Dipl.-Inf. Bernhard Rabe Dipl.-Inf. Peter Tröger Dr. Martin von Löwis Prof. Dr. rer. nat. habil. Andreas Polze Operating Systems & Middleware Group Hasso-Plattner-Institute, University of Potsdam

  2. Outline
  • Introduction to Occam
  • The Grid-Occam idea
  • The Grid-Occam lecture
  • Grid-Occam programs
  • Conclusions

  3. Occam History
  • Based on Sir C. A. R. "Tony" Hoare's ideas of Communicating Sequential Processes (CSP)
  • Parallel processing language
  • Abstracts from the underlying hardware, software, and network environment
  • Developed by INMOS for transputer systems
    • Distributed-memory system
    • 4 high-speed hardware links per transputer
    • Routing of messages between processors without a direct link (virtual channel router)

  4. Occam Language
  • Primitive actions
    • Variable assignment
    • Channel output
    • Channel input
    • SKIP
    • STOP
  • Sequential process combination (SEQ)
  • Parallel process combination (PAR)

  PROC hello()
    INT x,y:
    CHAN OF INT c,d:
    PAR
      SEQ
        c ! 117
        d ? x
      SEQ
        c ? y
        d ! 118
  :

  The two SEQ branches run in parallel: the first sends 117 on channel c and then receives x from d, while the second receives y from c and then sends 118 on d.

  5. ALT and Arrays
  • Alternative process combination (ALT), illustrated below
  • Arrays

  ALT
    keyboard ? var1
      to.computer ! var1
    from.computer ? var1
      screen ! var1

  VAL num.of.fields IS 500:
  [num.of.fields]REAL64 a,b:
  [num.of.fields]CHAN OF REAL64 c:
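  ALT executes whichever guarded input becomes ready first. A rough C# illustration of that "first ready guard wins" behavior, using .NET's buffered System.Threading.Channels rather than occam's rendezvous channels; all names here are illustrative and not part of the Grid-Occam runtime:

  using System;
  using System.Threading.Channels;
  using System.Threading.Tasks;

  // Illustrative model of occam's ALT: wait on several inputs at once and
  // take the one that becomes ready first.
  public static class AltDemo
  {
      public static async Task Main()
      {
          var keyboard = Channel.CreateUnbounded<int>();
          var fromComputer = Channel.CreateUnbounded<int>();

          // One of the guards becomes ready after a short delay.
          var feeder = Task.Run(async () =>
          {
              await Task.Delay(100);
              await fromComputer.Writer.WriteAsync(42);
          });

          // ALT: wait until either channel has input available.
          Task<bool> kb = keyboard.Reader.WaitToReadAsync().AsTask();
          Task<bool> fc = fromComputer.Reader.WaitToReadAsync().AsTask();
          Task<bool> ready = await Task.WhenAny(kb, fc);

          // Execute the body of the guard that fired.
          if (ready == kb && keyboard.Reader.TryRead(out int v1))
              Console.WriteLine("keyboard ? " + v1);
          else if (fromComputer.Reader.TryRead(out int v2))
              Console.WriteLine("from.computer ? " + v2);

          await feeder;
      }
  }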

  6. Rules in Occam
  • Rendezvous behavior of channels (sketched below)
    • The receiver blocks until the sender has written the value
    • The sender continues only after the receiver has read the value
  • A variable may be written by only one of the processes running in parallel
  • Likewise, only a single process may read from a given channel, and only a single process may write to it
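  A minimal C# sketch of this rendezvous rule, assuming a single writer and a single reader per channel; the class is illustrative only, not the project's actual runtime:

  using System.Threading;

  // Hypothetical rendezvous channel: neither side proceeds until the
  // hand-off is complete, matching the rules above.
  public class RendezvousChannel<T>
  {
      private T _value;
      private readonly SemaphoreSlim _valueReady = new SemaphoreSlim(0, 1);
      private readonly SemaphoreSlim _valueTaken = new SemaphoreSlim(0, 1);

      // Channel output (c ! v): publish the value, then block until it is read.
      public void Write(T value)
      {
          _value = value;
          _valueReady.Release();   // signal the reader
          _valueTaken.Wait();      // block until the reader has taken the value
      }

      // Channel input (c ? v): block until a value is available, then release the writer.
      public T Read()
      {
          _valueReady.Wait();      // block until the writer has published
          T value = _value;
          _valueTaken.Release();   // unblock the writer
          return value;
      }
  }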

  7. Building a Grid Application
  (Layer diagram: grid applications such as Grid-Occam and Cactus sit on top of high-level grid abstractions like GridLab GAT, CoG, GridSphere, execution portals, and the GO runtime, as well as programming models for remote procedure call and Grid RPC (NetSolve), control-parallel programming (Satin), and message passing (MPICH-G2, PACX MPI, AMWAT). All of these rest on the grid infrastructure layer (Globus, Unicore, WSRF), which provides resource discovery, job execution, and intra-/inter-node communication.)

  8. The Grid-Occam Idea
  • Bring parallelism as a first-level language construct to modern distributed environments
  • Consistent programming model for different granularities of distribution (threads, cluster nodes, grid nodes)
  • Support for heterogeneous execution platforms
    • .NET implementation on Rotor (Mac OS X, Windows, FreeBSD)
    • Integration of legacy source code (e.g. Fortran)
  • Clear distinction between the language compiler and the infrastructure-dependent runtime library
    • Multithreaded runtime library
    • MPI runtime library
    • Grid runtime library
  • Support for the nested nature of the granularity levels

  9. Compiler and Libraries
  • Occam compiler
    • Generates C# code
    • Generated code uses a common interface for all runtime library implementations (see the sketch below)
  • Multithreaded (MT) runtime library
    • Channels with interlocked shared memory; rendezvous behavior through multiple semaphore locks
    • Shared memory for global variables
    • .NET threads for parallel Occam processes
  • MPI runtime library
    • Minimal topology information (RANK, SIZE)
    • Fully interconnected node topology
    • Fine-granular parallel execution on a single node by reusing the MT library
    • Rendezvous channels through a synchronized send operation
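  What such a common interface might look like is sketched below. The member names are assumptions for illustration, not the project's actual API; the generated code on slide 13 shows the real surface (Occam.Channel, Runtime.CreateSequence, Occam.Runtime.Parallel), and the rendezvous channel sketched after slide 6 would sit behind the channel type:

  using System;

  // Hypothetical runtime-neutral surface: the compiler emits calls against
  // members like these, and each library (MT, MPI, grid) supplies its own
  // implementation behind them.
  public interface IOccamChannel
  {
      void Write(object value);   // rendezvous send: blocks until received
      object Read();              // rendezvous receive: blocks until sent
  }

  public interface IOccamRuntime
  {
      IOccamChannel CreateChannel();

      // Mirror of occam's PAR: run all process bodies in parallel and wait
      // for every one of them to terminate before returning.
      void Parallel(params Action[] processes);
  }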

  10. Grid Runtime Library Idea
  • Grid-Occam as a coordination language
    • External code represented as an Occam procedure
    • Runtime library submits the external executable to a grid resource
    • Uses standard job-submission APIs (DRMAA, GAT, CoG)
  • Best-effort process placement
    • Utilizes infrastructure information (e.g. MDS)
    • Considers channel bandwidth information (NWS)
    • (Partial) task graph generated by the compiler
    • Mapping algorithms from the cluster research community
  • Distributed computation in the grid
    • Based on grid-enabled MPI (MPICH-G2)
    • Based on WSRF services (Occam channel service)
  • Proof of concept
    • Expose web services / WSRF services as Occam channels (see the sketch below)
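  A speculative sketch of that proof of concept: an Occam channel whose read and write operations are forwarded to a remote channel service, so two grid jobs can rendezvous without a direct connection. The endpoint layout and blocking strategy here are invented; the actual implementation is based on WSRF.

  using System;
  using System.Net.Http;

  // Hypothetical grid channel backed by a remote "Occam channel service".
  public class WebServiceChannel
  {
      private static readonly HttpClient Http = new HttpClient();
      private readonly string _endpoint;

      public WebServiceChannel(string endpoint) { _endpoint = endpoint; }

      // Blocks until the service confirms a reader has taken the value,
      // preserving occam's rendezvous semantics across grid nodes.
      public void Write(int value)
      {
          HttpResponseMessage response =
              Http.PostAsync(_endpoint + "/write", new StringContent(value.ToString())).Result;
          response.EnsureSuccessStatusCode();
      }

      // Long-polls the service until a writer has published a value.
      public int Read()
      {
          string body = Http.GetStringAsync(_endpoint + "/read").Result;
          return int.Parse(body);
      }
  }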

  11. Grid-Occam Execution Environment

  12. The Accompanying Lecture
  • Held in the summer semester 2004
  • 5 groups of 3-4 students
  • All groups produced .NET-based distributed runtime environments for Occam
  • Runtime environments
    • Support for SEQ, PAR, ALT
    • "Hello World" using 2 processes / 2 channels as the initial assignment
    • A pthread implementation of the runtime was given to the students
  • Compiler construction
    • Focus on tools: ANTLR, Coco/R, lex/yacc, kimwitu
    • All compilers support a common Occam subset plus individual extensions

  13. Translated into C#

  public class MainClass {
    [ProgramEntry]
    public static void _main5() {
      Occam.Channel _c_1 = new Occam.Channel(), _d_2 = new Occam.Channel();
      int _x_3 = 0;
      Parallel0 parallel0 = (Parallel0)Runtime.CreateSequence(typeof(Parallel0), _x_3, _d_2, _c_1);
      int _y_4 = 0;
      Parallel1 parallel1 = (Parallel1)Runtime.CreateSequence(typeof(Parallel1), _y_4, _d_2, _c_1);
      Occam.Runtime.Parallel(parallel0, parallel1);
      parallel0.Result(out _x_3);
      parallel1.Result(out _y_4);
    }
  }

  namespace Occam.UserCode {
    public class Parallel0 : Occam.DistributedSequence {
      int _x_3;
      Occam.Channel _d_2;
      Occam.Channel _c_1;

      public Parallel0(int _x_3, Occam.Channel _d_2, Occam.Channel _c_1) {
        this._x_3 = _x_3;
        this._d_2 = _d_2;
        this._c_1 = _c_1;
      }

      public override void Run() {
        _c_1.Write((int)(117));
        _x_3 = (int)_d_2.Read();
      }

      public void Result(out int _x_3) { _x_3 = this._x_3; }
    }

    public class Parallel1 : Occam.DistributedSequence {
      // …
      public override void Run() {
        _y_4 = (int)_c_1.Read();
        _d_2.Write((int)(118));
      }

      public void Result(out int _y_4) { _y_4 = this._y_4; }
    }
  }

  14. Translated into C# (2nd approach)

  [OccamProgram()]
  public class MyOccamProgram {
    private IExecutionEnvironment env;
    private ChannelFactory cf;
    private IVariableStore vs;
    private int x3_;
    private int y4_;

    public MyOccamProgram(IExecutionEnvironment executionEnvironment) {
      env = executionEnvironment;
      cf = env.getChannelFactory();
    }

    public MyOccamProgram(MyOccamProgram originalProcess) {
      env = originalProcess.env;
      cf = originalProcess.cf;
      x3_ = originalProcess.x3_;
      y4_ = originalProcess.y4_;
    }

    public static void Main(string[] args) {
      BootstrapLoader loader = new BootstrapLoader(args, Assembly.GetCallingAssembly());
    }

    public virtual void SEQ1() {
      WriteChannel c1__ = cf.getWriteChannel("c1");
      c1__.write(117);
      ReadChannel d1__ = cf.getReadChannel("d2");
      x3_ = ((System.Int32)d1__.read());
    }

    public virtual void SEQ2() {
      ReadChannel c2__ = cf.getReadChannel("c1");
      y4_ = ((System.Int32)c2__.read());
      WriteChannel d2__ = cf.getWriteChannel("d2");
      d2__.write(118);
    }

    [OccamMainProcess()]
    public virtual void PAR1() {
      ArrayList occamProcesses = new ArrayList();
      occamProcesses.Add(new OccamProcess(SEQ1));
      occamProcesses.Add(new OccamProcess(SEQ2));
      env.execPAR(occamProcesses);
    }
  }

  15. Optimizing Data Management

  Moving the variable declarations from the procedure scope into the individual PAR branches makes x and y local to a single process, so no shared variable state has to be kept consistent between nodes; only the channels remain shared.

  Before:
  PROC hello()
    INT x,y:
    CHAN OF INT c,d:
    PAR
      SEQ
        c ! 117
        d ? x
      SEQ
        c ? y
        d ! 118
  :

  After:
  PROC hello()
    CHAN OF INT c,d:
    PAR
      INT x:
      SEQ
        c ! 117
        d ? x
      INT y:
      SEQ
        c ? y
        d ! 118
  :

  16. Conclusions
  • First implementation of Occam targeting a common intermediate language, and the first Occam implementation with web-service channels
  • Investigation of paradigms, design patterns, and implementation techniques for enhancing middleware technology for predictable computing
  • Links grid computing with parallel computing techniques
  • Teaching vehicle for compiler construction, concurrent programming, and (weakly consistent) distributed shared memory models

  17. Thank you, any questions? http://www.dcl.hpi.uni-potsdam.de
