
Programming with GRID superscalar

Rosa M. Badia

Toni Cortes

Pieter Bellens, Vasilis Dialinos, Jesús Labarta, Josep M. Pérez, Raül Sirvent

CLUSTER 2005 Tutorial

Boston, 26th September 2005

Tutorial Detailed Description

Introduction to GRID superscalar (55%) 9:00AM-10:30AM

  • GRID superscalar objective
  • Framework overview
  • A sample GRID superscalar code
  • Code generation: gsstubgen
  • Automatic configuration and compilation: Deployment center
  • Runtime library features

Break 10:30-10:45am

Programming with GRID superscalar (45%) 10:45AM-Noon

  • User interface:
    • The IDL file
    • GRID superscalar API
    • User resource constraints and performance cost
    • Configuration files
    • Use of checkpointing
  • Use of the Deployment center
  • Programming Examples

Introduction to GRID superscalar
  • GRID superscalar objective
  • Framework overview
  • A sample GRID superscalar code
  • Code generation: gsstubgen
  • Automatic configuration and compilation: Deployment center
  • Runtime library features


1. GRID superscalar Objective
  • Ease the programming of GRID applications
  • Basic idea: ns → seconds/minutes/hours

[Figure: the superscalar processor idea carried over to the Grid; instead of instructions taking ns, the "instructions" are whole simulations or programs taking seconds/minutes/hours.]

1. GRID superscalar Objective
  • Reduce the development complexity of Grid applications to the minimum
    • writing an application for a computational Grid may be as easy as writing a sequential application
  • Target applications: composed of tasks, most of them repetitive
    • Granularity of the tasks at the level of simulations or programs
    • Data objects are files

2. Framework overview
  • Behavior description
  • Elements of the framework

2.1 Behavior description

for (int i = 0; i < MAXITER; i++) {
    newBWd = GenerateRandom();
    subst(referenceCFG, newBWd, newCFG);
    dimemas(newCFG, traceFile, DimemasOUT);
    post(newBWd, DimemasOUT, FinalOUT);
    if (i % 3 == 0) Display(FinalOUT);
}

fd = GS_Open(FinalOUT, R);
printf("Results file:\n"); present(fd);
GS_Close(fd);

Input/output files


2.1 Behavior description

[Figure: the loop unrolls into a task dependence graph of Subst, DIMEMAS, EXTRACT and Display instances, ending in the GS_open call, to be executed on the CIRI Grid.]


2.1 Behavior description

[Figure: the same task graph mapped onto the CIRI Grid; independent Subst, DIMEMAS, EXTRACT and Display instances run concurrently on the Grid resources while GS_open waits for the final result.]

2.2 Elements of the framework
  • User interface
  • Code generation: gsstubgen
  • Deployment center
  • Runtime library

2.2 Elements of the framework
  • User interface
    • Assembly language for the GRID
      • Well defined operations and operands
      • Simple sequential programming on top of it (C/C++, Perl, …)
    • Three components:
      • Main program
      • Subroutines/functions
      • Interface Definition Language (IDL) file

2.2 Elements of the framework

2. Code generation: gsstubgen

  • Generates the code necessary to build a Grid application from a sequential application
    • Function stubs (master side)
    • Main program (worker side)

2.2 Elements of the framework

3. Deployment center

  • Designed to help the user with:
    • Grid configuration setting
    • Deployment of applications in local and remote servers

2.2 Elements of the framework

4. Runtime library

  • Transparent access to the Grid
  • Automatic parallelization between operations at run-time
    • Uses architectural concepts from microprocessor design
    • Instruction window (DAG), Dependence analysis, scheduling, locality, renaming, forwarding, prediction, speculation,…
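As a rough illustration of the last bullet, the sketch below shows one possible in-memory shape for such a task window; every type and field name here is invented for the example and is not the runtime's actual data structure.

#include <stdio.h>

#define MAX_FILES 8
#define MAX_DEPS  32

/* Minimal sketch (not the real runtime): the "instruction window" as a DAG of
 * tasks whose operands are file names. */
typedef struct task {
    int          id;
    const char  *in_files[MAX_FILES];   /* operands read (files)             */
    const char  *out_files[MAX_FILES];  /* operands written (files)          */
    int          n_in, n_out;
    struct task *succ[MAX_DEPS];        /* DAG edges to dependent tasks      */
    int          n_succ;
    int          pending_preds;         /* task is ready when this reaches 0 */
} task;

/* A task becomes eligible for scheduling once all its predecessors have
 * finished, like an instruction leaving the reservation stations. */
static int task_is_ready(const task *t) { return t->pending_preds == 0; }

int main(void)
{
    task t1 = { .id = 1, .out_files = {"newCFG"}, .n_out = 1 };
    task t2 = { .id = 2, .in_files = {"newCFG"}, .n_in = 1, .pending_preds = 1 };

    t1.succ[t1.n_succ++] = &t2;                     /* RaW dependence on "newCFG" */
    printf("t2 ready? %d\n", task_is_ready(&t2));   /* prints 0: must wait for t1 */
    return 0;
}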

3. A sample GRID superscalar code
  • Three components:
    • Main program
    • Subroutines/functions
    • Interface Definition Language (IDL) file
  • Programming languages:
    • C/C++, Perl
    • Prototype version for Java and shell script

3. A sample GRID superscalar code
  • Main program: a typical sequential program

for (int i = 0; i < MAXITER; i++) {
    newBWd = GenerateRandom();
    subst(referenceCFG, newBWd, newCFG);
    dimemas(newCFG, traceFile, DimemasOUT);
    post(newBWd, DimemasOUT, FinalOUT);
    if (i % 3 == 0) Display(FinalOUT);
}

fd = GS_Open(FinalOUT, R);
printf("Results file:\n"); present(fd);
GS_Close(fd);

3. A sample GRID superscalar code
  • Subroutines/functions

void dimemas(in File newCFG, in File traceFile, out File DimemasOUT)
{
    char command[500];

    putenv("DIMEMAS_HOME=/usr/local/cepba-tools");
    sprintf(command, "/usr/local/cepba-tools/bin/Dimemas -o %s %s", DimemasOUT, newCFG);
    GS_System(command);
}

void display(in File toplot)
{
    char command[500];

    sprintf(command, "./display.sh %s", toplot);
    GS_System(command);
}

3. A sample GRID superscalar code
  • Interface Definition Language (IDL) file
    • CORBA-IDL Like Interface:
      • In/Out/InOut files
      • Scalar values (in or out)
    • The subroutines/functions listed in this file will be executed in a remote server in the Grid.

interface MC {
    void subst(in File referenceCFG, in double newBW, out File newCFG);
    void dimemas(in File newCFG, in File traceFile, out File DimemasOUT);
    void post(in File newCFG, in File DimemasOUT, inout File FinalOUT);
    void display(in File toplot);
};

3. A sample GRID superscalar code
  • GRID superscalar programming requirements
    • Main program (master side):
      • Begin/finish with calls to GS_On, GS_Off
      • Open/close files with: GS_FOpen, GS_Open, GS_FClose, GS_Close
      • Possibility of explicit synchronization: GS_Barrier
      • Possibility of declaration of speculative areas: GS_Speculative_End(func)
    • Subroutines/functions (worker side):
      • Temporary files on the local directory, or ensure uniqueness of names per subroutine invocation (see the sketch after this list)
      • GS_System instead of system
      • All input/output files required must be passed as arguments
      • Possibility of throwing exceptions: GS_Throw
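A minimal sketch of a worker-side function that follows these rules (the task filter and the scripts it drives are made up for the example):

#include <stdio.h>
#include <unistd.h>   /* getpid */

/* Hypothetical worker-side task: temporary files get a unique, local name per
 * invocation, external tools are launched with GS_System instead of system,
 * and every file the task touches comes in through the parameter list. */
void filter(char *inFile, char *outFile)
{
    char tmpname[64];
    char command[512];

    /* unique temporary name in the task's local working directory */
    sprintf(tmpname, "filter_tmp_%d.dat", (int)getpid());

    sprintf(command, "./preprocess.sh %s %s", inFile, tmpname);
    GS_System(command);

    sprintf(command, "./filter.sh %s %s", tmpname, outFile);
    GS_System(command);
}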


4. Code generation: gsstubgen

[Figure: gsstubgen reads app.idl and generates app-stubs.c, app.h, app_constraints.cc, app_constraints_wrapper.cc and app_constraints.h for the client side and app-worker.c for the server side; the user supplies app.c and app-functions.c.]

4. Code generation: gsstubgen

app-stubs.c                  IDL function stubs
app.h                        IDL function headers
app_constraints.cc           User resource constraints and performance cost
app_constraints.h
app_constraints_wrapper.cc
app-worker.c                 Main program for the worker side (calls to user functions)

4. Code generation: gsstubgen

Sample stubs file

#include <stdio.h>
#include <stdlib.h>   /* malloc, atoi, getenv */

int gs_result;

void Subst(file referenceCFG, double seed, file newCFG)
{
    /* Marshalling/demarshalling buffers */
    char *buff_seed;

    /* Allocate buffers */
    buff_seed = (char *)malloc(atoi(getenv("GS_GENLENGTH")) + 1);

    /* Parameter marshalling */
    sprintf(buff_seed, "%.20g", seed);

    Execute(SubstOp, 1, 1, 1, 0, referenceCFG, buff_seed, newCFG);

    /* Deallocate buffers */
    free(buff_seed);
}

4. Code generation: gsstubgen

Sample worker main file

#include <stdio.h>
#include <stdlib.h>   /* atoi, strtod */

int main(int argc, char **argv)
{
    enum operationCode opCod = (enum operationCode)atoi(argv[2]);

    IniWorker(argc, argv);
    switch (opCod) {
    case SubstOp:
        {
            double seed;
            seed = strtod(argv[4], NULL);
            Subst(argv[3], seed, argv[5]);
        }
        break;
    }
    EndWorker(gs_result, argc, argv);
    return 0;
}

4. Code generation: gsstubgen

Sample constraints skeleton file

#include "mcarlo_constraints.h"

#include "user_provided_functions.h"

string Subst_constraints(file referenceCFG, double seed, file newCFG) {

string constraints = "";

return constraints;

}

double Subst_cost(file referenceCFG, double seed, file newCFG) {

return 1.0;

}

4. Code generation: gsstubgen

Sample constraints wrapper file (1)

#include <stdio.h>

typedef ClassAd (*constraints_wrapper)(char **_parameters);
typedef double (*cost_wrapper)(char **_parameters);

// Prototypes
ClassAd Subst_constraints_wrapper(char **_parameters);
double Subst_cost_wrapper(char **_parameters);

// Function tables
constraints_wrapper constraints_functions[4] = {
    Subst_constraints_wrapper,
};

cost_wrapper cost_functions[4] = {
    Subst_cost_wrapper,
};

4. Code generation: gsstubgen

Sample constraints wrapper file (2)

ClassAd Subst_constraints_wrapper(char **_parameters) {
    char **_argp;

    // Generic buffers
    char *buff_referenceCFG; char *buff_seed;
    // Real parameters
    char *referenceCFG; double seed;

    // Read parameters
    _argp = _parameters;
    buff_referenceCFG = *(_argp++); buff_seed = *(_argp++);

    // Datatype conversion
    referenceCFG = buff_referenceCFG; seed = strtod(buff_seed, NULL);

    string _constraints = Subst_constraints(referenceCFG, seed);

    ClassAd _ad;
    ClassAdParser _parser;
    _ad.Insert("Requirements", _parser.ParseExpression(_constraints));

    // Free buffers
    return _ad;
}

4. Code generation: gsstubgen

Sample constraints wrapper file (3)

double Subst_cost_wrapper(char **_parameters) {
    char **_argp;

    // Generic buffers
    char *buff_referenceCFG; char *buff_seed;
    // Real parameters
    char *referenceCFG; double seed;

    // Allocate buffers
    // Read parameters
    _argp = _parameters;
    buff_referenceCFG = *(_argp++);
    buff_seed = *(_argp++);

    // Datatype conversion
    referenceCFG = buff_referenceCFG;
    seed = strtod(buff_seed, NULL);

    double _cost = Subst_cost(referenceCFG, seed);

    // Free buffers
    return _cost;
}


4. Code generation: gsstubgen

Binary building

[Figure: on the client, app.c, app-stubs.c, app_constraints.cc and app_constraints_wrapper.cc are linked with the GRID superscalar runtime and GT2; on each server i, app-worker.c and app-functions.c form the worker binary; client and servers interact through the Globus services gsiftp and gram.]

4. Code generation: gsstubgen

Putting it all together: involved files

User provided files: app.idl, app.c, app-functions.c

Files generated from the IDL: app-stubs.c, app.h, app-worker.c, app_constraints.cc, app_constraints_wrapper.cc, app_constraints.h

Files generated by the deployer: config.xml, (projectname).xml

4. Code generation: gsstubgen

GRID superscalar applications architecture

5. Deployment center
  • Java-based GUI. Allows:
    • Specification of Grid computation resources: host details, libraries location…
    • Selection of the Grid configuration
  • Grid configuration checking process:
    • Aliveness of host (ping)
    • Globus service is checked by submitting a simple test
    • Sends a remote job that copies the code needed in the worker, and compiles it
  • Automatic deployment
    • sends and compiles code in the remote workers and the master
  • Configuration files generation

5. Deployment center

Automatic deployment

6. Runtime library features
  • Initial prototype over Condor and MW
  • Current version over Globus 2.4, Globus 4.0, ssh/scp, Ninf-G2
  • File transfer, security and other features provided by the middleware (Globus, …)

6. Runtime library features
  • Data dependence analysis
  • File renaming
  • Task scheduling
  • Resource brokering
  • Shared disks management and file transfer policy
  • Scalar results collection
  • Checkpointing at task level
  • API functions implementation


6.1 Data-dependence analysis

[Figure: task dependence graph of Subst, DIMEMAS, EXTRACT and Display instances derived from the file parameters.]
  • Data dependence analysis
    • Detects RaW, WaR, WaW dependencies based on file parameters
  • Oriented to simulations, FET solvers, bioinformatic applications
    • Main parameters are data files
  • Tasks’ Directed Acyclic Graph is built based on these dependencies
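The sketch below shows the flavour of that test (hypothetical helper, not the runtime's code): a newly created task is compared against an earlier one through their lists of input and output files.

#include <string.h>

enum dep { NONE, RAW, WAR, WAW };

static int in_list(const char *f, const char *const *list, int n)
{
    for (int i = 0; i < n; i++)
        if (strcmp(f, list[i]) == 0) return 1;
    return 0;
}

/* Classify the dependence between an earlier task (old) and a new one (new)
 * from their file parameter lists. */
enum dep file_dependence(const char *const *old_in,  int n_old_in,
                         const char *const *old_out, int n_old_out,
                         const char *const *new_in,  int n_new_in,
                         const char *const *new_out, int n_new_out)
{
    for (int i = 0; i < n_new_in; i++)                               /* Read after Write  */
        if (in_list(new_in[i], old_out, n_old_out)) return RAW;
    for (int i = 0; i < n_new_out; i++) {
        if (in_list(new_out[i], old_out, n_old_out)) return WAW;     /* Write after Write */
        if (in_list(new_out[i], old_in,  n_old_in))  return WAR;     /* Write after Read  */
    }
    return NONE;
}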

6.2 File renaming

while (!end_condition())
{
    T1(…, …, "f1");
    T2("f1", …, …);
    T3(…, …, …);
}

  • WaW and WaR dependencies are avoidable with renaming (see the sketch after the figure)

[Figure: each iteration's write to "f1" is renamed to a fresh instance ("f1_1", "f1_2", …), so the WaR and WaW hazards between the T1, T2 and T3 instances of successive iterations disappear.]
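A minimal sketch of the renaming idea (hypothetical code, fixed-size table for brevity): every write to a logical file name gets a fresh versioned physical name, so later writers never collide with earlier readers or writers.

#include <stdio.h>
#include <string.h>

#define MAX_RENAMES 128

static struct { char logical[64]; char physical[64]; int version; }
    table[MAX_RENAMES];
static int n_entries;

/* Called when a task uses "name" as an output file: returns the physical
 * instance to write, creating a new version if the name was already used. */
const char *rename_output(const char *name)
{
    for (int i = 0; i < n_entries; i++)
        if (strcmp(table[i].logical, name) == 0) {
            table[i].version++;
            snprintf(table[i].physical, sizeof table[i].physical,
                     "%s_%d", name, table[i].version);   /* f1 -> f1_1, f1_2, ... */
            return table[i].physical;
        }
    strcpy(table[n_entries].logical, name);
    strcpy(table[n_entries].physical, name);   /* first version keeps its own name */
    return table[n_entries++].physical;
}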

6.3 Task scheduling
  • Distributed between the Execute call, the callback function and the GS_Barrier call
  • Possibilities
    • The task can be submitted immediately after being created
    • Task waiting for resource
    • Task waiting for data dependency
  • Task submission composed of
    • File transfer
    • Task submission
    • All specified in Globus RSL (for Globus case)

6.3 Task scheduling
  • Temporal directory created in the server working directory for each task
  • Calls to globus:
    • globus_gram_client_job_request
    • globus_gram_client_callback_allow
    • globus_poll_blocking
  • End of task notification: Asynchronous state-change callbacks monitoring system
    • globus_gram_client_callback_allow()
    • callback_func function
  • Data structures update in Execute function, GRID superscalar primitives and GS_Barrier
  • GS_Barrier primitive before ending the program that waits for all tasks (performed inside GS_Off)

6.4 Resource brokering
  • When a task is ready for execution, the scheduler tries to allocate a resource
  • Broker receives a request
    • The classAd library is used to match resource ClassAds with task ClassAds
    • If more than one resource fulfils the constraints, the resource that minimizes FT + ET is selected (a selection sketch follows), where:
      • FT = file transfer time to resource r
      • ET = execution time of task t on resource r (using the user-provided cost function)
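A small sketch of that selection step (the candidate structure is invented; the real broker works on ClassAds):

/* Choose, among the resources whose constraints evaluate to true, the one
 * that minimizes file transfer time plus estimated execution time. */
typedef struct {
    int    matches;   /* 1 if the task's resource constraints evaluate to true */
    double ft;        /* FT: file transfer time to this resource               */
    double et;        /* ET: estimated execution time (user cost function)     */
} candidate;

int select_resource(const candidate *c, int n)
{
    int best = -1;
    double best_cost = 0.0;

    for (int i = 0; i < n; i++) {
        if (!c[i].matches) continue;
        double cost = c[i].ft + c[i].et;          /* FT + ET */
        if (best < 0 || cost < best_cost) { best = i; best_cost = cost; }
    }
    return best;   /* index of the chosen resource, or -1 if none matches */
}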

6.5 Shared disks management and file transfer policy

File transfers policy

[Figure: file transfer policy with per-server working directories; the client sends input files (f1, f7) to server1 and server2 where T1 and T2 run, while the intermediate file f4 produced by T1 stays in the servers' working directories as a temporary file and is passed on to T2 without going back to the client.]

6.5 Shared disks management and file transfer policy

Shared working directories (NFS)

[Figure: working directories shared through NFS; T1 on server1 and T2 on server2 read and write f1, f4 and f7 in the same shared directory, so no transfers between the client and the servers are needed for those files.]

6.5 Shared disks management and file transfer policy

Shared input disks

[Figure: input directories on a disk shared by the client, server1 and server2; input files are read in place instead of being transferred.]

6.6 Scalar results collection
  • Collection of output parameters which are not files
    • Main code cannot continue until the scalar result value is obtained
      • Partial barrier synchronization

grid_task_1("file1.txt", "file2.cfg", var_x);
if (var_x > 10) {

  • Socket and file mechanisms provided

(var_x is the output variable: the main program blocks at the if until its value has been collected)

6.7 Task level checkpointing
  • Inter-task checkpointing
  • Recovers sequential consistency in the out-of-order execution of tasks

[Figure: tasks 0–6 during a successful execution; tasks that finish are first Completed and then Committed by the inter-task checkpoint, while later tasks are still Running.]

6.7 Task level checkpointing
  • Inter-task checkpointing
  • Recovers sequential consistency in the out-of-order execution of tasks

[Figure: the same task sequence with a failing execution; tasks up to the checkpoint have finished correctly and are Committed, while the failing task and the tasks still Running are cancelled.]

6.7 Task level checkpointing
  • Inter-task checkpointing
  • Recovers sequential consistency in the out-of-order execution of tasks

[Figure: restart after the failure; the Committed tasks are skipped, execution resumes at the first non-committed task and then continues normally.]

6.7 Task level checkpointing
  • On fail: from N versions of a file to one version (last committed version)
  • Transparent to application developer

6.8 API functions implementation
  • Master side
    • GS_On
    • GS_Off
    • GS_Barrier
    • GS_FOpen
    • GS_FClose
    • GS_Open
    • GS_Close
    • GS_Speculative_End(func)
  • Worker side
    • GS_System
    • GS_Throw

6.8 API functions implementation
  • Implicit task synchronization – GS_Barrier
    • Inserted in the user main program when required
    • Main program execution is blocked
    • globus_poll_blocking() called
    • Once all tasks are finished the program may resume

6.8 API functions implementation
  • GRID superscalar file management API primitives:
    • GS_FOpen
    • GS_FClose
    • GS_Open
    • GS_Close
  • Mandatory for file management operations in main program
  • Opening a file with write option
    • Data dependence analysis
    • Renaming is applied
  • Opening a file with read option
    • Partial barrier until the task that generates that file as an output file finishes
  • Internally, file management functions are handled as local tasks (see the sketch below)
    • Task node inserted
    • Data-dependence analysis
    • Function locally executed
  • Future work: offer a C library with GS semantics (source code with typical calls could be used)
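Putting those rules together, what a call like GS_FOpen has to decide can be sketched as follows (hypothetical helpers; this is not the library source):

/* Sketch only: R/W/A are the GS file modes; rename_for_write() and
 * wait_for_producer() stand for the renaming and partial-barrier steps
 * described above and do not exist under these names. */
FILE *GS_FOpen_sketch(char *filename, int mode)
{
    if (mode == W || mode == A) {
        /* write access: dependence analysis plus renaming, as for a task
           that produces this file */
        filename = rename_for_write(filename);
    } else {
        /* read access: partial barrier until the task generating this file
           as output has finished */
        wait_for_producer(filename);
    }
    /* the open itself is inserted as a local task and executed locally */
    return fopen(filename, mode == R ? "r" : (mode == W ? "w" : "a"));
}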

6.8 API functions implementation
  • GS_Speculative_End(func) / GS_Throw
  • Any worker can call GS_Throw at any moment
  • The task that raises the GS_Throw is the last valid task (all sequential tasks after it must be undone)
  • The speculative part is considered from the task that throws the exception until the GS_Speculative_End
  • Possibility of calling a local function when the exception is detected.

6. Runtime library features

Calls sequence without GRID superscalar

[Figure: app.c calls app-functions.c directly; everything runs on the LocalHost.]

6. Runtime library features

Calls sequence with GRID superscalar

[Figure: on the LocalHost, app.c calls app-stubs.c, which drives the GRID superscalar runtime on top of GT2 and uses app_constraints.cc / app_constraints_wrapper.cc for resource selection; on the RemoteHost, app-worker.c receives the request and calls app-functions.c.]

Tutorial Detailed Description

Introduction to GRID superscalar (55%) 9:00AM-10:30AM

  • GRID superscalar objective
  • Framework overview
  • A sample GRID superscalar code
  • Code generation: gsstubgen
  • Automatic configuration and compilation: Deployment center
  • Runtime library features

Break 10:30-10:45am

Programming with GRID superscalar (45%) 10:45AM-Noon

  • User interface:
    • The IDL file
    • GRID superscalar API
    • User resource constraints and performance cost
    • Configuration files
    • Use of checkpointing
  • Use of the Deployment center
  • Programming Examples

7. User interface
  • The IDL file
  • GRID superscalar API
  • User resource constraints and performance cost
  • Configuration files
  • Use of checkpointing

7.1 The IDL file
  • GRID superscalar uses a simplified interface definition language based on the CORBA IDL standard
  • The IDL file describes the headers of the functions that will be executed on the GRID

interface MYAPPL {
    void myfunction1(in File file1, in scalar_type scalar1, out File file2);
    void myfunction2(in File file1, in File file2, out scalar_type scalar1);
    void myfunction3(inout scalar_type scalar1, inout File file1);
};

  • Requirement
    • All functions must be void
  • All parameters defined as in, out or inout.
  • Types supported
    • filenames: special type
    • integers, floating point, booleans, characters and strings
  • Scalar_type can be:
    • short, int, long, float, double, boolean, char, and string
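As an illustration, a hypothetical interface mixing files with the supported scalar types could be written as follows (the operation and parameter names are invented):

interface SAMPLE {
    void stats(in File samples, in int nbins, out double mean, out double sigma);
    void label(in string prefix, in char tag, inout File report, out boolean ok);
};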

7.1 The IDL file
  • Example:
    • Initial call

void subst (char *referenceCFG, double newBWd, char *newCFG);

    • IDL interface

void subst (in File referenceCFG, in double newBWd, out File newCFG);

(referenceCFG is an input filename, newBWd an input parameter, and newCFG an output filename.)

7.1 The IDL file
  • Example:
    • Initial call:

void subst (char *referenceCFG, double newBWd, int *outval);

    • IDL interface

void subst (in File referenceCFG, in double newBWd, out int outval);

    • Although the output parameter type changes in the IDL file, no changes are required in the function code

(referenceCFG is an input filename, newBWd an input parameter, and outval an output integer.)

7.2 GRID superscalar API
  • Master side
    • GS_On
    • GS_Off
    • GS_Barrier
    • GS_FOpen
    • GS_FClose
    • GS_Open
    • GS_Close
    • GS_Speculative_End(func)
  • Worker side
    • GS_Throw
    • GS_System

7.2 GRID superscalar API
  • Initialization and finalization

void GS_On();

void GS_Off(int code);

    • Mandatory
    • GS_On(): at the beginning of main program code or at least before any call to functions listed in the IDL file (task call)
      • Initializations
    • GS_Off (code): at the end of main program code or at least after any task call
      • Finalizations
      • Waits for all pending tasks
      • Code:

0: normal end

-1: error. Will store necessary checkpoint information to enable later restart.
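A minimal main program respecting these rules could look like the sketch below (the task process and the header name sample.h are assumptions; the header is the one generated by gsstubgen for the application):

#include "sample.h"   /* assumed name of the gsstubgen-generated header */

int main(int argc, char **argv)
{
    GS_On();                                  /* before the first task call */

    for (int i = 0; i < 10; i++)
        process("input.txt", "output.txt");   /* task listed in the IDL file */

    GS_Off(0);                                /* 0: normal end; waits for all pending tasks */
    return 0;
}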

7.2 GRID superscalar API
  • Synchronization

void GS_Barrier();

    • Can be called at any point of the main program code (see the sketch below)
    • Waits until all previously called tasks have finished
    • Can reduce performance!
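For example (hypothetical task simulate; the timing code is only for illustration), a barrier is the natural way to know when a whole batch of asynchronous task calls has really finished:

/* ... inside the main program, between GS_On() and GS_Off();
 * needs <stdio.h> and <time.h> ... */
time_t t0 = time(NULL);

for (i = 0; i < N; i++)
    simulate(configs[i], results[i]);   /* asynchronous task calls */

GS_Barrier();                           /* wait until every simulate() has finished */
printf("batch took %ld s\n", (long)(time(NULL) - t0));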

7.2 GRID superscalar API
  • File primitives:

FILE *GS_FOpen(char *filename, int mode);
int GS_FClose(FILE *f);
int GS_Open(char *pathname, int mode);
int GS_Close(int fd);

  • Modes: R (reading), W (writing) and A (append)
  • Examples:

FILE *fp;
char STRAUX[20];

fp = GS_FOpen("myfile.ext", R);
// Read something
fscanf(fp, "%s", STRAUX);
GS_FClose(fp);

7.2 GRID superscalar API
  • Examples:

int filedesc;

filedesc = GS_Open("myfile.ext", W);
// Write something
write(filedesc, "abc", 3);
GS_Close(filedesc);

  • Behavior:
    • At user level, the same as fopen or open (or fclose and close)
  • Where to use it:
    • In the main program (not required in worker code)
  • When to use it:
    • Always when opening/closing files between the GS_On() and GS_Off() calls

7.2 GRID superscalar API
  • Exception handling

void GS_Speculative_End(void (*fptr)());

GS_Throw

    • Enables exception handling from task functions to the main program
    • A speculative area can be defined in the main program which is not executed when an exception is thrown
    • The user can provide a function that is executed in the main program when an exception is raised

7.2 GRID superscalar API

Task code

void Dimemas(char *cfgFile, char *traceFile, double goal, char *DimemasOUT)
{
    char aux[500];
    double distance_to_goal;

    putenv("DIMEMAS_HOME=/aplic/DIMEMAS");
    sprintf(aux, "/aplic/DIMEMAS/bin/Dimemas -o %s %s", DimemasOUT, cfgFile);
    gs_result = GS_System(aux);

    distance_to_goal = distance(get_time(DimemasOUT), goal);
    if (distance_to_goal < goal * 0.1) {
        printf("Goal Reached!!! Throwing exception.\n");
        GS_Throw;
    }
}

Main program code

while (j < MAX_ITERS) {
    getRanges(Lini, BWini, &Lmin, &Lmax, &BWmin, &BWmax);
    for (i = 0; i < ITERS; i++) {
        L[i] = gen_rand(Lmin, Lmax);
        BW[i] = gen_rand(BWmin, BWmax);
        Filter("nsend.cfg", L[i], BW[i], "tmp.cfg");
        Dimemas("tmp.cfg", "nsend_rec_nosm.trf", Elapsed_goal, "dim_out.txt");
        Extract("tmp.cfg", "dim_out.txt", "final_result.txt");
    }
    getNewIniRange("final_result.txt", &Lini, &BWini);
    j++;
}
GS_Speculative_End(my_func);   /* my_func is the function executed when an exception is thrown */

7.2 GRID superscalar API
  • Wrapping legacy code in tasks’ code

int GS_System(char *command);

    • At user level it has the same behaviour as a system() call.

void dimemas(in File newCFG, in File traceFile, out File DimemasOUT)
{
    char command[500];

    putenv("DIMEMAS_HOME=/usr/local/cepba-tools");
    sprintf(command, "/usr/local/cepba-tools/bin/Dimemas -o %s %s",
            DimemasOUT, newCFG);
    GS_System(command);
}

7.3 User resource constraints and performance cost
  • For each task specified in the IDL file the user can provide:
    • A resource constraints function
    • A performance cost modelling function
  • Resource constraints function:
    • Specifies constraints on the resources that can be used to execute the given task

7.3 User resource constraints and performance cost

Attributes currently supported:

7.3 User resource constraints and performance cost
  • Resource constraints specification
    • A function interface is generated for each IDL task by gsstubgen in file {appname}_constraints.cc
    • The name of the function is {task_name}_constraints
    • The function initially generated is a default function (always evaluates true)

7.3 User resource constraints and performance cost
  • Example:
    • IDL file (mc.idl)

interface MC {
    void subst(in File referenceCFG, in double newBW, out File newCFG);
    void dimemas(in File newCFG, in File traceFile, out File DimemasOUT);
    void post(in File newCFG, in File DimemasOUT, inout File FinalOUT);
    void display(in File toplot);
};

    • Generated function in mc_constraints.cc

string Subst_constraints(file referenceCFG, double seed) {
    return "true";
}

7.3 User resource constraints and performance cost
  • Resource constraints specification (ClassAds strings)

string Subst_constraints(file referenceCFG, double seed) {
    return "(other.Arch == \"powerpc\")";
}
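Another sketch combining two attributes (OpSys and Arch are assumed attribute names; both appear in the project configuration file shown in section 7.4):

string Dimemas_constraints(file newCFG, file traceFile)
{
    /* require a Linux/i686 worker */
    return "(other.OpSys == \"Linux\") && (other.Arch == \"i686\")";
}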

7.3 User resource constraints and performance cost
  • Performance cost modelling function
    • Specifies a model of the performance cost of the given task
    • As with the resource constraint function, a default function is generated that always returns “1”

double Subst_cost(file referenceCFG, double newBW)
{
    return 1.0;
}

7.3 User resource constraints and performance cost
  • Built-in functions:

int GS_Filesize(char *name);
double GS_GFlops();

  • Example

double Subst_cost(file referenceCFG, double newBW)
{
    double time;

    time = (GS_Filesize(referenceCFG) / 1000000) * GS_GFlops();
    return time;
}

7.4 Configuration files
  • Two configuration files:
    • Grid configuration file

$HOME/.gridsuperscalar/config.xml

    • Project configuration file

{project_name}.gsdeploy

  • Both are xml files
  • Both generated by deployment center

7.4 Configuration files
  • Grid configuration file
    • Saved automatically by the deployment center
  • Contains information about
    • Available resources in the Grid (server hosts)
    • Characteristics of the resource
      • Processor architecture
      • Operating system
      • Processor performance
      • Number of CPUs
      • Memory available
      • Queues available

7.4 Configuration files
  • Grid configuration file

<?xml version="1.0" encoding="UTF-8"?>
<config hostname="khafre.cepba.upc.es" NetKbps="100000">
  <software>
    <package name="DIMEMAS"/>
    <package name="GAMESS"/>
  </software>
  <hosts>
    <host fqdn="kadesh.cepba.upc.es">
      <disks/>
    </host>
    <host fqdn="kandake.cepba.upc.es">
      <disks/>
    </host>
    <host fqdn="khafre.cepba.upc.es">
      <disks/>
    </host>
  </hosts>

7.4 Configuration files

  <workers>
    <worker fqdn="kadesh.cepba.upc.es" globusLocation="/usr/gt2"
            gssLocation="/aplic/GRID-S/HEAD" minPort="20340" maxPort="20460"
            bandwidth="1250000" architecture="Power3" operatingSystem="AIX"
            gFlops="1.5" memorySize="512" cpuCount="16">
      <queues>
        <queue name="large"/>
        <queue name="medium"/>
        <queue name="short" isDeploymentQueue="yes"/>
      </queues>
      <software>
        <package name="DIMEMAS"/>
        <package name="GAMESS"/>
      </software>
      <environment/>
      <diskmappings/>
    </worker>
  </workers>
</config>

7.4 Configuration files
  • Project configuration file
    • Generated by the deployment center to save all the information required to run a given application
  • Contains information about
    • Resources selected for execution
    • Concrete characteristics selected by the user
      • Queues
      • Number of concurrent tasks in each server
    • Project information
      • Location of binaries in localhost
      • Location of binaries in servers

7.4 Configuration files
  • Resources information

<?xml version="1.0" encoding="UTF-8"?>
<project name="matmul" masterSourceDir="/aplic/GRID-S/GT4/doc/examples/matmul"
         workerSourceDir="/aplic/GRID-S/GT4/doc/examples/matmul"
         masterBuildScript="" workerBuildScript="" masterName="khafre.cepba.upc.es"
         masterInstallDir="/home/ac/rsirvent/matmul-master" masterBandwidth="100000"
         isSimple="yes">

  <disks> ... </disks>

  <directories> ... </directories>

  <workers>
    <worker name="pcmas.ac.upc.edu" installDir="/home/rsirvent/matmul-worker"
            deploymentStatus="deployed" Queue="none" LimitOfJobs="4" NetKbps="100000"
            Arch="i686" OpSys="Linux" GFlops="5.9" Mem="16" NCPUs="4" Quota="15000000">

      <directories> ... </directories>

      ...

7.4 Configuration files

Shared disks information:

<?xml version="1.0" encoding="UTF-8"?>
<project ... >
  <disks>
    <disk name="_MasterDisk_"/>
    <disk name="_WorkingDisk_pcmas_ac_upc_edu_"/>
    <disk name="_SharedDisk0_"/>
  </disks>
  <directories>
    <directory path="/home/ac/rsirvent/matmul-master" disk="_MasterDisk_" isWorkingPath="yes"/>
  </directories>
  <workers>
    <worker name="pcmas.ac.upc.edu" ... >
      <directories>
        <directory path="/home/rsirvent/matmul-worker" disk="_WorkingDisk_pcmas_ac_upc_edu_" isWorkingPath="yes"/>
        <directory path="/app/data" disk="_SharedDisk0_"/>
      </directories>
      ...

7.5 Use of checkpointing
  • When running a GRID superscalar application, information for the checkpointing is automatically stored in the file ".tasks.chk"
  • The checkpointing file simply lists the tasks that have finished
  • To recover: just restart the application as usual
  • To start again from the beginning: erase the ".tasks.chk" file

8. Use of the deployment center
  • Initialization of the deployment center

8. Use of the deployment center
  • Adding a new host in the Grid configuration

8. Use of the deployment center
  • Create a new project

8. Use of the deployment center
  • Selection of hosts for a project

8. Use of the deployment center
  • Deployment of main program (master) in localhost

8. Use of the deployment center
  • Execution of application after deployment

9. Programming examples
  • Programming experiences
    • Ghyper: computation of molecular potential energy hypersurfaces
    • fastDNAml: likelihood of phylogenetic trees
  • Simple programming examples
    • Matrix multiply
    • Mean calculation
    • Performance modelling

9.1 Programming experiences
  • GHyper
    • A problem of great interest in the physical-molecular field is the evaluation of molecular potential energy hypersurfaces
    • Previous approach:
      • Hire a student
      • Execute sequentially a set of N evaluations
    • Implemented with GRID superscalar as a simple program
      • Iterative structure (simple for loop)
      • Concurrency automatically exploited and run in the Grid with GRID superscalar

9.1 Programming experiences
  • GHyper
    • Application: computation of molecular potential energy hypersurfaces
    • Run 1
      • Total execution time: 17 hours
      • Number of executed tasks: 1120
      • Each task between 45 and 65 minutes

  • BSC (Barcelona): 28 processors, IBM Power4
  • Univ. de Puebla (Mexico): 14 processors, AMD64
  • UCLM (Ciudad Real): 11+11 processors, AMD + Pentium IV

9.1 Programming experiences
  • Run 2
  • Total execution time: 31 hours
  • Number of executed tasks: 1120

  • BSC (Barcelona): 8 processors, IBM Power4
  • Univ. de Puebla (Mexico): 8 processors, AMD64
  • UCLM (Ciudad Real): 8+8 processors, AMD + Pentium IV

9.1 Programming experiences

Two-dimensional potential energy hypersurface for acetone as a function of angles 1 and 2

9.1 Programming experiences
  • fastDNAml
  • Starting point: code from Olsen et al.
    • Sequential code
    • Biological application that evaluates maximum likelihood phylogenetic inference
  • MPI version by Indiana University
    • Used with PACX-MPI for the HPC Challenge contest at SC2003

9.1 Programming experiences

Structure of the sequential application:

  • Iterative algorithm that builds the solution incrementally
    • Solution: maximum likelihood phylogenetic tree
  • In each iteration, the tree is extended by adding a new taxon
  • In each iteration 2i-5 possible trees are evaluated
  • Additionally, each iteration performs a local arrangement phase with 2i-6 additional evaluations
  • In the sequential algorithm these evaluations are performed one after the other, even though they are independent

9.1 Programming experiences
  • GRID superscalar implementation
    • Selection of IDL tasks:
      • Evaluation function:
    • Tree information stored in TreeFile before calling GSevaluate
    • GS_FOpen and GS_FClose used for the files related to the evaluation
    • Automatic parallelization is achieved

interface GSFASTDNAML {
    void GSevaluate(in File InputData, in File TreeFile, out File EvaluatedTreeFile);
};


9.1 Programming experiences
  • Task graph automatically generated by the GRID superscalar runtime:

[Figure: per-iteration task graph for iterations i-1, i and i+1; the tree evaluations of each iteration run in parallel and a barrier separates consecutive iterations.]

9.1 Programming experiences
  • With some data sets, evaluation is a fast task
  • Optimization 1: tree clustering
    • Several tree evaluations are grouped into a single evaluation task
    • Reduces task initialization and Globus overhead
    • Reduces parallelism
  • Optimization 2: local executions
    • Initial executions are executed locally
  • Both optimizations are combined

9.1 Programming experiences

Policies

  • DEFAULT
    • The number of evaluations grows with the iterations. All evaluations have the same number of trees (MAX_PACKET_SIZE)
  • UNFAIR
    • Same as DEFAULT, but with a maximum of NUM_WORKER evaluations
  • FAIR
    • Each iteration has the same fixed number of evaluations (NUM_WORKER)

9.1 Programming experiences
  • Heterogeneous computational Grid:
    • IBM based machine 816 Nighthawk Power3 processors and 94 p630 Power4 processors (Kadesh),
    • IBM xSeries 250 with 4 Intel Pentium III (Khafre)
    • Bull Novascale 5160 with 8 Itanium2 processors (Kharga)
    • Parsytec CCi-8D with 16 Pentium II processors (Kandake)
    • some of the authors laptops
  • For some of the machines the production queues were used
  • All machines located in Barcelona

9.1 Programming experiences
  • All results for the HPC-Challenge data set
  • PACX-MPI gets better performance with similar configurations (less than 10800 s)
  • However, it uses all the resources during the whole execution time!

9.1 Programming experiences
  • An SSH/SCP GRID superscalar version has been developed
  • Especially interesting for large clusters
  • New heterogeneous computation Grid configuration:
    • Machines from previous results
    • Machines from a site in Madrid, basically Pentium III and Pentium IV based

9.1 Programming experiences
  • Even using a computational Grid spanning larger distances, the performance is doubled
  • PACX-MPI version elapsed time: 9240 s (second configuration case)

9.2 Simple programming examples

Matrix multiply: C = A × B

Hypermatrices: each element of the matrix is a matrix

Each internal matrix is stored in a file

9.2 Simple programming examples
  • Program structure:
    • Inner product: C[i][j] = Σk A[i][k] · B[k][j]
    • Each A[i][k] · B[k][j] is a matrix multiplication itself:
      • Encapsulated in a function: matmul(A_i_k, B_k_j, C_i_j);

9.2 Simple programming examples

Sequential code: main program

int main(int argc, char **argv)
{
    char *f1, *f2, *f3;
    int i, j, k;

    IniMatrixes();
    for (i = 0; i < MSIZE; i++) {
        for (j = 0; j < MSIZE; j++) {
            for (k = 0; k < MSIZE; k++) {
                f1 = getfilename("A", i, k);
                f2 = getfilename("B", k, j);
                f3 = getfilename("C", i, j);
                matmul(f1, f2, f3);
            }
        }
    }
    return 0;
}

9.2 Simple programming examples

Sequential code: matmul function code

void matmul(char *f1, char *f2, char *f3)
{
    block *A, *B, *C;

    A = get_block(f1, BSIZE, BSIZE);
    B = get_block(f2, BSIZE, BSIZE);
    C = get_block(f3, BSIZE, BSIZE);
    block_mul(A, B, C);
    put_block(C, f3);
    delete_block(A); delete_block(B); delete_block(C);
}

static block *block_mul(block *A, block *B, block *C)
{
    int i, j, k;

    for (i = 0; i < A->rows; i++) {
        for (j = 0; j < B->cols; j++) {
            for (k = 0; k < A->cols; k++) {
                C->data[i][j] += A->data[i][k] * B->data[k][j];
            }
        }
    }
    return C;
}

9.2 Simple programming examples

GRID superscalar code: IDL file

interface MATMUL {
    void matmul(in File f1, in File f2, inout File f3);
};

GRID superscalar code: main program

int main(int argc, char **argv)
{
    char *f1, *f2, *f3;
    int i, j, k;

    GS_On();
    IniMatrixes();
    for (i = 0; i < MSIZE; i++) {
        for (j = 0; j < MSIZE; j++) {
            for (k = 0; k < MSIZE; k++) {
                f1 = getfilename("A", i, k);
                f2 = getfilename("B", k, j);
                f3 = getfilename("C", i, j);
                matmul(f1, f2, f3);
            }
        }
    }
    GS_Off(0);
    return 0;
}

NO CHANGES REQUIRED TO FUNCTIONS!
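Optionally, the cost skeleton generated for this application could be filled in as sketched below (the function name follows the Subst_cost pattern of section 7.3 and should be checked against the generated file; the size-based model is only an illustration):

/* Sketch: performance cost model for the matmul task, using the built-ins
 * GS_Filesize and GS_GFlops from section 7.3. */
double Matmul_cost(file f1, file f2, file f3)
{
    return (GS_Filesize(f1) / 1000000.0) * GS_GFlops();
}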

9.2 Simple programming examples
  • Mean calculation
    • Simple iterative example executed LOOPS times
    • In each iteration a given number of random numbers are generated
    • The mean of the random numbers is calculated

9.2 Simple programming examples

Sequential code: main program

int main(int argc, char *argv[])
{
    FILE *results_fp;
    int i, mn;

    for (i = 0; i < LOOPS; i++)
    {
        gen_random("random.txt");
        mean("random.txt", "results.txt");
    }

    results_fp = fopen("results.txt", "r");
    for (i = 0; i < LOOPS; i++)
    {
        fscanf(results_fp, "%d", &mn);
        printf("mean %i : %d\n", i, mn);
    }
    fclose(results_fp);
    return 0;
}

9.2 Simple programming examples

Sequential code: gen_random function code

void gen_random(char *rnumber_file)
{
    FILE *rnumber_fp;
    int i;

    rnumber_fp = fopen(rnumber_file, "w");
    for (i = 0; i < MAX_RANDOM_NUMBERS; i++)
    {
        int r = 1 + (int)(RANDOM_RANGE * rand() / (RAND_MAX + 1.0));
        fprintf(rnumber_fp, "%d ", r);
    }
    fclose(rnumber_fp);
}

9.2 Simple programming examples

Sequential code: mean function code

void mean(char *rnumber_file, char *results_file)
{
    FILE *rnumber_fp, *results_fp;
    int r, sum = 0;
    div_t div_res;
    int i;

    rnumber_fp = fopen(rnumber_file, "r");
    for (i = 0; i < MAX_RANDOM_NUMBERS; i++)
    {
        fscanf(rnumber_fp, "%d ", &r);
        sum += r;
    }
    fclose(rnumber_fp);

    results_fp = fopen(results_file, "a");
    div_res = div(sum, MAX_RANDOM_NUMBERS);
    fprintf(results_fp, "%d ", div_res.quot);
    fclose(results_fp);
}

9.2 Simple programming examples

GRID superscalar code: IDL file

interface MEAN {
    void gen_random(out File rnumber_file);
    void mean(in File rnumber_file, inout File results_file);
};

GRID superscalar code: main program

int main(int argc, char *argv[])
{
    FILE *results_fp;
    int i, mn;

    GS_On();
    for (i = 0; i < LOOPS; i++)
    {
        gen_random("random.txt");
        mean("random.txt", "results.txt");
    }

    results_fp = GS_FOpen("results.txt", R);
    for (i = 0; i < LOOPS; i++)
    {
        fscanf(results_fp, "%d", &mn);
        printf("mean %i : %d\n", i, mn);
    }
    GS_FClose(results_fp);
    GS_Off(0);
    return 0;
}

NO CHANGES REQUIRED TO FUNCTIONS!

9.2 Simple programming examples

Performance modelling: IDL file

interface MC {
    void subst(in File referenceCFG, in double newBW, out File newCFG);
    void dimemas(in File newCFG, in File traceFile, out File DimemasOUT);
    void post(in File newCFG, in File DimemasOUT, inout File FinalOUT);
    void display(in File toplot);
};

9.2 Simple programming examples

Performance modelling: main program

GS_On();

for (int i = 0; i < MAXITER; i++) {
    newBWd = GenerateRandom();
    subst(referenceCFG, newBWd, newCFG);
    dimemas(newCFG, traceFile, DimemasOUT);
    post(newBWd, DimemasOUT, FinalOUT);
    if (i % 3 == 0) Display(FinalOUT);
}

fd = GS_Open(FinalOUT, R);
printf("Results file:\n"); present(fd);
GS_Close(fd);

GS_Off(0);

9.2 Simple programming examples

Performance modelling: dimemas function code

void dimemas(in File newCFG, in File traceFile, out File DimemasOUT)
{
    char command[500];

    putenv("DIMEMAS_HOME=/usr/local/cepba-tools");
    sprintf(command, "/usr/local/cepba-tools/bin/Dimemas -o %s %s", DimemasOUT, newCFG);
    GS_System(command);
}

More information
  • GRID superscalar home page:

http://www.cepba.upc.edu/grid

  • Rosa M. Badia, Jesús Labarta, Raül Sirvent, Josep M. Pérez, José M. Cela, Rogeli Grima, “Programming Grid Applications with GRID Superscalar”, Journal of Grid Computing, Volume 1 (Number 2): 151-170 (2003).
  • Vasilis Dialinos, Rosa M. Badia, Raul Sirvent, Josep M. Perez and Jesus Labarta, "Implementing Phylogenetic Inference with GRID superscalar", Cluster Computing and Grid 2005 (CCGRID 2005), Cardiff, UK, 2005

grid-superscalar@ac.upc.edu
