
Lecture 9a - 16 December 2009

The teaching material used in this course was adapted from the material used by Paolo Veronesi for the course on Computational Grids (Griglie Computazionali) for the Laurea Specialistica in Informatica, taught in the academic year 2008/09 at the Università degli Studi di Ferrara.

Paolo Veronesi

[email protected], [email protected]

http://www.cnaf.infn.it/~pveronesi/unife/

Università degli Studi di Bari – Corso di Laurea Specialistica in Informatica

"Tecnologia dei Servizi Grid e Cloud Computing"

A.A. 2009/2010

Giorgio Pietro [email protected], http://www.ba.infn.it/~maggi


Outline

UI

Computing Element

Workload Management System


Today’s focus: Information Services

• Data Services
  • Common access facilities
  • Efficient & reliable transport
  • Replication services
• Execution Management
  • Job description & submission
  • Scheduling
  • Resource provisioning
• Resource Management
  • Discovery
  • Monitoring
  • Control
• Self-Management
  • Self-configuration
  • Self-optimization
  • Self-healing
• Information Services
  • Registry
  • Notification
  • Logging/auditing
• Security
  • Cross-organizational users
  • Trust nobody
  • Authorized access only

(Diagram: these OGSA capability areas sit on top of the Web services foundation and are grouped into OGSA "profiles"; some areas are marked DONE, i.e. already covered.)


User Interface (UI)

The UI is the user's interface to the Grid: a command-line interface to

• Attribute/proxy certificate handling
• Job operations
  • submit a job
  • monitor its status
  • retrieve its output
• Data operations
  • upload a file to an SE
  • create a replica
  • discover replicas
• Other grid services

To run a job, the user creates a JDL (Job Description Language) file.



PART I: Computing Element Definition


Computing Services in the Layered Grid Architecture

(Diagram: the layered Grid architecture (Fabric, Connectivity, Resource, Collective, Application) shown next to the Internet protocol architecture (Link, Internet, Transport, Application). The Computing Element sits at the Resource layer, which deals with "sharing single resources": negotiating access, controlling use.)


Computing resources at the fabric layer

Cluster. A cluster is a container that groups together:

Subclusters: subcluster elements represent "homogeneous" collections of computational nodes;

Nodes: unique nodes, i.e. individual computing nodes. A cluster may be referenced by more than one computing service at the "Resource" layer.

SubCluster. A subcluster represents a "homogeneous" collection of nodes, where the homogeneity is defined by a collection whose required node attributes all have the same value. For example, a subcluster represents a set of nodes with the same CPU, memory, OS, network interfaces, etc. Strictly speaking, subclusters are not necessary, but they provide a convenient way of representing useful collections of nodes. A subcluster captures a node count and the set of attributes for which homogeneous values are being asserted.

Host. Represents a physical computing element. This element characterizes the physical configuration of a computing node: processors, software, storage elements, etc.



Computing Element (1/3)

The CE service needs an abstract representation in order to save information about CE service instances in a Grid Information Service and perform discovery.

The schema expresses an abstraction (for example, the CE properties and functionality) in a structured, machine-processable form.

The “GLUE” CE schema contains a minimum but necessary set of qualifying attributes needed to distinguish different service instances and to perform discovery.

Typically some of the attributes are static, or rarely change, while other attributes, for example the ones about the status of the CE, are dynamic.

The schema described in the following slides is based on the following CE abstraction: an entry point into a queuing system, identified as <hostname_LRMS>:<port>/<LRMS-queue>.

There is one computing element per queue; queuing systems with multiple queues are represented by creating one computing element per queue.

The information associated with a computing element is limited to information relevant to the queue.

All information about the physical resources accessed by a queue is represented by the Cluster information element.
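As a purely illustrative example (host name, port, LRMS flavour and queue name are invented, not taken from the slides), a CE entry point following this naming scheme could be identified as:

  grid-ce.example.org:2119/jobmanager-lcgpbs-long

where grid-ce.example.org:2119 would be the gatekeeper endpoint, lcgpbs the LRMS flavour, and long the queue name.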


Computing Element (2/3)

The Computing Element (CE) is the service representing a computing resource and comprises a set of functionalities related to computing.

Functionalities:

job management (job submission, job control, etc.),

provision of information about the resource characteristics and status,

resource reservation enforcement,

resource reservation usage monitoring,

Accounting: must measure user activities on the CE resources, providing resource usage information. This information, after having been properly translated in an appropriate format, is then forwarded to the Grid Accounting System.


Computing Element (3/3)

A CE refers to a set, or cluster, of computational resources managed by a Local Resource Management System (LRMS). This cluster can encompass resources that are heterogeneous in their hardware and software configuration.

When a CE encompasses heterogeneous resources, it is not sufficient to let the underlying LRMS dispatch jobs to any worker nodes. Instead, when a job has been submitted to a CE, the underlying resource management system must be instructed to submit the job to a resource matching the requirements specified by the user.

The interface with the underlying LRMS must be very well specified (possibly according to existing standards), to ease the integration of new resource management systems (even by third party entities) as needed. The definition and provision of common interfaces to different resource management systems is still an open issue, but there are proposed recommendations currently under discussion (such as the Distributed Resource Management Architecture API, DRMAA, currently discussed within the Global Grid Forum).


Job types

Sequential, batch jobs

Parallel (MPI) jobs

Checkpointable jobs

Interactive jobs

DAG jobs (sets of jobs with inter-dependencies modeled as Directed Acyclic Graphs)

Partitionable jobs

Jobs to be partitioned within the CE


Push vs pull model

A given CE can work both in push and in pull mode.

PUSH: the job is pushed to a CE for its execution. When a job is pushed to a CE, it is accepted only if there are resources matching the requirements specified by the user which are also usable according to the local policies set by the local administrator. The job is then dispatched to a worker node matching all these constraints.

PULL: the CE is asking the Workload Management Service (i.e. the job scheduler) for jobs. When a CE is willing to receive a job (according to policies specified by the local administrator, e.g. when the CE local queue is empty or it is getting empty), it requests a job from a known Workload Management Service. This notification request must include the characteristics and the policies applied to the available resources, so that this information can be used by the Workload Management Service to select a suitable job to be executed on the considered resource.


Pull model: getting a job from the Workload Manager Service

Approach 1: the CE requests a job from all known Workload Management Services. If two or more Workload Management Services offer a job, only the first one to arrive is accepted by the CE, while the others are refused.

Approach 2: the CE requests a job from just one Workload Management Service. The CE then gets ready to accept a job from this Workload Management Service. If the contacted Workload Management Service has no job to offer within a certain time frame, another Workload Management Service is notified. Such a mechanism would allow supporting priorities on resource usage: a CE belonging to a certain VO would contact first a Workload Management Service referring to that VO, and only if it does not have jobs to be executed, the Workload Management Services of other VOs are notified, according to policies defined by the owner of the resource.


1. Job Management

Job Management is the main functionality provided by the CE. It allows the user to:

run jobs (which includes also the staging of all the required files). Characteristics and requirements of jobs that must be executed are specified by using a given language, for example the Job Description Language (JDL) (which is also used within the whole job scheduler - Workload Management System);

get an assessment of the foreseen quality of service for a given job to be submitted:

existence of resources matching the requirements and available according to the local policies

local queue traversal time (the time elapsed since the job entered the queue of the LRMS until it starts execution).

cancel previously submitted jobs;

suspend / resume jobs, if the LRMS allows these operations;

send signals to jobs.

get the status of some specified jobs, or of all the active jobs ``belonging'' to the user issuing the request;

be notified on job status, for example when a job changes its status or when a certain status is reached.


2. Information Provisioning

A CE must also provide information describing itself.

In the push model this information is published in the Information Service, and it is used during resource discovery (through the match-making engine in the Workload Manager), which matches available resources to queued jobs.

In the pull model the CE information is embedded in the "CE availability" message, which is sent by the CE to a Workload Management Service. The matchmaker then uses this information to find a suitable job for the CE.

The information that each CE should provide will include:

the characteristics of the CE (e.g. the types and numbers of existing resources, their hardware and software configurations, etc.);

the status of the CE (e.g. the number of in use and available resources, the number of running and pending jobs, etc.);

the policies enforced on the CE resources (e.g. the list of users and/or VOs authorized to run jobs on the resources of the CE, etc.).

resource usage: must measure user activities on the CE resources, providing resource usage information. This information, after having been properly translated in an appropriate format, has to be forwarded to the Grid Accounting System.



PART II: CE Information Schema


CE Information Schema Structure (1/3)

The Computing Element is a container and can include the following objects:

Info (required):

UniqueID: unique identifier for the computing element.

Example: CE-hn:CE-port/jobmanager-CE-lrms-CE-queue  

InformationServiceURL: URL of the local information service providing info about this entity.

Name: a name for this service

State (optional):

LRMSType: Name of local resource management system

LRMSVersion: Version of local resource manager

GRAMVersion: the GRAM (Grid Resource Access Manager) version

HostName: fully qualified host name for host exposing the CE interface under consideration.

GatekeeperPort: Port number on which the CE service is listening. 

TotalCPUsNum: Number of CPUs available to the jobs submitted to the CE. NB: this number should not be used to compute the total available resources, as more than one job queue may point to the same physical resources.


CE Information Schema Structure (2/3)

Policy (optional) :

MaxWallClockTime: the maximum wall clock time allowed for jobs submitted to the CE in mins (0=not specified)

MaxCPUTime: the maximum CPU time allowed for jobs submitted to the CE in mins (0=not specified)

MaxTotalJobs: the maximum allowed number of jobs in the CE (0=not specified)

MaxRunningJobs: the maximum number of jobs allowed to be running (0=not specified)

Priority: info about the Queue Priority 

State (optional) :

RunningJobs: Number of currently running jobs

TotalJobs:  number of jobs in the CE (=RunningJobs+WaitingJobs)

Status: queue status which can be

1. Queueing: the queue can accept job submission, but can’t be served by the scheduler

2. Production: the queue can accept job submissions and is served by a scheduler

3. Closed: The queue can’t accept job submission and can’t be served by a scheduler

4. Draining: the queue can’t accept job submission, but can be served by a scheduler

WaitingJobs: number of jobs in a state other than running

WorstResponseTime: worst-case time from job submission until the job starts execution, in seconds

EstimatedResponseTime: estimated time from job submission until the job starts execution, in seconds

FreeCPUs: Number of free CPUs available to a scheduler (generally used with Condor)
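To make the schema above more concrete, here is a sketch, with all values invented for illustration, of how a CE's published Info, Policy and State attributes might look once rendered as a ClassAd-style record of the kind the matchmaking engine consumes; the Glue* attribute names follow the GLUE naming that reappears later in the Requirements/Rank examples.

  [
    // Info
    GlueCEUniqueID = "grid-ce.example.org:2119/jobmanager-lcgpbs-long";
    GlueCEInfoLRMSType = "PBS";
    GlueCEInfoHostName = "grid-ce.example.org";
    // Policy (times in minutes, 0 = not specified)
    GlueCEPolicyMaxCPUTime = 2880;
    GlueCEPolicyMaxWallClockTime = 4320;
    // State
    GlueCEStateStatus = "Production";
    GlueCEStateRunningJobs = 42;
    GlueCEStateWaitingJobs = 7;
    GlueCEStateTotalJobs = 49;
    GlueCEStateEstimatedResponseTime = 600;
    GlueCEStateFreeCPUs = 16;
  ]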


CE Information Schema Structure (3/3)

Job (optional, and not used in production):

LocalOwner: Owner local username

GlobalOwner: Owner GSI subject name

LocalID: Job local id

GlobalID: Job global id

Status: Job status {SUBMITTED, WAITING, READY, SCHEDULED, RUNNING, ABORTED, DONE, CLEARED, CHECKPOINTED}

SchedulerSpecific: Scheduler specific info

AccessControlBase (optional):

Rule: A rule that grants/denies access to the Computing Element service.
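As a purely illustrative example (the VO name is hypothetical, and the exact representation is not shown on the slide), such a rule is commonly expressed as a VO-based pattern:

  Rule = "VO:cms";   // grant access to members of the cms VO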


(UML diagram: the CE entity is linked by composition (strong aggregation) to the Info, State, Policy, Job and AccessControlBase elements; the legend also shows the generalization and aggregation relationship symbols.)


Computing Element (CE)

A CE is a grid batch queue with a "grid gate" front-end:

(Diagram: the grid gate node runs the Gatekeeper, the Resource BDII (information system), accounting, and logging to the L&B, and receives job requests.)

Local resource management system: Condor / PBS / LSF master

Set of worker nodes (WNs)



CE and Workload Management System

(Diagram: in the layered Grid architecture, the Workload Management System sits at the Collective layer, while the Computing Element sits at the Resource layer.)


Workload Management System (WMS)

The user interacts with the Grid via the Workload Management System for job submission.

The goal of the WMS is distributed job scheduling and resource management in a Grid environment.

What does it allow Grid users to do?

Find the list of resources suitable to run a specific job

Submit a job/DAG for execution on a remote Computing Element

Check the status of a submitted job/DAG

Cancel one or more submitted jobs/DAGs

Retrieve the output files of a completed job/DAG (output sandbox)

Retrieve and display bookkeeping information about submitted jobs/DAGs

Retrieve and display logging information about submitted jobs/DAGs

Retrieve checkpoint states of a submitted checkpointable job

Start a local listener for an interactive job

The WMS tries to optimize the usage of resources


Submission

For a computation job there are two main types of request: submission and cancellation. In particular the meaning of the submission request is to pass the responsibility of the job to the WM.

The WM will then pass the job to an appropriate CE for execution, taking into account the requirements and the preferences expressed in the job description.

The decision of which resource should be used is the outcome of a matchmaking process between the submission requests and the available resources.

The availability of resources for a particular task depends not only on the state of the resources, but also on the utilization policies that the resource administrators and/or the administrator of the VO the user belongs to have put in place.


WMS Components

WMS is currently composed of the following parts:

User Interface (UI): the access point for the user to the WMS; this is where the user interacts with the WMS.

Workload Management System (WMS): the broker of Grid resources, responsible for finding the "best" resources where to submit jobs.

Job Submission Service (JSS): provides a reliable submission system, i.e. it delivers jobs to the computing elements chosen by the resource broker; resubmission is attempted in case of failure, according to the job owner's request.

Information cache: a repository of resource information that is available in read-only mode to the matchmaking engine and whose update is the result of either the arrival of notifications, active polling of resources, or some arbitrary combination of both.

Logging and Bookkeeping service (LB): provides support for the job monitoring functionality: it stores logging and bookkeeping information concerning events generated by the various components of the WMS. Using this information, the LB service keeps a state machine view of each job.

Task Queue

Proxy renewal: a Proxy Renewal Service is available to ensure that, for the whole lifetime of a job, a valid user proxy exists within the WMS; this proxy renewal service relies on the MyProxy service for renewing the credentials associated with the request.


Task queue

It gives the possibility to keep a submission request for a while if no resources are immediately available that match the job requirements.

Non-matching requests will be retried either periodically (in an eager scheduling approach) or as soon as notifications of available resources appear in the ISM (in a lazy scheduling – i.e. pull - approach).

Alternatively such situations could only lead to an immediate abort of the job for lack of a matching resource.


Proxy certificate

A job gets associated with a valid proxy certificate (the submitting user's one) when it is submitted via the WMS User Interface.

The validity of such a certificate is set by default to 12 hours, unless a longer validity is explicitly requested by the user when generating the proxy. Problems can occur if the job spends more time on the CE (queued or running) than the lifetime of its proxy certificate.

In order to submit long-running jobs, users can either generate proxy credentials with an appropriate lifetime or (more safely) rely on the features of the MyProxy server. The underlying idea is that the user registers in a MyProxy server a valid long-term proxy certificate that will be used by the WMS to perform a periodic credential renewal for the submitted job; in this way the user is no longer obliged to create very long-lived proxies when submitting long-running jobs.

The MyProxy credential repository system consists of a server and a set of client tools that can be used to delegate and retrieve credentials to and from a server. Normally, a user would

1. start by using the myproxy_init client program along with the permanent credentials necessary to contact the server

2. delegate a set of proxy credentials to the server along with authentication information and retrieval restrictions.



Job Preparation

Information to be specified:

Job characteristics (e.g. executable, stdin, etc.)

Requirements and preferences on the computing system (e.g. CPU speed, multi-processor machines, ...)

Software dependencies (i.e. software that must already be installed on the machines which will eventually execute the job)

Job data requirements (e.g. input data, output storage element, etc.)

Optimization criteria

All this is specified using a Job Description Language (JDL).


Job Description Language (JDL) 1/5

Based upon Condor’s CLASSified ADvertisement language (ClassAd): a simple expression-based language to specify both resources and requests.

ClassAd is a fully extensible language

ClassAd is constructed with the classad construction operator []

It is a sequence of attributes separated by semi-colons.

An attribute is a pair (key, value), where value can be a Boolean, an Integer, a list of strings, …

<attribute> = <value>;

e.g. [

attr1 = value1;

attr2 = value2;

...

attrn = valuen;

]

So, the JDL allows the user to define a set of attributes that the WMS takes into account when making its scheduling decision.
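As a minimal illustrative sketch (the file names are invented; the individual attributes are described in the following slides), a complete JDL for a trivial job could look like this:

  [
    Executable = "/bin/echo";
    Arguments = "Hello World";
    StdOutput = "hello.out";
    StdError = "hello.err";
    OutputSandbox = {"hello.out", "hello.err"};
  ]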


Job Description Language (JDL) 2/5

The supported attributes are grouped in two categories:

Job (Attributes)

Define the job itself

Resources

Taken into account by the WMS for carrying out the matchmaking algorithm

Computing Resource (Attributes)

Used to build expressions of Requirements and/or Rank attributes by the user

Have to be prefixed with “other.”

Data and Storage resources (Attributes)

Input data to process, SE where to store output data, protocols spoken by application when accessing SEs


Job Description Language (JDL): relevant attributes 3/5

Executable (mandatory)

The command name

Arguments (optional)

Job command line arguments

StdInput, StdOutput, StdErr (optional)

Standard input/output/error of the job

Environment (optional)

List of environment settings

InputSandbox (optional)

List of files on the UI local disk needed by the job for running

The listed files will automatically be staged to the remote resource

OutputSandbox (optional)

List of files, generated by the job, which have to be retrieved


Job Description Language (JDL): relevant attributes 4/5

Requirements

Job requirements on computing resources

Specified using attributes of resources published in the Information Service

If not specified, default value defined in UI configuration file is considered

Default: other.Active (the resource has to be able to accept jobs)

Rank

Expresses preference (how to rank resources that have already met the Requirements expression)

Specified using attributes of resources published in the Information Service

If not specified, default value defined in the UI configuration file is considered

Default: -other.EstimatedTraversalTime (the lowest estimated traversal time)


Job Description Language (JDL): “data” attributes 5/5

InputData (optional)

Refers to data used as input by the job: these data are published in the Replica Catalog and stored on the SEs

PFNs and/or LFNs

ReplicaCatalog (mandatory if InputData has been specified with at least one Logical File Name)

The Replica Catalog Identifier

DataAccessProtocol (mandatory if InputData has been specified)

The protocol or the list of protocols which the application is able to speak with for accessing InputData on a given SE

OutputSE (optional)

The Uniform Resource Identifier of the output SE

The WMS uses it to choose a CE that is compatible with the job and is close to the SE
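A small illustrative fragment combining these data attributes, shown below as a sketch; the protocol list and the SE host name are invented, while the logical file name and catalog conventions follow the example on the next slide (remember that ReplicaCatalog is mandatory when an LFN is given):

  InputData = "LF:testbed0-00019";
  DataAccessProtocol = {"file", "gridftp"};
  OutputSE = "grid-se.example.org";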


Example JDL File (1)

Executable = "gridTest";
StdError = "stderr.log";
StdOutput = "stdout.log";
InputSandbox = {"home/veronesi/test/gridTest"};
OutputSandbox = {"stderr.log", "stdout.log"};
InputData = "LF:testbed0-00019";
ReplicaCatalog = "ldap://sunlab2g.cnaf.infn.it:2010/lc=test, rc=WP2 INFN Test, dc=infn, dc=it";
DataAccessProtocol = "gridftp";
Requirements = other.Architecture == "INTEL" && other.OpSys == "LINUX" && other.FreeCpus >= 4;
Rank = "other.MaxCpuTime";


Example JDL File (2)

Such a JDL would make the myexe executable be transferred to a remote CE whose queue is managed by the PBS batch system, and be run taking the myinput.txt file (also copied from the UI node) as input.
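The JDL itself is not reproduced in the transcript; a sketch consistent with the description above could be the following, where the output file names and the GLUE attribute used to require a PBS-managed queue are assumptions, not taken from the original slide:

  [
    Executable = "myexe";
    StdInput = "myinput.txt";
    StdOutput = "myoutput.txt";
    StdError = "myerror.txt";
    InputSandbox = {"myexe", "myinput.txt"};
    OutputSandbox = {"myoutput.txt", "myerror.txt"};
    Requirements = other.GlueCEInfoLRMSType == "PBS";
  ]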


DAG description
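The DAG JDL shown on the original slide is not reproduced in the transcript; a sketch consistent with the description below (the node and dependency layout follows the usual gLite DAG JDL convention and may differ in detail from the original) could be:

  [
    Type = "dag";
    nodes = [
      nodeA = [ file = "n1.jdl"; ];
      nodeB = [ file = "n2.jdl"; ];
      nodeC = [ file = "n3.jdl"; ];
      dependencies = { { nodeA, { nodeB, nodeC } } };
    ];
  ]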

where n1.jdl, n2.jdl and n3.jdl are in turn job descriptions representing the nodes of the DAG, and the dependencies attribute states that nodeB and nodeC cannot start before nodeA has been successfully executed.


Input/output sandbox

It is important to note that the input and output sandboxes are intended for relatively small files (a few megabytes) like scripts, standard input, and standard output streams.

If you are using large input files or generating large output files, you should instead read directly from or write directly to a storage element.

As each submitting user is assigned a limited quota on the WMS machine disk, abuse of the input and output sandboxes will quickly fill up the quota, and the WMS will stop accepting further job submissions from that user.

Input sandbox:

suppose a job needs, for its execution, a certain set of small files available on the submitting machine, and that for performance reasons it is preferable not to go through the data transfer services to stage these files on the executing node. The user can then use the InputSandbox attribute to specify the files that have to be copied from the submitting machine to the execution CE via the WMS.

Output sandbox:

for the standard output and error of the job, the user should instead always specify just file names (without any directory path) through the StdOutput and StdError JDL attributes. To have them copied back to the WMS-UI machine, it suffices to list them in the OutputSandbox and, after job completion, use the job-output command described later in this document.


Requirements and Rank

The parameters Requirements and Rank control the resource matching for the job.

The expression given for the requirements specifies the constraints necessary for a job to run.

If more than one resource matches the job requirements, then the rank is used to determine which is the most desirable resource i.e. the one to which the job is submitted (the higher the rank value the better is the resource).

Both the Requirements and the Rank attributes can be arbitrary expressions that use the parameters published by the resources in the Information System.

Examples:

to express that a job requires at least 25 hours of CPU time and 100 hours of wall clock time (both attributes are expressed in minutes), the expression is:

Requirements = other.GlueCEPolicyMaxCPUTime >= 1500 && other.GlueCEPolicyMaxWallClockTime >= 6000;

GlueHostApplicationSoftwareRunTimeEnvironment is usually used to describe application software packages which are installed at a site. For example:

Requirements = Member(other.GlueHostApplicationSoftwareRunTimeEnvironment, "ALICE-3.07.01");

Rank = -other.GlueCEStateEstimatedResponseTime;

Rank = other.GlueCEStateFreeCPUs;


WMS main commands (1/2)

job-list-match: displays the list of identifiers of the resources (and the corresponding ranks, if requested) on which the user is authorized and which satisfy the job requirements included in the JDL. This only works for jobs; for DAGs you have to issue this command on the individual nodes' JDLs.

job-submit submits a job/DAG to the grid. It requires a JDL file as input and returns a job/DAG Identifier.

job-status This command prints the status of a job/DAG previously submitted using glite-job-submit.

The job status request is sent to the LB (Logging and Bookkeeping service) that provides the requested information.

When issued for a DAG it provides the status information for the DAG itself and all of its nodes. It is also possible to retrieve the status of individual nodes of a DAG simply passing their own identifiers to the command.

The LB service, using the job/DAG-related events sent by each WMS component handling the request, keeps a state machine view of each job/DAG.


WMS main commands (2/2)

job-output The glite-job-output command can be used to retrieve the output files of a job/DAG that has been submitted with a job description file including the Output Sandbox attribute.

After the submission, when the job/DAG has terminated its execution, the user can retrieve the files generated by the job/DAG and temporarily stored on the WMS machine, as specified by the OutputSandbox attribute, by issuing job-output with the ID returned by the job submission command as input.

As a DAG does not have its own output sandbox, when the command is issued for such a request it retrieves the output sandboxes of all the DAG nodes.

job-cancel This command cancels a job previously submitted using glite-job-submit. Before cancellation, it prompts the user for confirmation.

It is not allowed to issue a cancel request for a node of a DAG: you have to cancel the whole DAG using the provided handle instead.


Job States

SUBMITTED: the user has submitted the job via UI

WAITING: the WMS has received the job

READY: A CE, which matches job requirements, has been selected, and the job is transferred to the JSS

SCHEDULED: the JSS has sent the job to the CE

RUNNING: the job is running on the CE

DONE: this state has different meanings:

DONE (ok) : the execution has terminated on the CE (WN) with success

DONE (failure) : the execution has terminated on the CE (WN) with some problems

DONE (cancelled) : the job has been cancelled with success

OUTPUTREADY: the output sandbox is ready to be retrieved by the user

This state reflects the time difference between the end of computation on the CE and the moment the WMS receives the necessary notification about job termination.

CLEARED: the user has retrieved all output files successfully, and the job bookkeeping information is purged some time after the job enters in this state.

ABORTED: the job has failed

The job may fail for several reasons; some of them are external to its execution (e.g. no matching resource found).


State Diagram

(Diagram: the normal flow is SUBMITTED → WAITING → READY → SCHEDULED → RUNNING → DONE(ok) or DONE(failed) → OUTPUTREADY → CLEARED; from the intermediate states a job can also end in ABORTED or DONE(cancelled).)


Job identifier

The Job (and DAG) Identifiers produced by the workload management software are of the form:

https://edt003.cnaf.infn.it:9000/NyIYrqE_a8igk4f0CLXNKA

The first part of the Id (https://edt003.cnaf.infn.it:9000 in the example above) is the endpoint URL of the LB server holding the job/DAG logging and bookkeeping information and this allows the WMS to know which LB server has to be contacted for monitoring a given job/DAG.

The second part (NyIYrqE_a8igk4f0CLXNKA), generated by the WMS-UI taking into account some client-local information, ensures grid-wide uniqueness of the identifier.


Job Submission Scenario

(Diagram: the components involved in job submission: the UI holding the JDL, the WMS, the Job Submission Service (JSS), the Computing Element (CE), the Storage Element (SE), the Logical File Catalog (LFC), the Information Service (IS), and the Logging & Bookkeeping service (LB).)

A Job Submission Example

(Diagram sequence: the same picture is repeated while the job progresses through its states.)

- The user submits the JDL from the UI; the input sandbox is transferred to the WMS and a "job submit" event is logged. Job status: submitted.
- The WMS accepts the request. Job status: waiting.
- The WMS, using the Information Service and the Logical File Catalog (LFC), selects a suitable CE; the job is passed to the Job Submission Service (JSS) together with the BrokerInfo file. Job status: ready, then scheduled.
- The JSS sends the job, with its input sandbox, to the Computing Element, where it runs and can access the Storage Element (SE). Job status: running.
- When execution ends, the job status becomes done; the output sandbox is transferred back to the WMS. Job status: outputready.
- The user retrieves the output sandbox on the UI. Job status: cleared.
- Each status change is recorded in the Logging & Bookkeeping service (LB).


Job resubmission

If something goes wrong, the WMS tries to reschedule and resubmit the job (possibly to a different resource)

Maximum number of resubmissions (considering all the resources matching the requirements): min(RetryCount, submission_retries)

RetryCount: JDL attribute

submission_retries: attribute in the WMS configuration file

E.g., to disable job resubmission for a particular job: RetryCount=0; in the JDL file


User Interface configuration file

Can be set if the user is not happy with the default one.

Most relevant attributes:

WMSes

When submitting a job, the first specified WMS is tried, if the operation fails the second one is considered, etc.

LBserver(s)

The LB to be used for a job is chosen by the WMS

So when a dg-job-status <dg-jobid> is issued, the LB to contact is specified in the dg-jobid

This list specifies the LB(s) that must be contacted when issuing a dg-job-status –all / dg-job-get-logging-info –all (to have information for all the jobs belonging to that user)

Default JDL Requirements

other.active

Default JDL Rank

- other.EstimatedTraversalTime


Job Submission Phases

  • User logs in on the User Interface

  • User issues a voms-proxy-init and enters his certificate’s password, getting a valid proxy certificate

  • User configures the Job Description Language file describing the submission profile

  • User issues a: job-submit HelloWorld.jdl

    • and gets back from the system a unique Job Identifier (JobId)

  • User issues a: job-status JobId

    • to get logging information about the current status of his Job

  • When the “OutputReady” status is reached, the user can issue a job-get-output JobId

    to retrieve the output generated. The system returns the name of the temporary directory where the job output can be found on the User Interface machine.


Scheduling (1/3)

Scheduling is the core functionality of the WMS. It has to find the best suitable computing resource (CE) where the job will be executed.

It interacts with Data Management service and Information Service. They supply the job scheduler with all the information required for the resolution of the matches.

The CE chosen by the job scheduler has to match the job requirements (e.g. runtime environment, data access requirements, and so on).

If two or more CEs satisfy all the requirements, the one with the best Rank is chosen.

The job scheduler has to deal with three possible scenarios


Scheduling (2/3)

Scenario 1: Direct Job Submission

The job is scheduled on a given CE (specified in the job-submit command via the -r option)

The WMS doesn't perform any matchmaking algorithm

Scenario 2: Job Submission without data-access Requirements

Neither the CE nor input data are specified.

The scheduler starts the matchmaking algorithm, which consists of two phases:

Requirements check (WMS contacts the local cache or the Information Service to check which CEs satisfy all the requirements)

If more than one CE satisfies the job requirements, the CE with the best rank is chosen by the scheduler


Scheduling (3/3)

Scenario 3: Job Submission with also data-access Requirements

CE is not specified in the JDL

The scheduler interacts with Data Management service to find out the most suitable CE taking into account also the SEs where both input data sets are physically stored and output data sets should be staged on completion of job execution

The scheduler strategy consists of submitting jobs close to data

The main two phases of the match making algorithm remain unchanged:

Requirements check

Rank computation

What changes with respect to the second scenario?

Now the scheduler restricts the search to the CEs that satisfy the data-access requirements (for example, CEs that are close to the data, i.e. part of the same Grid site)



(Diagram annotations on the WMS architecture:)

Job management requests (submission, cancellation) are expressed via a Job Description Language (JDL).

Task Queue: keeps submission requests; requests are kept for a while, waiting to be dispatched, if there is no matching resource available.

Information Supermarket: a repository of resource information, updated via notifications and/or active polling of sources; it provides the matchmaker with the information needed to decide the best resources for a request.

Matchmaker: finds an appropriate CE or resource for a job request according to the information from the ISM, taking into account job preferences, resource status, and policies on resources.




Workload Manager Proxy (WMProxy)

Provides access to WMS functionality through a Web Services based interface

Each job submitted to a WMProxy Service is given the delegated credentials of the user who submitted it.

These credentials can then be used to perform operations requiring interactions with other services

WMProxy advantages:

web service, SOAP

job collections, DAG jobs, shared and compressed sandboxes

WMProxy caveats:

needs delegated credentials

delegate once, submit many


Workload Manager (WM)

Is responsible for:

calling the Matchmaker to find the resource which best matches the job requirements, interacting with the Information System and the File Catalog;

calculating the ranking of all the matched resources.

Information Supermarket (ISM)

Basically consists of a repository of resource information that is available in read-only mode to the matchmaking engine.

Job Adapter

Is responsible for:

making the final touches to the JDL expression for a job, before it is passed to CondorC for the actual submission;

creating the job wrapper script that creates the appropriate execution environment on the CE worker node;

transferring the input and output sandboxes.


Job Controller (JC)

Is responsible for:

converting the Condor submit file into a ClassAd;

handing the job over to CondorC.

Condor

Is responsible for performing the actual job management operations: job submission, removal, etc.

Log Monitor

Is responsible for:

watching the Condor log file;

intercepting interesting events concerning active jobs (events affecting the job state machine);

triggering the appropriate actions.


Task Queue

Gives the possibility to keep track of requests if no resources are immediately available.

Non-matching requests will be retried periodically (eager scheduling), or wait for a notification of available resources (lazy scheduling).


The Computing Element is built on a homogeneous farm of computing nodes (called Worker Nodes).

There are also many components inside the CE, such as the gatekeeper, the globus-jobmanager, etc.

Gatekeeper

Grants access to the CE and maps grid users to local user ids.

Batch System

A cluster of compute nodes controlled by a head node; it handles the job execution. Examples: Torque (OpenPBS), PBS.


(Diagram sequence: the WMS internals in action. The picture shows the UI, the WMS (Network Daemon, Workload Manager, Matchmaker/Broker, Information Supermarket holding the CE and SE characteristics & status, Job Adapter, Job Controller / CondorG, WMS storage), the Information Service (characteristics of resources), the LFC (location of files), the Logging & Bookkeeping service, the Computing Element and the Storage Element. The frames step through a submission:)

- glite-wms-job-submit myjob.jdl: the Network Daemon is the daemon responsible for accepting incoming requests; the JDL and the input sandbox files are stored on the WMS storage. Job status: submitted, then waiting.
- The Workload Manager is responsible for taking the appropriate actions to satisfy the request.
- The Matchmaker/Broker is responsible for finding the "best" CE where to submit the job ("Where can this job be executed?"); it asks the LFC where the needed InputData is and the Information Service what the status of the Grid is (CE and SE characteristics & status), and then makes the CE choice.
- The Job Adapter (JA) is responsible for the final "touches" to the job before submission (e.g. creation of the wrapper script).
- The Job Controller (JC) is responsible for the actual job management operations, done via CondorG. Job status: ready, then scheduled.
- The input sandbox files are transferred to the Computing Element; the job runs, performing "Grid enabled" data transfers/accesses to the Storage Element. Job status: running, then done.
- The output sandbox files are copied back to the WMS storage; the user retrieves them with glite-wms-get-output <jobID>. Job status: cleared.
- glite-wms-job-status <jobID> and glite-wms-job-logging-info <jobID>: the LB (with its LB proxy) receives and stores job events (the log of job events) and processes the corresponding job status.


Other core Grid services

VOMS

MyProxy

FTS (File Transfer Service)

LFC (Logical File Catalog)


References

EGEE User's Guide: WMS Service, EGEE-JRA1-TEC-572489 (https://edms.cern.ch/document/572489/1), see Sections 1 and 2.

EGEE Middleware Architecture and planning, EU Deliverable DJRA1.4, EGEE-DJRA1.1-594698-v1.0, (https://edms.cern.ch/document/594698/), see Section 8

ClassAd (https://www.cs.wisc.edu/condor/classad)

