GRID Analysis Environment for LHC Particle Physics

The GAE Architecture

[Architecture diagram: analysis clients (web browser, ROOT analysis tool, Python, Cojac detector viz., IGUANA CMS viz. tool, the "Analysis Flight Deck", JobMon, JobStatus and MCPS clients, and other clients) connect through Clarens to Grid services and 3rd-party applications layered over the tiered LHC computing system: Online System / Experiment, Service, Tier 0+1, Tier 1, Tier 2 centers, Tier 3. This provides a global view of the system. CERN/Outside resource ratio ~1:2; Tier0/(Σ Tier1)/(Σ Tier2) ~1:1:1. Tens of petabytes by 2007-8; an exabyte ~5-7 years later.]

  • Analysis clients talk standard protocols to the Clarens Grid Service Portal (a minimal client sketch follows this list)
  • Enables selection of workflows (e.g. Monte Carlo simulation, data transfer, analysis)
  • Generated jobs are submitted to the scheduler, which creates a plan based on monitoring information
  • Submission of jobs and feedback on job status
  • Discovery, ACL management, certificate-based access
  • Protocols: HTTP, SOAP, XML-RPC, JSON, RMI
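
To make the protocol layer concrete, here is a minimal client sketch: a generic Python XML-RPC call against a Clarens-style portal. The endpoint URL is a made-up placeholder, and the call relies on standard XML-RPC introspection (system.listMethods), which a given portal may or may not expose; this is not presented as the documented Clarens interface.

# Hedged sketch: a generic XML-RPC client talking to a Clarens-style portal.
# The endpoint URL below is a hypothetical placeholder, not a real service.
import xmlrpc.client

PORTAL_URL = "https://tier2.example.org/clarens/"  # hypothetical portal endpoint

def list_portal_methods(url: str = PORTAL_URL) -> list[str]:
    """Ask the portal which methods it exposes, via standard XML-RPC introspection."""
    proxy = xmlrpc.client.ServerProxy(url)
    # system.listMethods is part of the common XML-RPC introspection convention;
    # a particular server may not implement it.
    return proxy.system.listMethods()

if __name__ == "__main__":
    for name in list_portal_methods():
        print(name)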

[Architecture diagram, continued: a Tier2 site runs a Clarens web server exposing Grid services (MCPS with its workflow definitions and workflow execution, and JobStatus) to monitoring clients and MonALISA clients, with the Grid behind these services.]

  • MonALISA-based monitoring services provide global views of the system
  • MonALISA-based components proactively manage sites and networks based on monitoring information

[Architecture diagram, continued: compute site with a scheduler (Sphinx), fully-abstract and partially-abstract planners; catalogs (metadata, virtual data); applications (Runjob, ROOT, FAMOS); storage (DCache).]

  • The Clarens portal and MonALISA clients hide the complexity of the Grid services from the client, but can expose it in as much detail as required, e.g. for monitoring.

[Architecture diagram, continued: data management (replica catalog); ORCA; BOSS; monitoring (JobMon, MonALISA); fully-concrete planner; MonALISA network; MonALISA global command & control.]

Implementations developed within the physics and CS communities and associated with GAE components: BOSS, IGUANA (viz. app.), Clarens portal, MonALISA (monitoring), ROOT (analysis).

[Architecture diagram, continued: Grid-wide execution service with priority manager; reservation, monitoring, planning, execution; proactive in minimizing Grid traffic jams.]

LHC Data Grid Hierarchy: developed at Caltech

Scientific Exploration at the High Energy Physics Frontier

Physics experiments consist of large collaborations: CMS and ATLAS each encompass 2000 physicists from approximately 150 institutes (300-400 physicists in 30 institutes in the US).

[Hierarchy diagram: ~PByte/sec from the experiment into the Online System; ~100-1500 MBytes/sec into the CERN Tier 0+1 Center (PBs of disk; tape robot); ~10-40 Gbps to Tier 1 centers (FNAL, IN2P3, INFN, RAL); ~2.5-10 Gbps to Tier 2 centers.]

HEP Challenges: Frontiers of Information Technology

  • Rapid access to PetaByte/ExaByte data stores
  • Secure, efficient, transparent access to heterogeneous worldwide distributed computing and data
  • A collaborative scalable distributed environment for thousands of physicists to enable physics analysis
  • Tracking the state and usage patterns of computing and data resources, to enable rapid turnaround and efficient resource utilization

[Hierarchy diagram, continued: Tier 3 institutes with physics data caches, connected at 0.1 to 10 Gbps; Tier 4 workstations.]

Emerging Vision: A Richly Structured, Global Dynamic System

Grid Analysis Environment (GAE)

  • The "Acid Test" for Grids; crucial for the LHC experiments
    • A large, diverse, distributed community of users
    • Support for 100s to 1000s of analysis tasks, shared among dozens of sites
    • Widely varying task requirements and priorities
    • Need for priority schemes, robust authentication and security
  • Operates in a severely resource-limited and policy-constrained global system
    • Dominated by collaboration policy and strategy
    • Requires real-time monitoring; task and workflow tracking; decisions often based on a global system view
  • Where physicists learn to collaborate on analysis across the country, and across world regions
  • The focus is on the LHC CMS experiment, but the architecture and services can potentially be used in other (physics) analysis environments

GAE development (services)

  • MCPS: policy-based job submission and workflow management portal, developed in collaboration with FNAL and UCSD
  • JobStatus: access to job status information through Clarens and MonALISA, developed in collaboration with NUST
  • JobMon: a secure, authenticated method for users to access running Grid jobs, developed in collaboration with FNAL
  • BOSS: uniform job submission layer, developed in collaboration with INFN
  • SPHINX: Grid scheduler, developed at UFL
  • CAVES: analysis code sharing environment, developed at UFL
  • Core services (Clarens): discovery, authentication, proxy, remote file access, access control management, Virtual Organization management (a hedged file-access sketch follows this list)
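
To illustrate the shape of the remote file access core service, here is a hedged sketch of a client fetching a file through a Clarens-style portal over XML-RPC. The service method names (file.ls, file.read), the endpoint URL and the path are invented for illustration and are not the actual Clarens interface.

# Hedged sketch: calling hypothetical remote-file-access methods through a
# Clarens-style portal over XML-RPC. The method names (file.ls, file.read) and
# the endpoint URL are illustrative assumptions, not the documented Clarens API.
import posixpath
import xmlrpc.client

def fetch_remote_file(portal_url: str, remote_path: str, local_path: str) -> None:
    proxy = xmlrpc.client.ServerProxy(portal_url)
    # Hypothetical listing call on the directory that holds the file.
    print("remote directory:", proxy.file.ls(posixpath.dirname(remote_path)))
    # Hypothetical whole-file read; assume the body comes back as an XML-RPC Binary.
    payload = proxy.file.read(remote_path)
    with open(local_path, "wb") as out:
        out.write(payload.data)

# Example call (hypothetical endpoint and path):
# fetch_remote_file("https://tier2.example.org/clarens/", "/store/user/demo.root", "demo.root")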

The Clarens Web Service Framework

  • A portal system providing a common infrastructure for deploying Grid-enabled web services (a toy service sketch follows this list)
  • Features:
    • Access control to services
    • Session management
    • Service discovery and invocation
    • Virtual Organization management
    • PKI based security
    • Good performance (over 1400 calls per second)
  • Role in GAE:
    • Connects clients to Grid or analysis applications
    • Acts in concert with other Clarens servers to form a P2P network of service providers
  • Two implementations:
    • Python/C using Apache web server
    • Java using Tomcat servlets
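
For a rough feel of what deploying a Grid-enabled web service in a framework of this kind involves, the sketch below exposes a toy service over XML-RPC using only the Python standard library. The class, method and port are assumptions made for illustration and do not reflect the Clarens module API; a real deployment would also authenticate the caller (certificates, ACLs) and run behind Apache or Tomcat as described above.

# Hedged sketch: a toy service exposed over XML-RPC with the Python standard
# library, to illustrate the idea of a framework hosting callable service
# methods behind one endpoint. This is NOT the Clarens module API.
from xmlrpc.server import SimpleXMLRPCServer

class EchoService:
    """Minimal service object; each public method becomes a callable endpoint."""
    def echo(self, message: str) -> str:
        # A real framework would authenticate the caller (certificate, ACLs) first.
        return message

def main() -> None:
    server = SimpleXMLRPCServer(("localhost", 8080), allow_none=True)
    server.register_introspection_functions()   # enables system.listMethods
    server.register_instance(EchoService())     # exposes EchoService.echo
    print("toy service listening on http://localhost:8080/RPC2")
    server.serve_forever()

if __name__ == "__main__":
    main()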

MonALISA

  • A distributed monitoring service system using JINI/Java and WSDL/SOAP technologies.
  • Acts as a dynamic service system and provides the functionality to be discovered and used by any other services or clients that require such information.
  • Can integrate existing monitoring tools and procedures to collect parameters describing computational nodes, applications and network performance.
  • Provides monitoring information from large, distributed systems to a set of loosely coupled "higher level services" in a flexible, self-describing way, as part of a loosely coupled service architectural model for effective resource utilization in large, heterogeneous distributed centers (a small consumer sketch follows the diagram note below).

[Deployment diagram: a Java client, ROOT (analysis tool), IGUANA (CMS viz. tool), the ROOT-CAVES client (analysis sharing tool), and any other application that can make XML-RPC/SOAP calls connect over http/https to a Clarens web server, which in turn talks to other servers. Screenshot: monitoring of the SC04 Bandwidth Challenge, 101 Gbps.]
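
The idea of loosely coupled "higher level services" consuming self-describing monitoring data can be sketched in a few lines: a consumer receives (site, parameter, value, timestamp) records and flags sites whose load crosses a threshold, the kind of decision a scheduler or traffic manager could act on. The record layout, site names and threshold are invented for illustration; this is not MonALISA's actual interface.

# Hedged sketch: a loosely coupled "higher level service" consuming
# self-describing monitoring records and reacting to them. The record layout
# and threshold are invented for illustration; this is not MonALISA's API.
from dataclasses import dataclass

@dataclass
class MonitorRecord:
    site: str        # e.g. a Tier2 site name
    parameter: str   # e.g. "cpu_load" or "queued_jobs"
    value: float
    timestamp: float  # seconds since the epoch

def overloaded_sites(records: list[MonitorRecord], parameter: str, threshold: float) -> set[str]:
    """Return the sites whose latest value of `parameter` exceeds `threshold`."""
    latest: dict[str, MonitorRecord] = {}
    for rec in records:
        if rec.parameter != parameter:
            continue
        if rec.site not in latest or rec.timestamp > latest[rec.site].timestamp:
            latest[rec.site] = rec
    return {site for site, rec in latest.items() if rec.value > threshold}

# Example with made-up numbers: a scheduler could steer new jobs away from these sites.
records = [
    MonitorRecord("Caltech-Tier2", "cpu_load", 0.95, 100.0),
    MonitorRecord("UFL-Tier2", "cpu_load", 0.40, 101.0),
]
print(overloaded_sites(records, "cpu_load", 0.85))   # -> {'Caltech-Tier2'}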

GRID Enabled Analysis:

User view of a collaborative desktop

Clarens provides a ROOT Plug-In that allows the ROOT user to gain access to Grid services via the portal, for example to access ROOT files at remote locations

Clarens Grid Portal: secure cert-based access to services through the browser.

[Portal screenshot: policy-based access to workflows, VO management, (remote) file access, authentication, authorization, logging, shell, key escrow.]

More information:

GAE web page: http://ultralight.caltech.edu/web-site/gae

Clarens web page: http://clarens.sourceforge.net

MonALISA: http://monalisa.cacr.caltech.edu/

SPHINX: http://sphinx.phys.ufl.edu/

This work is partly supported by the Department of Energy as part of the Particle Physics DataGrid project (DOE/DHEP and MICS) and by the National Science Foundation (NSF/MPS and CISE). Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Department of Energy or the National Science Foundation.