

The ALICE Framework at GSI

Kilian Schwarz

ALICE Meeting

August 1, 2005



Overview

  • ALICE framework

  • What part of ALICE framework is installed where at GSI and how can it be accessed/used

  • ALICE Computing model (Tier architecture)

  • Resource consumption of individual tasks

  • Resources at GSI and GridKa



ALICE Framework

[Diagram: the AliRoot architecture, layered on ROOT. STEER coordinates the detector modules (ITS, TPC, TRD, TOF, PHOS, EMCAL, PMD, MUON, FMD, RICH, ZDC, CRT, START, STRUCT), the analysis packages (HBTAN, HBTP, RALICE) and EVGEN, which interfaces the event generators (PYTHIA6, HIJING, ISAJET, MEVSIM, PDF); the transport codes G3, G4 and FLUKA are accessed through the Virtual MC; AliEn provides the Grid interface]

F. Carminati, CERN



Software installed at GSI: AliRoot

  • Installed at: /d/alice04/PPR/AliRoot

  • Newest version: AliRoot v4-03-03

  • Environment setup via:

    > . gcc32login

    > . alilogin dev/new/pro/version-number

     (gcc295-04 is no longer supported; the corresponding ROOT version is initialized, too)

  • Responsible person: Kilian Schwarz



Software installed at GSI: ROOT (AliRoot is heavily based on ROOT)

  • Installed at: /usr/local/pub/debian3.0/gcc323-00/rootmgr

  • Newest version: 502-00

  • Environment setup via

    > . gcc32login / alilogin or rootlogin

  • Responsible persons:

    - Joern Adamczewski / Kilian Schwarz

  • See also: http://www-w2k.gsi.de/root



Software installed at GSI: Geant3 (needed for simulation; accessed via VMC)

  • Installed at: /d/alice04/alisoft/PPR/geant3

  • Newest version: v1-3

  • Environment setup via gcc32login/alilogin

  • Responsible person: Kilian Schwarz



Software at GSI: Geant4/FLUKA (simulation; accessed via VMC)

  • Neither has been heavily used by ALICE so far

  • Geant4: standalone versions up to G4.7.1

  • Newest VMC version: geant4_vmc_1.3

  • FLUKA: not installed so far

  • Environment setup via

    > . gsisimlogin [-vmc] dev/new/prod/version

  • See also http://www-linux.gsi.de/~gsisim/g4vmc.html

  • Responsible person: Kilian Schwarz



Software at GSI: event generators (task: simulation)

  • Installed at: /d/alice04/alisoft/PPR/evgen

  • Available:

    - Pythia5

    - Pythia6

    - Venus

  • Responsible person: Kilian Schwarz



Software at GSI: AliEn (the ALICE Grid Environment)

  • Currently being set up in version 2 (AliEn2)

  • Installed at: /u/aliprod/alien

  • Idea: global production and analysis

  • Environment setup via . .alienlogin

  • Copy certs from /u/aliprod/.globus or register own certs

  • Usage: /u/aliprod/bin/alien (proxy-init/login)

  • Then: register files and submit grid-jobs

  • Or: directly from ROOT !!!

  • Status: global AliEn2 production testbed currently being set up.

  • Will be used for LCG SC3 in September

  • Individual analysis of globally distributed Grid data at the latest during LCG SC4 2006 via AliEn/LCG/PROOF

  • Non-published analysis is possible already now:

    - create an AliEn-ROOT collection (an xml file readable via AliEn)

    - analyse via ROOT/PROOF: TFile::Open(“alien://alice/cern.ch/production/…”)

    - a web frontend is being created via ROOT/Qt

  • Responsible person: Kilian Schwarz
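The collection-based workflow above can be sketched as a shell session. The xml layout shown here is only illustrative (the real schema is defined by AliEn), and the `run001/AliESDs.root` path is a hypothetical example, not an actual catalogue entry:

```shell
# Illustrative sketch of an AliEn-ROOT collection file; the element and
# attribute names are assumptions, not the authoritative AliEn schema.
cat > collection.xml <<'EOF'
<?xml version="1.0"?>
<alien>
  <collection name="example">
    <event name="1">
      <file name="AliESDs.root" turl="alien://alice/cern.ch/production/run001/AliESDs.root"/>
    </event>
  </collection>
</alien>
EOF

# ROOT would then resolve each turl and open the file, e.g.
#   TFile::Open("alien://alice/cern.ch/production/run001/AliESDs.root")
grep -c 'turl="alien://' collection.xml
```

ROOT/PROOF reads such a collection entry by entry, so the same xml file drives both interactive and parallel analysis.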


AliEn2 services (see http://alien.cern.ch)

[Diagram: the ALICE VO central services (user authentication, file catalogue, workload management, job submission, configuration, job monitoring, central task queue, accounting, storage element(s) DB) communicate with the AliEn site services (computing element, storage element, data transfer, cluster monitor), which integrate with the existing site components (local scheduler, disk and MSS)]



Software at GSI: Globus

  • Installed at: /usr/local/globus2.0 and

    /usr/local/grid/globus

  • Versions: Globus 2.0 and 2.4

  • Idea: can be used to send batch jobs to GridKa (far more resources available than at GSI)

  • Environment setup via: . globuslogin

  • Usage:

    > grid-proxy-init (Grid certificate needed !!!)

    > globus-job-run / globus-job-submit alice.fzk.de <Grid/batch job>

  • Responsible person: Victor Penso/Kilian Schwarz



GermanGrid CA

How to get a certificate in detail:

See http://wiki.gsi.de/Grid/DigitalCertificates



Software at GSI: LCG

  • Installed at: /usr/local/grid/lcg

  • Newest version: LCG2.5

  • Idea: global batch farm

  • Environment setup: . lcglogin

  • Usage:

    > grid-proxy-init (Grid certificate needed !!!)

    > edg-job-submit <jdl-file> (batch/Grid job)

  • See also: http://wiki.gsi.de/Grid

  • Responsible person: Victor Penso, Anar Manafov, Kilian Schwarz
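edg-job-submit takes a JDL file describing the job. A minimal sketch in the EDG JDL syntax (the executable and sandbox file names here are illustrative, not a GSI-specific recipe):

```shell
# Minimal JDL sketch for edg-job-submit; attribute names follow the EDG
# JDL conventions, the concrete job is just /bin/hostname as a placeholder.
cat > hello.jdl <<'EOF'
Executable    = "/bin/hostname";
StdOutput     = "std.out";
StdError      = "std.err";
OutputSandbox = {"std.out", "std.err"};
EOF

# Submission then requires a valid Grid proxy:
#   grid-proxy-init
#   edg-job-submit hello.jdl
grep -c '=' hello.jdl
```

The output sandbox lists the files that are shipped back to the user when the job finishes.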



LCG: the LHC Computing Grid project (with ca. 11k CPUs, the world’s largest Grid testbed)



Software at GSI: PROOF

  • Installed at: /usr/local/pub/debian3.0/gcc323-00/rootmgr

  • Newest version: ROOT 502-00

  • Idea: parallel analysis of larger data sets for quick/interactive results

  • Personal PROOF Cluster at GSI, integrated in batch farm, can be set up via

    > prooflogin <parameters> (e.g. number of slaves, data to be analysed, -h (help))

  • See also: http://wiki.gsi.de/Grid/TheParallelRootFacility

  • Later: a personal PROOF cluster spanning GSI and GridKa via Globus will be possible

  • Later: a global PROOF cluster via AliEn/D-Grid will be possible

  • Responsible person: Carsten Preuss, Robert Manteufel, Kilian Schwarz


Parallel Analysis of Event Data

[Diagram: a local PC running a ROOT session connects to a remote PROOF cluster; one proof process acts as master server, the others as slave servers (node1–node4), each holding *.root files opened via TFile/TNetFile; ana.C is executed on the slaves and stdout/objects are returned to the client]

The cluster nodes are listed in a proof.conf file:

    #proof.conf
    slave node1
    slave node2
    slave node3
    slave node4

Session on the local PC (built up in three steps on the slide):

    $ root
    root [0] tree.Process(“ana.C”)
    root [1] gROOT->Proof(“remote”)
    root [2] dset->Process(“ana.C”)



LHC Computing Model (MONARC and cloud)

  • One Tier 0 site at CERN for data taking

    ALICE (Tier 0+1) in 2008: 500 TB disk (8%), 2 PB tape, 5.6 MSI2K (26%)

  • Multiple Tier 1 sites for reconstruction and scheduled analysis:

    3 PB disk (46%), 3.3 PB tape, 9.1 MSI2K (42%)

  • Tier 2 sites for simulation and user analysis:

    3 PB disk (46%), 7.2 MSI2K (33%)
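As a sanity check (not on the slide itself), the quoted percentages are consistent with the ALICE-wide totals implied by the tier numbers above:

```shell
# Recompute the tier shares from the absolute disk (PB) and CPU (MSI2K)
# figures quoted on the slide; percentages are rounded to integers.
awk 'BEGIN {
  disk_t01 = 0.5; disk_t1 = 3.0; disk_t2 = 3.0   # PB of disk per tier level
  cpu_t01  = 5.6; cpu_t1  = 9.1; cpu_t2  = 7.2   # MSI2K of CPU per tier level
  disk_tot = disk_t01 + disk_t1 + disk_t2
  cpu_tot  = cpu_t01 + cpu_t1 + cpu_t2
  printf "disk total: %.1f PB (T0+1 %.0f%%, T1 %.0f%%, T2 %.0f%%)\n",
         disk_tot, 100*disk_t01/disk_tot, 100*disk_t1/disk_tot, 100*disk_t2/disk_tot
  printf "CPU total: %.1f MSI2K (T0+1 %.0f%%, T1 %.0f%%, T2 %.0f%%)\n",
         cpu_tot, 100*cpu_t01/cpu_tot, 100*cpu_t1/cpu_tot, 100*cpu_t2/cpu_tot
}'
```

This reproduces the 8%/46%/46% disk split and the 26%/42%/33% CPU split quoted on the slide.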



ALICE Computing Model in more detail:

  • T0 (CERN): long term storage for raw data, calibration and first reconstruction

  • T1 (5, in Germany GridKa): long term storage of second copy of raw data, 2 subsequent reconstructions, scheduled analysis tasks, reconstruction of MC Pb-Pb data, long term storage of data processed at T1s and T2s

  • T2 (many, in Germany GSI): generation and reconstruction of simulated MC data, plus chaotic analysis

  • T0/T1/T2: short term storage in multiple copies of active data

  • T3 (many, in Germany Münster, Frankfurt, Heidelberg, GSI): chaotic analysis



CPU requirements and Event size



ALICE Tier resources



GridKa (1 of 5 T1s): IN2P3, CNAF, GridKa, NIKHEF, (RAL), Nordic, USA (effectively ~5)

Ramp-up: due to shorter runs and reduced luminosity at the beginning, not the full resources are needed: 20% in 2007, 40% in 2008, 100% by the end of 2008



GSI + T3 (support for the 10% German ALICE members)

T3: Münster, Frankfurt, Heidelberg, GSI


  • Login