
ARDA-ALICE activity in 2005 and tasks in 2006

This document provides an overview of the goals, tasks, and results of the ALICE distributed computing tests in 2005, including the configuration and testing of the Russian sites, and of the tasks planned for 2006: data transfers, production, and analysis challenges.


Presentation Transcript


  1. CERN-Russian JWG, CERN, 6.03.06. ARDA-ALICE activity in 2005 and tasks in 2006. G. Shabratova, Joint Institute for Nuclear Research

  2. Goals of ALICE-SC3 • Verification of the ALICE distributed computing infrastructure • Production of meaningful physics: signals and statistics requested by the ALICE PWGs • Not so much a test of the complete computing model as described in the TDR • Functioning of the central AliEn services and their interaction with the services at the remote centres

  3. ARDA+ALICE visits in 2005 GOAL: configuration and testing of the Russian sites for participation in the ALICE PDC'05 / LCG SC3 • November - December 2005 (less than 1 month): installation and testing of the VO-boxes, interaction with LCG, installation of the application software on the VO-boxes; connection to the central services, stability, job submission to the RB with AliEn v.2-5 • Mikalai Kutouski from JINR

  4. ARDA+ALICE visits in 2005 What has been done: • VO-boxes at the ITEP and JINR sites • Installation of the application software (AliEn, ROOT, AliRoot, Geant3) • Test of the Tier1 <-> Tier2 links: FZK (dCache) <-> ITEP (DPM), JINR (DPM)

  5. Running job profile (plot): 2450 concurrent jobs; for the negative slope see Results (4)

  6. Results • 15 sites; CPU utilization 80% T1 / 20% T2 • T1s: CERN 8%, CCIN2P3 12%, CNAF 20%, GridKa 41% • T2s: Bari 0.5%, GSI 2%, Houston 2%, Muenster 3.5%, NIHAM 1%, OSC 2.5%, Prague 4%, Torino 2%, ITEP 1%, SARA 0.1%, Clermont 0.5% • Number of concurrently running jobs: 98% of the target number • Special thanks to K. Schwarz and the GridKa team for making 1200 CPUs available for the test • Duration: 12 hours (1/2 of the target duration) • Jobs done: 2500 (33% of the target number) • Storage: 33% of the target • The number of running jobs (2450) is 25% more than the entire installed lxbatch capacity at CERN

  7. Results (2) • VO-box behaviour • No problems with the services running; no interventions necessary • Load profile on the VO-boxes: on average proportional to the number of jobs running on the site, nothing special (load plots shown for CERN and GridKa)

  8. Physics Data Challenge 12,500 jobs; 100M SI2K hours; 2 TB output. This represents 20% more computing power than the entire CERN batch capacity. Share: FZK 43%, CNAF 23%, CCIN2P3 10%, CERN 8%, Muenster 5%, remaining centres 10%

  9. Tasks in 2006 • February: data transfers T0 -> T1 (CCIN2P3, CNAF, GridKa, RAL) • March: bulk production at T1/T2; data back to T0 • April: first push-out of simulated data; reconstruction at the T1s • May - June: extensive testing on the PPS by all VOs; deployment of gLite 3.0 at the major sites for SC4 production • July - August: reconstruction at CERN and the remote centres • September: scheduled + unscheduled (T2s?) analysis challenges

  10. ARDA+ALICE visits in 2006 GOAL: participation in the preparation and operation of the distributed data production and user analysis in the ALICE PDC'06 / LCG SC4 • 15.05.06 - 30.06.06 => ~1.5 months: familiarization with the ALICE distributed computing environment, with the agents and services running at the computing centres; familiarization with the LCG VO-box environment • 20.07.06 - 20.10.06 => 3.0 months: responsible for the installation, debugging and operation of the analysis environment; coordination of on-site activities with the local system administrators

  11. Tier2 resources available in 2006
  Site         | CPU (MKSI2K) | Disk (TB)  | Tape (TB)  | BW to CERN/T1 (Gb/s)
  USA          | 513 (285%)   | 21 (54%)   | 25 (100%)  | -
  FZU Prague   | 60 (100%)    | 14 (100%)  | 0          | 1
  RDIG         | 240 (48%)    | 10 (6%)    | 0          | 1
  French T2    | 130 (146%)   | 28 (184%)  | 0          | 0.6
  GSI          | 100 (100%)   | 30 (100%)  | 0          | 1
  U. Muenster  | 132 (100%)   | 10 (100%)  | 0          | 1
  Polish T2*   | 198 (100%)   | 7.1 (100%) | 0          | 0.6
  Slovakia     | 25 (100%)    | 5 (100%)   | 0          | 0.6
  Total        | 4232 (134%)  | 913 (105%) | 689 (100%) | -

  12. What we can do, assuming 85% CPU efficiency
  Case   | Number of events | Number of jobs | CPU work [CPU days] | Duration [days] | Data [TB]       | BW [MB/s]
  pp     | 100 M            | 2 M            | 59,500              | 18              | 28 RAW + 3 ESD  | 33
  PbPb   | 1 M              | 1 M            | 172,000             | 51              | 210 RAW + 2 ESD | 33
  Total  |                  | 3 M            | 231,500             | 68              | 238 RAW + 5 ESD | 33

  13. ARDA+ALICE in 2006 (Russia) At this time we have: • VO-boxes with the application software installed at ITEP (Moscow) and JINR (Dubna) • VO-boxes at IHEP (Protvino), INR (Troitsk), KI (Moscow), SPtSU (St. Petersburg) • Under installation at PNPI (Gatchina) and SINP (Moscow)

  14. ALICE in 2006 Computing resources

  15. AliRoot: Execution Flow (diagram) • AliSimulation: initialization -> event generation -> particle transport -> hits -> summable digits (with event merging) -> digits / raw digits • AliReconstruction: clusters -> tracking -> PID -> ESD • Analysis on the ESD output
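
For illustration, the same flow can be steered from a ROOT macro. The sketch below uses the public AliSimulation / AliReconstruction steering classes and assumes an AliRoot-enabled ROOT session with a Config.C in the working directory; the event count and file name are placeholders.

    // sim_rec.C -- minimal AliRoot steering sketch (illustrative; assumes an
    // AliRoot-enabled ROOT session and a Config.C describing the detector setup).
    void sim_rec(Int_t nEvents = 10)
    {
      // Simulation: event generation, particle transport, hits,
      // summable digits and digits / raw digits.
      AliSimulation sim("Config.C");
      sim.Run(nEvents);

      // Reconstruction: clusterization, tracking, PID, ESD output.
      AliReconstruction rec;
      rec.Run();
    }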

  16. Analysis • Documentation pre-released: http://project-arda-dev.web.cern.ch/project-arda-dev/alice/apiservice/ • Services opened to expert users at the beginning of February • Debugging is progressing • Good performance in terms of functionality; not fully stable yet; very good user support • Services opened to more non-expert users at the end of last week

  17. ALICE Analysis Basic Concepts • Analysis models • Prompt analysis at T0 using the PROOF (+ file catalogue) infrastructure • Batch analysis using the GRID infrastructure • Interactive analysis using the PROOF (+ GRID) infrastructure • User interface • ALICE users access any GRID infrastructure via the AliEn or ROOT/PROOF UIs • AliEn • Native and "GRID on a GRID" (LCG/EGEE, ARC, OSG) • Integrate as many common components as possible: LFC, FTS, WMS, MonALISA, ... • PROOF/ROOT • Single- and multi-tier static and dynamic PROOF clusters • GRID API class TGrid (virtual) -> TAlien (real)
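
As a rough illustration of the ROOT-side user interface, the sketch below connects to AliEn through the virtual TGrid API and runs a file-catalogue query; the catalogue path and file pattern are hypothetical.

    // grid_query.C -- connect to AliEn from ROOT via the virtual TGrid API (sketch).
    // The catalogue path and file pattern below are placeholders.
    void grid_query()
    {
      // "alien://" selects the TAlien plugin behind the virtual TGrid interface.
      TGrid::Connect("alien://");
      if (!gGrid) {
        Printf("Could not connect to AliEn");
        return;
      }

      // File-catalogue query: all ESD files below a (hypothetical) production path.
      TGridResult *result = gGrid->Query("/alice/sim/2005", "AliESDs.root");
      if (result) result->Print("all");   // list what the catalogue returned
    }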

  18. Distributed analysis (diagram) A file catalogue query selects the data set (ESDs, AODs) for a user job spanning many events. The Job Optimizer splits the job into sub-jobs 1..n, grouped by the SE location of the files. The Job Broker submits each sub-job to a CE with the closest SE, where it is processed. The sub-jobs produce output files 1..n, which a file-merging job combines into the final job output.

  19. Batch Analysis: input • Input files • Downloaded into the local job sandbox: macros, configuration, ... • Input data • Created from catalogue queries • Stored as ROOT objects (TChain, TDSet, TAlienCollection) in a registered GRID file • Stored in XML file format in a registered GRID file • Stored in a regular AliEn JDL • GRID jobs do not stage input data into the job sandbox (no download); data are accessed on demand • GRID jobs access input data via the "xrootd" protocol using the TAlienFile class implementation in ROOT: TFile::Open("alien://alice/...../Kinematics.root");
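
A minimal sketch of this access pattern is shown below, assuming the ESD tree is named "esdTree" and using invented LFNs; the point is that the data are read remotely over xrootd rather than copied into the sandbox.

    // esd_chain.C -- input data accessed via LFNs over xrootd (sketch).
    // The LFNs are placeholders; "esdTree" is assumed to be the ESD tree name.
    void esd_chain()
    {
      TGrid::Connect("alien://");   // required before any alien:// file access

      // Open a single catalogue entry directly (read through xrootd underneath).
      TFile *f = TFile::Open("alien://alice/sim/2005/run1/AliESDs.root");
      if (f) f->ls();

      // Or chain several ESD files without downloading them into the sandbox.
      TChain chain("esdTree");
      chain.Add("alien://alice/sim/2005/run1/AliESDs.root");
      chain.Add("alien://alice/sim/2005/run2/AliESDs.root");
      Printf("Chained %lld entries", chain.GetEntries());
    }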

  20. ALICE Analysis - File Access from ROOT: "all files accessible via LFNs"

  21. ALICE AliEn Batch Analysis: Scheduling • After optimization, a Job Scheduler periodically assigns priorities to the jobs in the Task Queue (TQ) • Scheduling is defined per user, based on a reference and a maximum number of parallel jobs • Avoids flooding of the TQ and the resources by a single user submitting many jobs • Dynamic configuration • Users can be privileged or blocked in the TQ by a system administrator • Example of Task Queue reordering: before, IDs 1-4 all have priority -1; after the Job Scheduler runs, ID 4 has priority 10, ID 2 priority 5, ID 1 priority 3, ID 3 priority 1
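
The snippet below is a purely conceptual sketch of such a priority calculation, not the actual AliEn Job Scheduler code: a user below the reference share gets a boosted priority, and a user at the maximum number of parallel jobs is blocked.

    // Conceptual priority calculation (illustration only, not the AliEn code).
    #include <algorithm>

    struct UserShare {
      int running;    // jobs currently running for this user
      int reference;  // target number of parallel jobs
      int maxJobs;    // hard cap on parallel jobs
    };

    int ComputePriority(const UserShare &u)
    {
      if (u.running >= u.maxJobs) return -1;        // blocked in the TQ
      // The further below the reference share, the higher the priority.
      return std::max(1, u.reference - u.running);
    }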

  22. Batch Analysis via Agents on heterogeneous GRIDs • Requirements to run AliEn as a "GRID on a GRID" • Provide a few (ideally one) user logins per VO • Install the agent software • Start up agents via queue/broker systems, or run them as permanent daemons • Access to the local storage element • All data access from the application via xrootd • Run "xrootd" as a front-end daemon to any mass storage system • Ideally via the SRM interface, in read-write mode • Enforce strong authorization through file catalogue tokens • Run "xrootd" with every JobAgent / WN as an analysis cache • Read-only mode • Strong authorization only for specific secure MSS paths => "public access SE"

  23. ALICE Batch Analysis via agents in heterogeneous GRIDs (diagram) The Job Optimizer splits a job whose JDL InputData lists the files /alice/file1.root ... /alice/file7.root into sub-jobs, each with its own JDL InputData and a corresponding TAlienCollection XML file. At sites A, B and C a Job Agent runs ROOT, which accesses the local MSS through xrootd.

  24. Interactive Analysis Model: PROOF • Four different use cases to consider • Local setups • Conventional single-tier PROOF clusters at sites for interactive analysis (data pre-staged on the cluster disks) • Site autonomy; site policies apply • Manual work for data deployment, but quite easy to do • Integrate single-tier PROOF clusters into AliEn • A permanent PROOF cluster (proofd + xrootd) is registered as a read-only storage element in AliEn, working on an MSS backend • PROOF chains are queried from the AliEn File Catalogue • Data files are located in the xrootd cache using the xrootd redirector

  25. Interactive Analysis Model: PROOF • Multi-tier static setup • Permanent PROOF clusters are configured in a multi-tier structure • A PROOF analysis chain is queried directly from the File Catalogue • A chain is looked up by the sub-masters using the local xrootd redirectors • During an analysis query the PROOF master assigns analysis packets to the sub-masters whose workers have the right (= local) data accessible • Multi-tier dynamic setup • Everything as in the static setup, but: • proofd/xrootd are started up as jobs for a specific user in several tiers using the AliEn Task Queue • ... or proofd/xrootd are started up as generic agents by a Job Agent; the assignment of proofd to a specific user then has to be implemented in the PROOF master
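
Whatever the cluster topology, the user-side interaction looks roughly like the sketch below, which assumes a hypothetical PROOF master host, an xrootd redirector and a user-supplied selector; in the AliEn-integrated setups the data set would be built from a file-catalogue query.

    // proof_query.C -- submit an analysis query to a PROOF cluster (sketch).
    // Master host, redirector and selector names are placeholders.
    void proof_query()
    {
      TProof::Open("proof-master.example.org");   // sets the global gProof pointer
      if (!gProof) return;

      // Describe the input: trees named "esdTree" behind an xrootd redirector.
      TDSet *dset = new TDSet("TTree", "esdTree");
      dset->Add("root://xrootd-redirector.example.org//alice/run1/AliESDs.root");
      dset->Add("root://xrootd-redirector.example.org//alice/run2/AliESDs.root");

      // Run a user-supplied TSelector on the workers.
      gProof->Process(dset, "MyEsdSelector.C+");
    }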

  26. PROOF@GRID: multi-tier hierarchical setup with an xrootd read-cache. Depending on the catalogue model, LFNs can be resolved either by the PROOF master using a centralized file catalogue, or only indexed by SE and resolved by the sub-masters using local file catalogues. (Diagram: the client connects to the PROOF master, which holds the storage index catalogue and talks to sub-masters at Site 1 and Site 2; each site runs proofd and xrootd in front of its MSS and a local file catalogue.)

  27. Services structure • Phase 1: event production and storage at CERN • Phase 2: test of the file transfer utilities (FTS) • Phase 3: analysis, batch and interactive with PROOF • Central services: catalogue, task queue, job optimization, etc. (Diagram: AliEn CE/SE instances at the sites handle file registration to the LCG SE/SRM and job submission through an LCG UI and the LCG RB to the LCG CEs.)

  28. Courtesy of I. Bird, LCG GDB, May 2005: VO "Agents & Daemons" • VO-specific services/agents • Appeared in the discussions of FTS, catalogues, etc. • ... all experiments need the ability to run "long-lived agents" on a site, at Tier 1 and at Tier 2 • How do they get machines for this, who runs them, and can we make a generic service framework?

  29. ALICE & LCG Service Challenge 3 • AliEn and monitoring agents and services running on the VO node: • Storage Element Service (SES): interface to the local storage (via SRM or directly) • File Transfer Daemon (FTD): scheduled file transfer agent (possibly using the FTS implementation) • xrootd: application file access • Cluster Monitor (CM): local queue monitoring • MonALISA: general monitoring agent • PackMan (PM): application software distribution and management • Computing Element (CE)

  30. Issues (1) • VO-box support/operation: • Experts are needed for site installation, tuning and operation • Patricia is validating the LCG installation on the VO-boxes • Stefano and Pablo are installing and tuning the ALICE agents/services • Mikalai is responsible for the Russian sites, Artem for France, Kilian for Germany • An instruction manual on LCG-AliEn interoperability and site tuning (Stefano's idea) will help a lot to speed up the process

  31. PSS schedule for gLite 3.0 (timeline February - June: certification -> PPS -> deployment in production; "YOU ARE HERE" marks early March) • Tuesday 28/2/06: gLite 3.0β exits certification and enters the PPS • Wednesday 15/3/06: gLite 3.0β available to users in the PPS • Friday 28/4/06: gLite 3.0 exits the PPS and enters production • Thursday 1/6/06: SC4 starts! • During the deployment of gLite 3.0β in the PPS, patches for bugs are continually passed to the PPS

  32. In the case of ALICE • Some points to take into account: • A UI is required inside the VO-BOX • The UI configuration changes: it is a combination of the LCG and gLite UIs • If you want to run something like glite-job-submit from the VO-BOX, you have to include the new UI • The FTS and LFC clients are required inside as well • Next Monday there is a meeting here to clarify the status of these services with the experts and the sites • More details at the next TF meeting, after that meeting

  33. FTD-FTS integration • FTD supports several transfer back-ends: gridftp, bbftp, ..., and now fts • It works! Several transfers have been done successfully (Diagram: FTD talks to the FTS endpoint, using the BDII for endpoint discovery and MyProxy for credentials.)

  34. Current limitations • Only for specified channels • Only for SRM SEs • At the moment, only CNAF and CERN • The myproxy password is required • At the moment, my certificate… • GGUS response time • It takes quite long (more than 15 hours!) to assign the tickets to the responsible party • Once assigned, tickets are solved pretty fast • Endpoints have to be defined in the BDII
