
ATLAS and GridPP


Presentation Transcript


  1. ATLAS and GridPP GridPP Collaboration Meeting, Edinburgh, 5th November 2001 RWL Jones, Lancaster University

  2. ATLAS Needs • Long-term, ATLAS needs a fully Grid-enabled Reconstruction, Analysis and Simulation environment • Short-term, the first ATLAS priority is a Monte Carlo production system, building towards the full system • ATLAS has an agreed program of Data Challenges (based on MC data) to develop and test the computing model RWL Jones, Lancaster University

  3. Data Challenge 0 • Runs from October-December 2001 • Continuity test of the MC code chain • Only modest samples of 10⁵ events, essentially all in flat file format • All the Data Challenges will be run on Linux systems • Compilers are distributed with the code if the correct version is not already installed locally RWL Jones, Lancaster University

  4. Data Challenge 1 • Runs in the first half of 2002 • Several sets of 10⁷ events (high-level trigger studies, physics analysis) • Intend to generate and store 8 Tbytes in the UK, 1-2 Tbytes of it in Objectivity • Will use the M9 DataGrid deliverables and as many other Grid tools as time permits • Tests of distributed reconstruction and analysis • Tests of database technologies RWL Jones, Lancaster University

  5. Data Challenge 2 • Runs in the first half of 2003 • Will generate several samples of 10⁸ events • Mainly in OO databases • Full use of Testbed 1 and Grid tools • Complexity and scalability tests of the distributed computing system • Large-scale distributed physics analysis using Grid tools, calibration and alignment RWL Jones, Lancaster University

  6. LHC Computing Model (Cloud) • [Diagram: CERN ("The LHC Computing Centre") at the centre, surrounded by Tier 1 centres in the UK, USA (Brookhaven, FermiLab), France, Italy, Germany and NL; Tier 2 centres (labs and universities) hang off the Tier 1s, with physics-department desktops at the edge] RWL Jones, Lancaster University
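
A minimal sketch of the tiered "cloud" topology in the diagram above, assuming a simple Site data structure; the class, the fan-out and the per-region Tier 2 naming are illustrative only, not the agreed ATLAS topology.

    # Minimal sketch of the tiered "cloud" topology; illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Site:
        name: str
        tier: int                      # 0 = CERN, 1 = national centre, 2 = Tier 2, 3 = desktop
        children: list = field(default_factory=list)

    cern = Site("CERN (The LHC Computing Centre)", 0)
    for region in ["UK", "USA (Brookhaven)", "USA (FermiLab)", "France", "Italy", "Germany", "NL"]:
        tier1 = Site(region, 1)
        # each Tier 1 feeds Tier 2 centres (labs/universities), which feed desktops
        tier2 = Site(region + " Tier 2 (lab/university)", 2)
        tier2.children.append(Site("Physics-department desktop", 3))
        tier1.children.append(tier2)
        cern.children.append(tier1)

    def walk(site, depth=0):
        """Print the hierarchy with one level of indentation per tier."""
        print("  " * depth + "Tier %d: %s" % (site.tier, site.name))
        for child in site.children:
            walk(child, depth + 1)

    walk(cern)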

  7. Implications of Cloud Model • Internal: need cost sharing between global regions within collaboration • External (on Grid services): Need authentication/accounting/priority on the basis of experiment/region/team/local region/user • Note: The NW believes this is a good model for tier-2 resources as well. RWL Jones, Lancaster University
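
A hedged sketch of what the authentication/accounting/priority requirement could look like in practice: a policy lookup keyed on experiment, region, team and user. The table and numbers below are invented for illustration; a real system would take such policies from the Grid middleware.

    # Illustrative only: a priority/quota lookup keyed on experiment / region / team / user.
    POLICY = {
        ("ATLAS", "UK"): {"priority": 3, "cpu_share": 0.10},
        ("ATLAS", "UK", "NW"): {"priority": 5, "cpu_share": 0.04},
    }
    DEFAULT = {"priority": 1, "cpu_share": 0.0}

    def lookup(experiment, region, team=None, user=None):
        """Return the most specific matching policy, falling back to coarser keys."""
        keys = [(experiment, region, team, user),
                (experiment, region, team),
                (experiment, region)]
        for key in keys:
            key = tuple(k for k in key if k is not None)
            if key in POLICY:
                return POLICY[key]
        return DEFAULT

    print(lookup("ATLAS", "UK", team="NW"))   # -> the ("ATLAS", "UK", "NW") entry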

  8. ATLAS Software • Late in moving to OO, as the physics TDR etc. were given high priority • Generation and reconstruction are now done in the C++/OO Athena framework • Detector simulation is still in transition to OO/C++/Geant4; DC1 will still use G3 • Athena is a common framework with LHCb's Gaudi RWL Jones, Lancaster University

  9. Simulation software for DC1 • Particle-level simulation: Athena GeneratorModules (C++, Linux) running Pythia 6 plus code dedicated to B-physics; PYJETS converted to HepMC; EvtGen (BaBar package) to be added later • Fast detector simulation: Atlfast++ reads HepMC and produces ntuples • Full detector simulation: Dice (slug + Geant3, Fortran) produces GENZ+KINE banks (ZEBRA) • Reconstruction: C++ code reads GENZ+KINE, converts to HepMC and produces ntuples RWL Jones, Lancaster University

  10. Requirement Capture • Extensive use case studies: “ATLAS Grid Use Cases and Requirements”, 15/X/01 • Many more could be developed, especially in the monitoring areas • Short-term use cases centred on immediate MC production needs • Obvious overlaps with LHCb – joint projects • Three main projects defined: “Proposed ATLAS UK Grid Projects”, 26/X/01 RWL Jones, Lancaster University

  11. Grid User interface for Athena • Completely common project with LHCb • Obtains resource estimates and applies quota and security policies • Query installation tools • Correct software installed? Install if not • Job submission guided by resource broker • Run-time monitoring and job deletion • Output to MSS and bookkeeping update RWL Jones, Lancaster University
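
A minimal sketch of how the interface above might sequence these steps, assuming hypothetical helper objects (job, broker, mss, bookkeeping); every name below is invented for illustration and is not the actual joint ATLAS/LHCb (Ganga) interface.

    # Hypothetical sketch of the submission sequence above; all names are invented.

    def submit_athena_job(job, broker, mss, bookkeeping):
        estimate = job.estimate_resources()           # CPU, disk, number of events
        if not job.within_quota(estimate):            # quota and security policies
            raise PermissionError("job exceeds quota")

        site = broker.choose_site(estimate)           # submission guided by the resource broker
        if not site.has_release(job.release):         # query installation tools
            site.install(job.release)                 # install the software if missing

        handle = site.run(job)
        while not handle.finished():                  # run-time monitoring...
            if handle.cancel_requested():             # ...and job deletion on request
                handle.kill()
                return None
            handle.wait(seconds=60)

        mss.store(handle.output_files())              # output to mass storage (MSS)
        bookkeeping.record(job, handle)               # bookkeeping update
        return handle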

  12. Installation Tools • Tools to automatically generate installation kits, deploy using Grid tools and install at remote sites via Grid job • Should be integrated with a remote autodetection service for installed software • Initial versions should cope with pre-built libraries and executables • Should later deploy development environment • ATLAS and LHCb build environments converging on CMT – some commonality here RWL Jones, Lancaster University
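
A sketch of the autodetection-plus-install step, assuming an invented kit layout (one tarball per release under a kit directory, unpacked into an install prefix); neither the paths nor the function name come from the actual ATLAS/LHCb or CMT tools.

    # Sketch only: detect whether a release is installed and, if not, unpack a
    # pre-built kit.  Paths and naming scheme are placeholders.
    import os
    import tarfile

    def ensure_release(release, prefix="/opt/experiment-sw", kit_dir="/data/kits"):
        """Return the install path of `release`, unpacking its kit if absent."""
        target = os.path.join(prefix, release)
        if os.path.isdir(target):                # autodetection: release already present
            return target
        kit = os.path.join(kit_dir, release + ".tar.gz")
        os.makedirs(target, exist_ok=True)
        with tarfile.open(kit) as tar:           # initial kits: pre-built libraries/executables
            tar.extractall(path=target)
        return target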

  13. MC Production System • For DC1, will use existing MC production system (G3), integrated with M9 tools • (Aside: M9/WP8 validation and DC kit development in parallel) • Decomposition of MC system into components: Monte Carlo job submission, bookkeeping services, metadata catalogue services, monitoring and quality-control tools • Bookkeeping and data-management projects already ongoing – will work in close collaboration, good link with US projects • Close link with Ganga developments RWL Jones, Lancaster University
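
The decomposition above can be made concrete as a set of component interfaces; the method names below are invented to illustrate the split and do not reproduce the real bookkeeping, metadata catalogue or US project interfaces.

    # Illustrative interfaces for the components listed above; names are invented.
    from abc import ABC, abstractmethod

    class JobSubmission(ABC):
        @abstractmethod
        def submit(self, job_description): ...

    class Bookkeeping(ABC):
        @abstractmethod
        def record(self, job_id, status, output_files): ...

    class MetadataCatalogue(ABC):
        @abstractmethod
        def register(self, dataset, attributes): ...

    class Monitoring(ABC):
        @abstractmethod
        def report(self, job_id, metrics): ...

    class QualityControl(ABC):
        @abstractmethod
        def check(self, output_files): ...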

  14. Allow regional management of large productions • Job script and steering generated • Remote installation as required • Production site chosen by resource broker. • Generate events and store locally • Write log to web • Copy data to local/regional store through interface with Magda (data management). • Copy data from local storage to remote MSS • Update book-keeping database RWL Jones, Lancaster University
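
A sketch of the production steps above gathered into one driver, assuming placeholder services (broker, site, magda, mss, bookkeeping); the calls are illustrative and do not correspond to the real Magda or book-keeping APIs.

    # Sketch of the regional production workflow above; all helpers are placeholders.

    def run_production(request, broker, magda, mss, bookkeeping):
        script = request.build_script()                  # job script and steering generated
        site = broker.choose_site(request)               # production site chosen by the broker
        site.ensure_release(request.release)             # remote installation as required

        result = site.run(script)                        # generate events and store locally
        site.publish_log(result.log)                     # write the log to the web

        magda.copy(result.output, site.regional_store)   # local/regional store via Magda
        mss.archive(result.output)                       # then from local storage to remote MSS
        bookkeeping.update(request, result)              # update the book-keeping database
        return result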

  15. Work Area     PMB Allocation (FTE)   Previously Allocated (FTE)   Total Allocation (FTE)
      ATLAS/LHCb    2.0                    0.0                          2.0
      ATLAS         1.0                    1.5                          2.5
      LHCb          1.0                    1.0                          2.0
      This will just allow us to cover the three projects. Additional manpower must be found for monitoring tasks, testing the computing model in DC2, and simply running the Data Challenges. RWL Jones, Lancaster University

  16. WP8 M9 Validation • WP8 M9 validation is now beginning • Glasgow and Lancaster (and RAL?) are involved in the ATLAS M9 validation • The validation exercises the tools using the ATLAS kit • The software used is behind the current version • This is likely to be the case in all future tests (it decouples software changes from tool tests) • A previous test of MC production using Grid tools was a success • DC1 validation (essentially of the ATLAS code): Glasgow and Lancaster, with Cambridge to contribute; Lancaster is working on tests of standard generation and reconstruction quantities to be deployed as part of the kit RWL Jones, Lancaster University
