
ATLAS Progress Report: Trigger, Computing, and Data Preparation

This report provides updates on the installation, commissioning, and operation of the ATLAS detector systems. It also includes information on other matters such as forward detectors, collaboration and management, and early LHC physics goals.



Presentation Transcript


  1. CERN-RRB-2008-083, 10th November 2008
ATLAS Progress Report (Part II)
(For the installation completion and hardware commissioning of the detector systems, as well as the LHC shutdown planning, see Marzio Nessi in Part I)
- Trigger, computing, and data preparation
- Brief account of other matters: forward detectors, operation tasks
- sLHC upgrade organization and planning
- Collaboration and management
- Status of completion planning
- Examples of early LHC physics goals

  2. ATLAS organization for experiment execution (chart):
CB (with Publications Committee and Speakers Committee) and the ATLAS management (SP, Deputy SPs, RC, TC): collaboration management, experiment execution, strategy, publications, resources, upgrades, etc.
The Executive Board and TMB coordinate the activity areas:
- Detector Operation (Run Coordinator): detector operation during data taking, online data quality, …
- Trigger (Trigger Coordinator): trigger data quality, performance, menu tables, new triggers, …
- Computing (Computing Coordinator): SW infrastructure, GRID, data distribution, …
- Data Preparation (Data Preparation Coordinator): offline data quality, first reconstruction of physics objects, calibration, alignment (e.g. with Z→ℓℓ data)
- Physics (Physics Coordinator): optimization of algorithms for physics objects, physics channels
- (Sub-)systems: responsible for the operation and calibration of their sub-detector and for sub-system-specific software

  3. ATLAS Trigger/DAQ data flow (diagram):
- First-level trigger and RoI Builder (UX15): data of events accepted by the first-level trigger are pushed at ≤ 100 kHz from the ATLAS detector, via the VME Read-Out Drivers (RODs) and 1600 Read-Out Links (fragments of ~1 kByte each), into ~150 Read-Out Subsystem (ROS) PCs; the Timing Trigger Control (TTC) distributes the trigger, and Regions of Interest go to LVL2 over dedicated links
- Second-level trigger (SDX1): the LVL2 Supervisor steers a farm of ~500 dual-CPU nodes, which pull partial event data at ≤ 100 kHz from the ROSs over Gigabit Ethernet; the pROS stores the LVL2 output
- Event Builder: ~100 SubFarm Inputs (SFIs) pull full events at ~3 kHz, coordinated by the DataFlow Manager (which also issues delete commands) across network switches
- Event Filter (EF): ~1600 dual-CPU nodes reduce the event rate to ~200 Hz; ~30 SubFarm Outputs (SFOs) write to local storage and forward the data to the CERN computer centre
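Plugging the rates and fragment size quoted above into a few lines of Python shows why the system pulls only partial events at the full LVL1 rate; this is a back-of-envelope sketch, and the derived bandwidths are estimates rather than official TDAQ figures:

```python
# Back-of-envelope TDAQ data-flow budget, using only the rates quoted
# on the slide; derived bandwidths are illustrative estimates.

FRAGMENTS = 1600                        # Read-Out Links, ~1 kB fragment each
EVENT_MB = FRAGMENTS * 1.0 / 1024.0     # ~1.6 MB per full event

LVL1_HZ = 100_000   # events pushed into the ROSs at <= 100 kHz
LVL2_HZ = 3_000     # full events built at ~3 kHz after LVL2
EF_HZ   = 200       # events written to storage at ~200 Hz after the EF

print(f"event size:            {EVENT_MB:.2f} MB")
print(f"full readout at LVL1:  {LVL1_HZ * EVENT_MB / 1024:.0f} GB/s")
print(f"event building (LVL2): {LVL2_HZ * EVENT_MB / 1024:.1f} GB/s")
print(f"to storage (EF):       {EF_HZ * EVENT_MB:.0f} MB/s")
```

Building full events at 100 kHz would mean moving roughly 150 GB/s, which is exactly why LVL2 requests only Region-of-Interest fragments at that rate; full event building waits for the ~3 kHz LVL2 accept rate.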

  4. Full Dress Rehearsal (FDR)
- Played data through the computing system just as for real data from the LHC: started at the ATLAS output disk at Point 1, like the real data; processed at the CERN Tier-0 (and CAF) with various calibration and data-quality steps; then shipped out to the Tier-1 and Tier-2 sites for physics analysis
- Complementary to the “milestone runs”, which test the real detector, but only with cosmic rays
- The two FDR runs (February and June-July) were a vital preparation for processing and analysing the first LHC data

  5. wLCG Grid: Tier-0 and the 10 ATLAS Tier-1s

  6. ATLAS during the Common Computing Readiness Challenge, CCRC Phase 2: data transfers from the Tier-0 to the Tier-1s sustained the nominal peak level (~1 GB/s) over 3 days

  7. Number of world-wide ATLAS production jobs per day from 1 May to 5 September 2008

  8. Excitement in the ATLAS Detector Control Room: The first LHC event on 10th September 2008

  9. … as well as in the ATLAS Tier-0 and Data Quality Control Rooms: Reconstruction follow-up and analysis of the first LHC events

  10. The very first beam-splash event from the LHC in ATLAS, at 10:19 on 10th September 2008 (online and offline displays)

  11. A busy beam-halo event from the start-up day, with tracks bent in the toroids (offline display)

  12. LVL1 system
- The system is fully installed, but a large programme of work remains to commission it with beam
- Much work has been done with cosmic rays, test pulses, etc
- A good start has already been made with single beam, starting on 10th September: beam pickups, minimum-bias scintillators (MBTS), LUCID, BCM, etc
- Some aspects of commissioning can only be done with collision data, e.g. the detailed time alignment of the barrel muon trigger

  13. Timing-in the trigger
- The experiment timing is currently based on the beam-pickup (“BPTX”) reference; the first task of the LVL1 central trigger team on 10th September was to commission the beam pickups
- The times of arrival of the other triggers are being adjusted to match; the plots show the evolution from 10 September to 12 September (note the change of scale, and that the RPC had not yet been adjusted)
- The downstream side is timed in so that single beam has timing similar to collisions (a toy illustration follows below)
- Each LVL1 sub-system also needs to be timed in internally: L1-calo, L1-RPC, L1-TGC, MBTS, etc
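To see why only the downstream side can be given collision-like timing with a single beam, consider a toy time-of-flight model; this is an illustration only, not the LVL1 procedure, and it assumes straight-line flight at the speed of light with a made-up chamber position:

```python
# Toy time-of-flight comparison: beam-splash particles vs. collision
# products. Straight-line flight at c is assumed, and the z-position of
# the trigger chambers (14 m) is illustrative, not an ATLAS geometry value.

C = 0.2998  # speed of light in metres per nanosecond

def collision_arrival_ns(z_m):
    """Particles from a collision at the IP reach position z after |z|/c."""
    return abs(z_m) / C

def splash_arrival_ns(z_m, beam_dir=+1):
    """Splash particles travel with the beam; measured from the moment the
    bunch crosses z = 0, they reach position z at (z * beam_dir)/c."""
    return (z_m * beam_dir) / C

for z in (+14.0, -14.0):  # hypothetical chambers, beam moving towards +z
    dt = splash_arrival_ns(z) - collision_arrival_ns(z)
    side = "downstream" if z > 0 else "upstream"
    print(f"z = {z:+5.1f} m ({side}): offset vs collision timing {dt:+6.1f} ns")
```

The downstream side comes out with zero offset, while the upstream side is off by 2|z|/c, here ~93 ns, i.e. several 25 ns bunch crossings; hence the slide's remark about timing in the downstream side to mimic collisions.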

  14. High-Level Trigger
- The LVL2 and Event Filter processor system is installed; the full processing power will be added later
- The HLT has been used routinely online, with cosmic-ray selections to enhance the purity of data samples for detector studies (e.g. data with TRT tracks)
- “Dummy” algorithms performed data streaming from 10th September, based on HLT examination of the LVL1 trigger type (see the sketch below)
- A full set of algorithms is available for collision running: muon, electron, photon, tau, jet, MET, B-physics, etc
- Very extensive studies on simulated raw-data events show rate, efficiency and timing performance consistent with the computing resources for initial running
- An HLT menu also exists for commissioning LVL2 and the EF in single-beam and 900 GeV collision operations
- Raw data collected on the morning of 10th September were passed offline through some of the algorithms during the same day; studies and tuning have continued since then
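The “dummy” streaming amounts to a lookup on the LVL1 trigger type alone; a minimal sketch of that idea follows, in which the type codes and stream names are hypothetical placeholders, not the real ATLAS menu:

```python
# Minimal sketch of streaming based only on the LVL1 trigger type, as done
# by the "dummy" HLT algorithms from 10th September onwards. The type codes
# and stream names below are placeholders, not real ATLAS definitions.

LVL1_TYPE_TO_STREAM = {
    "MUON":   "physics_Muon",
    "CALO":   "physics_Calo",
    "MBTS":   "physics_MinBias",
    "RANDOM": "calibration_Random",
}

def stream_for(lvl1_trigger_type):
    """Route an accepted event to an output stream by its LVL1 type."""
    return LVL1_TYPE_TO_STREAM.get(lvl1_trigger_type, "debug_Unknown")

assert stream_for("MBTS") == "physics_MinBias"
assert stream_for("UNLISTED") == "debug_Unknown"
```

No event reconstruction is needed for this routing, which is what made it usable from the very first beam day.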

  15. A nice cosmic muon through the whole detector…

  16. A huge number of cosmic-ray triggers has been recorded, in total (left) as well as giving tracks in the smallest-volume detector, the Pixels (below)

  17. Cosmic-ray data taking with HLT L2 ID algorithms: the HLT was deployed for the first time, running different L2 tracking algorithms and full ID reconstruction at L2. Overall efficiency ~97%

  18. Examples of ID commissioning analyses (plots): turn-on of transition radiation produced by cosmic muons; mean x residuals (mm) of pixel barrel layer 0 after various alignment steps; SCT residuals (which will improve with better alignment and calibration)

  19. Examples of calorimeter commissioning analyses (plots): the measured LAr EM cell waveform for a 15 GeV cosmic muon compared with the prediction, and their difference; pedestal stability of the LAr EM cells within ~1 MeV over a 5-month period. Precise knowledge of these quantities is very important for an accurate calibration

  20. Examples of muon spectrometer commissioning analyses (plots): good correlation between MDT and RPC hit positions (RPC-MDT correlation, for tracks with 8 inner + 6 middle + 6 outer hits); correlation between the momentum measurement in the muon system and in the Inner Detector

  21. Forward detectors:
- LUCID (LUminosity Cherenkov Integrating Detector) at 17 m
- ZDC (Zero Degree Calorimeter) at 140 m (the Phase-I detector is operational)
- ALFA (Absolute Luminosity For ATLAS) at 240 m
(Plus an internal LoI for future Forward Proton detectors at 220 and 420 m, currently under internal review in the Collaboration)

  22. First hits in the LUCID detectors on 10th September! (Plot labels: LVL1 MBTS; LUCID with beam 2; LUCID with beam 1)

  23. Operation Task sharing is being put in place
A reminder of the framework: the operation of the ATLAS experiment, spanning from detector operation to computing and data preparation, will require a very large effort across the full Collaboration (initially estimated at ~600 FTE per year, of which some 60% require presence at CERN)
The framework approved by the Collaboration Board in February 2007, aiming at a fair sharing of these duty tasks (‘Operation Tasks’, OTs), is now being implemented; the systems and activity areas use a dedicated Web tool for the planning and documentation
The main elements are:
- OT needs and accounting are reviewed and updated annually
- OTs are defined under the auspices of the sub-system and activity managements
- Allocations are made in two steps: expert tasks first, then non-expert tasks
- The ‘fair share’ is proportional to the number of ATLAS authors (per Institution or Country)
- Students are ‘favoured’ by a weight factor of 0.75
- New Institutions have to contribute more in their first two years (weight factors 1.5 and 1.25; a sketch of this weighting follows below)
Note that physics analysis tasks, and other privileged tasks, are of course not OTs
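Only the weight factors above come from the slide; the sketch below shows how such a fair-share rule could be computed, with the function names, the multiplicative combination of weights, and the normalisation all being illustrative assumptions:

```python
# Sketch of the Operation Task fair-share rule. Only the 0.75, 1.5 and
# 1.25 weight factors come from the slide; everything else (multiplicative
# combination, normalisation, the 600 FTE total) is illustrative.

def author_weight(is_student, years_since_joining):
    """Weight of one author towards an institution's OT share."""
    w = 0.75 if is_student else 1.0   # students 'favoured' at 0.75
    if years_since_joining == 1:
        w *= 1.5                      # new institutions: first year
    elif years_since_joining == 2:
        w *= 1.25                     # new institutions: second year
    return w

def ot_share_fte(authors, total_weighted_authors, total_ot_fte=600.0):
    """OT FTEs owed by an institution, proportional to weighted authors."""
    weighted = sum(author_weight(s, y) for s, y in authors)
    return total_ot_fte * weighted / total_weighted_authors

# e.g. a hypothetical first-year institution with 4 staff and 2 students
institution = [(False, 1)] * 4 + [(True, 1)] * 2
print(f"{ot_share_fte(institution, total_weighted_authors=2800.0):.2f} FTE")
```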

  24. Early experiences with Operation Task sharing (plots: fractional contributions of all Funding Agencies compared to expectation, which ideally should peak around 1.0; FTEs per month entered in the OT database as a function of the month; note that these example plots are based on still-incomplete data)
- Works well for shift planning, but some improvements are needed for 2009
- The detailed assessment of all other tasks still needs to be discussed and learned
- Many tasks are on the borderline between duties and physics

  25. ATLAS organization to steer R&D for upgrades (recalling from earlier RRBs)
ATLAS has in place, and operational, a structure to steer its planning for future upgrades, in particular for the R&D activities needed for possible luminosity upgrades of the LHC (‘sLHC’); this is already a rather large and broad activity
The main goals are to:
- Develop a realistic and coherent upgrade plan addressing the physics potential
- Retain detector experts in ATLAS with challenging developments besides detector commissioning and running
- Cover also less attractive (but essential) aspects right from the beginning
The organization has two major coordination bodies:
- Upgrade Steering Group (USG), with representatives from systems, software, physics, and the relevant Technical Coordination areas
- Upgrade Project Office (UPO), fully embedded within the Technical Coordination
Upgrade R&D proposals are reviewed and handled in a transparent way within the Collaboration; there is good and constructive synergy from common activities with CMS where appropriate
Detailed organization charts are given in the Appendix of the slides

  26. Anticipated peak and integrated luminosity
- The LHC, ATLAS and CMS have agreed to use this as a basis for planning, as discussed at the LHCC meeting of 1 July 2008; it sets the conditions and timescale
- Phase 1 starts with a 6-8 month shutdown at the end of 2012, reaching a peak luminosity of 3 × 10^34 cm^-2 s^-1 by the end of the phase
- Phase 2 starts with an 18-month shutdown at the end of 2016, with a peak of 10 × 10^34 cm^-2 s^-1
- A minimum of 3000 fb^-1 integrated luminosity over the lifetime of the detectors in phase 2 (see the conversion sketch below)
Note that this was of course agreed before the LHC incident; the consequences have not yet been discussed
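A standard back-of-envelope conversion puts these luminosities in context; the ~1e7 seconds of physics running per year used below is a conventional assumption, not a figure from the slide:

```python
# Rough conversion from peak luminosity to integrated luminosity per year.
# Assumes ~1e7 s of physics running per year (a convention, not a slide
# figure) and running constantly at peak, so these are upper estimates.

INV_FB_PER_INV_CM2 = 1e-39      # 1 fb = 1e-39 cm^2
SECONDS_PER_YEAR = 1e7

def fb_inv_per_year(peak_lumi_cm2_s):
    return peak_lumi_cm2_s * SECONDS_PER_YEAR * INV_FB_PER_INV_CM2

print(fb_inv_per_year(3e34))   # end of phase 1: ~300 fb^-1 per year
print(fb_inv_per_year(1e35))   # phase 2 peak:  ~1000 fb^-1 per year
# 3000 fb^-1 over the detector lifetime thus corresponds to roughly three
# nominal years at the phase-2 peak; real averages sit well below peak.
```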

  27. Overview of Upgrade

  28. Upgrade milestones and schedule (IBL means the new insertable pixel b-layer)

  29. At this stage the upgrade work and planning have already grown into a substantial activity in the Collaboration
Major workshops, two recent examples:
- 4th Tracker upgrade workshop at Nikhef, 5-7 November, with more than 150 participants
- Joint LAr - Tile - Level-1 calorimeter workshop, 13-14 November
Two global upgrade weeks are planned for 2009 to converge on an ATLAS sLHC LoI
Many ATLAS upgrade R&D projects are underway within the internal review framework already mentioned:
- 15 approved projects
- 4 in the process of evaluation
- 11 at the stage of submitted Expressions of Interest
- 1 not approved (not relevant for ATLAS)
ATLAS is glad that the LHCC has also taken on board the task of overviewing sLHC activities, in order to foster coherent planning for the machine and the experiments
One issue to be addressed very soon is a dedicated budget line for upgrade R&D steering and for preparing centrally for the upgrade design
A list of these ATLAS upgrade R&D projects is given in the Appendix of the slides

  30. Collaboration composition
Since the RRB in April 2008 there have been three formal admissions of new Institutions to the Collaboration, following the standard procedures defined in the initial Construction MoU. The Collaboration Board welcomed, with unanimous votes in its July 2008 meeting:
- Julius-Maximilians-University of Würzburg, Germany (Muon software, computing, sLHC R&D, outreach)
- Palacký University in Olomouc, Czech Republic (fibre tracking in the forward Roman Pots)
- University of Texas at Dallas, U.S.A. (Pixels, computing)
In all three cases the people involved have already been active in ATLAS for a few years. The RRB is now invited to formally endorse these new collaborating Institutions.

  31. ATLAS Collaboration (Status October 2008) 37 Countries 169 Institutions 2800 Scientific participants total (1850 with a PhD, for M&O share) Albany, Alberta, NIKHEF Amsterdam, Ankara, LAPP Annecy, Argonne NL, Arizona, UT Arlington, Athens, NTU Athens, Baku, IFAE Barcelona, Belgrade, Bergen, Berkeley LBL and UC, HU Berlin, Bern, Birmingham, UAN Bogota, Bologna, Bonn, Boston, Brandeis, Bratislava/SAS Kosice, Brookhaven NL, Buenos Aires, Bucharest, Cambridge, Carleton, CERN, Chinese Cluster, Chicago, Chile, Clermont-Ferrand, Columbia, NBI Copenhagen, Cosenza, AGH UST Cracow, IFJ PAN Cracow, UT Dallas, DESY, Dortmund, TU Dresden, JINR Dubna, Duke, Frascati, Freiburg, Geneva, Genoa, Giessen, Glasgow, Göttingen, LPSC Grenoble, Technion Haifa, Hampton, Harvard, Heidelberg, Hiroshima, Hiroshima IT, Indiana, Innsbruck, Iowa SU, Irvine UC, Istanbul Bogazici, KEK, Kobe, Kyoto, Kyoto UE, Lancaster, UN La Plata, Lecce, Lisbon LIP, Liverpool, Ljubljana, QMW London, RHBNC London, UC London, Lund, UA Madrid, Mainz, Manchester, CPPM Marseille, Massachusetts, MIT, Melbourne, Michigan, Michigan SU, Milano, Minsk NAS, Minsk NCPHEP, Montreal, McGill Montreal, RUPHE Morocco, FIAN Moscow, ITEP Moscow, MEPhI Moscow, MSU Moscow, Munich LMU, MPI Munich, Nagasaki IAS, Nagoya, Naples, New Mexico, New York, Nijmegen, BINP Novosibirsk, Ohio SU, Okayama, Oklahoma, Oklahoma SU, Olomouc, Oregon, LAL Orsay, Osaka, Oslo, Oxford, Paris VI and VII, Pavia, Pennsylvania, Pisa, Pittsburgh, CAS Prague, CU Prague, TU Prague, IHEP Protvino, Regina, Ritsumeikan, UFRJ Rio de Janeiro, Rome I, Rome II, Rome III, Rutherford Appleton Laboratory, DAPNIA Saclay, Santa Cruz UC, Sheffield, Shinshu, Siegen, Simon Fraser Burnaby, SLAC, Southern Methodist Dallas, NPI Petersburg, Stockholm, KTH Stockholm, Stony Brook, Sydney, AS Taipei, Tbilisi, Tel Aviv, Thessaloniki, Tokyo ICEPP, Tokyo MU, Toronto, TRIUMF, Tsukuba, Tufts, Udine/ICTP, Uppsala, Urbana UI, Valencia, UBC Vancouver, Victoria, Washington, Weizmann Rehovot, FH Wiener Neustadt, Wisconsin, Wuppertal, Würzburg, Yale, Yerevan

  32. ATLAS Organization, October 2008 (chart):
- Collaboration Board (Chair: K. Jon-And, Deputy: C. Oram), with the CB Chair Advisory Group; ATLAS Plenary Meeting; Resources Review Board
- Spokesperson: P. Jenni (Deputies: F. Gianotti and S. Stapnes)
- Technical Coordinator: M. Nessi; Resources Coordinator: M. Nordberg
- Executive Board: Inner Detector (L. Rossi), LAr Calorimeter (I. Wingerter-Seez), Tile Calorimeter (B. Stanek), Muon Instrumentation (L. Pontecorvo), Magnet System (H. ten Kate), Trigger/DAQ (C. Bee, L. Mapelli), Electronics Coordination (P. Farthouat), Trigger Coordination (N. Ellis), Commissioning/Run Coordinator (T. Wengler), Computing Coordination (D. Barberis, D. Quarrie), Data Prep. Coordination (C. Guyot), Physics Coordination (D. Charlton), additional members (T. Kobayashi, M. Tuts, A. Zaitsev)

  33. ATLAS will undergo important leadership changes early next year (based on elections and appointments during the last two Collaboration Board meetings)
Management from 1st March 2009:
- Spokesperson: Fabiola Gianotti (CERN)
- Deputy Spokespersons: Dave Charlton (Birmingham), Andy Lankford (UC Irvine)
- Technical Coordinator: Marzio Nessi (CERN)
- Resources Coordinator: Markus Nordberg (CERN)
Collaboration Board from 1st January 2009:
- CB Chairperson: Kerstin Jon-And (Stockholm)
- CB Deputy Chairperson: Gregor Herten (Freiburg)

  34. Updated financial overview
Financial framework:
- Initial Construction MoU (1995): 475 MCHF
- Updated construction baseline: 468.4 MCHF
- Additional Cost to Completion (accepted at the October 2002 RRB): 68.2 MCHF, based on the Completion Plan (CERN-RRB-2002-114)
- Additional CtC identified in 2006 (detailed in CERN-RRB-2006-069): 4.4 MCHF
- Total costs for the initial detector: 541.1 MCHF
Missing funding at this stage for the initial detector:
- Baseline Construction MoU, mainly Common Fund: 7.2 MCHF (of which 4.0 MCHF are in the process of being paid; 3.2 MCHF remain at risk)
- 2002 Cost to Completion (CC and C&I) calculated shares: 9.2 MCHF (of which 2.8 MCHF are in the process of being paid; assuming the U.S. provides its remaining 4.5 MCHF on a best-effort basis, 1.9 MCHF remain at risk)
Note, for planning purposes, that the following items are not included:
- Almost 2 MCHF of additional manpower costs arose from the delays in the LHC start-up; as mentioned previously, these were partially covered elsewhere, not all CtC against HLT deferrals
- No provisions against future ‘force majeure’ costs
- Re-scoping of the design-luminosity detector, estimated material costs of parts not included in the present initial detector (CERN-RRB-2002-114): 20 MCHF
- Forward detector parts (luminosity) not yet fully funded: 1.5 MCHF
ATLAS gratefully appreciates that CERN has already secured its contribution to re-scope the detector to its TDR design-luminosity capabilities in the coming 2 years

  35. Cost to Completion, and initial staged detector configuration
As a reminder from previous RRB meetings, the Cost to Completion (CtC) is defined as the sum of the Commissioning and Integration (C&I) pre-operation costs plus the Construction Completion (CC) costs, in addition to the deliverables
The following framework was accepted at the October 2002 RRB (ATLAS Completion Plan, CERN-RRB-2002-114rev.):
- CtC: 68.2 MCHF (sum of CC = 47.3 MCHF and C&I = 20.9 MCHF)
- Commitments from Funding Agencies for fresh resources (category 1): 46.5 MCHF
- Further prospects, but without commitments at this stage (category 2): 13.6 MCHF
- The missing resources, 21.7 MCHF, have to be covered by redirecting resources from staging and deferrals
The funding situation is reviewed regularly at each RRB and is expected to evolve as further resource commitments become available
The physics impact of the staging and deferrals was discussed in detail with the LHCC; it was clearly understood that the full potential of the ATLAS detector will need to be restored for high-luminosity running, which is expected to start only very few years after the turn-on of the LHC and to last for at least a decade

  36. Cost to Completion funding (kCHF) (status as of CERN-RRB-2008-082, 31st October 2008)
All CtC contributions as requested in 2002 are needed to complete the payments for the initial detector configuration (and in that case ATLAS will also manage to cover the CtC-2 reported in 2006, thanks to the larger share contributed by CERN)

  37. Example of early physics: top without / with b-tagging
Large cross-section: ~830 pb. The plots show the reconstructed mass distribution, for ATLAS with 100 pb^-1, after a simple selection of tt → WbWb → ℓνb qqb decays, and after b-tagging and W-mass selection
- A cross-section measurement (a test of perturbative QCD) is possible with data corresponding to 100 pb^-1, to an accuracy of ±10-15%
- The errors are dominated by systematics: jet energy scale, Monte Carlo modelling (ISR, FSR), … (see the yield estimate below)
- Ultimate reach (100 fb^-1): ±3-5%, limited by the uncertainty on the luminosity
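A quick yield estimate shows why statistics are not the limiting factor; only the cross-section and luminosity below come from the slide, while the branching ratio and selection efficiency are rough assumed values:

```python
# Illustrative event-yield estimate for the 100 pb^-1 top measurement.
# sigma and L are from the slide; the branching ratio and selection
# efficiency are rough assumptions for the sake of the estimate.

sigma_tt_pb = 830.0   # tt production cross-section (slide)
lumi_pb_inv = 100.0   # early-data integrated luminosity (slide)
br_ljets = 0.30       # ~BR(tt -> lepton+jets), lepton = e or mu (assumed)
eff = 0.2             # assumed overall selection efficiency

produced = sigma_tt_pb * lumi_pb_inv    # ~83,000 tt pairs produced
selected = produced * br_ljets * eff    # a few thousand selected events
stat_err = selected ** -0.5             # relative statistical uncertainty

print(f"produced {produced:,.0f}, selected {selected:,.0f}, "
      f"stat. error {stat_err:.1%}")
```

The resulting statistical error of order 1-2% is far below the quoted ±10-15%, consistent with the slide's statement that systematics dominate.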

  38. Example of an early surprise: Z′ → e+e− with SM-like couplings (Z′_SSM)
Mass (TeV) | Events per fb^-1 (after cuts) | Luminosity needed for a 5σ discovery (≥10 observed events)
1.0        | ~160                          | ~70 pb^-1
1.5        | ~30                           | ~300 pb^-1
2.0        | ~7                            | ~1.5 fb^-1
(ATLAS Preliminary; plot: m(ll) in GeV for 1 fb^-1.) Discovery reach above the Tevatron limits (m ~ 1 TeV) perhaps already in 2009… (?) A consistency check of the table follows below
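The third column of the table follows from the second once the ~10-observed-events requirement is imposed; a small consistency check, assuming negligible background after cuts:

```python
# Consistency check of the Z' discovery table: with negligible background
# after cuts, the required luminosity is driven by needing ~10 observed
# events, i.e. L ~ 10 / (signal events per fb^-1).

events_per_fb = {1.0: 160, 1.5: 30, 2.0: 7}   # mass (TeV) -> events/fb^-1

for mass_tev, n in events_per_fb.items():
    lumi_fb = 10.0 / n
    print(f"m = {mass_tev:.1f} TeV: L ~ {lumi_fb:.2f} fb^-1")
# -> ~0.06, ~0.33 and ~1.4 fb^-1, in line with the ~70 pb^-1, ~300 pb^-1
#    and ~1.5 fb^-1 quoted in the table
```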

  39. Search for Higgs bosons (plots)
- Standard Model: H → ZZ(*) → ℓℓℓℓ
- Charged Higgs boson in Supersymmetry (MSSM): gb → tH+, with t → jjb and H+ → τν (ATLAS; L = 10 fb^-1 and 30 fb^-1; tan β = 35)

  40. Search for supersymmetric particles (plot: L = 1 fb^-1)
ATLAS reach for (equal) squark and gluino masses:
- 0.1 fb^-1: M ~ 750 GeV
- 1 fb^-1: M ~ 1350 GeV
- 10 fb^-1: M ~ 1800 GeV
Deviations from the Standard Model due to SUSY at the TeV scale can be detected fast! (The Tevatron reach is typically 400 GeV)

  41. Conclusions
- The project proceeded within the framework of the accepted 2002 Completion Plan; all the resources requested in that framework are needed to cover the costs of the initial detector now installed (this will also cover the additional CtC costs reported in 2006)
- The experiment (detector hardware, trigger and DAQ, data distribution for distributed analyses, data preparation including quality monitoring, calibrations and alignments, …) was ready for the LHC start-up in September
- With the first LHC beam-induced events, and the long cosmic-ray data-taking period, ATLAS has demonstrated readiness for exploiting the LHC data
- The Worldwide LHC Computing Grid (WLCG) is the essential backbone for the ATLAS distributed computing resources needed for the Analysis Model
- ATLAS is on track for the eagerly awaited LHC physics; it expects to remain at the energy frontier of HEP for the next 10-15 years, and the Collaboration has put in place a coherent organization to evaluate and plan for upgrades in order to exploit future high-luminosity upgrades of the LHC machine

  42. Appendices
- Upgrade organization charts
- List of upgrade R&D projects
- Initial physics reach comparison for 10 vs 14 TeV

  43. Upgrade Organisation - overall

  44. Organisation - Steering Group

  45. Organisation - Project office and review office

  46. Updated list of ATLAS sLHC R&D proposals (3 slides for reference)

  47. (Updated list of ATLAS sLHC R&D proposals, continued)

  48. (Updated list of ATLAS sLHC R&D proposals, continued)

  49. Appendix: physics reach for 10 vs 14 TeV
At 10 TeV it is more difficult to create high-mass objects. Below about 200 GeV the suppression is <50% (process dependent); e.g. tt has a factor ~2 lower cross-section. Above ~2-3 TeV the effect is more marked. The plots in this talk are for √s = 14 TeV. (Parton-luminosity plots by James Stirling)
