
ECFA - TOOC



Presentation Transcript


  1. ECFA - TOOC • ECFA HL-LHC workshop PG7: Trigger, Online, Offline and Computing preparatory group • Wesley H. Smith, U. Wisconsin - Madison • Graeme Stewart, U. Glasgow • June 13, 2014

  2. Membership • ALICE: Latchezar Betev, Mikolaj Krzewicki, Pierre Vande Vyvre • ATLAS: Graeme Stewart, Benedetto Gorini, Nikos Konstantinidis, Imma Riu, Stefano Veneziano • CMS: Wesley Smith, Maria Girone, David Lange, Frans Meijers • LHCb: Peter Clarke, Vava Gligorov, Niko Neufeld

  3. Preparatory Groups: Charge • The preparatory groups are charged to identify the key topics to be discussed and to organize their corresponding session at the workshop, in conjunction with the programmes of the other groups (to avoid overlaps) • They will propose the detailed agenda within that session and propose speakers to the workshop Steering Committee • Chairs of the PGs will write a document of a few pages summarizing the main outcomes of the workshop and proposing further steps on each topic • See 2013 charge at: http://indico.cern.ch/event/250557/material/0/1.pdf

  4. Preparatory Groups: Modus Operandi • The chairs of PGs compile mailing lists of all those willing to contribute and organise dedicated meetings and/or mini-workshops advertised within the HL-LHC community to fulfil the objectives outlined above. • A common TWIKI page should be used to collect and circulate information and progress of each group as last time • (see https://twiki.cern.ch/twiki/bin/view/ECFA/ECFAHiLumiLHC) • The meetings of the groups should normally be fully open, although a mechanism to review talks and discuss material still being approved for release should be organised by the relevant PG chairs • The email lists from last time along with internal lists within the theory, accelerator and individual experiments can be used to find volunteers: • E-mail list of the PG members: PGMembers-HL-LHC-ECFA@cern.ch • E-mail list of the PG chairs: HL-LHC-ECFA-PGChairs@cern.ch • E-mail list of the workshop SC: HL-LHC_ECFAWorkshop_SC@cern.ch

  5. Conclusions from ‘13 TDAQ (W. Smith) • ATLAS & CMS L1 Trigger Scenario: • 10 - 20 μs latency & L1 Accept rates of 0.2 – 1 MHz. • L1 Track Trigger • ATLAS & CMS Phase 2 DAQ • HLT design to accept 1 MHz of 4 MB events w/ PU = 140 • Output of 10 kHz. • ALICE & LHCb Trigger & DAQ: • Moving to “triggerless” architecture • R&D Program • FPGAs, Links, Telecom Tech., Associative Mem., GPUs, New Processors, Architectures (heterogeneous) • More powerful tools require more investment to exploit!
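As a rough cross-check of the Phase 2 DAQ numbers on this slide, the sketch below works out the bandwidths implied by the 1 MHz L1 accept rate, 4 MB event size and 10 kHz HLT output rate (the derived figures are illustrative back-of-envelope estimates, not official experiment numbers):

```python
# Back-of-envelope bandwidth estimate from the Phase-2 DAQ figures on this slide.
# Inputs (from the slide): L1 accept rate 1 MHz, event size 4 MB at PU ~ 140,
# HLT output rate 10 kHz. The derived bandwidths are illustrative only.

l1_rate_hz = 1.0e6        # events/s delivered by the L1 trigger to the HLT farm
event_size_mb = 4.0       # raw event size in MB at pile-up ~140
hlt_output_hz = 10.0e3    # events/s written out after HLT selection

hlt_input_gb_s = l1_rate_hz * event_size_mb / 1024.0    # readout into the HLT
storage_gb_s = hlt_output_hz * event_size_mb / 1024.0   # rate to offline storage
rejection = l1_rate_hz / hlt_output_hz                   # HLT rejection factor

print(f"HLT input bandwidth : {hlt_input_gb_s:,.0f} GB/s (~{hlt_input_gb_s/1024:.1f} TB/s)")
print(f"Storage bandwidth   : {storage_gb_s:.0f} GB/s")
print(f"HLT rejection factor: {rejection:.0f}x")
```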

  6. Conclusions on ‘13 Technology (N. Neufeld) • Big-data and cloud-computing drive the market for Commercial Off The Shelf IT equipment → HEP profits from that, but depends on the economy at large • We expect current performance rates (and price-performance improvements) to grow at historic rates until at least LS2 • 25% performance improvement per year in computing at constant cost • local area network and link technology sufficient for all HL-LHC needs also beyond LS2 • wide area network growth sufficient for LHC needs, provided sufficient funding • 20% price drop at constant capacity expected for disk storage • Far beyond LS2 the technical challenges for further evolution seem daunting; nevertheless the proven ingenuity and creativity of the IT industry justify cautious optimism • The fruits of technology are there, but hard work is needed to make the best of it
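The annual rates quoted on this slide compound quickly; a minimal sketch of that compounding follows (the 25% and 20% figures are taken from the slide, while the ten-year horizon is an arbitrary illustration, not a planning assumption):

```python
# Compound the technology trends quoted on this slide over an illustrative horizon:
# +25%/year computing performance at constant cost, -20%/year disk price at constant
# capacity. The ten-year span is arbitrary and chosen only to show the compounding.

years = 10
cpu_gain_per_year = 1.25    # performance-per-euro multiplier each year
disk_price_drop = 0.20      # fractional price drop per TB each year

cpu_factor = cpu_gain_per_year ** years               # ~9.3x more computing per euro
disk_cost_factor = (1.0 - disk_price_drop) ** years   # ~0.11x, i.e. ~9x cheaper per TB

print(f"Computing per euro after {years} years: {cpu_factor:.1f}x today's")
print(f"Disk cost per TB after {years} years  : {disk_cost_factor:.2f}x today's")
```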

  7. Conclusions on Computing (P. Buncic) • Resources needed for Computing at HL-LHC are going to be large but not unprecedented • Data volume is going to grow dramatically • Projected CPU needs are reaching the Moore’s law ceiling • Grid growth of 25% per year is far from sufficient • Speeding up the simulation is very important • Parallelization at all levels is needed for improving performance (application, I/O) • Requires reengineering of experiment software • New technologies such as clouds and virtualization may help to reduce complexity and to fully utilize available resources

  8. Conclusions on ‘13 Software (D. Rousseau) • HL-LHC: high pile-up and high read-out rate → large increase of processing needs • With flat resources (in euros), and even with Moore’s law holding true (likely, provided we maintain/improve efficient use of processors), this is not enough (by 1/2 to one order of magnitude) → large software improvement needed • Future evolution of processors: many cores with less memory per core, more sophisticated processor instructions (micro-parallelism), possibility of specialised cores → Optimisation of software to use high-level processor instructions, especially in identified hot spots (expert task) • Parallel framework to distribute algorithms to cores, in a way semi-transparent to the regular physicist software developer • LHC experiments’ code base is more than 15 million lines of code, written by more than 3000 people → a whole community to engage, starting essentially now, new blood to inject • We are already sharing effort and software; we can do much more: concurrency forum http://concurrency.web.cern.ch
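One way to put the shortfall quoted above into numbers: reading “1/2 to one order of magnitude” as a factor of 10^0.5 to 10^1 gives the speed-up that software improvements would have to supply on top of flat-budget hardware gains (this reading, and the ten-year development span below, are assumptions for illustration only):

```python
# Interpreting "not enough by 1/2 to one order of magnitude" as a factor of
# 10**0.5 to 10**1 that software improvements must recover. Both this reading
# and the ten-year development span below are illustrative assumptions.

shortfall_low = 10 ** 0.5    # ~3.2x
shortfall_high = 10 ** 1.0   # 10x
print(f"Implied software speed-up needed: {shortfall_low:.1f}x to {shortfall_high:.0f}x")

years = 10  # assumed development period over which the gain is accumulated
print(f"Equivalent annual improvement: {shortfall_low ** (1 / years):.2f}x to "
      f"{shortfall_high ** (1 / years):.2f}x per year")
```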

  9. Next Steps and Upcoming Events • Meetings of each PG • Define topics to be addressed at the workshop • Meeting among all chairs to compare proposed talk areas by end of June • Meeting of PGs with Workshop SC • A half-day meeting by end of July • Presentation of progress from each PG • Develop the workshop agenda • Session titles, number of talks and durations agreed and entered into Indico with draft talk titles • September Preview of the Final Workshop Session Content • Chairs report to the workshop SC • Names of possible speakers to be discussed with SC • Final talk titles to be entered into Indico with speakers’ names

  10. Documentation • Some useful references: • ALICE LoI for run 3 upgrade: LHCC-2012-012 • ATLAS phase 2 upgrade LoI: LHCC-2012-022 • CMS: Phase 2 Plan and Cost: RRB-2013-124 • LHCb Phase 2 LoI: LHCC-2012-007 • ECFA Workshop: https://indico.cern.ch/event/252045/ • ECFA Workshop Report: https://cds.cern.ch/record/1631032 • TDOC ‘13 Workshop: http://indico.cern.ch/event/262985/ • Please send us other useful references

  11. Discussion • Community involvement • Engagement with experts outside the experiments is really critical • What topics to focus on? • What has changed since last year? • Planning for Mini-Workshops? • Within specific communities or general • White Papers? • Preparations for June meeting of PG chairs: • Trigger, Online and Offline computing tentatively 120’ on Thursday afternoon • Proposed talk areas
