
Exa-Scale Data Preservation in HEP



Presentation Transcript


  1. Exa-Scale Data Preservation in HEP International Collaboration for Data Preservation and Long Term Analysis in High Energy Physics Jamie.Shiers@cern.ch APA/C-DAC Conference February 2014

  2. Background
  • Whilst this talk concerns data from High Energy Physics (HEP) experiments at CERN and elsewhere, many points are generic
  • The scale: 100 PB today, reaching ~5 EB by 2030
  • “Trusted” repositories of this size – and with a lifetime of at least decades – are a sine qua non of our work
  • I will also talk about costs, business cases, problems and opportunities…

  3. BEFORE!

  4. Data flow to permanent storage: rates range from 200-400 MB/sec up to 4-6 GB/sec (figure from CERN-JRC meeting, Bob Jones)

  5. Tier 0 – Tier 1 – Tier 2
  • Tier-0 (CERN): data recording, initial data reconstruction, data distribution
  • Tier-1 (11 centres): permanent storage, re-processing, analysis
  • Tier-2 (~130 centres): simulation, end-user analysis
  • Tier-2 centres in India: Kolkata (ALICE), Mumbai (CMS)
  (The LHC Computing Grid, February 2010)

  6. Managing 100 PB of data (figure from CERN-JRC meeting, Bob Jones)

  7. LHC Schedule, 2009 to ~2030 (timeline from CERN-JRC meeting, Bob Jones):
  • First run: LHC startup at 900 GeV, then 7 TeV, L = 6x10^33 cm^-2 s^-1, 50 ns bunch spacing
  • LS1, then Phase-0 upgrade (design energy, nominal luminosity): second run at 14 TeV, L = 1x10^34 cm^-2 s^-1, 25 ns bunch spacing
  • LS2, then Phase-1 upgrade (design energy, design luminosity): third run at 14 TeV, L = 2x10^34 cm^-2 s^-1, 25 ns bunch spacing
  • LS3, then Phase-2 upgrade (High Luminosity, HL-LHC): 14 TeV, L = 1x10^35 cm^-2 s^-1, 12.5 ns bunch spacing

  8. ATLAS Higgs Candidates

  9. AFTER!

  10. CERN has ~100 PB archive

  11. But it’s still early days for the LHC!
  • Only an EYETS (19 weeks); no Linac4 connection during Run 2
  • LS2 starting in 2018 (July): 18 months + 3 months BC (Beam Commissioning)
  • LS3: LHC starting in 2023 => 30 months + 3 months BC; injectors in 2024 => 13 months + 3 months BC
  • Sequence: Run 2, LS2, Run 3, LS3, Run 4, LS4, Run 5, LS5
  LHC schedule approved by CERN management and the LHC experiments’ spokespersons and technical coordinators, Monday 2nd December 2013

  12. High Luminosity LHC (HL-LHC). Update of the European Strategy for Particle Physics, adopted 30 May 2013 in a special session of the CERN Council in Brussels. Statement c): “The discovery of the Higgs boson is the start of a major programme of work to measure this particle’s properties with the highest possible precision for testing the validity of the Standard Model and to search for further new physics at the energy frontier. The LHC is in a unique position to pursue this programme. Europe’s top priority should be the exploitation of the full potential of the LHC, including the high-luminosity upgrade of the machine and detectors with a view to collecting ten times more data than in the initial design, by around 2030. This upgrade programme will also provide further exciting opportunities for the study of flavour physics and the quark-gluon plasma.” (HL-LHC Workshop)

  13. Data: Outlook for HL-LHC (in PB)
  • Very rough estimate of new RAW data per year of running, using a simple extrapolation of the current data volume scaled by the output rates
  • To be added: derived data (ESD, AOD), simulation, user data…

  14. Volume: 100PB + ~50PB/year (+400PB/year from 2020)
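  The growth rule quoted above and on slide 2 (~100 PB now, roughly +50 PB/year, then ~+400 PB/year from 2020, reaching ~5 EB by 2030) can be sanity-checked with a few lines of arithmetic. The sketch below is purely illustrative: the 2014 start year and the end-of-year accounting are assumptions made here, not figures from the talk.

```python
# Minimal sketch (not from the talk itself): cumulative LHC archive volume
# under the rough growth rule quoted on slides 2 and 14:
#   ~100 PB in 2014, then ~+50 PB/year, then ~+400 PB/year from 2020.
# Start year and switch-over year are assumptions for illustration only.

def archive_volume_pb(year, start_year=2014, start_pb=100):
    """Very rough cumulative archive volume in PB at the end of `year`."""
    volume = start_pb
    for y in range(start_year + 1, year + 1):
        volume += 400 if y >= 2020 else 50
    return volume

for y in (2015, 2020, 2025, 2030):
    print(y, archive_volume_pb(y), "PB")
# 2030 comes out near 4750 PB, i.e. roughly the ~5 EB quoted for 2030.
```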

  15. 1. DPHEP Portal
  • Digital library tools (Invenio) & services (CDS, INSPIRE, ZENODO) + related tools (HepData, RIVET, …)
  • Sustainable software, coupled with advanced virtualization techniques, “snap-shotting” and validation frameworks
  • Proven bit preservation at the 100 PB scale, together with a sustainable funding model with an outlook to 2040/50
  • Open Data (“Open everything”)

  16. Case B) increasing archive growth: start with 10 PB, then +50 PB/year, then +50% every 3 years (i.e. ~+15%/year)

  17. Case B) increasing archive growth

  18. Case B) increasing archive growth: total cost ~59.9 M$ (~2 M$/year)
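  A minimal sketch of the “Case B” growth rule from slide 16 (start at 10 PB, a first increment of +50 PB/year, the yearly increment then growing by ~15%/year, i.e. roughly +50% every 3 years). The 30-year horizon is an assumption made here only because the quoted total of ~59.9 M$ at ~2 M$/year suggests a period of about three decades; no per-PB price is given on the slides, so the sketch tracks volumes only, not costs.

```python
# Illustrative sketch, not the DPHEP cost model itself: it just replays the
# growth rule quoted on slide 16 ("start with 10 PB, then +50 PB/year,
# then +50% every 3 years, i.e. roughly +15%/year").

def case_b_volumes(years=30, initial_pb=10, first_increment_pb=50, growth=0.15):
    """Yield (year_index, yearly_ingest_pb, cumulative_archive_pb)."""
    archive = initial_pb
    ingest = first_increment_pb
    for year in range(1, years + 1):
        archive += ingest
        yield year, ingest, archive
        ingest *= 1 + growth   # +15%/year, since 1.15**3 ~= 1.52 (+50% every 3 years)

for year, ingest, total in case_b_volumes():
    if year % 5 == 0:
        print(f"year {year:2d}: ingest ~{ingest:7.0f} PB, archive ~{total:8.0f} PB")
```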

  19. Case B) increasing archive growth

  20. Summary
  • DPHEP portal: build in collaboration with other disciplines, including the RDA IG and the APA…
  • Digital libraries: continue existing collaborations
  • Sustainable “bit preservation”: certified repositories as part of EINFRA-1-2014
  • “Knowledge capture & preservation”: a BIG CHALLENGE not addressed in a multi-disciplinary way: next funding round?
  • Open “Big Data”: a Big Opportunity (for RDA?)

  21. Portal Example # 1

  22. Portal Platform – Zenodo?

  23. Documentation projects with INSPIRE
  • Internal notes from all HERA experiments now available on INSPIRE
  • Experiments no longer need to provide dedicated hardware for such things
  • Password protected for now; simple to make publicly available in the future
  • The ingestion of other documents is under discussion, including theses, preliminary results, conference talks and proceedings, paper drafts, ...
  • More experiments working with INSPIRE, including CDF and D0 as well as BaBar

  24. What LEP would cost “now”…
  • Completely different, of course…
  • Direct resource cost is already compatible with zero for the LEP experiments
  • Total ALEPH DATA + MC (analysis format) = 30 TB
  • ALEPH CPU: Shift50 = 320 CernUnits; one of today’s pizza-box servers largely exceeds this
  • CDF data: O(10 PB), could be bought today for <400 kEur
  • CDF CPU ~ 1 MSI2k = 4 kHS06 = 40 kEur
  • Here the main problem is clearly knowledge / support: can you trust a “NP peak” 10 years later, when the experts are gone?
  • ALEPH reproducibility test (M. Maggi; by NO means a DP solution): ~0.5 FTE for 3 months
  (Annotations on the slide: “Zero!”, “!=0, but decreasing fast”)
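  The back-of-envelope arithmetic behind the “compatible with zero” claim can be made explicit. The unit prices below are simply derived from the figures quoted on this slide (<~400 kEur for O(10 PB) of CDF data, ~40 kEur for ~4 kHS06 of CPU); they are illustrative, not an official costing.

```python
# Rough sketch based only on the numbers quoted on slide 24;
# the derived unit prices are illustrative, not official figures.

CDF_DATA_PB = 10          # O(10 PB) of CDF data
CDF_DATA_COST_KEUR = 400  # "bought today for <400 kEur"
CDF_CPU_KHS06 = 4         # ~1 MSI2k = 4 kHS06
CDF_CPU_COST_KEUR = 40

eur_per_tb = CDF_DATA_COST_KEUR * 1000 / (CDF_DATA_PB * 1000)   # ~40 Eur/TB
keur_per_khs06 = CDF_CPU_COST_KEUR / CDF_CPU_KHS06               # ~10 kEur/kHS06

ALEPH_DATA_TB = 30        # total ALEPH data + MC in analysis format
print(f"implied storage price: ~{eur_per_tb:.0f} Eur/TB")
print(f"ALEPH data at that price: ~{ALEPH_DATA_TB * eur_per_tb / 1000:.1f} kEur")
# ~1.2 kEur for all of ALEPH: the hardware cost is essentially negligible today;
# the real cost, as the slide stresses, is knowledge and support.
```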

  25. Open Data?

  26. Costs and Scale
  • There are 4 (main) collaborations + detectors at the LHC: the largest has 3000 members
  • The annual cost of WLCG (infrastructure, operations, services) is ~EUR 100M
  • The CERN database services cost around 2 MCHF per year for materials (licenses, maintenance, hardware) and 2 MCHF for personnel
  • The central grid Experiment Integration Support team varied between 4-10 people, plus significant effort at sites and within experiments
  • The DPHEP Full Costs of Curation workshop concluded that a team of ~4 people, with access to experts, could “make significant progress” (be careful with this number!)

  27. Conclusions
  • Long-term data preservation is a journey, not a destination
  • As such, it is best not to venture out alone
  • A clear understanding of costs & benefits is necessary to secure funding
  • We are eager to share our knowledge and experience (exa-scale “bit preservation”)
  • We have learned a lot through collaboration within the APA – and are keen to learn more in the future
