
Physics Data Processing at NIKHEF


Presentation Transcript


  1. Physics Data Processing at NIKHEF. Jeff Templon. WAR, 7 May 2004.

  2. Goals
  • Realize an LHC physics computing infrastructure optimized for use by NIKHEF physicists
  • Where possible, apply the expertise built up for goal 1 to other projects with NIKHEF participation
  • Capitalize on expertise & available funds by participating in closely related EU & NL projects
  • Use NIKHEF Grid-computing expertise and capacity as currency

  3. NIKHEF-optimal LHC Computing Infrastructure
  • Operation of an LCG core site
    • Build experience with site operation and discover “external” site issues traditionally ignored by CERN
    • Leverage the front-runner position earned by our EDG effort
  • Strong participation in LHC/LCG/HEP Grid framework projects
    • Meten is weten (“to measure is to know”) – premature optimization is the root of all evil (Knuth)
    • Leverage the front-runner position earned by the EDG effort
  • Leading role in the Architecture/Design arm of EGEE
    • The AA model is the fulcrum of the balance between “CERN-centric” and “really distributed” models
    • Use accumulated expertise in “security” to gain a position in middleware design
  • Preparation for Tier-1
    • Avoids having others determine NIKHEF computing priorities

  4. LHC/LCG/HEP Projects
  • Strong coupling to NIKHEF LHC experiment analysis
    • One grad student per experiment, working with the ARDA project; early influence, experience, and expertise with LHC analysis frameworks
    • Room for more participation in the medium term (postdocs, staff)
  • Continuing work on D0 reprocessing
    • The D0 metadata model is far advanced compared to the LHC model
    • Influence on US computing via our (LHC) task-distribution expertise
  • Investigations of an ATLAS distributed Level-3 trigger
    • Precursor for LOFAR/Km3NeT activities

  5. Preparation for Tier-1
  • Tier-1 for LHC
    • Archive ~1/7 of the raw data, all ESDs produced on site, all MC produced on site, and full copies of the AOD and tags
    • Contribute ~1/7 of the twice-yearly reprocessing power
  • End result: a major computing facility in the Watergraafsmeer
    • 1 petabyte each of disk cache & tape store per year, starting in 2008
    • ~2000 CPUs in 2008
    • ~1.5 Gbit/s network to CERN
    • These numbers are per experiment (a back-of-the-envelope sizing sketch follows below)
  • NIKHEF contributes research; SARA eventually takes the lion’s share of operation
  • NCF must underwrite this effort (MoU with CERN)
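To put the per-experiment numbers above in perspective, here is a minimal back-of-the-envelope sketch in Python. The yearly disk, tape, CPU, and network figures are taken from this slide; the assumption that the facility serves the three LHC experiments named elsewhere in this deck (ALICE, ATLAS, LHCb) is an assumption of the sketch, not something the slide states.

```python
# Back-of-the-envelope Tier-1 sizing using the per-experiment figures on
# the slide above. The experiment count of 3 is an assumption, based on
# the experiments named elsewhere in this deck (ALICE, ATLAS, LHCb).

N_EXPERIMENTS = 3
DISK_PB_PER_EXP = 1.0    # petabyte of disk cache per year (slide)
TAPE_PB_PER_EXP = 1.0    # petabyte of tape store per year (slide)
CPUS_PER_EXP = 2000      # CPUs in 2008 (slide)
LINK_GBIT_S = 1.5        # network to CERN, per experiment (slide)

# Aggregate facility size if all three experiments are served.
print(f"disk : {N_EXPERIMENTS * DISK_PB_PER_EXP:.0f} PB/year")
print(f"tape : {N_EXPERIMENTS * TAPE_PB_PER_EXP:.0f} PB/year")
print(f"CPUs : {N_EXPERIMENTS * CPUS_PER_EXP}")

# Data volume one 1.5 Gbit/s link can carry in a year, assuming
# (optimistically) 100% utilisation: roughly 5.9 PB, well above the
# order-1 PB/year to be archived per experiment.
seconds_per_year = 365 * 24 * 3600
pb_per_year = LINK_GBIT_S / 8 * seconds_per_year / 1e6   # GB/s * s -> PB
print(f"one link carries at most ~{pb_per_year:.1f} PB/year")
```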

  6. Overlap with other NIKHEF projects
  • Other HEP experiments
    • D0 work from Q4 2003, continuing
    • BaBar project together with Miron Livny (Wisconsin)
  • Astroparticle physics
    • LOFAR SOC is much like an LHC Tier-1
    • Km3NeT on-demand repointing is much like the ATLAS Level-3 trigger

  7. EU & NL Projects
  • EGEE (EU FP6 project, 2+2 years, 30M)
    • Funding for site operation (together with SARA)
    • Funding for Grid technology projects (together with UvA)
    • Funding for “generic applications” (read: non-LHC)
  • BSIK/VL-E
    • Funding for Data-Intensive Science (everything we do)
    • Funding for Scaling and Validation (large-scale site operation)
    • Cooperation with other disciplines
  • Leverage multi-disciplinary use of our infrastructure into a large NCF-funded facility (Tier-1)

  8. Currency
  • Advantages of Grid computing for external funding
  • Grid computing (cycles & expertise) in exchange for membership fees

  9. People
  • LHC applications: Templon, Bos, “postdoc”, 3 grad students
  • Non-LHC applications: Van Leeuwen (CT), Grijpink (CT), Bos, Templon, Groep
  • Grid Technology: Groep, Koeroo (CT), Venekamp (CT), Steenbakkers (UvA), Templon
  • Site Operations: Salomoni (CT), Groep, Templon, other CT support

  10. People / Funding
  • EGEE: 1 FTE Generic Apps, 1 FTE Site Operations, 1 FTE AA
  • BSIK/VL-E: 1 FTE Scaling & Validation, 1 FTE Data-Intensive Sciences
  • Both projects require local 1-1 matching (50% cost model; see the sketch below)
  • Can overlap ±15%
  • Possible additional money from the bio-range project
  • Possible to replace some manpower with equivalent equipment
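The 1-1 matching above is simple arithmetic; a minimal sketch, assuming it just doubles the externally funded headcount listed on this slide (the ±15% overlap and the equipment-for-manpower option are left out of the sketch):

```python
# Minimal sketch of the 1-1 matching ("50% cost model") on this slide:
# every externally funded FTE must be matched by one locally funded FTE.
# Ignoring the overlap and equipment options is an assumption of this sketch.

egee_funded = 1 + 1 + 1   # Generic Apps, Site Operations, AA (slide)
vle_funded = 1 + 1        # Scaling & Validation, Data-Intensive Sciences (slide)

external = egee_funded + vle_funded
total_effort = 2 * external   # externally funded FTEs plus equal local matching

print(f"externally funded : {external} FTE")
print(f"with 1-1 matching : {total_effort} FTE of total effort")
```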

  11. Possible “Funding Model”: FTE allocations over EGEE NA4, VL-E DIS, EGEE SA1, VL-E SV, and EGEE JRA3
  Name                 Allocations              Total
  David Groep          0.15  0.50  0.50         1.15
  Kors Bos             0.35  0.65  0.15         1.15
  Davide Salomoni      0.50  0.65               1.15
  New Hire VL-E 1      0.15  1.00               1.15
  New Hire VL-E 2      0.15  1.00               1.15
  Sjors Grijpink       0.50  0.50               1.00
  Oscar Koeroo         1.00                     1.00
  Jeff Templon         0.35  0.15  0.30  0.15   0.95
  Postdoc              0.15  0.30  0.10  0.10   0.65
  Gerben Venekamp      0.50                     0.50
  Willem van Leeuwen   0.20  0.10               0.30
  Djuhaeri Harapan     0.10  0.15               0.25
  Grad Stud ALICE      0.15  0.10               0.25
  Grad Stud ATLAS      0.15  0.10               0.25
  Grad Stud LHCb       0.15  0.10               0.25
  Wim Heubers          0.10  0.15               0.25
  Paul Kuipers         0.10  0.10               0.20
  Ton Damen            0.10  0.10               0.20
  Arjen van Rijn       0.10  0.10               0.20
  Sum per activity: EGEE NA4 2.00, VL-E DIS 2.00, EGEE SA1 2.00, VL-E SV 4.00, EGEE JRA3 2.00; overall 12.00
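As a quick sanity check on the table, the sketch below transcribes the per-person values into Python and verifies that each row adds up to its stated total and that the overall sum is the 12.00 FTE shown on the slide; the assignment of individual values to the five activity columns is not reproduced here.

```python
# Row-total and grand-total check for the FTE table above. Values are
# transcribed from the slide; only totals are verified, not the split
# over EGEE NA4, VL-E DIS, EGEE SA1, VL-E SV, and EGEE JRA3.

allocations = {
    "David Groep":        [0.15, 0.50, 0.50],
    "Kors Bos":           [0.35, 0.65, 0.15],
    "Davide Salomoni":    [0.50, 0.65],
    "New Hire VL-E 1":    [0.15, 1.00],
    "New Hire VL-E 2":    [0.15, 1.00],
    "Sjors Grijpink":     [0.50, 0.50],
    "Oscar Koeroo":       [1.00],
    "Jeff Templon":       [0.35, 0.15, 0.30, 0.15],
    "Postdoc":            [0.15, 0.30, 0.10, 0.10],
    "Gerben Venekamp":    [0.50],
    "Willem van Leeuwen": [0.20, 0.10],
    "Djuhaeri Harapan":   [0.10, 0.15],
    "Grad Stud ALICE":    [0.15, 0.10],
    "Grad Stud ATLAS":    [0.15, 0.10],
    "Grad Stud LHCb":     [0.15, 0.10],
    "Wim Heubers":        [0.10, 0.15],
    "Paul Kuipers":       [0.10, 0.10],
    "Ton Damen":          [0.10, 0.10],
    "Arjen van Rijn":     [0.10, 0.10],
}

# Per-person totals, then the grand total (12.00 FTE on the slide).
for name, shares in allocations.items():
    print(f"{name:20s} {sum(shares):.2f} FTE")

grand_total = sum(sum(shares) for shares in allocations.values())
print(f"{'Sum':20s} {grand_total:.2f} FTE")
```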
