
  1. Fit and Healthy? Glenn Patrick, Rutherford Appleton Laboratory. GridPP26, 29th March 2011

  2. or just lethargic?

  3. H1: UK Monte-Carlo (c/o Dave Sankey & Bogdan Lobodzinski)
   GGUS tickets, 01.01.2010 – 01.04.2011:
     UKI-NORTHGRID-MAN-HEP     2
     RAL-LCG2                  6
     UKI-LT2-BRUNEL            7
     UKI-NORTHGRID-LANCS-HEP   3
     UKI-SOUTHGRID-BHAM-HEP    2
     UKI-LT2-QMUL              6
     UKI-LT2-RHUL              4
     UKI-NORTHGRID-LIV-HEP     2
     UKI-LT2-UCL-CENTRAL       1
     UK TOTAL (40%)           33
   Usually aborted jobs or an unavailable SE. Most tickets solved by action in the authentication/authorisation area. http://www-h1.desy.de/h1/www/h1mc/mts.html
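A quick sanity check on the numbers above (mine, not the slide's): the per-site counts do sum to the quoted UK total of 33, and if that really is 40% of all H1 GGUS tickets, the worldwide total works out to roughly 83.

```python
# Sanity check on the GGUS ticket table (illustrative, not from the slide).
tickets = {
    "UKI-NORTHGRID-MAN-HEP": 2, "RAL-LCG2": 6, "UKI-LT2-BRUNEL": 7,
    "UKI-NORTHGRID-LANCS-HEP": 3, "UKI-SOUTHGRID-BHAM-HEP": 2,
    "UKI-LT2-QMUL": 6, "UKI-LT2-RHUL": 4, "UKI-NORTHGRID-LIV-HEP": 2,
    "UKI-LT2-UCL-CENTRAL": 1,
}
uk_total = sum(tickets.values())      # 33, matching the quoted UK TOTAL
overall = uk_total / 0.40             # ~83 tickets across all H1 sites
print(uk_total, round(overall))
```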

  4. H1: UK Monte-Carlo. 25% of all H1 jobs come from 12 UK sites. The UK–DESY transfer rate is only ~500 kB/s (averaged over 2 months).
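To put that rate in perspective (my arithmetic, not the slide's): sustained over the full two-month averaging window, ~500 kB/s moves only about 2.6 TB in total.

```python
# Rough volume implied by the quoted UK-DESY rate (illustrative only).
rate = 500e3                 # bytes per second (~500 kB/s)
window = 60 * 86400          # ~2 months, in seconds
print(rate * window / 1e12)  # ~2.6 TB transferred in total
```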

  5. Linear Collider
   • Multi-TeV (~3 TeV) e+e- collider based on a two-beam accelerator (CLIC); ILC at ~500 GeV.
   • Silicon Detector (SiD) Design Study – tracking and calorimetry.
   • LOI published 31 March 2009.
   c/o Jan Strube

  6. Linear Collider: Status
   • Using DIRAC as well as Ganga.
   • CLIC is in the middle of a conceptual design report (CDR): 5 different benchmarking channels at 3 TeV + 1 at 500 GeV.
   • Software needed a lot of work to move from 500 GeV to 3 TeV --> delays in the start of production.
   • 312 background events per signal event!

  7. Linear Collider: Next Steps
   Current storage used (bytes):
   • RAL-SRM:    1,513,322,629,922  (~1.5 TB)
   • KEK-SRM:       27,825,900,796  (~28 GB)
   • CERN-SRM:  15,468,811,712,135  (~15.5 TB)
   • IN2P3-SRM:  1,362,953,567,143  (~1.4 TB)
   Total:       18,372,913,809,996  (~18.4 TB)
   Background merging step planned to be done at RAL -> significant increase expected, but no need to increase the T1 allocation is foreseen.
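A quick consistency check (mine): interpreting those counts as bytes, the four sites sum exactly to the quoted total, about 18.4 TB, with CERN holding roughly 84% of it.

```python
# Verify the SRM storage totals quoted above (counts assumed to be bytes).
usage = {
    "RAL-SRM":    1_513_322_629_922,
    "KEK-SRM":       27_825_900_796,
    "CERN-SRM":  15_468_811_712_135,
    "IN2P3-SRM":  1_362_953_567_143,
}
total = sum(usage.values())
assert total == 18_372_913_809_996    # matches the quoted total
for site, nbytes in usage.items():
    print(f"{site:10s} {nbytes / 1e12:6.2f} TB")
print(f"Total      {total / 1e12:6.2f} TB")   # ~18.37 TB
```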

  8. Linear Collider: Long Term
   CLIC CDR due in Fall 2011 • Production just about to hit the hot phase.
   ILC detectors to write a Detector Baseline Document:
   • Due end of 2012.
   • Last time: production mostly at the SLAC batch farm.
   • This time: expected to use the Grid exclusively (an expected merger of the DESY ILC VO with the Fermigrid ILC VO would provide relief for the Tier 1).
   • Total scale ~1/4 of the CLIC total.
   • Tier 1 allocation at the current level should be sufficient.
   • But limited UK manpower in the future (both Jan and Marcel depart).

  9. SuperB: Newly born? Approved by the Italian Government 14/15 December 2010. Three sites under consideration: two near Rome (LNF and Tor Vergata) or a green-field site.
   • Re-use components from PEP-II and BaBar.
   • UK interested in building the Silicon Tracker.
   • MAPS pixel sensors and modules (~1 m² of silicon).
   • Opportunities also on the accelerator side.
   • UK Grid resources for the computing model & leadership.

  10. SuperB: Computing
   Computing guesses for 2020 (BaBar extrapolation) at nominal luminosity (15 ab⁻¹/year):
     Tape: 100 PB   Disk: 50 PB   CPU: 1700 kHEPSPEC06
   • Now – fast Grid simulation for physics and detector studies.
   • Run at QMUL, RAL and Oxford.
   • Produced ~20% of the Monte-Carlo.
   • BaBar code converted to run on the Grid, but needs recoding.
   • Ramp up detector simulation in the next 2 years.
   • Want to get accelerator studies into the computing model.
   “Our efficiency is infinite since we are producing events with no resources.”
   c/o Fergus Wilson
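The slide calls these figures "guesses" from a BaBar extrapolation without showing the working. One plausible reading, purely illustrative (the BaBar baseline values below are my assumptions, not numbers from the slide), is linear scaling of resources with integrated luminosity:

```python
# Purely illustrative sketch of a "BaBar extrapolation": scale resources
# linearly with luminosity. The BaBar baseline figures are assumptions.
babar_lumi = 0.5              # ab^-1, roughly BaBar's full dataset
superb_lumi_per_year = 15.0   # ab^-1/year, SuperB nominal luminosity
scale = superb_lumi_per_year / babar_lumi   # ~30x BaBar per year

babar_tape_pb = 3.0           # PB, assumed BaBar-era tape footprint
print(f"~{babar_tape_pb * scale:.0f} PB of tape per year")  # ~90 PB
```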

  11. SNO+ Neutrino Detector
   • SNO+ is a multi-purpose liquid-scintillator neutrino detector.
   • Like SNO, but with the heavy water returned and replaced: H2O in 2012 and liquid scintillator in 2013.
   • More light per interaction means more data – need to store and process many TB.
   • Monte-Carlo is CPU intensive because of the many photons that need to be tracked.
   • Looking to use the Grid in Europe: Oxford, Sussex, QMUL, Liverpool and Lisbon.
   • Early days – the VO has just been set up.
   • Probably using T2 sites for MC production and storage.
   c/o Jeanne Wilson

  12. T2K
   Long-baseline neutrino oscillation experiment (295 km baseline) measuring the oscillation of νμ to νe.
   • Neutrino beam generated using the 50 GeV proton synchrotron at the J-PARC facility in Tokai.
   • Far detector: Super-Kamiokande, in the Kamioka mine (1000 m deep); 50 kton water Cherenkov.

  13. T2K: A few other problems... via Geoff Pearce

  14. T2K: Data Model 100% of raw data at the RAL and TRIUMF (IN2P3) sites. Weighted percentages at T2 sites, dependent on their resources (a sketch of the weighting idea follows below). FTS server set up at RAL and UK channels working. c/o Ben Still
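The slide does not spell out how the T2 weights are derived. A minimal sketch of the idea, assuming shares proportional to each site's available disk (the site names and TB figures below are hypothetical, not from the slide):

```python
# Hypothetical sketch: replicate raw data across T2 sites in proportion
# to the disk each site offers. Names and TB figures are illustrative.
t2_disk_tb = {"QMUL": 40, "Liverpool": 10, "Sheffield": 5, "Oxford": 5}

total = sum(t2_disk_tb.values())
for site, disk in t2_disk_tb.items():
    share = disk / total
    print(f"{site:10s} holds {share:5.1%} of the raw data")
```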

  15. T2K: Grid Progress
   • Now starting to use the Grid heavily for processing as well as for distribution and storage of data.
   • System in place to distribute new data from the RAL and TRIUMF Tier 1 centres to Tier 2 sites in the UK, Spain (Barcelona and Valencia) and France (IN2P3). Done via FTS with channels hosted at RAL.
   • Issues with just using lcg-utils to copy data from Japan (see the retry sketch after this slide).
   • Storage snapshot: RAL Tier 1 ~109 TB; UK Tier 2 sites ~1–40 TB.
   • Soon entering the largest data-processing phase – the Grid will play a major role.
   • Team of 3 post-docs working on Grid-related issues: Ben Still (QMUL), Gustav Wikstrom (Geneva) and Jon Perkin (Sheffield).
   c/o Ben Still
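Given the noted problems with plain lcg-utils copies from Japan, the obvious workaround is to wrap the copy in retries. A minimal sketch, assuming lcg-utils is installed and a valid Grid proxy exists; the endpoints below are placeholders, not real T2K paths:

```python
import subprocess
import time

def copy_with_retries(src, dest, vo="t2k.org", attempts=3, wait=60):
    """Retry an lcg-cp transfer a few times before giving up (sketch)."""
    for _ in range(attempts):
        if subprocess.call(["lcg-cp", "--vo", vo, src, dest]) == 0:
            return True       # transfer succeeded
        time.sleep(wait)      # back off before the next attempt
    return False

# Placeholder source SURL and local destination, for illustration only:
copy_with_retries("srm://se.example.jp/t2k/raw/run001.dat",
                  "file:///data/t2k/raw/run001.dat")
```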

  16. NA62 • NA62-II (2013-2014): UK – Birmingham, Bristol, Glasgow, Liverpool. • Short test end of 2012. • Large MC run in Autumn 2011. • VO created in UK, enabled at Glasgow. • Grid interface (Janusz). c/o Dave B.

  17. MICE MICE magnet delays, etc. Rescheduling. Some tests/runs in 2011. Next “Step” in 2012. Custodial copy of raw data (2 copies) at UK T1. Important that this is efficient! c/o Dave Colling

  18. PhenoGrid: Health Warning!
   • Unlike the large VOs, there are no dedicated support staff – chasing problems takes academic time away from research.
   • Don’t have the ability to submit team tickets. Tickets appear to get low priority because they are considered individual-user (not VO) problems. This led to a number of issues in the last 6 months taking a long time to resolve, with the Grid unusable for much of that period. Hoping this will improve after interaction with Jeremy.
   • Stability of the Grid service is still dismal. Too many potential single points of failure, leading to a total uptime of the Grid well below 50% in our opinion – rather than the anticipated high 90%+ (see the arithmetic sketch after this slide).
   • Grid middleware is considered a failure on a scale that would embarrass a government IT project: still inconsistent, unstable and unreliable after ~10 years of development. Fails the “Daily Mail” test.
   c/o Peter Richardson
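The sub-50% figure is easy to reproduce if failures are independent (the numbers here are assumed for illustration, not PhenoGrid measurements): chaining around ten services, each individually 93% available, already drops end-to-end uptime below half.

```python
# End-to-end availability of a chain of independent services is the
# product of the individual availabilities. Assumed numbers, for
# illustration only.
per_service_uptime = 0.93   # each component up 93% of the time
n_services = 10             # e.g. UI, WMS, BDII, CE, batch, SE, proxy...
print(f"{per_service_uptime ** n_services:.1%}")   # ~48.4%
```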

  19. Little Sign of Life? “Maybe he’s dead, Jim”

  20. Not to Forget...
   • MINOS: Nick West and Phillip Rodrigues have left. Talked to Alfons Weber (UK spokesman) – now little UK effort and limited UK Grid use expected. Some legacy issues over storage (NFS, etc). Fire in the Soudan mine! Future neutrino experiment (NOvA)?
   • SuperNemo: Gianfranco Sciacca has left. Not sure of status – no update on “health”.
   • CDF and DZERO: Little UK Grid use now (none at T1). I did not consult them.
