GRID Deployment Session

Presentation Transcript


  1. GRID Deployment Session Summary
     F.Carminati, CERN
     LCG Launching Workshop, February 15, 2002

  2. Agenda
     14:00  Agenda and Scope, F.Carminati (CERN)
     14:20  GRID deployment, M.Mazzucato (INFN)
     14:50  Presentations from funding agencies / countries:
            Holland, M.Mazzucato (on behalf of K.Bos, NIKHEF)
            Germany, M.Kunze (Karlsruhe)
            IN2P3, D.Linglin (Lyon)
            INFN-Italy, F.Ruggieri (CNAF-INFN)
            Japan, I.Ueda (Tokyo)
            Northern European Countries, O.Smirnova (on behalf of J.Renner Hansen, NBI DK)
     16:00  Coffee Break
            Russia, V.A.Ilyin (SINP MSU Moscow)
            UK, J.Gordon (RAL)
            US-ATLAS, R.Baker (BNL)
            US-CMS, L.Bauerdick (FNAL)
            US-ALICE, B.Nilsen (OSU)
            Spain, M.Delfino
     16:20  Proposal for Coordination Mechanisms, L.Robertson
     16:50  Discussion
     SC2

  3.-14. [Slides 3 to 14 contained figures/images only; apart from the SC2 footer, no text was captured in the transcript]

  15. LHC computing at a glance
      • The investment in LHC computing will be massive
        • The LHC Review estimated 240 MCHF (before the LHC delay)
        • 80 MCHF/y afterwards
      • These facilities will be distributed
        • Political as well as sociological and practical reasons
      • Europe: 267 institutes, 4603 users; elsewhere: 208 institutes, 1632 users
      SC2

  16. Beyond distributed computing
      • Every physicist should have equal access to data and resources
      • The system will be extremely complex
        • Number of sites, and number of components at each site
        • Different tasks performed in parallel: simulation, reconstruction, scheduled and unscheduled analysis
      • We need transparent access to dynamic resources
      • Plans become outdated very quickly, so continual planning is necessary to reduce uncertainties
        • Estimated uncertainty in the numbers: a factor of 2
        • ±10% on Moore's law over 4 years: a factor of 2.7 (see the worked example below)
      • Testbeds at all levels are therefore necessary to continuously assess the evolution of the technology and of the computing model
      SC2
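
To make the compounding of planning uncertainties concrete, here is a small worked sketch. The formula behind the slide's "factor 2.7" is not stated, so this assumes one plausible reading: a ±10% uncertainty on the yearly growth factor, compounded over the planning horizon.

```python
# Hypothetical illustration, not from the slides: how a +/-10% per-year
# uncertainty on the assumed technology growth compounds into a large spread
# between optimistic and pessimistic capacity estimates.

def capacity_spread(yearly_uncertainty: float, years: int) -> float:
    """Ratio of the optimistic to the pessimistic capacity estimate when the
    assumed yearly growth factor is uncertain by +/- yearly_uncertainty."""
    optimistic = (1.0 + yearly_uncertainty) ** years
    pessimistic = (1.0 - yearly_uncertainty) ** years
    return optimistic / pessimistic

if __name__ == "__main__":
    for years in (4, 5):
        spread = capacity_spread(0.10, years)
        print(f"+/-10% per year over {years} years -> spread factor {spread:.2f}")
    # Prints ~2.2 for 4 years and ~2.7 for 5 years, i.e. of the same order
    # as the factor quoted on the slide.
```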

  17. Scalability?
      • The ALICE Data Challenge example
      • When using COTS, the question "Is your architecture scalable?" is replaced by "Have you been there or not?"
      SC2

  18. Deployment issues
      • Data Challenges are an integral part of experiment planning
        • A large effort from the experiments, almost like a testbeam
      • Planning and commitment are essential from all sides
        • LCG, IT at CERN, the remote centres and the experiments
        • This has to be a priority for the project
      • GRID tools (mostly GLOBUS / Condor) already facilitate the day-to-day work of physicists (see the sketch below)
      • Now we need to deploy a pervasive, ubiquitous, seamless GRID testbed between the LHC sites
        • Possibly including other HEP and non-HEP activities
      • How do we deploy and make use of the components of the basic technology that are appearing?
      SC2
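
As an illustration of the "day-to-day work" referred to above, the sketch below writes a minimal classic Condor submit description and hands it to condor_submit. It assumes a Condor pool is available; the executable simulate.sh, its argument and the file names are hypothetical.

```python
# Hedged sketch, not from the slides: a minimal Condor job of the kind
# physicists were already submitting in 2002. All names are illustrative.
import subprocess

SUBMIT_DESCRIPTION = """\
universe   = vanilla
executable = simulate.sh
arguments  = run_0042
output     = simulate.out
error      = simulate.err
log        = simulate.log
queue
"""

def submit_job(path: str = "simulate.sub") -> None:
    """Write the submit description to disk and pass it to condor_submit."""
    with open(path, "w") as f:
        f.write(SUBMIT_DESCRIPTION)
    subprocess.run(["condor_submit", path], check=True)

if __name__ == "__main__":
    submit_job()
```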

  19. GRID issues
      • Testbeds need close collaboration between all parties
        • Support is very labour-intensive
        • Planning and resource commitment are necessary
        • Development and production testbeds are needed in parallel
        • And stability, stability, stability
      • Experiments are buying into the common testbed concept
        • This is straining the existing testbeds
        • But the experiments must not be disappointed
      • MW from different GRID projects is at risk of diverging
        • Bottom-up, the SC2 has launched a common use case RTAG for the LHC experiments
        • Top-down, there are coordination bodies
        • LCG has to find an effective way to contribute to and follow these processes
      SC2

  20. [Diagram] If we manage to define common core use cases and requirements across the VOs (LHC and other applications), it will be easier to arrive at interoperable middleware (MW1 to MW5). The layering shown: a specific application layer (ALICE, ATLAS, CMS, LHCb, other apps) on top of common use cases, on top of a bag of services (GLOBUS, from the GLOBUS team), on top of OS & network services. SC2
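
Read as code, the layering on this slide amounts to experiments programming against a common use-case interface, with middleware-specific adapters underneath. The sketch below is purely illustrative; the class and method names are invented and do not correspond to any real middleware API.

```python
# Illustrative only: a common "job submission" use case shared by the VOs,
# with interchangeable middleware adapters beneath it.
from abc import ABC, abstractmethod
from typing import List

class JobSubmission(ABC):
    """A common core use case agreed between LHC and other applications."""

    @abstractmethod
    def submit(self, executable: str, arguments: List[str]) -> str:
        """Submit a job and return an opaque job identifier."""

class GlobusAdapter(JobSubmission):
    def submit(self, executable: str, arguments: List[str]) -> str:
        # Here the adapter would call into the GLOBUS "bag of services".
        return "globus-job-0001"

class OtherMiddlewareAdapter(JobSubmission):
    def submit(self, executable: str, arguments: List[str]) -> str:
        # A second middleware implementation behind the same use case.
        return "mw2-job-0001"

def run_production(backend: JobSubmission) -> str:
    # Experiment code targets the common use case, not a specific middleware.
    return backend.submit("reco.sh", ["run_0042"])

if __name__ == "__main__":
    print(run_production(GlobusAdapter()))
```

The point is only that agreeing on the common layer first makes the choice between middleware implementations a swappable detail rather than a fork in the road.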

  21. GRID issues
      • In the end there will be one GRID or zero
        • … or at least fewer GRIDs than computing centres!
        • We may not be able to afford more
      • Interoperability is a key issue (common components, or different ones that interoperate)
        • On national / regional and transatlantic testbeds
      • This kind of world-wide close coordination has never been done before
        • We need mechanisms here to make sure that all centres are part of a global planning effort
        • And these mechanisms should be complementary to existing ones, not parallel or conflicting
      SC2

  22.-32. [Slides 22 to 32 contained figures/images only; apart from a "?!" on slide 25 and the SC2 footer, no text was captured in the transcript]

  33. Issues
      • Experiments need testbeds for production and data challenges
      • We believe that GRID is the solution
        • But we need to be sure
      • It will take time to
        • Choose common middleware software
        • Deploy, validate, etc. etc. etc.
      Challenges started today!
      • But we are NOT starting from scratch
        • DataGRID already has a testbed
        • DataTAG and iVDGL have the mandate to create intercontinental testbeds
      SC2

  34. Application testbed requirements (DataGRID conference)
      • Three main priorities: stability, stability, stability
      SC2

  35. Short term / long term
      [Diagram: future development, current development and a production TB running in parallel]
      • LCG needs a testbed from now!
        • Let's not be hung up on the concept of "production"
        • The testbed must be stable and supported
        • The software is unstable, and we know it
      • Consolidation of the existing testbeds is necessary
        • A small amount of additional effort may be enough!
      • Several things can be done now
        • Set up a support hierarchy / chain
        • Consolidate authorisation / authentication (see the sketch below)
        • Consolidate / extend the VO organisation
        • Test EDG / VDT interoperability
        • This has been heroically pioneered by DataGRID
      • … and we need development testbed(s)
        • John's trident
      SC2
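
As one concrete facet of "consolidate authorisation / authentication", the sketch below parses a Globus-style grid-mapfile, which maps certificate subject DNs to local accounts. The file path and the handling of the line format are assumptions for illustration; real deployments had further conventions (multiple accounts per DN, for example).

```python
# Hedged sketch, not from the slides: checking whether a certificate subject
# is authorised via a grid-mapfile with lines of the form:
#   "/O=Grid/O=CERN/OU=cern.ch/CN=Some User" localaccount
from typing import Dict

def load_gridmap(path: str = "/etc/grid-security/grid-mapfile") -> Dict[str, str]:
    """Parse 'subject DN' -> local account lines, skipping blanks and comments."""
    mapping: Dict[str, str] = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            dn, _, account = line.rpartition(" ")
            mapping[dn.strip('"')] = account
    return mapping

def is_authorised(subject_dn: str, gridmap: Dict[str, str]) -> bool:
    """True if the subject DN appears in the map."""
    return subject_dn in gridmap
```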

  36. Short term actions – SC2 15/3/02
      • Establish high-level milestones
        • Start from the end of the project and work back
      • Rescope the Use Case RTAG
        • Include a mandate to express the experiments' requirements for testbeds
        • Interim report mid April, final mid March
      • PEB sets up a planning group for GRID deployment
        • Starting mid April, ready in 4 to 6 weeks, with the experiments participating
      • Participate actively in GLUE, but do not freeze everything waiting for GLUE
        • Experiments may go away and never come back
      SC2

  37. Conclusions
      • A challenge within the challenge
        • It is here that technology comes into contact with politics
      • GRID deployment and the Data Challenges will be the proof of the existence of the system
        • Where it all comes together
      • A complicated scenario, and a manifold role for LCG
        • Follow the technology
        • Complement the existing GRID coordination
        • Harmonize the planning of the different experiments and of the Project
        • Provide support and collaborate with the experiments
        • Deploy and support testbeds and Data Challenges
      • Managing long term and short term together will be difficult
        • But only present experience can provide enough knowledge to make the right long-term choices
      SC2
