
LHCOPN Status and Plans

Presentation Transcript


  1. LHCOPN Status and Plans. Joint Techs Hawaii. David Foster, Head, Communications and Networks, CERN. January 2008.

  2. Acknowledgments • Many presentations and much material in the public domain have contributed to this presentation; the sources are too numerous to mention individually.

  3. [Aerial photo: the LHC, with Mont Blanc (4810 m) and downtown Geneva labelled]

  4. 26,659 m in circumference • SC magnets pre‑cooled to -193.2°C (80 K) using 10,080 tonnes of liquid nitrogen; 60 tonnes of liquid helium bring them down to -271.3°C (1.9 K) • The internal pressure of the LHC is 10⁻¹³ atm, ten times less than the pressure on the Moon • 600 million proton collisions/second. CERN – March 2007

  5. CERN’s Detectors • To observe the collisions, collaborators from around the world are building four huge experiments: ALICE, ATLAS, CMS, LHCb • Detector components are constructed all over the world • Funding comes mostly from the participating institutes, less than 20% from CERN [Images: the CMS, ATLAS, ALICE and LHCb detectors]

  6. The LHC Computing Challenge • Signal/noise: 10⁻⁹ • Data volume: high rate × large number of channels × 4 experiments → 15 PetaBytes of new data each year (a back-of-envelope check follows below) • Compute power: event complexity × number of events × thousands of users → 100k of today's fastest CPUs • Worldwide analysis & funding: computing funded locally in major regions & countries; efficient analysis everywhere → GRID technology
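
The 15 PB/year figure is rate-times-size arithmetic. Below is a minimal back-of-envelope check in Python; the event rate, event size and live time are illustrative assumptions, not the experiments' actual trigger parameters.

```python
# Back-of-envelope check of the ~15 PB/year figure.
# All numbers below are illustrative assumptions, not the experiments'
# actual trigger rates or event sizes.

event_rate_hz = 300        # events written to storage per second (assumed)
event_size_mb = 1.5        # average raw event size in MB (assumed)
experiments = 4            # ALICE, ATLAS, CMS, LHCb
live_seconds = 1e7         # ~10^7 s of effective data taking per year (assumed)

total_pb = event_rate_hz * event_size_mb * experiments * live_seconds / 1e9
print(f"~{total_pb:.0f} PB of new data per year")  # ~18 PB, same order as 15 PB
```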

  7. [Photo slide: CERN – March 2007]

  8. [Photo slide: CERN – March 2007]

  9. The WLCG Distribution of Resources • Tier-0 – the accelerator centre: data acquisition and initial processing of raw data; distribution of data to the different Tier-1s • Tier-1 (11 centres) – “online” to the data acquisition process, high availability: managed mass storage – grid-enabled data service; data-heavy analysis; national and regional support. The Tier-1s: Canada – TRIUMF (Vancouver); France – IN2P3 (Lyon); Germany – Forschungszentrum Karlsruhe; Italy – CNAF (Bologna); Netherlands – NIKHEF/SARA (Amsterdam); Nordic countries – distributed Tier-1; Spain – PIC (Barcelona); Taiwan – Academia Sinica (Taipei); UK – CLRC (Oxford); US – FermiLab (Illinois) and Brookhaven (NY) • Tier-2 – ~200 centres in ~40 countries: simulation; end-user analysis – batch and interactive

  10. Centers around the world form a Supercomputer • The EGEE and OSG projects are the basis of the Worldwide LHC Computing Grid Project (WLCG) • Inter-operation between Grids is working!

  11. The Grid is now in operation, working on: reliability, scaling up, sustainability. Tier-1 Centers: TRIUMF (Canada); GridKA (Germany); IN2P3 (France); CNAF (Italy); SARA/NIKHEF (NL); Nordic Data Grid Facility (NDGF); ASCC (Taipei); RAL (UK); BNL (US); FNAL (US); PIC (Spain)

  12. Guaranteed bandwidth can be a good thing

  13. LHCOPN Mission • To assure the T0-T1 transfer capability. • Essential for the Grid to distribute data out to the T1’s. • Capacity must be large enough to deal with most situations, including “catch up” (a sizing sketch follows below). • The excess capacity can be used for T1-T1 transfers. • Lower priority than T0-T1. • May not be sufficient for all T1-T1 requirements. • Resiliency Objective • No single failure should cause a T1 to be isolated. • Infrastructure can be improved. • Naturally started as an unprotected “star” – insufficient for a production network, but it enabled rapid progress. • Has become a reason for, and has leveraged, cross-border fiber. • An excellent side effect of the overall approach.
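
“Catch up” sizing is simple arithmetic: after an outage, the link must carry the accumulated backlog on top of the steady-state rate, so the headroom above the nominal rate determines how long recovery takes. A minimal sketch with assumed numbers (not LHCOPN provisioning figures):

```python
# How long does a T0->T1 link need to clear the backlog after an outage?
# All figures are illustrative assumptions.

steady_rate_gbps = 5.0      # nominal T0->T1 transfer rate (assumed)
link_capacity_gbps = 10.0   # provisioned lambda (assumed)
outage_hours = 12.0         # data kept buffered at the T0 during the outage

backlog_gbit = steady_rate_gbps * 3600 * outage_hours   # gigabits accumulated
headroom_gbps = link_capacity_gbps - steady_rate_gbps   # spare capacity for catch-up
catchup_hours = backlog_gbit / (headroom_gbps * 3600)

print(f"backlog: {backlog_gbit / 8 / 1000:.1f} TB, "
      f"catch-up time: {catchup_hours:.1f} h")          # 27.0 TB, 12.0 h
```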

  14. LHCOPN Design Information • All technical content is on the LHCOPN Twiki: http://lhcopn.cern.ch • Coordination Process • LHCOPN Meetings (every 3 months) • Active Working Groups • Routing • Monitoring • Operations • Active Interfaces to External Networking Activities • European Network Policy Groups • US Research Networking • Grid Deployment Board • LCG Management Board • EGEE

  15. [Photo slide: CERN – March 2007]

  16. CERN External Network Links [Diagram of the CERN WAN and its external links: SWITCH and GÉANT2; commercial ISPs COLT, Interoute and Global Crossing; the CIXP and Equinix-TIX exchange points; the WHO and CITIC74 CICs; Tier-2s (TIFR, UniGeneva, Russian Tier-2s via RIPN); USLHCnet (Chicago – NYC – Amsterdam); and the LHCOPN links from CH-CERN (Tier-0) to the Tier-1s CA-TRIUMF, DE-KIT, ES-PIC, FR-CCIN2P3, IT-INFN-CNAF, NDGF, NL-T1, TW-ASGC, UK-T1-RAL, US-FNAL-CMS and US-T1-BNL. Aggregate capacities labelled 5G to 40G; individual links 100 Mbps, 1 Gbps or 10 Gbps.]

  17. CERN External Network [Detailed topology diagram: CERN border and internal routers (e513-e-rci76-1/2, e513-x-mfte6-1, l513-c-rftec-1/2, etc.); the CIXP and TIX exchange points; peerings with SWITCH (AS559), GÉANT (AS20965), Level3 (AS3356), COLT (AS8220), Global Crossing (AS3549), Reuters (AS65020), Akamai (AS21357) and RIPE RIS (AS12654); I-root and K-root DNS servers; WHO (158.232.0.0/16) and CITIC74 (195.202.0.0/20); Russian research networks JINR (AS2875), KIAE (AS6801) and RadioMSU (AS2683); the LHCOPN; and USLHCnet (AS1297, 192.65.196.0/23) PoPs in Amsterdam, Chicago (StarLight) and New York (MAN LAN) connecting to ESnet (AS293), FNAL (AS3152) and Abilene (AS11537). edoardo.martelli@cern.ch – last update: 20070801]

  18. Transatlantic Link Negotiations • Yesterday, a major provider lost their shirt on this deal!

  19. LHCOPN Architecture 2004 Starting Point

  20. GÉANT2: Consortium of 34 NRENs • 22 PoPs, ~200 sites • 38k km of leased services, 12k km of dark fiber • Supporting light paths for LHC, eVLBI, et al. • Dark fiber core among 16 countries: Austria, Belgium, Bosnia-Herzegovina, Czech Republic, Denmark, France, Germany, Hungary, Ireland, Italy, Netherlands, Slovakia, Slovenia, Spain, Switzerland and the United Kingdom • Multi-wavelength core (up to 40 wavelengths) + 0.6-10G loops [H. Doebbeling]

  21. Basic Link Layer Monitoring • perfSONAR is well advanced in deployment (but not yet complete); it monitors the “up/down” status of the links • Integrated into the “End to End Coordination Unit” (E2ECU) run by DANTE • Provides simple indications of “hard” faults (a minimal polling sketch follows below) • Insufficient to understand the quality of the connectivity
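
A minimal sketch of that up/down pattern: poll each link's state and report transitions, which is the kind of "hard fault" signal the E2ECU consumes. This is a generic illustration, not the perfSONAR implementation; poll_link_status() is a hypothetical stub standing in for the real measurement points.

```python
import time

# Hypothetical status source: in the real deployment the state comes from
# perfSONAR measurement points, not from this stub.
def poll_link_status(link: str) -> str:
    return "UP"  # stub; a real source would return "UP" or "DOWN"

LINKS = ["CERN-GRIDKA", "CERN-IN2P3", "CERN-RAL"]  # illustrative subset
last_state = {link: "UP" for link in LINKS}

while True:
    for link in LINKS:
        state = poll_link_status(link)
        if state != last_state[link]:
            # Hard fault or recovery: report the transition (e.g. to the E2ECU).
            print(f"{link}: {last_state[link]} -> {state}")
            last_state[link] = state
    time.sleep(60)  # poll once a minute
```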

  22. Active Monitoring • Active monitoring is needed, and implementation consistency is needed for accurate results: • One-way delay • TCP achievable bandwidth • ICMP-based round-trip time • Traceroute information for path changes • Needed for service quality issues • The first mission is T0-T1 and T1-T1 • The T1 deployment could also be used for T1-T2 measurements as a second step, with corresponding T2 infrastructure (a measurement sketch follows below)
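
A minimal sketch of two of the measurements listed above, ICMP round-trip time and traceroute path capture, using the standard system tools. The endpoint is a placeholder, and the parsing assumes Linux-style ping/traceroute output; one-way delay and TCP achievable bandwidth need dedicated tools (e.g. OWAMP and BWCTL) and are not covered here.

```python
import re
import subprocess

HOST = "t1.example.org"  # placeholder endpoint, not a real LHCOPN host

def icmp_rtt_ms(host: str) -> float | None:
    """Average RTT from a few ICMP echoes (parses Linux ping's summary line)."""
    out = subprocess.run(["ping", "-c", "4", host],
                         capture_output=True, text=True).stdout
    m = re.search(r"= [\d.]+/([\d.]+)/", out)  # min/avg/max/mdev line
    return float(m.group(1)) if m else None

def current_path(host: str) -> list[str]:
    """Hop list from traceroute -n; comparing successive runs reveals path changes."""
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True).stdout
    return [line.split()[1] for line in out.splitlines()[1:] if line.split()]

print("rtt:", icmp_rtt_ms(HOST), "ms")
print("path:", current_path(HOST))
```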

  23. Background Stats

  24. Monitoring Evolution • Long-standing collaboration on measurement and monitoring technologies • Monitoring working group of the LHCOPN • ESnet and DANTE have been leading the effort • Proposal for a managed service by DANTE: • Manage the tools and archives • Manage the hardware and O/S • Manage the integrity of the information • Sites have some obligations: • On-site operations support • Provision of a terminal server • Dedicated IP port on the border router • PSTN/ISDN line for out-of-band communication • Gigabit Ethernet switch • GPS antenna • Protected power • Rack space

  25. Operational Procedures • Still to be finalised, but they need to deal with change and incident management. • Many parties are involved. • Have to agree on the real processes involved. • A recent operations workshop made some progress. • Try to avoid, wherever possible, too many “coordination units”. • All parties agreed we need some centralised information to have a global view of the network and incidents. • A further workshop is planned to quantify this. • We also need to understand the existing processes used by the T1’s.

  26. Resiliency Issues • The physical fiber path considerations continue. • Some lambdas have been re-routed; others still may be. • Layer-3 backup paths for RAL and PIC are still an issue. • In the case of RAL, excessive costs seem to be a problem. • For PIC, there is still some hope of a CBF (cross-border fiber) link between RedIRIS and RENATER. • Overall the situation is quite good with the CBF links, but it can still be improved. • Most major “single” failures are protected against (a mechanical check of this objective is sketched below).
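
The "no single failure isolates a T1" objective can be checked mechanically: remove each link in turn and test whether every T1 still reaches the T0. A minimal sketch on a toy topology; the link list is illustrative, not the actual LHCOPN provisioning, though RAL and PIC are deliberately left without backup paths to mirror the slide.

```python
# Toy topology: direct T0-T1 lambdas plus cross-border-fiber (CBF) T1-T1
# links that provide backup paths. Illustrative, not the real LHCOPN.
LINKS = [
    ("CERN", "GRIDKA"), ("CERN", "IN2P3"), ("CERN", "CNAF"),
    ("CERN", "SARA"), ("CERN", "NDGF"), ("CERN", "RAL"), ("CERN", "PIC"),
    ("GRIDKA", "IN2P3"), ("GRIDKA", "CNAF"), ("GRIDKA", "SARA"),  # CBF
    ("SARA", "NDGF"),                                             # CBF
]
T1S = {"GRIDKA", "IN2P3", "CNAF", "SARA", "NDGF", "RAL", "PIC"}

def reachable_from_t0(links):
    """Nodes reachable from CERN over the given links (simple BFS)."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, todo = {"CERN"}, ["CERN"]
    while todo:
        for nxt in adj.get(todo.pop(), ()):
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

for failed in LINKS:
    isolated = T1S - reachable_from_t0([l for l in LINKS if l != failed])
    if isolated:
        print(f"single failure of {failed} isolates {sorted(isolated)}")
# prints only the CERN-RAL and CERN-PIC failures, matching the slide
```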

  27. T0-T1 Lambda routing (schematic) [Map, “Connect. Communicate. Collaborate”: the T0 at GENEVA and the eleven T1s, with the T0-T1 lambdas CERN-RAL, CERN-PIC, CERN-IN2P3, CERN-CNAF, CERN-GRIDKA, CERN-NDGF and CERN-SARA, plus CERN-TRIUMF and CERN-ASGC via SURFnet (ASGC via SMW-3 or 4 (?)), and the transatlantic lambdas USLHCNET NY (AC-2/Yellow), USLHCNET NY (VSNL N) and USLHCNET Chicago (VSNL S); some paths are marked “???”. European waypoints include Basel, Zurich, Strasbourg/Kehl, Stuttgart, Frankfurt, Hamburg, Copenhagen, Amsterdam, London, Paris, Lyon, Madrid, Barcelona and Milan; US landing points include MAN LAN (NY) and Starlight (Chicago), reaching BNL and FNAL.]

  28. T1-T1 Lambda routing (schematic) [Map, same European topology as the previous slide, showing the T1-T1 lambdas GRIDKA-CNAF, GRIDKA-IN2P3, GRIDKA-SARA and SARA-NDGF; some paths are marked “???”.]

  29. Some Initial Observations [Annotated version of the lambda-routing map; key: GEANT2, SURFnet, NREN, USLHCNET, T1-T1 (CBF)] • Between CERN and Basel, the following lambdas run in the same fibre pair: CERN-GRIDKA, CERN-NDGF, CERN-SARA, CERN-SURFnet-TRIUMF/ASGC (x2), USLHCNET NY (AC-2) • The following lambdas run in the same (sub-)duct/trench: all of the above, plus CERN-CNAF and USLHCNET NY (VSNL N) [supplier is COLT] • The following lambda MAY run in the same (sub-)duct/trench as all of the above: USLHCNET Chicago (VSNL S) [awaiting info from Qwest…] • Between Basel and Zurich, the following lambdas run in the same trench: CERN-CNAF and GRIDKA-CNAF (T1-T1) • The following lambda MAY run in the same trench as all of the above: USLHCNET Chicago (VSNL S) [awaiting info from Qwest…] (an SRLG sketch of these groupings follows below)
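
These observations are what a shared-risk-link-group (SRLG) analysis produces: group the lambdas by the physical segment they traverse, and any segment carrying more than one lambda is a common-mode failure. A minimal sketch encoding the slide's data (simplified; the unconfirmed "MAY" cases are omitted, as on the slide):

```python
from collections import defaultdict

# Lambda -> physical segments it traverses, from the slide's CERN-Basel
# and Basel-Zurich observations. Lambdas in the shared fibre pair also
# share the enclosing (sub-)duct/trench.
FIBRE_PAIR = ["CERN-GRIDKA", "CERN-NDGF", "CERN-SARA",
              "CERN-SURFnet-TRIUMF", "CERN-SURFnet-ASGC", "USLHCNET NY (AC-2)"]

segments = {lam: ["CERN-Basel fibre pair", "CERN-Basel duct"] for lam in FIBRE_PAIR}
segments["CERN-CNAF"] = ["CERN-Basel duct", "Basel-Zurich trench"]
segments["USLHCNET NY (VSNL N)"] = ["CERN-Basel duct"]
segments["GRIDKA-CNAF"] = ["Basel-Zurich trench"]

# Invert to shared-risk link groups: one cut of a segment takes down
# every lambda that traverses it.
srlg = defaultdict(list)
for lam, segs in segments.items():
    for seg in segs:
        srlg[seg].append(lam)

for seg, lams in sorted(srlg.items()):
    if len(lams) > 1:
        print(f"one cut of '{seg}' takes down {len(lams)} lambdas: {sorted(lams)}")
```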

  30. Closing Remarks • The LHCOPN is an important part of the overall requirements for LHC networking. • It is a (relatively) simple concept: • Statically allocated 10G paths in Europe • Managed bandwidth on the 10G transatlantic links via USLHCNet • Multi-domain operations remain to be completely solved. • This is a new requirement for the parties involved and a learning process for everyone. • Many tools and ideas exist; the work now is to pull this all together into a robust operational framework.

  31. Simple solutions are often the best!
