Thoughts on LHCONE/LHCOPN evolution


Presentation Transcript


1. Thoughts on LHCONE/LHCOPN evolution. DANTE, DFN, GARR, RENATER. CERN, 10-11 February 2014. Presenter: Roberto Sabatino (DANTE)

2. Recap on facts & figures
• LHCOPN is a private, closed infrastructure whose primary purpose is to provide dedicated connectivity between the T0 and the T1s
• 13 T1s are connected directly to the T0
• Capacity is also used for T1-T1 traffic, via the T0 or additional links, and for transit to the T0 via other T1s
• LHCONE: private IP overlays (VRFs) on NRENs, GEANT, ESnet and Internet2, interconnected via Open Exchanges
• 8 T1s and 40+ T2s are interconnected
• Multi-domain routing works just like normal IP, but with restrictions on the advertised IP space, and with dedicated access links and transatlantic links (see the prefix-filter sketch after this slide)
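To make the "restrictions on advertised IP space" concrete, here is a minimal sketch of the kind of prefix filtering an LHCONE VRF peering applies: only routes falling inside registered LHCONE address blocks are accepted. The prefixes and the helper name are illustrative assumptions, not the actual LHCONE prefix list.

```python
import ipaddress

# Hypothetical blocks standing in for registered LHCONE address space;
# the real list is maintained by the participating sites.
LHCONE_BLOCKS = [
    ipaddress.ip_network("192.0.2.0/24"),     # stand-in for a T1 block
    ipaddress.ip_network("198.51.100.0/24"),  # stand-in for a T2 block
]

def accept_route(prefix: str) -> bool:
    """Accept a BGP advertisement only if it falls inside a registered block."""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(block) for block in LHCONE_BLOCKS)

# A site advertising part of its registered block is accepted;
# an unregistered campus prefix is rejected.
print(accept_route("198.51.100.128/25"))  # True
print(accept_route("203.0.113.0/24"))     # False
```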

3. Changing landscape
• In 2012, when LHCONE started, R&E networks were mostly based on Nx10Gbps links
• Today many are based on, or moving towards, 100Gbps, with site connections at 100Gbps too (see the back-of-the-envelope comparison after this slide)
• Stronger coordination on transatlantic connectivity compared to pre-2012
• Resilient, diverse bundles for general connectivity and LHCONE connectivity
• Trials of 100Gbps transatlantic links
• 100Gbps transatlantic capacity expected to be operationally/commercially viable in late 2014/early 2015
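As a rough illustration of what the 10G-to-100G transition means for LHC data movement, the sketch below computes ideal transfer times for a dataset at both line rates. The 100 TB dataset size and the 80% achievable-throughput factor are illustrative assumptions, not figures from the presentation.

```python
# Back-of-the-envelope transfer-time comparison for 10G vs 100G links.
DATASET_TB = 100   # assumed dataset size in terabytes (illustrative)
EFFICIENCY = 0.8   # assumed fraction of line rate achieved end to end

def transfer_hours(link_gbps: float) -> float:
    """Ideal hours to move DATASET_TB over a link at EFFICIENCY utilisation."""
    bits = DATASET_TB * 1e12 * 8  # terabytes -> bits
    return bits / (link_gbps * 1e9 * EFFICIENCY) / 3600

for rate in (10, 100):
    print(f"{rate:>3} Gbps: {transfer_hours(rate):6.1f} hours")
# Roughly 27.8 hours at 10 Gbps versus 2.8 hours at 100 Gbps.
```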

4. Current shortcomings
• T1-T1 traffic via the T0 is constrained; T0-T1 is not guaranteed 10Gbps
• T1-T1 transfers via LHCOPN are suboptimal
• Many T1-T1 paths need to go via the T0 (e.g. BNL-NLT1)
• Upgrading the OPN and improving its resilience is costly
• Current transatlantic capacity is still somewhat limited
• Current use of transatlantic T1-T0 links is not flexible; they cannot be shared for T1-T2 traffic

5. Thoughts
• Open up the LHCONE policy to allow T1-T1 traffic
• Within regions (GEANT, North America) there is ample capacity to support the T1-T1 traffic
• DFN, GARR and RENATER are able to provide capacity to move T1-T1 traffic from LHCOPN to LHCONE at short notice
• Possibly use LHCONE as a T0-T1 backup as well
• More flexibility in the use of transatlantic LHC-dedicated links
• Perform an intra-Europe T1-T1 trial between KIT, CNAF and IN2P3 (see the measurement sketch after this slide)
• Perform a transatlantic trial over the ANA-100G link
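A trial of this kind ultimately reduces to measuring achieved throughput between site pairs. Below is a minimal sketch of how such a measurement could be scripted around iperf3, assuming an iperf3 server is already running at each remote endpoint; the hostnames are hypothetical placeholders, not actual T1 endpoints.

```python
import json
import subprocess

# Hypothetical trial endpoints; a real trial would use the sites'
# perfSONAR or iperf3 test hosts.
ENDPOINTS = ["t1-kit.example.org", "t1-cnaf.example.org", "t1-in2p3.example.org"]

def measure_gbps(host: str, seconds: int = 30) -> float:
    """Run one iperf3 client test and return mean received throughput in Gbps."""
    out = subprocess.run(
        ["iperf3", "-c", host, "-t", str(seconds), "-J"],  # -J: JSON output
        capture_output=True, text=True, check=True,
    )
    report = json.loads(out.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e9

for host in ENDPOINTS:
    print(f"{host}: {measure_gbps(host):.2f} Gbps")
```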

6. From the single T1 point of view (today)
[Diagram: the T1 connects to the T0 (CERN) over an LHCOPN private circuit, and to other sites over the LHCONE IP service via its [R/N]REN and GEANT; no T1-T1 traffic is carried over LHCONE]

7. Possible scenario
[Diagram: the T1 has a double link to its local NREN (increased bandwidth and resiliency); both the LHCOPN private circuit to the T0/T1 at CERN and the LHCONE IP service are reached via the [R/N]RENs and GEANT]

8. Summary
• Validate T1-T1 traffic over LHCONE
• Allow T1-T1 traffic over LHCONE
• Transatlantic transfers can be prioritised (and capped) (see the rate-cap sketch after this slide)
• Flexible use of LHC-dedicated transatlantic T1-T0 links for both LHCOPN and LHCONE
• T1s and T2s choose how to connect to other sites:
  • LHCOPN (T0, T1)
  • LHCONE (T1, T2, …)
  • Dedicated connectivity (lambda, p2p, BoD)
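"Prioritised (and capped)" implies some form of rate limiting on the transatlantic path. As a conceptual illustration only (a real deployment would use router QoS policy, not application code), here is a minimal token-bucket sketch of how a cap works; the 20 Gbps cap and 50 Mbit burst allowance are arbitrary assumptions.

```python
import time

class TokenBucket:
    """Token bucket: traffic above the sustained rate `rate_bps` is not admitted."""
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps
        self.capacity = burst_bits
        self.tokens = burst_bits
        self.last = time.monotonic()

    def admit(self, packet_bits: float) -> bool:
        now = time.monotonic()
        # Refill tokens at the configured rate, up to the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False  # packet would exceed the cap: drop or queue it

# Assumed 20 Gbps cap with a 50 Mbit burst allowance (illustrative values).
cap = TokenBucket(rate_bps=20e9, burst_bits=50e6)
print(cap.admit(12_000))  # a 1500-byte packet is admitted while tokens remain
```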
