
LCG Phase-2 Planning



  1. LCG Phase-2 Planning
  David Foster, IT/CS, 14th April 2005
  Thanks to Dante, ASNet and ESnet for material presented at the T0/T1 Networking meeting in Amsterdam on 8th April 2005.

  2. European Tier0/Tier1/Tier2 Connectivity Overview
  • A total of 7 NRENs serve Tier1 sites in Europe, plus one in Asia Pacific
  • All currently have at least 1Gbps IP connections to CERN
  • 10Gbps lambdas become available on timescales ranging from July 2005 to autumn 2006
  • The number, location and bandwidth requirements of Tier2 sites are unclear to many NRENs
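  To put these rates in context, the sketch below (Python; the 70% sustained-utilisation factor is an assumed figure for illustration, not one taken from the slides) converts nominal link capacities into approximate daily transfer volumes, which is what makes the step from 1Gbps IP connections to 10Gbps lambdas significant.

    # Back-of-the-envelope conversion of a nominal link rate to daily volume.
    # The 70% sustained-utilisation factor is an illustrative assumption.
    def daily_volume_tb(link_gbps: float, utilisation: float = 0.7) -> float:
        """Approximate terabytes per day deliverable over a link of link_gbps."""
        seconds_per_day = 86_400
        bits_per_day = link_gbps * 1e9 * utilisation * seconds_per_day
        return bits_per_day / 8 / 1e12  # bits -> bytes -> terabytes

    for rate_gbps in (1, 2.5, 10):
        print(f"{rate_gbps:>4} Gbps ~ {daily_volume_tb(rate_gbps):6.1f} TB/day")
    # 1 Gbps ~ 7.6 TB/day, 2.5 Gbps ~ 18.9 TB/day, 10 Gbps ~ 75.6 TB/day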

  3. GÉANT2 Project Partners

  4. RedIRIS (Spain)
  • Connecting the PIC Tier 1 site in Barcelona
  • Traffic crosses 3 domains prior to reaching GÉANT2:
    • the PIC network
    • Anella Cientifica (the Catalan regional network)
    • RedIRIS
  • Currently a 1Gbps VPN is supported
  • An upgrade is planned for the RedIRIS connection to the Catalan network, date TBD
  • No request has yet been received from PIC for a 10G lambda
  • PIC requirement timeline unclear
  • 7 Tier 2 sites are known in Spain
  • Bandwidth requirements of the Tier 2 sites unknown
  • Tier 2 site connectivity varies from GE to STM-4
  • Cost sharing TBD with the Spanish ministry

  5. DFN (Germany)
  • DFN will connect the Tier 1 site at Karlsruhe to CERN via GÉANT2
  • Presently 10G is available over GÉANT (Layer 3), providing an LSP Karlsruhe-to-GÉANT-to-CERN
  • Testing is already taking place and high-data-rate transfers have been demonstrated from Karlsruhe to CERN
  • Tier2 centres are not yet known, so provision is unclear
  • Cost sharing: Karlsruhe will pay a subscription to DFN, a proportion of which will be passed to GÉANT2

  6. GARR (Italy)
  • GARR will connect the Bologna Tier1 site to CERN via GÉANT2
  • A 10Gbps lambda ring provided by GARR, connecting INFN-CNAF (Tier 1) and the GÉANT2 PoP in Milan, will be operational by September 2005
  • By the end of 2005, multiple lambdas will be available from this site to GÉANT2, allowing as many 10Gbps connections as required
  • GARR connects the Bologna Tier1 to other Tier1s via GÉANT2
  • 12 Italian Tier 2 sites identified, all with dark fibre (DF) to the GARR backbone
  • 8 Tier 2 sites already have a 1Gbps connection; all will have 1Gbps connectivity by September 2005
  • GARR will bill INFN for all services provided; details of the cost sharing TBD

  7. UKERNA (UK)
  • UKERNA will connect the RAL Tier1 site to CERN via GÉANT2
  • 2x1Gbps RAL-CERN via UKLight is possible now
  • 10G lambda RAL-UKLight (switched port)-GÉANT2 by end 2006
  • Cost will be addressed by UK national funding (discussions ongoing), with a proportion being channelled to GÉANT2
  • Four distributed Tier 2 sites (NorthGrid, SouthGrid, ScotGrid, LondonGrid); bandwidth requirements unknown

  8. SURFnet (Netherlands)
  • SURFnet will connect the Tier1 site at SARA, Amsterdam
  • SURFnet6 will provide a 10G lambda to SARA by July 2005
  • Initially the 10G lambda to CERN will be provided by SURFnet, later by GÉANT2 when available
  • Tier2 sites in the Netherlands will be connected via 10G lambdas by January 2006
  • 1G lightpaths will be provided over NetherLight and/or GÉANT from Dutch Tier2s to non-Dutch Tier1s
  • SURFnet will absorb the networking costs of the NL access to CERN via GÉANT2 and all costs inside NL for accessing the Tier1 and Tier2s

  9. RENATER (France)
  • RENATER will connect the IN2P3 (Lyon) Tier1 site directly to CERN (not via GÉANT2)
  • RENATER will procure dark fibre between Paris, Lyon and CERN
  • A 10G lightpath will be provided Lyon-CERN by July 2005
  • Tier1-Tier1 traffic TBD
  • Traffic to/from the 3 French Tier 2 sites will pass over the RENATER network
  • Cost sharing TBD

  10. NORDUnet (Nordic Countries)
  • NORDUnet will connect the ‘distributed’ Tier1 site in the Nordic countries
  • Connectivity via lambdas can be provided by mid-2006 for all the sites concerned
  • Cost sharing TBD

  11. SWITCH (Switzerland)
  • The Tier1 site at CERN is connected directly to the Tier0
  • There are no Tier1 sites connected by SWITCH
  • The Tier2 site, CSCS, has 10GE available if necessary
  • CSCS will connect directly to CERN (i.e. not via GÉANT2)
  • The cost of this connection will be borne by SWITCH

  12. European Tier1 Summary
  • High-data-rate transfer tests from Karlsruhe to CERN are already underway
  • 10G is available now (L3) from Bologna to CERN
  • Testing of 10G lambdas from Lyon and Amsterdam can commence from July 2005
  • Amsterdam (and Taipei) will use the NetherLight link to CERN until GÉANT2 paths are available
  • Testing of the Barcelona link at 10G from October 2005
  • The Nordic distributed facility is restricted to 1G until late 2006, when 10G becomes available
  • RAL could operate between 2 and 4 x 1GE (subject to scheduling and a NetherLight/CERN agreement) until late 2006, when 10G becomes available; interconnection of UKLight to GÉANT2 might make an earlier transition to higher capacities possible

  13. SC test bed @ Taipei

  14. Taipei Status
  • ASNet* runs one STM-16 IPLC from Taipei to Amsterdam
  • One GE local loop from Amsterdam to Geneva (via the NetherLight LightPath service); a second GE is waiting to be turned on (the cross-connect order form has already been signed)
  • The IPLC will be doubled on July 1st (the contract will be signed on May 27th)
  • The ASCC LCG facilities now have multiple GE uplinks to ASNet's core router; these will be replaced by one or two 10GE links during this summer vacation
  • ASNet connects (2 x 10GE + n x STM-64) to the domestic backbone, a.k.a. the TANet/TWAREN joint backbone, to reach the T2s in Taiwan
  • Every T2 in Taiwan has its own 10GE link to the domestic backbone
  * ASNet (Academic Services Network, AS9264) is the network division, and is also the network name registered in APNIC.
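  The Taipei and Asia Pacific slides quote circuit capacities as SDH levels (STM-1, STM-4, STM-16, STM-64) alongside GE and 10GE links. As a convenience, the sketch below uses the standard SDH line rates (STM-n = n x 155.52 Mb/s) to make those figures comparable in Gb/s.

    # Standard SDH line rates: an STM-n circuit carries n x 155.52 Mb/s.
    # Convenience sketch for comparing the capacities quoted on these slides.
    STM_1_MBPS = 155.52

    def stm_rate_gbps(n: int) -> float:
        """Line rate of an STM-n circuit in Gb/s."""
        return n * STM_1_MBPS / 1000

    for level in (1, 4, 16, 64):
        print(f"STM-{level:<2} ~ {stm_rate_gbps(level):5.2f} Gb/s")
    # STM-1 ~ 0.16, STM-4 ~ 0.62, STM-16 ~ 2.49, STM-64 ~ 9.95 Gb/s
    # So the Taipei-Amsterdam STM-16 IPLC is roughly 2.5 Gb/s (about 5 Gb/s
    # once doubled in July), and an STM-64 is comparable to a 10GE lambda.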

  15. Asia Pacific Link Status
  • How many T2s are there in the Asia Pacific region?
  • Taipei – Tokyo
    • STM-4 IPLC
    • 2 x GE to APAN-JP (for JP universities)
    • Will have GE to SiNet/SuperSiNet (for KEK)
    • The IPLC should be increased before the LCG production run
  • Taipei – Singapore
    • STM-1 IPLC to SingAREN (National Grid Office of Singapore)

  16. Asia Pacific Link Status
  • Taipei – Hong Kong
    • STM-4 IPLC; will upgrade to STM-16 on Feb 1st, 2006 (the contract was already signed on March 30th)
    • GE link to CERnet, GE link to CSTnet
      • CERnet: China Education and Research Network
      • CSTnet: China Science and Technology Network
    • Possibly a GE link with KOREN (still being worked on with KOREN engineers)

  17. ASNet (Taiwan) Summary
  • ASNet connects the Taipei Tier1 site
  • Currently Taipei is connected to Amsterdam via an STM-16 link (IPLC); this will double in July 2005
  • Currently traffic is routed over a GE local loop and via NetherLight to CERN
  • Quality is assured by MPLS and a PIP service
  • A move to a GÉANT2 connection Amsterdam-CERN is planned
  • A WDM solution is being considered, but no decision has been made as yet
  • Tier2 sites are connected in Taiwan (by 10GE to ASNet), Japan and China

  18. ESnet High-Speed Physical Connectivity to DOE Facilities and Collaborators, Summer 2005
  [Network map: 42 end-user sites (Office of Science sponsored: 22; NNSA sponsored: 12; joint sponsored: 3; laboratory sponsored: 6; other sponsored: NSF LIGO, NOAA). The ESnet IP core is packet over SONET on an optical ring with hubs; link classes shown range from the 10 Gb/s SDN core, 10 Gb/s and 2.5 Gb/s IP core and MAN rings (≥10 Gb/s) down to OC12 ATM (622 Mb/s), OC12/GigEthernet, OC3 (155 Mb/s) and 45 Mb/s and less. International high-speed peerings include CERN (LHCnet, part DOE funded), GÉANT (Germany, France, Italy, UK, etc.), SINet (Japan), CA*net4, TANet2/ASCC (Taiwan), SingAREN, GLORIAD, Kreonet2, Russia (BINP) and others.]

  19. ESnet
  • The IP core is primarily a layer 3 infrastructure
    • However, it also supports layer 2 via MPLS
    • Directly connects sites
    • Provides global peering for sites
  • The SDN core is primarily a layer 2 infrastructure
    • Targeted at providing virtual circuit services
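  As a minimal toy sketch of the split described above (the site names and the 500 GB threshold are illustrative assumptions, not ESnet policy), routine traffic stays on the shared routed IP core while large scheduled science flows would be placed on layer 2 virtual circuits over the SDN core:

    # Toy illustration of the IP-core / SDN-core split; the threshold and
    # site names are made up for this sketch and are not ESnet policy.
    from dataclasses import dataclass

    BULK_FLOW_THRESHOLD_GB = 500  # assumed cut-off for circuit-worthy flows

    @dataclass
    class Flow:
        src: str
        dst: str
        size_gb: float

    def choose_core(flow: Flow) -> str:
        """Decide, in this toy model, which core carries a given flow."""
        if flow.size_gb >= BULK_FLOW_THRESHOLD_GB:
            return "SDN core (layer 2 virtual circuit)"
        return "IP core (routed, layer 3)"

    for f in (Flow("FNAL", "CERN", 2000), Flow("BNL", "MIT", 5)):
        print(f"{f.src} -> {f.dst} ({f.size_gb} GB): {choose_core(f)}")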

  20. ESnet Target Architecture: IP Core + Science Data Network + MANs
  [Architecture diagram: the existing ESnet IP core plus a second Science Data Network core, interconnected by metropolitan area rings and core loops. Existing IP core hubs at Seattle (SEA), Chicago (CHI), New York (AOA), Washington DC (DC), Sunnyvale (SNV), Atlanta (ATL), Albuquerque (ALB) and El Paso (ELP), together with SDN hubs and possible new hubs. The legend distinguishes the production IP core, the Science Data Network core, metropolitan area networks, lab-supplied links, international connections (CERN, GÉANT/Europe, Asia-Pacific, Australia) and the primary DOE labs.]

  21. ESnet Near-Term Planning for FNAL
  [Connectivity diagram showing FNAL, the ESnet IP core and ESnet SDN core, Starlight, the Qwest hub (NBC Building), fibre shared with IWire, and links towards CERN, ORNL, NRL, UltraScienceNet, etc. (including an OC192 circuit); all circuits are 10 Gb/s. The legend distinguishes ESnet switch/routers, the ESnet site gateway router, ESnet/Qwest site equipment and ESnet fibre.]

  22. ESnet Planning for BNL (Long Island MAN Ring)
  [Diagram: engineering study for the Long Island MAN ring, showing BNL and other connections.]

  23. ESnet Goal – 2007/2008
  • 10 Gbps enterprise IP traffic
  • 40-60 Gbps circuit-based transport
  [Target topology map of the major DOE Office of Science sites: the ESnet IP core (≥10 Gbps) plus the ESnet Science Data Network as a second core (30-50 Gbps, over National Lambda Rail), metropolitan area rings, and high-speed cross-connects with Internet2/Abilene. ESnet hubs (existing and new) at SEA, CHI, SNV, NYC, DEN, DC, ALB, ATL, SDG and ELP; lab-supplied links and major international connections to CERN, Europe, Japan, Asia-Pacific and Australia; individual links labelled at 10, 30 and 40 Gb/s.]
