
Networking@desy


Presentation Transcript


  1. Networking@desy. Volker Gülzow, Kars Ohrenberg. Computing Seminar, Zeuthen, 23.04.2013

  2. Network Topology Hamburg

  3. Topology

  4. Zeuthen inbound

  5. Zeuthen outbound

  6. Bandwidth Evolution @ DFN • DFN is upgrading the optical platform of the X-WiN • Contract awarded to ECI Telecom (http://www.ecitele.com) • Migration work is currently underway • High-bandwidth capabilities: • 88 wavelengths per fiber • Up to 100 Gbps per wavelength • thus 8.8 Tbps per fiber! • 1 Tbps switching fabric (aggregation of 10 Gbps lines onto a single 100 Gbps line)
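
A quick back-of-the-envelope check of the per-fiber capacity quoted above, as a minimal Python sketch (the 88 wavelengths and 100 Gbps per wavelength are the figures from the slide; the rest is illustrative):

    # Per-fiber capacity of the upgraded X-WiN optical platform.
    wavelengths_per_fiber = 88      # wavelengths carried on one fiber
    gbps_per_wavelength = 100       # capacity of each wavelength

    fiber_capacity_tbps = wavelengths_per_fiber * gbps_per_wavelength / 1000
    print(f"{fiber_capacity_tbps:.1f} Tbps per fiber")   # -> 8.8 Tbps per fiber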

  7. Growing WiN Capacities

  8. Bandwidth Evolution @ DFN • Significantly cheaper 1 Gbps and 10 Gbps components -> reduced cost for VPN connections, new DFN pricing • New DFN conditions starting 1.7.2013 • DESY's contract of 2 x 2 Gbps will be upgraded to 2 x 5 Gbps at no additional cost • New cost model for point-to-point VPNs • 1) Initial installation payment • 10 Gbps ~ 11,400 €, 40 Gbps ~ 38,000 €, 100 Gbps ~ 94,000 € • 2) Annual fee now depends on the distance • Hamburg <> Berlin at ~ 20% of the current costs (for 10 Gbps) • Hamburg <> Berlin at ~ 80% of the current costs (for 40 Gbps) • Hamburg <> Karlsruhe at ~ 45% of the current costs (for 10 Gbps) • Hamburg <> Karlsruhe at ~ 150% of the current costs (for 40 Gbps)
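
The new pricing thus splits into a one-off installation payment plus a distance-dependent annual fee. The minimal Python sketch below shows how such a comparison could be tabulated; the installation figures and percentages are taken from the slide, while current_annual_fee_eur is a purely hypothetical placeholder, since the absolute annual fees are not quoted here:

    # Illustrative cost estimate under the new DFN point-to-point VPN model.
    installation_eur = {"10G": 11_400, "40G": 38_000, "100G": 94_000}

    # Annual fee relative to today's costs, per route and bandwidth (from the slide).
    relative_annual_fee = {
        ("Hamburg-Berlin", "10G"): 0.20,
        ("Hamburg-Berlin", "40G"): 0.80,
        ("Hamburg-Karlsruhe", "10G"): 0.45,
        ("Hamburg-Karlsruhe", "40G"): 1.50,
    }

    def first_year_cost(route, bandwidth, current_annual_fee_eur):
        """One-off installation plus the new, distance-dependent annual fee."""
        return (installation_eur[bandwidth]
                + relative_annual_fee[(route, bandwidth)] * current_annual_fee_eur)

    # Hypothetical current annual fee of 50,000 EUR, for illustration only:
    print(first_year_cost("Hamburg-Berlin", "10G", 50_000))  # 11,400 + 10,000 = 21,400 EUR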

  9. GÉANT3 topology

  10. Networking for LHC

  11. LHC Computing Infrastructure • WLCG in brief: • 1 Tier-0, 11 Tier-1s, ~ 140 Tier-2s, O(300) Tier-3s worldwide

  12. The LHC Optical Private Network • The LHCOPN (from http://lhcopn.web.cern.ch) • The LHCOPN is the private IP network that connects the Tier0 and the Tier1 sites of the LCG. • The LHCOPN consists of any T0-T1 or T1-T1 link which is dedicated to the transport of WLCG traffic and whose utilization is restricted to the Tier0 and the Tier1s. • Any other T0-T1 or T1-T1 link not dedicated to WLCG traffic may be part of the LHCOPN, assuming the exception is communicated to and agreed by the LHCOPN community • Very closed and restricted access policy • No Gateways

  13. LHCOPN Network Map

  14. Data transfers (CERN -> Tier-1s, global transfers) • Global transfer rates are consistently significant (12-15 Gb/s), i.e. permanent ongoing workloads • CERN export rates are driven (mostly) by LHC data export • (slide by Ian Bird, Ian.Bird@cern.ch, CRRB, 4/13)

  15. Resource usage: Tier 0/1 (slide by Ian Bird, Ian.Bird@cern.ch)

  16. Resource use vs. pledge: CERN and Tier 1s (CCRC F2F, 10/01/2008)

  17. Resources vs. pledges: Tier 2 (slide by Ian Bird, Ian.Bird@cern.ch)

  18. Connectivity (100 Gb/s): latency measured; no problems anticipated (slide by Ian Bird, Ian.Bird@cern.ch)

  19. Computing Models Evolution • The original MONARC model was strictly hierarchical • Changes have been introduced gradually since 2010 • Main evolutions: • Meshed data flows: any site can use any other site as a source of data • Dynamic data caching: analysis sites pull datasets from other sites "on demand", including from Tier-2s in other regions • Remote data access • Variations by experiment • LHCOPN only connects T0 and T1

  20. LHC Open Network Environment • With the successful operation of the LHC accelerator and the start of data analysis, the computing and data models of the experiments have been re-evaluated • The goal of LHCONE (LHC Open Network Environment) is to ensure better access to the most important datasets by the worldwide HEP community • Traffic patterns have changed to the extent that substantial data transfers between major sites are regularly observed on the General Purpose Networks (GPN) • The main principle is to separate LHC traffic from GPN traffic, thus avoiding degraded performance • The objective of LHCONE is to provide entry points into a network that is private to the LHC T1/2/3 sites • LHCONE is not intended to replace LHCOPN but rather to complement it

  21. LHCONE Architecture

  22. LHCONE VRF Map (from Bill Johnston, ESnet)

  23. LHCONE: A global Infrastructure

  24. LHCONE Activities • With the above in mind, LHCONE has defined the following activities: • VRF-based multipoint service: a “quick-fix” to provide multipoint LHCONE connectivity, with logical separation from the R&E GPN • Layer 2 multipath: evaluate the use of emerging standards such as TRILL (IETF) or Shortest Path Bridging (SPB, IEEE 802.1aq) in WAN environments • OpenFlow: there was wide agreement that SDN is the most probable candidate technology for LHCONE in the long term (but it needs more investigation) • Point-to-point dynamic circuit pilots • Diagnostic infrastructure: each site should have the ability to perform E2E performance tests with all other LHCONE sites
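
The diagnostic-infrastructure activity is realised in practice with perfSONAR (see slides 29/30). As a much simpler illustration of the idea that every site can test end-to-end against every other site, here is a minimal Python sketch that checks reachability of a list of peer hosts; the hostnames are hypothetical placeholders, not real LHCONE endpoints:

    # Minimal end-to-end reachability check across a mesh of peer sites.
    # Hostnames are hypothetical; real LHCONE sites use perfSONAR for this.
    import subprocess

    peers = ["perfsonar.site-a.example", "perfsonar.site-b.example"]

    for host in peers:
        # "-c 3": send three ICMP echo requests (Linux ping syntax).
        result = subprocess.run(["ping", "-c", "3", host],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        status = "ok" if result.returncode == 0 else "unreachable"
        print(f"{host}: {status}")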

  25. Software-Defined Networking (SDN) • SDN is a form of network virtualization in which the control plane is separated from the data plane and implemented in a software application • This architecture allows network administrators to exercise programmable, central control of network traffic without requiring physical access to the network's hardware devices • SDN requires some method for the control plane to communicate with the data plane; one such mechanism is OpenFlow, a standard interface for controlling network switches
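
To make the control-plane/data-plane split concrete, below is a minimal controller sketch using the Ryu OpenFlow framework (Ryu is an assumption of this example, not something used in the talk). It installs a table-miss rule so that traffic with no matching flow entry is forwarded by the switch (data plane) to the controller (control plane), which can then decide programmatically what to do; it would be started with ryu-manager.

    # Minimal OpenFlow 1.3 app: send unmatched traffic to the controller.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TableMissToController(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            datapath = ev.msg.datapath           # the connected switch (data plane)
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser
            match = parser.OFPMatch()            # wildcard match: everything
            actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                              ofproto.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
            # Priority 0 = table-miss entry; unknown flows now reach the control plane.
            datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=0,
                                                match=match, instructions=inst))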

  26. LHCONE VRF • Implementation of multiple logical router instances inside a single physical device (virtualized Layer 3) • Logical control-plane separation between multiple clients • VRF in LHCONE: regional networks implement VRF domains to logically separate LHCONE traffic from other flows • BGP peerings are used between domains and to end-sites
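
Conceptually, a VRF gives each client its own routing table inside the same physical router. A toy Python sketch of that separation (purely illustrative; the VRF names and prefixes are made up, using documentation address space):

    # Toy model of VRFs: one physical router, several independent routing tables.
    from ipaddress import ip_address, ip_network

    vrf_tables = {
        "LHCONE":  {ip_network("192.0.2.0/24"): "to-lhcone-peer"},
        "GENERAL": {ip_network("198.51.100.0/24"): "to-upstream"},
    }

    def lookup(vrf, destination):
        """Longest-prefix match within a single VRF; other VRFs are invisible."""
        addr = ip_address(destination)
        matches = [net for net in vrf_tables[vrf] if addr in net]
        if not matches:
            return None
        best = max(matches, key=lambda net: net.prefixlen)
        return vrf_tables[vrf][best]

    print(lookup("LHCONE", "192.0.2.10"))   # -> "to-lhcone-peer"
    print(lookup("GENERAL", "192.0.2.10"))  # -> None: not routed in this VRF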

  27. Multipath in LHCONE • The multipath problem: • How to use the many (transatlantic) paths at Layer 2 among the many partners, e.g. USLHCNet, GEANT, SURFnet, NORDUnet, ... • Layer 3 (VRF) can use some BGP techniques • MED, AS padding, local preference, restricted announcements • This works in a reasonably small configuration; it is not clear it will scale up to O(100) end-sites • Some approaches to Layer 2 multipath: • IETF: TRILL (Transparent Interconnection of Lots of Links) • IEEE: 802.1aq (Shortest Path Bridging) • None of these L2 protocols is designed for the WAN! • R&E needed
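
As a rough illustration of the Layer-3 (BGP) techniques mentioned above, the sketch below picks a best path the way BGP's first decision steps do: prefer the highest local preference, then the shortest AS path (which AS padding lengthens), then the lowest MED. The candidate paths and their attributes are invented for the example.

    # Simplified BGP-style best-path selection over candidate transatlantic paths.
    candidate_paths = [
        {"via": "USLHCNet", "local_pref": 200, "as_path_len": 3, "med": 10},
        {"via": "GEANT",    "local_pref": 100, "as_path_len": 2, "med": 0},
        {"via": "NORDUnet", "local_pref": 200, "as_path_len": 4, "med": 5},
    ]

    def best_path(paths):
        """Highest local-pref wins, then shortest AS path, then lowest MED."""
        return min(paths, key=lambda p: (-p["local_pref"], p["as_path_len"], p["med"]))

    print(best_path(candidate_paths)["via"])  # -> "USLHCNet"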

  28. LHCONE Routing Policies • Only networks that are announced to LHCONE are allowed to reach LHCONE • Networks announced by DESY: • 131.169.98.0/24, 131.169.160.0/21, 131.169.191.0/24 (Tier-2 Hamburg) • 141.34.192.0/21, 141.34.200.0/24 (Tier-2 Zeuthen) • 141.34.224.0/22, 141.34.228.0/24, 141.34.229.0/24, 141.34.230.0/24 (NAF Hamburg) • 141.34.216.0/23, 141.34.218.0/24, 141.34.219.0/24, 141.34.220.0/24 (NAF Zeuthen) • e.g. networks announced by CERN: • 128.142.0.0/16 but not 137.138.0.0 • Only these networks will be reachable via LHCONE • Other traffic uses the public, general-purpose networks • Asymmetric routing should be avoided, as it causes problems for traffic passing (public) firewalls
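
A small sketch of how a site could check whether a given host falls inside the prefixes it announces to LHCONE (and is therefore reachable over it, rather than over the GPN). The prefixes are the DESY Tier-2 Hamburg networks quoted on the slide; the helper function itself is just an illustration:

    # Check whether an address lies inside the prefixes announced to LHCONE.
    from ipaddress import ip_address, ip_network

    announced_to_lhcone = [
        ip_network("131.169.98.0/24"),
        ip_network("131.169.160.0/21"),
        ip_network("131.169.191.0/24"),
    ]

    def reachable_via_lhcone(host):
        """True if the host is covered by a prefix announced to LHCONE;
        otherwise its traffic stays on the general-purpose network."""
        addr = ip_address(host)
        return any(addr in net for net in announced_to_lhcone)

    print(reachable_via_lhcone("131.169.160.17"))  # True: inside 131.169.160.0/21
    print(reachable_via_lhcone("131.169.10.1"))    # False: uses the GPN instead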

  29. LHCONE - the current status • Currently ~100 network prefixes • German sites currently participating in LHCONE: • DESY, KIT, GSI, RWTH Aachen, Uni Wuppertal • Europe: • CERN, SARA, GRIF (LAL + LPNHE), INFN, FZU, PIC, ... • US: • AGLT2 (MSU + UM), MWT2 (UC), BNL, ... • Canada: • TRIUMF, Toronto, ... • Asia: • ASGC, ICEPP, ... • Detailed monitoring via perfSONAR

  30. LHCONE Monitoring

  31. R&D Network Trends • Increased multiplicity of 10 Gbps links in the major R&E networks: GEANT, Internet2, ESnet, various NRENs, ... • 100 Gbps backbones are in place and the transition is now underway • GEANT, DFN, ... • CERN - Budapest: 2 x 100G for the LHC remote Tier-0 center • OpenFlow (software-defined switching and routing) has been taken up by much of the network industry and by R&E networks

  32. WAN + LHCONE Infrastructure at DESY

  33. Summary • The LHC computing and data models continue to evolve towards more dynamic, less structured, on-demand data movement, thus requiring different network structures • LHCOPN and LHCONE may merge in the future • With the evolution of the new optical platforms, bandwidth will become more affordable
