
The extension of optical networks into the campus

An overview of the motivation, issues, and lessons learned from extending optical networks into the campus, focusing on the CA*net 4 IGT project, which aims to federate research-based computing resources on campus and give end users unencumbered access to them.



  1. The extension of optical networks into the campus • Wade Hong • Office of the Dean of Science • Carleton University

  2. Outline • Motivation • CA*net 4 IGT • From the Carleton U Perspective • Issues • Lessons learned

  3. Motivation • Large scale distributed scientific experiments (LHC - ATLAS, SNOLab, Polaris, NEES Grid ... ) • Access to regional distributed HPC resources (HPCVL, SharcNet, WestGrid, TRIUMF Tier 1.5, ...) • Federating growing research-based computing resources on campus • Allowing the end users to access these resources in an unencumbered way • CA*net 4 customer empowered networking last mile

  4. CA*net 4 IGT • CANARIE-funded directed research project • build a testbed to experiment with customer empowered networking, pt2pt optical networks, network performance, long haul 10 GbE, UCLP, last mile issues, etc. • participants from the HEP community across Canada, the provincial ORANs, CERN, StarLight, SURFnet, and potentially others • set up end-to-end GbE and 10 GbE lightpaths between institutions in Canada and CERN

  5. CA*net 4 Network

  6. CA*net 4 IGT Sites

  7. CA*net 4 IGT • interoperability testing with 10 GbE WAN PHY and OC-192 • used IXIA traffic generators to characterize the trans-Atlantic link • transferred real experimental data from ATLAS FCAL beam tests (GbE and 10 GbE) • demonstrated native end-to-end 10 GbE between CERN and Ottawa for the ITU Telecom World 2003

  8. Planned CA*net 4 IGT Activities • complete the last mile connectivity for most of the participating Canadian sites • third OC-192 across Canada being brought up using Nortel OME 6500s • continuing long haul native 10 GbE experiments (Foundry MG8s) • TRIUMF to CERN, TRIUMF to Carleton, Carleton to CERN • CERN to Tokyo via Canada • HEPiX Robust Transfer Challenge - sustained disk to disk transfers between TRIUMF and CERN

  9. Planned CA*net 4 IGT Activities • Real-time remote farms for ATLAS • CERN to U of Alberta • Data transfer of End Cap Calorimeter data from the combined beam tests to several Canadian sites • one beam test just completed (~1TB) • second test to start late August (significantly more data) • Transfer of CDF MC data from the Big Mac Cluster • establish a GbE lightpath between UofT and FermiLab

  10. Planned CA*net 4 IGT Activities • Experimentation with bulk data transfer • investigating RDMA/IP (sourcing NICs) • establish GbE lightpaths between Canadian sites
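  As a point of reference for the bulk transfer experiments, a minimal memory-to-memory TCP throughput probe such as the Python sketch below gives a baseline to compare RDMA/IP results against. The host, port, and transfer size are hypothetical placeholders, not values from the IGT setup; the far end would run any receiver that reads and discards the stream (an iperf server, for example).

```python
import socket
import time

# Minimal memory-to-memory TCP throughput probe over a pt2pt lightpath.
# Host, port, and transfer size are hypothetical placeholders.
HOST, PORT = "10.10.10.2", 5001   # far end of the pt2pt link
CHUNK = 1 << 20                   # 1 MiB per send
TOTAL = 10 * (1 << 30)            # push 10 GiB in total

def send_probe():
    payload = b"\0" * CHUNK
    sent = 0
    with socket.create_connection((HOST, PORT)) as sock:
        start = time.time()
        while sent < TOTAL:
            sock.sendall(payload)
            sent += CHUNK
        elapsed = time.time() - start
    print(f"sent {sent / 2**30:.1f} GiB in {elapsed:.1f} s "
          f"({sent * 8 / elapsed / 1e9:.2f} Gb/s)")

if __name__ == "__main__":
    send_probe()
```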

  11. Carleton University • located in Ottawa, the nation’s capital • at the southern end of the world’s longest outdoor skating rink • Canada’s Capital University • 22,000 students and 1,700 faculty and staff • over $100M in research funding in the past year • CFI contribution significant • about half to Physics • Bill St. Arnaud’s alma mater

  12. Carleton University

  13. External Network Connectivity • commodity Internet • via Telecom Ottawa - at the time the largest metro 10 GbE deployment • R&E traffic • finally connected to ORION (Dec 2003), the new ORAN, just prior to the decommissioning of ONET • EduNet • non-profit, OCRI-managed dial-up and high-speed Internet for higher education institutions in Ottawa • the dial-up ISP has a dedicated link back to campus

  14. Carleton U Network Upgrade • the campus has been planning a network upgrade for the past 3 to 4 years • several false starts • applications to funding agencies based on the requirements of research activities • may have missed the window of opportunity • finally proceeding with the network upgrade • RFPs currently being evaluated

  15. Network Upgrade Proposal • original proposal • phase one (Year 1) • build the campus core network • phase two (Year 2) • build the distribution layer • phase three (Year 3) • rewire the buildings for access • not my preferred ordering!

  16. Proposed Topology

  17. Differing Viewpoints • debate over how to handle high capacity research traffic flows • one view: such traffic must be routed through the proposed high capacity campus core • the other view: optical bypasses would reduce the complexity and cost of the campus network • 4 fibre pairs between Herzberg Laboratories and Robertson Hall cost about $4K CDN - we prevailed • reality check • the current campus network cannot handle the high volume and high speed flows

  18. Motivations Revisited • Large scale distributed scientific experiments

  19. Motivations Revisited • Access to regional distributed HPC resources • other HPCVL sites (Queens, UofO, RMC, Ryerson U) • TRIUMF ATLAS Canada computing centre • SNOLab • shared ORION and CA*net 4 connectivity is only at GbE • high capacity flows probably dictate pt2pt optical bypass • interconnectivity can be static or dynamic • fully statically meshed or scheduled dynamic connectivity on demand - probably the latter

  20. Motivations Revisited • Federating growing research-based computing resources into a campus grid • HPCVL Linux cluster upgrade (128+256 CPUs) • Physics research cluster upgrade (40+96+96 CPUs) • Civil Engineering (~128 CPUs) • Architecture/Psychology visualization cluster (>128 CPUs) • Systems and Computer Engineering (64 CPUs) • debating a condominium or distributed model • most likely a hybrid with optical fibre as the interconnecting fabric • probably static pt2pt optical bypass for ease of use and user control

  21. Motivations Revisited • federated the Physics research computing cluster with part of the HPCVL Linux cluster last summer for about 2 months • clusters located on different floors • pt2pt link established - much easier than routing through the campus network • completed half of the MC regeneration for the third SNO paper • similar arrangement this summer to add part of the HPCVL cluster to the Carleton U Physics contribution to the LHC Computing Grid till the end of the year

  22. Issues • control • central management and control vs end user empowerment • disruptive • network complexity • using pt2pt Ethernet links for high capacity flows should simplify campus networks (reduce costs?) • security • disruptive - bypassing the DMZ • for the uses considered here, the pt2pt links are inherently secure - non-routed private subnets
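  To make the non-routed private subnet point concrete, the sketch below shows how one end of a pt2pt GbE bypass might be brought up on a Linux host with iproute2. The interface name and the /30 subnet are hypothetical examples, not the project's actual addressing.

```python
import subprocess

# Sketch: bring up one end of a pt2pt GbE bypass on a Linux host (iproute2).
# The interface name and the private /30 subnet are hypothetical examples.
IFACE = "eth2"            # dedicated NIC facing the lightpath
ADDR = "10.10.10.1/30"    # non-routed private subnet, one address per end

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["ip", "addr", "add", ADDR, "dev", IFACE])
run(["ip", "link", "set", IFACE, "mtu", "9000"])  # jumbo frames end to end
run(["ip", "link", "set", IFACE, "up"])
# No gateway or route beyond the /30 is configured, so traffic on this
# subnet never touches the routed campus network or its DMZ.
```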

  23. Issues • why not copper? • it could be, but fibre offers • greater distances • fewer active devices along the path • management and control - a device at each end under the control of the end users is ideal • consistent device characteristics - jumbo frames, port speed, duplex, etc. • inter-building connectivity is fibre and the planned vertical cabling will be fibre
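  One way to keep the consistent device characteristics honest is to check each end's NIC against the expected settings. The sketch below reads the standard Linux sysfs attributes; the interface name and expected values are hypothetical.

```python
from pathlib import Path

# Sketch: verify a NIC matches the characteristics expected on the lightpath.
# Interface name and expected values are hypothetical; the sysfs attribute
# files (mtu, speed, duplex) are standard on Linux, and speed/duplex are
# only meaningful while the link is up.
IFACE = "eth2"
EXPECTED = {"mtu": "9000", "speed": "1000", "duplex": "full"}

base = Path("/sys/class/net") / IFACE
for attr, want in EXPECTED.items():
    got = (base / attr).read_text().strip()
    note = "ok" if got == want else f"MISMATCH (expected {want})"
    print(f"{IFACE} {attr} = {got}  [{note}]")
```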

  24. Issues • last mile connectivity • demarcation point • end user device (NIC) or an edge device (switch, CWDM mux) • location of the demarc • at the end user or a common shared location • technology used to extend the end-to-end lightpath into the campus • pt2pt GbE • optical GbE NIC - patched through to a GbE interface on the ONS • media converter - copper to optical

  25. Issues • pt2pt 10 GbE • LAN PHY to WAN PHY conversion to OC192c on ONS 15454/OME 6500 • wavelength conversion • CWDM • media converters - copper to colored wavelength • colored GBICs for GbE switch • optical link characteristics • padding (attenuation), proper power budget, etc. • end user shouldn’t need to be an optical networking expert
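  The padding and power budget bullet comes down to simple arithmetic. The sketch below works through a representative example; launch power, receiver sensitivity, fibre length, and losses are illustrative numbers, not measurements from the IGT links.

```python
# Illustrative optical power budget check for a long GbE/10 GbE span.
# All values are representative examples, not measurements from the IGT links.
tx_power_dbm = -3.0         # transmitter launch power
rx_sensitivity_dbm = -21.0  # minimum receive power
rx_overload_dbm = -3.0      # receiver saturation level

fibre_km = 40.0
fibre_loss_db = 0.25 * fibre_km  # ~0.25 dB/km at 1550 nm
connector_loss_db = 2 * 0.5      # two connectors at ~0.5 dB each
margin_db = 3.0                  # safety margin for aging, splices, patching

budget_db = tx_power_dbm - rx_sensitivity_dbm
loss_db = fibre_loss_db + connector_loss_db + margin_db
rx_power_dbm = tx_power_dbm - fibre_loss_db - connector_loss_db

print(f"budget {budget_db:.1f} dB, loss {loss_db:.1f} dB, "
      f"headroom {budget_db - loss_db:.1f} dB")
if rx_power_dbm > rx_overload_dbm:
    print("receive power too hot: insert a pad (fixed attenuator)")
```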

  26. Lessons Learned • good to be rich in fibre • provides greater flexibility • support of ORANs, national R&E network, and international partners is essential - all have been very supportive • need to convince local campus networking folks that this is not really too disruptive • will simplify and not burden the campus production network • need a more coherent way of dealing with optical access in the last mile • still lots to learn!

  27. Thank You! Wade Hong xiong@physics.carleton.ca
