
Building a Lambda Switching Capable Infrastructure



  1. Building a Lambda Switching Capable Infrastructure
  Sam Mokbel, Project Director, Optical Regional Advanced Network of Ontario

  2. Network Highlights
  • 20-year access to dark fibre as a strategic long-term investment
  • Complements the government’s investment in research and education
  • Co-ownership of optical fibre with private-sector partners
  • Full ownership of equipment
  • Customized to the specific requirements of a research and education network
  • Fully operational by Spring 2003 and financially sustainable within 3 years

  3. ORION
  • Ontario’s Research and Innovation Optical Network (ORION) is a new high-speed optical network that brings broadband access and connectivity to Ontario’s research and education (R&E) institutions
  • Created to support innovative and collaborative projects and activities in Ontario’s publicly funded research and education community
  • Will connect more than 100 Ontario institutions and organizations with other local and global networks

  4. User-Identified Applications
  • ORION will connect users to each other, the Internet, CA*net4, and other R&E networks around the world
  • Once fully deployed, ORION will allow:
    • virtual classrooms and laboratories
    • complex multimedia-based interactivity
    • real-time collaborative research
    • shared and ready access to large genomic and biotech databases
    • grid computing
  • Transport of operational data such as email, web browsing, and FTP

  5. ORION Operating Guidelines
  • The ORION network manager will seek the help of a user technical advisory panel on important engineering issues
  • The ORION and member network managers may classify user traffic and apply traffic-engineering rules to optimize bandwidth utilization and enhance performance (a classification sketch follows this slide)
  • Research and Internet traffic of users share the same physical infrastructure
  • All ORION backbone circuits are to be at OC192 speeds
  • All ORION PoPs will have Gigabit-capable equipment to interconnect the members
  • PoP site hosts are ORION partners
  • Users can interconnect to ORION directly or through a CBN
  • RANs or CBNs that interconnect local institutions manage their own local infrastructure
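The traffic-classification guideline above can be illustrated with a minimal sketch. The class names, prefixes, ports, and DSCP values below are assumptions for illustration only, not ORION policy; the point is simply that a member network manager tags flows into classes so that traffic-engineering rules can treat research and commodity Internet traffic differently.

```python
# Hypothetical traffic classifier: maps a flow to a service class so that
# traffic-engineering rules (queueing, shaping) can be applied per class.
# Class names, prefixes, ports, and DSCP values are illustrative assumptions.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

RESEARCH_PREFIXES = [ip_network("10.10.0.0/16")]  # assumed member research subnets

@dataclass
class Flow:
    src: str
    dst: str
    dst_port: int

def classify(flow: Flow) -> tuple[str, int]:
    """Return (service_class, dscp) for a flow."""
    if any(ip_address(flow.dst) in net for net in RESEARCH_PREFIXES):
        return ("research", 34)   # e.g. AF41: prioritized research traffic
    if flow.dst_port in (80, 443):
        return ("internet", 0)    # best-effort commodity web traffic
    return ("default", 0)

if __name__ == "__main__":
    print(classify(Flow("192.0.2.7", "10.10.4.2", 22)))      # -> ('research', 34)
    print(classify(Flow("192.0.2.7", "198.51.100.9", 443)))  # -> ('internet', 0)
```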

  6. ORION Functions
  • Collects traffic from directly connected institutions and community networks
  • Transports inter-member and external traffic
  • Provides advanced services
  • Enables exchange of knowledge
  • Peers with other advanced R&E networks
  • Transports Internet traffic for some members
  • Nucleus of a new telecommunications infrastructure for the R&E community

  7. Operations Requirements
  • Define the interface to the members (a structured example follows this slide)
    • Layers 1, 2, and 3
    • Demarcation
    • Bandwidth
    • Reporting
    • Administration
  • Set objectives for equipment and fibre availability
  • Develop processes and guidelines for
    • Network management
    • Network maintenance
    • External connectivity
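As a purely illustrative way of recording the member-facing interface definition listed above, the operations team could keep one structured record per member covering layer, demarcation point, committed bandwidth, and reporting contact. The field names and values below are assumptions, not an ORION schema.

```python
# Illustrative record of a member interface definition (layer, demarcation,
# bandwidth, reporting). Field names and values are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class MemberInterface:
    member: str
    layer: int              # 1 = optical, 2 = Ethernet, 3 = IP
    demarcation: str        # physical/logical hand-off point at the PoP
    bandwidth_mbps: int     # committed access rate
    reporting_contact: str  # where utilization/availability reports are sent

example = MemberInterface(
    member="Example University",           # hypothetical member
    layer=3,
    demarcation="PoP patch panel, port 12",
    bandwidth_mbps=1000,
    reporting_contact="noc@example.edu",
)
```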

  8. PoP Requirements
  • Interconnection points for local members
  • Physically accessible to ORANO staff on a 7x24 basis
  • Supply the required environment to operate the ORION equipment
  • Host ORION optical, data, and management equipment
  • Integrate with ORION’s remote management approach
  • Neutral to the termination of 3rd-party fibre
  • Located at partner-operated and member-endorsed premises

  9. Transport Requirements
  • Optical transport
    • Lambda at 2.5 Gbps or higher between PoPs
    • Scalable up to 8 lambdas per PoP-to-PoP section
    • Dynamic lambda setup and teardown (a toy provisioning model follows this slide)
    • No OADM between PoPs
    • Terminal equipment located at partner-hosted PoPs
  • Transport IP traffic
  • Capable of provisioning private layer 2 circuits and lambdas between users
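The dynamic lambda setup and teardown requirement can be sketched as simple bookkeeping: each PoP-to-PoP section carries at most 8 lambdas (per the scalability requirement above), and a request either allocates a free wavelength or is rejected. This is a toy model under assumed identifiers and API, not the actual ORION control plane.

```python
# Toy model of dynamic lambda provisioning between PoPs.
# The 8-lambdas-per-section limit comes from the requirement above;
# wavelength identifiers and the API shape are illustrative assumptions.

MAX_LAMBDAS_PER_SECTION = 8

class LambdaManager:
    def __init__(self):
        self.sections: dict[tuple, set] = {}

    def _key(self, a: str, b: str) -> tuple:
        return tuple(sorted((a, b)))  # sections are undirected

    def setup(self, pop_a: str, pop_b: str) -> int:
        """Allocate the lowest free wavelength on the section, or raise."""
        in_use = self.sections.setdefault(self._key(pop_a, pop_b), set())
        for wavelength in range(1, MAX_LAMBDAS_PER_SECTION + 1):
            if wavelength not in in_use:
                in_use.add(wavelength)
                return wavelength
        raise RuntimeError(f"no free lambda on section {pop_a}-{pop_b}")

    def teardown(self, pop_a: str, pop_b: str, wavelength: int) -> None:
        self.sections[self._key(pop_a, pop_b)].discard(wavelength)

mgr = LambdaManager()
w = mgr.setup("Toronto", "Ottawa")   # first 2.5 Gbps+ lambda on the section
mgr.teardown("Toronto", "Ottawa", w)
```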

  10. Service Requirements
  • QoS-capable point-to-point paths between ORION members
  • Provide layer 2 or optical light path connectivity between users and external advanced networks
  • IPv6 and IPv4
  • IPv4 multicast
  • Support multi-homing of members
  • Internal and external routing of member IP traffic
  • Shape Internet traffic to/from members (a token-bucket sketch follows this slide)
  • Accommodate new technology testing
  • Enterprise-level network availability
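Shaping Internet traffic to and from members is commonly done with a token-bucket shaper or policer. The sketch below is a generic token bucket with an assumed rate and burst size; it illustrates the mechanism only and is not ORION's actual configuration.

```python
# Generic token-bucket shaper: a packet is admitted only if enough tokens have
# accumulated at the configured rate. Rate and burst values are assumptions.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False  # packet would exceed the shaped rate: queue or drop it

# Example: shape a member's commodity Internet traffic to roughly 100 Mbit/s
bucket = TokenBucket(rate_bytes_per_s=100e6 / 8, burst_bytes=1_500_000)
print(bucket.allow(1500))  # True while tokens remain
```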

  11. [Map slide: network sites and external connection points, including Timmins, Thunder Bay, Duluth, Sudbury, Sault Ste. Marie, North Bay, Ottawa, Kingston, Great Lakes, Peterborough, Barrie, Belleville, Oshawa, Guelph, Toronto, Oakville, Waterloo, St. Catharines, Hamilton, London, Welland, Sarnia, Windsor, Chicago, Cleveland]

  12. Architecture

  13. Fibre Routes

  14. PoP

  15. Routing Layer

  16. Integration
  • Multiple suppliers for
    • Long-haul and local fibre
    • PoP space and repeater huts
    • Optical, data switching, power, and remote management equipment
  • Challenges in
    • Finalizing full design requirements
    • Finding and communicating compatible specs and coordinating installation and testing
    • Uniform configuration
    • Who does what?

  17. What Fibre Glut?
  • Fibre-rich sections are not necessarily where we needed them
  • A significant number of local loops are not readily available
  • Toronto had the most severe shortages
  • Where is all that fibre that has been deployed over the past decade?
  • Have the carriers used their fibre in the most efficient way?
  • The optimal network design may not be feasible due to
    • fibre shortage
    • fibre stranding
    • fibre holes

  18. Power and Grounding
  • Long-haul optical equipment uses large amounts of DC power
  • Most PoP sites, located at universities and colleges, do not have DC power readily available
  • Initial vendor requirements were for an average of 150 A per node and an isolated dedicated ground
  • Would have required massive power-conversion systems and physical construction to provide grounding and floor reinforcement to support massive batteries (a back-of-the-envelope power estimate follows this slide)
  • Working closely with the vendor resulted in
    • Reduced power requirement of 50 A
    • Shared building ground
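A rough worked example shows why the original 150 A per node requirement was so demanding compared with the negotiated 50 A. Only the current figures come from the slide; the 48 V nominal plant voltage and rectifier efficiency below are assumptions.

```python
# Back-of-the-envelope DC power estimate for one PoP node.
# 48 V nominal plant voltage and 90% rectifier efficiency are assumptions;
# only the 150 A and 50 A figures come from the slide.
VOLTAGE = 48.0      # nominal -48 VDC plant
EFFICIENCY = 0.90   # assumed AC-to-DC rectifier efficiency

for amps in (150, 50):
    dc_watts = amps * VOLTAGE
    ac_watts = dc_watts / EFFICIENCY
    print(f"{amps} A -> {dc_watts / 1000:.1f} kW DC load, "
          f"~{ac_watts / 1000:.1f} kW AC input")
# 150 A -> 7.2 kW DC (~8.0 kW AC); 50 A -> 2.4 kW DC (~2.7 kW AC)
```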

  19. Remote Management
  • We are not a carrier, yet we are deploying a carrier-class network
  • No capacity or resources to maintain 3,700 km of fibre and 31 repeater sites
  • Must rely on outsourcing for maintenance, but must maintain control over member connectivity
  • Designed the network into static and dynamic components
    • Fibre and repeaters on long routes don’t require many changes and can be hosted and maintained by a carrier partner
    • Minimized the number of regen repeater sites
    • PoP sites where members connect are hosted by member partners and are jointly operated and maintained
    • Building remote access capability into the PoP design (a reachability-poll sketch follows this slide)
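One small piece of the remote-access capability mentioned above could be a periodic reachability poll of the management interfaces at each site. The sketch below uses only the Python standard library; the host names, management port, and the static/dynamic site labels are illustrative assumptions.

```python
# Minimal reachability poll of PoP management interfaces (standard library only).
# Host names, management port, and site labels are illustrative assumptions.
import socket

POPS = {
    "toronto-pop": "mgmt.toronto.example.net",  # member-hosted ("dynamic") site
    "sudbury-hut": "mgmt.sudbury.example.net",  # carrier-maintained repeater hut
}
MGMT_PORT = 22  # assumed management port (e.g. SSH to a terminal server)

def is_reachable(host: str, port: int = MGMT_PORT, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for site, host in POPS.items():
    status = "up" if is_reachable(host) else "DOWN"
    print(f"{site:>12}: {status}")
```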

  20. Partnership With PoP Sites
  • ORION uses a distributed operations model that is centrally coordinated
  • PoP hosts are an integral part of the ORION network operations team
  • PoP sites have a stake in the success of ORION to provide service to their own institutions, local region, and the rest of the R&E community
  • Not a commercial relationship; the PoP hosts are making significant contributions to the network
  • The relationship fulfills the knowledge-exchange mandate of ORION

  21. Lessons Learned So Far (1): Change Management
  • Be flexible and creative, but stay focused
  • The final network will be different from the initial design
  • Small details matter; clarify assumptions
  • Be diligent: this is a network for 20 years
  • Engage your users early; this is their network
  • Find out for yourself: your partners’ perspective may be different
  • Formal contracts make you appreciate what is important to your partners and members
  • Communication, communication, communication

  22. Lessons Learned (2): Carrier Networks Are Different from Enterprise Networks
  • Repeater-hut space can get you: real estate in rural Ontario is not dirt cheap
  • Higher speed and longer distance mean more stringent requirements
  • Power is not the same as enterprise class
  • We spent as much time on PoP layout and design as on the transport or IP layer
  • Think about insurance (liability and loss)
  • Justify your design by requirements, not by best or previous practices
  • Documentation, documentation, documentation

  23. Lessons Learned (3): Need a First-Class Team
  • Engineers, communications staff, and project managers must have been there before
  • Mistakes can be costly; there is no room for experimentation
  • The team needs telecom/carrier experience
  • A startup organization and a new network can be stressful
  • Everyone on the team must be a self-starter

  24. www.orion.on.ca Thank You – Merci !
