
Multi-Service Backbone Design Drivers behind Next Generation Networks



Presentation Transcript


  1. Multi-Service Backbone Design: Drivers behind Next Generation Networks Vijay Gill <vgill@mfnx.net> Jim Boyle <jboyle@level3.net>

  2. Multi-Service Core • What is it? • Why Multi-Service? • Options for delivering Multiple services

  3. What • mul·ti·serv·ice adj. • Offering or involving a variety of services • IP, Voice, Private Line, VPNs, ATM, FR

  4. Why • $700 billion voice, fax, and modem market • The telco companies' day job • Voice-based communications are still ~90% of global telco revenue • A voice bit is ~14x more expensive than a data bit
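
The ~14x figure falls out of simple arithmetic once you pair the slide's revenue split with a traffic split. A minimal sketch, assuming an illustrative 40/60 voice/data traffic share (the traffic numbers are assumptions, not from the slide):

```python
# Illustrative only: revenue shares are from the slide; the traffic
# split is an assumed figure chosen to show how the ratio is derived.
voice_revenue_share = 0.90   # voice ~90% of global telco revenue
data_revenue_share = 0.10
voice_traffic_share = 0.40   # assumption for illustration
data_traffic_share = 0.60    # assumption for illustration

# Price per bit is proportional to revenue share / traffic share.
price_per_voice_bit = voice_revenue_share / voice_traffic_share
price_per_data_bit = data_revenue_share / data_traffic_share
ratio = price_per_voice_bit / price_per_data_bit
print(f"a voice bit is ~{ratio:.1f}x the price of a data bit")  # ~13.5x
```

With these assumed shares the ratio comes out near 13.5, in line with the ~14x claim.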

  5. Ways to Deliver Multiple Services • Multiple Backbones • One for each Service (SONET, ATM/FR, IP) • Common Backbone • Layer multiple services on top of a common transport fabric

  6. Multiple Backbones • Application Aware • Dedicated infrastructure used to implement each application – PSTN, FR, ATM, Private Line • Discourages diversity • Needs large market demand before it is cost-effective to build the support infrastructure • ATM/SONET infrastructure • More boxes, complex management issues • Hard to upgrade bandwidth in sync across all backbones

  7. Common Backbone • Application Unaware • Characterized by the new breed of Telcos • Rapid innovation • Clean separation between transport, service, and application • Allows new applications to be constructed without modification to the transport fabric. • Less Complex (overall)

  8. Why A Common Backbone? • Spend once, use many • Easier capacity planning and implementation • Elastic demand • 1% price drops result in a 2-3% rise in demand – Matt Bross, WCG • An increase of N at the edge necessitates 3-4x N core growth • Flexibility in upgrading bandwidth lets you drop pricing faster than rivals
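
The elasticity claim compounds quickly. A small sketch, assuming the midpoint of the 2-3% figure and twelve successive 1% price cuts (the cadence is hypothetical):

```python
# Elasticity sketch: each 1% price cut lifts demand by ~2.5%
# (midpoint of the slide's 2-3% range). Twelve cuts compound.
price, demand = 1.0, 1.0
for _ in range(12):
    price *= 0.99     # 1% price drop
    demand *= 1.025   # 2.5% demand rise

# price ~0.886x, demand ~1.345x: revenue grows even as price falls,
# which is the economic case for cheap, flexible capacity upgrades.
print(f"after 12 cuts: price x{price:.3f}, demand x{demand:.3f}")
```

Pair that with the 3-4x core multiplier and edge growth translates into much larger core build-out, which favors one common backbone over several parallel ones.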

  9. Bandwidth Blender - Set on Frappe [Chart: historical and forecast market price vs. unit cost of a transatlantic STM-1 circuit (on a 25-year IRU lease), 1996-2005; y-axis: price per STM-1 ($m), 2,000-18,000; both PRICE and COST curves fall steeply. Source: KPCB]

  10. Some Facts • There is no absolute way to measure any statistic regarding the growth of the Internet • The Internet is getting big, and it's happening fast • Source: Robert Orenstein

  11. “Already, data dominates voice traffic on our networks” – Fred Douglis, AT&T Labs

  12. Solution • Leverage packet-based technology • Multi-service transport fabric • Optimize for the biggest consumer - IP • Provide a loosely coupled access point for service-specific networks • (e.g. IP good, per-call signaling bad)

  13. Solution [Diagram: service networks - Internet (IP), VPN, Voice/Video, CES - each riding on a common Multi-Service IP Transport Fabric]

  14. Requirements • Isolating inter-service routing impacts • Address space protection/isolation • Fast Convergence (Service Restoration) • Providing COS to services

  15. Requirements • Support multiple services • Voice, VPN, Internet, Private Line • Improving service availability with stable approaches where possible

  16. Stabilize The Edge • LSPs re-instantiated as p2p links in the IGP • e.g. an ATL-to-DEN LSP looks like a p2p link with metric XYZ • Run multiple instances of IGPs (TE and IP)
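
The re-instantiation above can be sketched with a toy SPF computation: run plain Dijkstra over the physical topology, then advertise the ATL-to-DEN LSP as a point-to-point link with its own metric and watch the IGP prefer it. All node names and metrics here are illustrative:

```python
import heapq

def spf(graph, src):
    """Plain Dijkstra: cost of the shortest path from src to every node."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, metric in graph.get(u, []):
            nd = d + metric
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical physical topology: ATL - DFW - DEN, metric 10 per hop.
graph = {
    "ATL": [("DFW", 10)],
    "DFW": [("ATL", 10), ("DEN", 10)],
    "DEN": [("DFW", 10)],
}
print(spf(graph, "ATL")["DEN"])   # 20, via DFW

# Re-advertise the ATL->DEN LSP as a p2p link with metric 15 ("XYZ"):
# the IGP now sees a direct adjacency and routes over the LSP.
graph["ATL"].append(("DEN", 15))
print(spf(graph, "ATL")["DEN"])   # 15, over the LSP
```

The point of the technique is that interior churn inside the LSP's path is hidden from the IGP: only the virtual link and its metric are visible at the edge.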

  17. Stabilize The Core • Global instability propagated via BGP • Fate sharing with the global Internet • All decisions are made at the edge where the traffic comes in • Rethink functionality of BGP in the core

  18. COS • Mark service bits upon ingress • WRR on trunks • Configure max time-in-queue • Avoid congestion • But when congested, monitor that traffic is delivered in line with objectives • Crude (compared to what?) but effective.
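
One way to picture the mark-on-ingress plus WRR-on-trunks behavior is a per-class queue with a weighted dequeue round. The class names and weights below are assumed for illustration, not taken from the slide:

```python
from collections import deque

# Per-class queues and hypothetical WRR weights.
queues = {"voice": deque(), "vpn": deque(), "best-effort": deque()}
weights = {"voice": 4, "vpn": 2, "best-effort": 1}

def classify(pkt):
    # "Mark service bits upon ingress": map each packet to a class queue.
    queues[pkt["class"]].append(pkt)

def wrr_round():
    """One scheduler round: dequeue up to `weight` packets per class."""
    sent = []
    for cls, w in weights.items():
        for _ in range(w):
            if queues[cls]:
                sent.append(queues[cls].popleft())
    return sent

# Six best-effort packets queued, then one voice packet arrives.
for i in range(6):
    classify({"class": "best-effort", "id": i})
classify({"class": "voice", "id": 99})

round1 = wrr_round()
print([p["class"] for p in round1])  # ['voice', 'best-effort']
```

Even with a backlog of best-effort traffic, the voice packet is served in the first round: the weights bound how long any class waits, which is what the max time-in-queue knob is tuning.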

  19. Implementation Approaches • Pure IP • Layer 2 tunneling (aka CCC, AToM) • RFC2547 (base and bis) • Merged IGP • Multi process IGP • IP + Optical • Virtual Fiber • Mesh Protection • GMPLS (UNI, NNI)

  20. IP Only • Removal of an entire layer of active optronics • Directly running on DWDM • Technology for Private Lines and Circuit Emulation isn’t here yet • Fate sharing with the Global Internet [Diagram: IP/Routers directly over DWDM/3R over fiber]

  21. LSP Distribution • LDP alongside RSVP • Routers on edge of RSVP domain do fan-out • Multiple Levels of Label Stacking • Backup LSPs • Primary and Backup in RSVP Core • Speed convergence • Removes hold down issues (signaling too fast in a bouncing network) • Protect path should be separate from working • There are other ways, including RSVP E2E
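
The multiple levels of label stacking can be sketched as push/swap/pop operations on a packet: the service edge pushes an inner LDP label, then an outer RSVP-TE tunnel label; the core swaps only the outer label and never learns service state. All label values and the penultimate-hop-pop step below are illustrative:

```python
# Toy MPLS label stack: top of stack is the end of the list.
packet = {"payload": "customer frame", "labels": []}

def push(pkt, label):
    pkt["labels"].append(label)

def pop(pkt):
    return pkt["labels"].pop()

def swap(pkt, label):
    pkt["labels"][-1] = label

# Ingress service edge: inner LDP label, then outer RSVP tunnel label.
push(packet, ("LDP", 40))
push(packet, ("RSVP", 100))

# RSVP core hop: swaps the outer tunnel label only; the inner LDP
# label (and the service behind it) is invisible to the core.
swap(packet, ("RSVP", 101))

# Penultimate hop pops the tunnel label; the egress edge pops the
# LDP label and forwards the payload based on it.
pop(packet)
inner = pop(packet)
print(inner, packet["payload"])
```

This separation is why LDP can run alongside RSVP: edge routers fan LDP sessions over the RSVP core, and backup tunnel LSPs can be repaired in the core without churning edge state.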

  22. [Diagram: RSVP-TE in the core, surrounded by an LDP service edge]

  23. IP + Optical • Virtual Fiber • Embed arbitrary fiber topology onto the physical fiber • Mesh restoration • Private Line • Increased velocity of service provisioning • Higher cost, added complexity [Diagram: IP/Routers over Optical Switching over DWDM/3R over fiber]

  24. [Diagram: core/edge topology - backbone fiber with DWDM terminals and optical switches in the core, metro collectors at the edge; Peter’s “Ring of Fire”]

  25. IP + Optical Network • Out of port capacity, or router switching speeds a limit? Bypass intermediate hops [Diagram: big flows expressed through the optical layer, bypassing intermediate routers]

  26. Dual Network Layers • Optical Core (DWDM fronted by OXC) • Fast lightpath provisioning • Attach metro collectors in mega-PoPs via multiple OC-48/192 uplinks • Metro/Sub-rate Collectors • Multiservice platforms, edge optical switches • Groom into lightpaths or dense fiber • Demux in the PoP (light or fiber) • Eat Own Dog Food • Use the customer private-line provisioning internally to run the IP network

  27. Questions? Jim Boyle, Vijay “Route around the congestion, we must” Gill. Many thanks to Tellium (Bala Rajagopalan and Krishna Bala) for providing icons at short notice! Nota Bene – this is not a statement of direction for our companies!
