Internet2: Technology Innovation and Distributed Infrastructure

Presentation Transcript


  1. Internet2: Technology Innovation and Distributed Infrastructure. Guy Almes, Internet2 Project <almes@internet2.edu>. NANOG Meetings, Denver — February 1, 1999

  2. Overview • Universities, Engineering, and Applications • Technical Innovation • Distributed Infrastructure

  3. The challenge before us • Universities, by their nature, • mix teaching and research • collaborate with scholars at other universities • Thus, advanced applications for • conferencing • remote instrument access • digital libraries • What networks will these need?

  4. Applications and engineering • [Diagram: applications motivate engineering; engineering enables applications]

  5. What makes this hard? • Combination of: high bandwidth, wide area, and intrinsically bursty applications • Need for multicast • Need for quality of service • Need for measurements

  6. Internet2 History / Status • Initiated 1-Oct-96 by 34 research universities (NGI Program announced one week later) • UCAID incorporated Oct-97 • Board of Directors drawn from university presidents • Staff mainly in three locations • Compact, growing set of international partners

  7. History/Status, continued • We now have about 140 universities • A few dozen corporate members also make key contributions • Key goal: create and support advanced applications • Key infrastructure tactic: campus, gigaPoP, backbone structure

  8. Working Group Progress • IPv6 • Measurement • Multicast • Network Management • Network Storage • Quality of Service • Routing • Security • Topology

  9. Technical Innovation: Measurement • Chair: David Wasley, Univ California and Matt Zekauskas, Internet2 staff • Focus: • Places to measure: • at campuses, at gigaPoPs, within interconnect(s) • Things to measure: • traffic utilization • performance: delay and packet loss • traffic characterization

  10. [Diagram: Backbone ‘A’ and Backbone ‘B’]

  11. [Diagram: Backbone ‘A’ and Backbone ‘B’, continued]

  12. [Diagram: Backbone ‘A’ and Backbone ‘B’, continued]

  13. Active Measurements of Performance • IETF IPPM WG defining one-way delay • Take all delay to be due to: • Propagation • Transmission • Queuing • Variation in delay suggests congestion
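
As a rough illustration of the decomposition above, the sketch below (Python, not Surveyor or IPPM code; the link parameters and delay samples are invented) treats the minimum observed one-way delay as the propagation-plus-transmission floor and attributes any excess to queuing, so rising variation points to congestion.

```python
# Minimal sketch (not Surveyor/IPPM code): decompose one-way delay samples
# into a fixed floor (propagation + transmission) and a variable queuing part.
# The link parameters below are illustrative assumptions, not Abilene values.

PROPAGATION_S = 0.020          # assumed speed-of-light path delay, 20 ms
LINK_RATE_BPS = 155_000_000    # assumed OC-3 line rate
PACKET_BITS = 1500 * 8         # assumed packet size

transmission_s = PACKET_BITS / LINK_RATE_BPS
fixed_floor_s = PROPAGATION_S + transmission_s

def queuing_delay(samples_s):
    """Estimate per-sample queuing delay as the excess over the observed minimum.

    The IPPM one-way-delay metric gives only total delay; treating the minimum
    observed sample as the propagation+transmission floor is a common heuristic.
    """
    floor = min(samples_s)
    return [d - floor for d in samples_s]

samples = [0.0231, 0.0230, 0.0259, 0.0402, 0.0233]   # hypothetical measurements
q = queuing_delay(samples)
print("estimated queuing delay (ms):", [round(x * 1000, 1) for x in q])
# Large, variable queuing components suggest congestion along the path.
```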

  14. Passive Measurements of Traffic Characterization • OC3MON and OC12MON • developed by MCI vBNS engineering with the NLANR group at UCSD • passive taps into fiber links • extracts IP packet headers • gradually improving maturity • Helps understand the nature of Internet use
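
OC3MON and OC12MON do their header extraction in hardware against Sonet-framed links; purely as an illustration of the kind of record a passive monitor keeps, the following sketch (Python, with hand-built example bytes and hypothetical addresses) unpacks the IPv4 header fields that traffic characterization typically relies on.

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Extract the fields a passive monitor typically keeps from an IPv4 header.

    Illustrative sketch only; OC3MON/OC12MON do this in hardware against
    OC-3/OC-12 links, not in Python.
    """
    version_ihl, tos, total_len, ident, flags_frag, ttl, proto, cksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "header_len": (version_ihl & 0x0F) * 4,
        "total_len": total_len,
        "protocol": proto,                     # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# Example: a hand-built 20-byte header (hypothetical addresses 10.0.0.1 -> 10.0.0.2)
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 60, 0, 0, 64, 6, 0,
                  bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
print(parse_ipv4_header(hdr))
```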

  15. Technical Innovation: Multicast • Chair: Kevin Almeroth, Univ California at Santa Barbara • Focus: Make native IP multicast scalable and operationally effective • Must be coordinated across backbones, gigaPoPs, and campuses • Must be coordinated with unicast routing
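
From the application's side, native IP multicast only means joining a group; building and scaling the delivery tree is the network's job. The minimal receiver sketch below (Python standard socket API; the group address and port are hypothetical) shows that edge view; the PIM-SM/MBGP/MSDP work discussed next is what must happen inside backbones, gigaPoPs, and campuses for such a join to work end to end.

```python
import socket
import struct

# Minimal multicast receiver sketch: the application only joins a group;
# building the delivery tree (PIM-SM, MBGP, MSDP across domains) is the
# network's job, and scaling that is what the Internet2 multicast WG targets.
GROUP = "239.1.2.3"   # hypothetical administratively scoped group
PORT = 5004           # hypothetical port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group on the default interface (INADDR_ANY).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

sock.settimeout(5.0)
try:
    data, sender = sock.recvfrom(2048)
    print(f"received {len(data)} bytes from {sender}")
except socket.timeout:
    print("no multicast traffic seen within 5 seconds")
```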

  16. 1999: A key year for multicast • In the past, multicast has meant ‘MBone’ • core set of committed users and engineers • ‘legacy’ non-scalable approaches to routing • Our hope: • PIM-Sparse Mode • MBGP, MSDP, etc. • enable scalable use of high-speed multicast flows throughout the Internet2 structure

  17. Technical Innovation: Quality of Service • Chair: Ben Teitelbaum, Internet2 staff • Focus: Multi-network IP-based QoS • Relevant to advanced applications • Interoperability: carriers and kit • Architecture • QBone distributed testbed

  18. Big Problem #1: Understanding Application Requirements • Range of poorly-understood needs • Both intolerant and tolerant apps important • Many apps need absolute, per-flow QoS assurances • Adaptive apps may require a minimum level of QoS, but can exploit additional network resources if available
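
To make "adaptive" concrete, the sketch below (illustrative Python only; the rates, thresholds, and loss reports are assumptions, not drawn from any Internet2 application) keeps a hard minimum rate but probes for spare capacity when observed loss stays low and backs off multiplicatively when it does not.

```python
# Illustrative sketch of an adaptive application's rate control: it insists on
# a minimum rate but exploits spare capacity when observed loss stays low.
# All thresholds and rates below are assumptions.

MIN_RATE_KBPS = 128      # assumed floor below which the app is unusable
MAX_RATE_KBPS = 3000     # assumed ceiling
LOSS_THRESHOLD = 0.01    # assumed 1% loss triggers back-off

def next_rate(current_kbps: float, observed_loss: float) -> float:
    if observed_loss > LOSS_THRESHOLD:
        # Back off multiplicatively, but never below the required floor.
        return max(MIN_RATE_KBPS, current_kbps * 0.75)
    # Probe for spare capacity with a small additive increase.
    return min(MAX_RATE_KBPS, current_kbps + 64)

rate = 512.0
for loss in [0.0, 0.0, 0.03, 0.0, 0.0]:   # hypothetical loss reports
    rate = next_rate(rate, loss)
    print(f"loss={loss:.2%} -> send at {rate:.0f} kb/s")
```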

  19. Big Problem #2: Scalability • # flows through core >> # flows through edge • Goal: keep per-flow state out of the core • Design principles • Put “smarts” in edge routers • Allow core routers to be fast and dumb
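
A minimal way to see the scaling argument: in the sketch below (illustrative Python, with hypothetical flow entries and codepoint names), the edge keeps a table entry per flow and writes the result into the packet as a DS codepoint, while the core's behavior is keyed only on that codepoint, so core state grows with the number of service classes rather than the number of flows.

```python
# Sketch of the "smart edge, dumb core" split (illustrative, not router code).

# Edge: per-flow policy table, one entry per (src, dst, port) flow it admits.
EDGE_FLOW_POLICY = {
    ("10.1.1.5", "10.9.9.9", 5004): "EF",   # hypothetical premium flow
    # ... thousands of entries are fine here; this state stays at the edge
}

def edge_classify(src: str, dst: str, dport: int) -> str:
    """Look up per-flow policy and return a DS codepoint; default is best effort."""
    return EDGE_FLOW_POLICY.get((src, dst, dport), "BE")

# Core: behavior is keyed only on the codepoint carried in the packet header,
# so core state is O(number of service classes), not O(number of flows).
CORE_PHB = {
    "EF": "priority_queue",
    "BE": "default_queue",
}

def core_forward(codepoint: str) -> str:
    return CORE_PHB.get(codepoint, "default_queue")

mark = edge_classify("10.1.1.5", "10.9.9.9", 5004)
print(mark, "->", core_forward(mark))
```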

  20. Big Problem #3: Interoperability • Interoperability between separately administered and designed clouds (campus networks, gigaPoPs, and backbone networks such as vBNS and Abilene), and between multiple implementations of network elements, is crucial if we are to provide end-to-end QoS.

  21. DiffServ Architecture • Bandwidth Brokers (perform admissions control, manage network resources, configure leaf and edge devices) • Leaf Router (police, mark flows) • Ingress Edge Router (classify, police, mark aggregates) • Core routers • Egress Edge Router (shape aggregates) • [Diagram: source-to-destination path through leaf, edge, and core routers, with a bandwidth broker (BB) in each cloud]
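
The admissions-control role assigned to bandwidth brokers above can be sketched as simple capacity bookkeeping: admit a Premium reservation only if the committed peak rates stay within the share of the link set aside for Premium traffic. The Python below is an assumption-laden illustration (class and method names, capacities, and reservation IDs are all invented); real bandwidth-broker protocols and policies were still being defined at the time.

```python
# Minimal sketch of a bandwidth broker's admission-control bookkeeping
# (illustrative only; not a real BB protocol or policy).

class BandwidthBroker:
    def __init__(self, premium_capacity_mbps: float):
        self.capacity = premium_capacity_mbps
        self.committed = 0.0
        self.reservations = {}           # reservation id -> peak rate (Mb/s)

    def admit(self, res_id: str, peak_rate_mbps: float) -> bool:
        """Admit the request only if committed Premium traffic stays within capacity."""
        if self.committed + peak_rate_mbps > self.capacity:
            return False
        self.reservations[res_id] = peak_rate_mbps
        self.committed += peak_rate_mbps
        return True

    def release(self, res_id: str) -> None:
        self.committed -= self.reservations.pop(res_id, 0.0)

bb = BandwidthBroker(premium_capacity_mbps=50.0)   # hypothetical Premium share
print(bb.admit("video-seminar", 20.0))   # True
print(bb.admit("bulk-transfer", 40.0))   # False: would exceed the Premium share
```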

  22. Premium Service • Emulates a leased line • Contract: peak rate profile • PHB = “forward me first” (e.g. priority queuing, WFQ) • Policing rule = drop out-of-profile packets • On egress, clouds need to shape Premium aggregates to mask induced burstiness
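
The peak-rate profile and drop-out-of-profile rule can be read as a token bucket run at the contracted peak rate with a shallow bucket: packets that find enough tokens are in profile and get the Premium PHB, everything else is dropped. The sketch below is illustrative Python with invented parameters, not QBone or router code.

```python
import time

class PeakRatePolicer:
    """Sketch of the Premium policing rule: drop packets that exceed the
    contracted peak rate. Parameters are illustrative, not QBone values."""

    def __init__(self, peak_rate_bps: float, bucket_depth_bits: float):
        self.rate = peak_rate_bps
        self.depth = bucket_depth_bits
        self.tokens = bucket_depth_bits
        self.last = time.monotonic()

    def accept(self, packet_bits: int) -> bool:
        now = time.monotonic()
        # Refill tokens at the peak rate, capped at the (small) bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True          # in profile: forward with the Premium PHB
        return False             # out of profile: drop, per the Premium rule

policer = PeakRatePolicer(peak_rate_bps=1_000_000, bucket_depth_bits=1500 * 8)
print(policer.accept(1500 * 8))   # first packet fits the profile
print(policer.accept(1500 * 8))   # immediate second packet is out of profile
```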

  23. Internet2 “QBone” • A “meta-testbed” for absolute diff-serv services • Many Internet2 clouds already keenly interested in experimenting with diff-serv • Objectives: • Fostering interoperability among participant clouds • Encouraging collective problem solving • Creating opportunities for inter-disciplinary dialogue • Growing a snowball of participating clouds • Technical diversity • Topological diversity • Contiguity

  24. Summary • Internet2’s WGs focus on project’s needs • Complement IETF WGs • Membership by invitation of chair

  25. Distributed Infrastructure • Campuses: • scalable 10/100 Mb/s • multicast • GigaPoPs: • scalable access to wide-area resources • Backbones: • vBNS • Abilene

  26. Recent progress and challenges • Early gigaPoPs getting stronger • Recent major advances: • CalREN2 • Great Plains Network • Northern Crossroads

  27. JET Collaboration • Joint Engineering Team • federal NGI agencies • Internet2 • NGIX effort • exchange points appropriate for Internet2 / NGI / similar non-US networks • Ideal: connect universities and labs with advanced performance/functionality

  28. Abilene: Design and Status. Guy Almes, Internet2 Project <almes@internet2.edu>. NANOG Meetings, Denver — February 1, 1999

  29. Abilene and Internet2 • Internet2 as infrastructure: • 140+ campus LANs • about 35 gigaPoPs • a few interconnect backbones • Abilene is the 2nd Backbone • OC-48 trunks from Qwest • Cisco 12008 routers with IP/Sonet • OC-3 and OC-12 access to gigaPoPs

  30. Abilene Core at 29-Jan-99 • [Map: core router nodes at Seattle, New York, Cleveland, Sacramento, Indianapolis, Denver, Kansas City, Los Angeles, Atlanta, and Houston]

  31. Abilene Architecture • Core Architecture • Access Architecture • Network Operations Center • at Indiana University • Schedule: • 14-Apr-98: announced • Sep-98: demonstrated • 29-Jan-99: operational

  32. Abilene Architecture: Core • Router Nodes located at Qwest PoPs • Cisco 12008 GSR • ICS Unix PC: IPPM and Network Mgmt • Cisco 3640 Remote Access for NOC • 100BaseT LAN and ‘console port’ access • Remote 48v DC Power Controllers • Initially, ten Router Nodes

  33. Abilene: by end of February 1999 • [Map: Seattle, New York, Cleveland, Sacramento, Indianapolis, Denver, Kansas City, Los Angeles, Atlanta, and Houston]

  34. Abilene Architecture: Access • Access Nodes • Located at Qwest PoPs • Sonet: Connects Local to Long-distance • Initially, about 120 Access Nodes: • This list grows as the Qwest Sonet plant grows

  35. Abilene, with Some Access Nodes • [Map: router nodes at Seattle, New York, Cleveland, Sacramento, Indianapolis, Denver, Kansas City, Los Angeles, Atlanta, and Houston; access nodes including Boston, Eugene, Minneapolis, Westfield, New Haven, Newark, Detroit, Trenton, Salt Lake City, Chicago, Philadelphia, Wilmington, Pittsburgh, Lincoln, Columbus, Washington, Oakland, Raleigh, Albuquerque, Oklahoma City, Nashville, Anaheim, Phoenix, Dallas, New Orleans, and Miami]

  36. Abilene NOC • Located at Indiana University • Excellent Operations and Engineering Skills • Commitment evidenced in Abilene Rollout

  37. Schedule • Design work: Mar-98 and ongoing • Rack design: May-98 to Jul-98 • Initial assembly / testing: Jul-98 to Aug-98 • Router Nodes / Interior Lines: Jul-98 • Demo network installed: Sep-98 • Production began: 29-Jan-99 • Completion of OC-48 Core: mid-1999 • Continuing improvement: ongoing

  38. Jun-99: Core Architecture • [Map: Seattle, New York, Cleveland, Sacramento, Indianapolis, Denver, Kansas City, Los Angeles, Atlanta, and Houston]

  39. Sep-99: Core Architecture • [Map: Seattle, New York, Cleveland, Sacramento, Indianapolis, Denver, Washington, Kansas City, Los Angeles, Atlanta, and Houston]

  40. Outline of Engineering Issues • Routing: OSPF, BGP4, Routing Arbiter Database • Multicast: PIM-Sparse Mode, MBGP, MSDP • Measurements: • Surveyor: one-way delay and loss • traffic utilization • end-to-end flows with gigaPoP help • OC3MON: passive measurements

  41. Broader Internet2, NGI, and International Advanced Net • [Map: initial NGIX sites, possible CA*net3 peering sites, and StarTap]
