
OptIPuter Backplane: Architecture, Research Plan, Implementation Plan


Presentation Transcript


  1. OptIPuter Backplane: Architecture, Research Plan, Implementation Plan Joe Mambretti, Director, (j-mambretti@northwestern.edu) International Center for Advanced Internet Research (www.icair.org) Director, Metropolitan Research and Education Network (www.mren.org) Partner, StarLight/STAR TAP, PI-OMNINet (www.icair.org/omninet) OptIPuter Backplane Workshop OptIPuter AHM CalIT2 January 17, 2006

  2. LambdaGrid Control Plane Paradigm Shift
  • Traditional Provider Services: Invisible Nodes and Elements, Hierarchical, Centrally Controlled, Fairly Static Resources, Centralized Management – Limited Functionality, Flexibility
  • LambdaGrid Services: Distributed Devices, Dynamic Services, Visible & Accessible Resources, Integrated As Required By Apps – Unlimited Functionality, Flexibility
  Ref: OptIPuter Backplane Project, UCLP

  3. OptIPuter Architecture, Joint Project w/UCSD, EVL, UIC ODIN Signalling, Control, Management Techniques Source: Andrew Chien, UCSD OptIPuter Software Architect

  4. Optical Control Plane [diagram]: client controllers in the Client Layer Control Plane connect through UNI and I-UNI interfaces to a chain of controllers in the Optical Layer Control Plane; client devices attach through client interfaces (CIs) to the Client Layer Traffic Plane, which rides over the Optical Layer Switched Traffic Plane.

  5. Network Side Interface • WS BPEL • APIs • GMPLS as a uniform control plane • SNMP, with extensions as the basis of a management plane • Extended MIB capabilities • L3 • IEEE L2 MIB Developments • MIB Integration with higher layer functionality • Monitoring, Analysis, Reporting
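The slide names SNMP with extended MIB capabilities as the basis of the management plane, with monitoring, analysis, and reporting on top. As a minimal sketch of the reporting side, the fragment below computes link utilization from two polls of standard ifTable-style 64-bit octet counters (ifHCInOctets/ifHCOutOctets per RFC 2863); the class and field names are illustrative, not an actual OptIPuter management API, and a real deployment would poll a live SNMP agent rather than hold snapshots in memory.

```python
from dataclasses import dataclass

# Hypothetical snapshot of one poll of a port's ifTable (RFC 2863) counters.
@dataclass
class IfCounters:
    timestamp: float       # seconds since an arbitrary epoch
    if_hc_in_octets: int   # ifHCInOctets (64-bit received-octets counter)
    if_hc_out_octets: int  # ifHCOutOctets (64-bit sent-octets counter)

def utilization(prev: IfCounters, cur: IfCounters, if_speed_bps: int) -> float:
    """Combined in+out link utilization (0..1) between two polls,
    as a monitoring/reporting layer might compute it."""
    dt = cur.timestamp - prev.timestamp
    octets = (cur.if_hc_in_octets - prev.if_hc_in_octets) \
           + (cur.if_hc_out_octets - prev.if_hc_out_octets)
    return (octets * 8) / (dt * if_speed_bps)

# 10 GE port polled 60 s apart: 15 GB in + 15 GB out in the interval
prev = IfCounters(0.0, 0, 0)
cur = IfCounters(60.0, 15_000_000_000, 15_000_000_000)
print(round(utilization(prev, cur, 10_000_000_000), 2))  # 0.4
```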

  6. L2 10 GE • 10 GE Node Compute Clusters • APIs • 10 GE NICs • 10 Gbps Switch on a Chip • Currently Low-Cost Devices, Low Per-Port Cost • 240 GE • SNMP • Standard Services • Spanning Tree • vLANs • Priority Queuing • IEEE Enhancing Scalability
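Among the standard L2 services listed above is priority queuing. The sketch below shows the scheduling idea in miniature: a strict-priority scheduler keyed on 802.1p-style PCP values (0-7, higher serviced first), with FIFO order preserved within a class. It is an illustration of the concept, not the queuing implementation of any particular switch.

```python
import heapq
from itertools import count

class PriorityQueues:
    """Strict-priority frame scheduler: higher 802.1p PCP value is
    dequeued first; FIFO order is preserved within a priority class."""
    def __init__(self):
        self._heap = []
        self._seq = count()  # tie-breaker keeps per-class FIFO order

    def enqueue(self, pcp: int, frame: bytes) -> None:
        # Negate PCP so heapq's min-heap pops the highest priority first.
        heapq.heappush(self._heap, (-pcp, next(self._seq), frame))

    def dequeue(self) -> bytes:
        return heapq.heappop(self._heap)[2]

q = PriorityQueues()
q.enqueue(0, b"best-effort-1")
q.enqueue(7, b"network-control")
q.enqueue(0, b"best-effort-2")
print(q.dequeue())  # b'network-control'
print(q.dequeue())  # b'best-effort-1'
```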

  7. IEEE L2 Scaling Enhancements • Current Lack of Hierarchy • IEEE Developing Hierarchical Architecture • Network Partitioning (802.1q, vLAN tagging) • Multiple Spanning Trees (802.1s) • Segmentation (802.1ad, “Provider Bridges”) • Enables Subnets To be Characterized Differently Than Core • IETF – Architecture for Closer Integration With Ethernet • GMPLS As Uniform Control Plane • Generalized UNI for Subnets • Link State Routing In Control Plane • TTL Capability to Data Plane • Pseudo – Wire Capabilities
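The partitioning and segmentation mechanisms above (802.1Q vLAN tagging, 802.1ad "Provider Bridges") work by inserting 4-byte tags into the Ethernet frame. As a sketch of the wire format, the fragment below packs one tag (16-bit TPID plus a TCI of 3-bit PCP, 1-bit DEI, 12-bit VID) and stacks a provider S-tag (TPID 0x88A8) outside a customer C-tag (TPID 0x8100), the "Q-in-Q" arrangement that lets subnet tagging be characterized differently than the core; the specific PCP/VID values are arbitrary examples.

```python
import struct

def vlan_tag(tpid: int, pcp: int, dei: int, vid: int) -> bytes:
    """Pack one 4-byte 802.1Q tag: 16-bit TPID, then a 16-bit TCI
    laid out as PCP(3) | DEI(1) | VID(12)."""
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", tpid, tci)

# 802.1ad "Q-in-Q": provider S-tag (0x88A8) stacked outside customer C-tag (0x8100).
s_tag = vlan_tag(0x88A8, pcp=5, dei=0, vid=100)  # provider VLAN 100, priority 5
c_tag = vlan_tag(0x8100, pcp=0, dei=0, vid=42)   # customer VLAN 42
stacked = s_tag + c_tag
print(stacked.hex())  # 88a8a0648100002a
```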

  8. L2 Services Enhancements • Metro Ethernet Forum • Three Primary Technical Specifications/Standards • Ethernet Services Model (ESM) • Ethernet Service Attributes (Core “Building Blocks”) • Architectural Framework For Creating an Ethernet Service • No Specific Service – Any Potential Service • Ethernet Services Definitions (ESD) • Guidelines for Using ESM Components for Service Development • Provides Example Service Types and Variations of Types • Ethernet Line (E-Line) • Ethernet LAN (E-LAN) • Ethernet Traffic Management (ETM) • Implications for Operations, Traffic Management, Performance, e.g., Managing Services vs Pipes • Quality of Service Agreements, Guarantees
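The MEF service attributes include bandwidth profiles that make quality-of-service guarantees enforceable: a committed information rate (CIR) and committed burst size (CBS) policed with a token bucket. The sketch below shows that mechanism in its simplest single-rate form; the class and parameter names are illustrative, and a full MEF profile would add excess rate/burst (EIR/EBS) and color marking rather than a plain conform/drop decision.

```python
class TokenBucket:
    """Single-rate bandwidth-profile policer (CIR/CBS) in the spirit of
    the MEF Ethernet Service Attributes; non-conforming frames dropped."""
    def __init__(self, cir_bps: float, cbs_bytes: int):
        self.rate = cir_bps / 8.0    # token fill rate in bytes/second
        self.capacity = cbs_bytes    # committed burst size
        self.tokens = float(cbs_bytes)
        self.last = 0.0

    def conforms(self, now: float, frame_len: int) -> bool:
        # Refill tokens for the elapsed interval, capped at CBS.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_len <= self.tokens:
            self.tokens -= frame_len
            return True
        return False

tb = TokenBucket(cir_bps=8_000, cbs_bytes=1_500)  # 1 kB/s, one MTU of burst
print(tb.conforms(0.0, 1500))  # True  (initial burst fits within CBS)
print(tb.conforms(0.0, 64))    # False (bucket drained, no time elapsed)
print(tb.conforms(1.0, 1000))  # True  (1 s of refill = 1000 bytes)
```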

  9. L1 10 Gbps • 10 GE Node Compute Clusters • APIs • Automated Switch Panels • GMPLS • IETF GMPLS UNI (vs ONI UNI, Implications for Restoration Reliability) • 10 G Ports • MEMS Based • Services • Lightpaths with Attributes, Uni-directional, Bi-directional • Highly Secure Paths • OVPN • Optical Multicast • Protected Through Associated Groups • ITU-T SG Generic VPN Architecture (Y.1311), Service Requirements (Y.1312), L1 VPN Architecture (Y.1313)
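The L1 service above is a lightpath carrying attributes: direction (uni- or bi-directional) and protection through associated groups. A minimal data model for such a request might look like the sketch below; every name here is hypothetical, intended only to show how those attributes compose, and it is not a real ODIN or GMPLS message format.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Direction(Enum):
    UNI = "unidirectional"
    BI = "bidirectional"

@dataclass
class Lightpath:
    """Illustrative model of an L1 lightpath request carrying the
    attributes listed above; not an actual signaling message."""
    src: str
    dst: str
    direction: Direction = Direction.BI
    protected: bool = False
    group: Optional[str] = None  # associated protection group, if any

    def endpoints(self) -> set:
        return {self.src, self.dst}

lp = Lightpath("StarLight", "UvA", Direction.UNI, protected=True, group="g1")
print(lp.direction.value)                         # unidirectional
print(lp.endpoints() == {"StarLight", "UvA"})     # True
```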

  10. ODIN Signalling Architecture [diagram]: high-performance applications (HP-PPFS, HP-APP2, HP-APP3, HP-APP4) reach the ODIN server over TCP through virtual services (previously OGSA/OGSI, soon OGSA/OASIS WSRF). The ODIN server creates/deletes lightpaths, handles status inquiry, access policy (AAA), and process registration, and drives GMPLS tools: lightpath signaling for the I-NNI, attribute designation (e.g., uni- or bi-directional), lightpath labeling, and link-group designations.
  • Lambda Routing: topology discovery, DB of physical links • Create new path, optimize path selection • Traffic engineering • Constraint-based routing • O-UNI interworking and control integration • Path selection, protection/restoration tool – GMPLS
  • System Manager: discovery, config, communicate/interlink, stop/start module, resource balance, interface adjustments, process instantiation, monitoring
  • Discovery/Resource Manager (incl. link groups, addresses, OSM, ConfDB, UNI-N): physical processing, monitoring and adjustment of data-plane resources; control channel monitoring, physical fault detection, isolation, adjustment, connection validation, etc.
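The lambda-routing functions listed above (a DB of physical links, constraint-based routing, optimized path selection) can be sketched as a shortest-path search that first filters the link DB by a constraint predicate, e.g. minimum free bandwidth. This is a conceptual illustration with a made-up link database, not ODIN's actual routing code.

```python
import heapq

def constrained_path(links, src, dst, predicate):
    """Dijkstra over a physical-link DB, keeping only links that satisfy
    the constraint predicate; returns (cost, node list) or None."""
    adj = {}
    for a, b, cost, attrs in links:
        if predicate(attrs):           # constraint-based pruning
            adj.setdefault(a, []).append((b, cost))
            adj.setdefault(b, []).append((a, cost))
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return None

# Hypothetical link DB: (end A, end B, metric, attributes)
db = [("EVL", "StarLight", 1, {"free_gbps": 10}),
      ("StarLight", "NU", 1, {"free_gbps": 2}),
      ("EVL", "NU", 5, {"free_gbps": 10})]

# The congested StarLight-NU link is pruned, forcing the direct path.
print(constrained_path(db, "EVL", "NU", lambda a: a["free_gbps"] >= 5))
# (5, ['EVL', 'NU'])
```

Without the bandwidth constraint the same search would prefer the cheaper two-hop route through StarLight.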

  11. The OptIPuter LambdaGrid [map]: sites include UIC, UoA, Seattle, Northwestern, Chicago StarLight, Amsterdam, CERN, NASA Ames, CENIC LA GigaPOP, NASA JPL, NASA Goddard, ISI, UCSD, UCI, and the CENIC San Diego GigaPOP.

  12. OMNInet Network Configuration 2006 [network diagram]: four photonic nodes at 750 North Lake Shore, 710 Lake Shore (TECH/NU-E OM5200), 600 S. Federal (LAC/UIC OM5200), and W Taylor (DOT clusters, EVL/UIC OM5200), each with an 8x8x8λ scalable photonic switch, 10 GE ports, and Optera Metro 5200 10 Gb/s transponders (TSPR). The trunk side is 10 G WDM with optical fiber amplifiers (5200 OFA) on all trunks and 1310 nm 10 GbE WAN PHY interfaces; NWUEN fiber links 1-9 interconnect the nodes, with some fiber not yet in use. Initial configuration: 10 lambdas (all GigE). Campus fiber (4-16 strands) attaches compute clusters via 10/100/GigE and Passport 8600 switches. A 10 GE LAN PHY link was added in Dec 03, and StarLight interconnects with other research networks, including Ca*Net 4.

  13. OMNInet 2005 [diagram]: optical switches paired with high-performance L2 switches at 600 N. Federal, 1890 W. Taylor, 710 North Lake Shore Drive, and 750 North Lake Shore Drive, linked by 1 x 10G WAN connections plus 2 x GE per site. Default configuration: tribs can be moved as needed, and a site could have two tribs facing the L2 switch. Only the TFEC link can support OC-192c (10G WAN) operation; the non-TFEC link is used to transport GE traffic (TFEC = out-of-band error correction).
  Trib content: OC-192 with TFEC: 16 • OC-192 without TFEC: 12 • GE: 8 • OC-48: 0

  14. Extensions to Other Sites Via Illinois’ I-WIRE [map]: StarLight, Argonne, UIC/EVL, UIUC CS, NCSA. • Research Areas: Latency-Tolerant Algorithms; Interaction of SAN/LAN/WAN Technologies; Clusters • Research Areas: Displays/VR; Collaboration; Rendering; Applications; Data Mining. Source: Charlie Catlett

  15. Summary Optical Services: Baseline + 5 Years

  16. Summary Optical Technologies: Baseline + 5 Years

  17. Summary Optical Interoperability Issues: Baseline + 5 Years

  18. Overall Networking Plan [map]: dedicated lightpaths link NetherLight/University of Amsterdam to San Diego (iGRID, UCSD) via StarLight in Chicago, with a Route B through Seattle (Pacific Wave/CENIC) over NLR; 4×1 Gbps paths plus one control channel, for 4 dedicated paths in total.

  19. AMROEBA Network Topology [diagram]: SURFnet/University of Amsterdam (UvA VanGogh grid clusters), iGRID conference visualization and iGRID demonstration, StarLight, an OME and L2 switches with L3 (GbE) links, a control channel, and iCAIR DOT grid clusters.

  20. Visualization

  21. www.startap.net/starlight
