
DOE UltraScience Net Update








  1. DOE UltraScience Net Update ESCC/Internet2 Joint Techs July 21, 2004 W. R. Wing

  2. Talk Outline • Quick Review of What We're About • Current Status • Expected Time Line • Control Plane

  3. UltraScience Net: A Lambda-Switching Testbed • Enough lambdas (2 initial) to make switching real • Explore “Light-Paths” for high-end transport • Connect four hubs close to large DOE science users (but let the user labs pay last-mile costs) • Hubs in Sunnyvale, Seattle, Chicago, Atlanta • Provide an evolving matrix of switching capabilities • Separately fund research projects (e.g., high-performance protocols, control, visualization) that will exercise the network and directly support applications at the host institutions

  4. Additional Bits and Pieces • Off-hours bandwidth via MPLS on ESnet • Core SONET Switches at the four hubs • Edge MSPP boxes for additional services • Gigabit Ethernet-attached storage • A control plane to tie it all together • A scheduler to make it available
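In spirit, the scheduler mentioned above is a bandwidth-reservation problem: a request for a circuit succeeds only if, over the requested time window, already-booked reservations plus the new request fit within the lambda's capacity. A minimal sketch of that idea (the class and method names are hypothetical, not the actual UltraScience Net scheduler):

```python
from dataclasses import dataclass, field

@dataclass
class Reservation:
    start: int     # window start (e.g., Unix seconds)
    end: int       # window end (half-open: [start, end))
    gbps: float    # bandwidth booked

@dataclass
class Circuit:
    capacity_gbps: float               # e.g., 10.0 for an OC192 lambda
    reservations: list = field(default_factory=list)

    def available(self, start: int, end: int, gbps: float) -> bool:
        """True if `gbps` fits alongside all overlapping reservations."""
        booked = sum(r.gbps for r in self.reservations
                     if r.start < end and start < r.end)
        return booked + gbps <= self.capacity_gbps

    def reserve(self, start: int, end: int, gbps: float) -> bool:
        """Book the window if it fits; return whether it was granted."""
        if not self.available(start, end, gbps):
            return False
        self.reservations.append(Reservation(start, end, gbps))
        return True
```

For example, on a 10-Gbps lambda, a 6-Gbps booking from t=0 to t=100 leaves room for another 6-Gbps booking only in non-overlapping windows.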

  5. Bits and Pieces Cont. • A progression of switching approaches • Study/compare MPLS(GMPLS), SONET, all optical • New all-optical technologies coming (e.g., laser tuning) • Local Storage • Logistical Storage Depot (prior to local application development) • Progression of experimental point-to-point transport technologies • Fiber channel • Infiniband

  6. The Physical View From 50,000 Feet [diagram: ESnet, NLR, CERN, etc., with the ORNL connector]

  7. Global Architecture for DOE UltraScience Net [diagram: hubs at Seattle, Starlight, Sunnyvale, and ORNL; attached sites PNNL, FNAL, ANL, LBL, SLAC; Gig-E-attached Linux storage at the switches; White Rock mini-MSPPs w/ Gig-E; MPLS link via ESnet; 2 x OC192]
     Location | Switch Type | Interfaces
     Sunnyvale | Cisco 15454 | 2 x OC192, 2 x OC48, 8 x Gig-E
     Sunnyvale | Ciena CD-CI | 2 x OC192
     Seattle | Ciena CD-CI | 6 x OC192
     Starlight | Ciena CD-CI | 8 x OC192, or 6 x OC192 + 16 x Gig-E + 2 x OC48
     Starlight | White Rock | 1 x OC192, 8 x Gig-E
     Oak Ridge | Ciena CD-CI | 2 x OC192, 2 x OC48, 8 x Gig-E
     Oak Ridge | White Rock | 1 x OC192, 8 x Gig-E

  8. Initial Switch Selection (& why so many?) • Cisco 15454, White Rock, and Sequoia 16000 MSPPs • Pros: • Explicit sub-lambda switching (OC-12, OC-48) • Can save configurations for fast switching • Provide Gig-E connections • Cons: • No GMPLS on Cisco • Ciena CD-CI (Core Director) • Pros: • Direct sub-lambda switching at OC1 granularity • Can do Gig-E and Fiber Channel directly • Industry leader in developing GMPLS • Research partnership • Cons: • SONET is expensive

  9. UltraScience Net Circuit Switching

  10. UltraScience Net Control-Plane: Phase I [diagram: Core Directors at Seattle, Chicago, Sunnyvale, and ORNL, each with a local host; hosts connect over VPN tunnels across an IP network; TL1 commands configure the Core Directors; lambdas carry the data plane]

  11. Control-Plane • Phase I • Centralized VPN connectivity • TL1-based communication with Core Directors and MSPPs • User access via centralized web-based scheduler • Phase II • GMPLS direct enhancements and wrappers for TL1 • User access via GMPLS and web to bandwidth scheduler • Inter-domain GMPLS-based interface
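For context on the Phase I approach: TL1 is a line-oriented ASCII management protocol, so "TL1-based communication" amounts to assembling command strings and sending them to each switch over the management VPN. A minimal sketch of the generic TL1 input-message shape (the verb, switch name, and access identifiers below are invented for illustration, not actual Core Director commands):

```python
def tl1_command(verb: str, tid: str, aid: str, ctag: int, params: str = "") -> str:
    """Assemble a TL1 input message in the generic
    VERB:TID:AID:CTAG::PARAMS; form.

    TID names the target network element, AID the access identifier
    (e.g., a port or cross-connect endpoint), and CTAG is a correlation
    tag echoed in the response.  Exact verbs and parameter blocks are
    vendor-specific; this shows only the common framing.
    """
    return f"{verb}:{tid}:{aid}:{ctag}::{params};"

# Hypothetical example: request a cross-connect on a switch we call
# "CD-ORNL" (name and AIDs invented for illustration).
cmd = tl1_command("ENT-CRS-STS1", "CD-ORNL", "STS-1-1-1,STS-2-1-1", 101)
```

In practice the controller would open a TCP session to the switch's management port over the centralized VPN and match responses to requests by the CTAG.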

  12. Starlight Configuration (per Chicago Engineering Meeting) [diagram: CD-CI with 10 x 1Gig-E (GigE LM) ports; ESnet switch; media converter (10Gig-E to OC192); Starlight switch; 10Gig-E and OC192 links] • We will install 20-port GigE cards (GigE LM) at all sites • Need “media converter” at Starlight (10Gig-E to OC192) • Need media converter at Sunnyvale iff required by research programs • All Starlight “local” customers arrive via the Starlight switch • Local storage server and control server not shown

  13. Current Status • Contracts all placed, or ready for placement and awaiting ORO review • Initial connectivity to Chicago in two to three weeks (via Atlanta) • MSPP hardware all on order • Why has this taken so long? Answer: these are VERY complicated contracts… (and)

  14. Expected Time Line • NLR Chicago-Sunnyvale “first light” in late August (10-Gig, not SONET) • We have negotiated test-use on that path • Expect to be able to do traffic testing in August–September time frame • NLR SONET circuits follow in October • Expect full system (just) in time for SC2004

  15. Summary • Atlanta-Chicago link to be up this summer • Connecting ORNL to it will initially be via Atlanta • Chicago-Sunnyvale paced by NLR • Initial 10-Gig test circuit, OC192 SONET follows • PNNL fiber schedule will pace their connection • Expect E-to-E tests by fall • Expect user traffic before SC2004

  16. Thank You http://www.csm.ornl.gov/ultranet
