
SCinet: Convergence of Advanced Networking and High Performance Computing

SCinet is the advanced network built anew each year for the SC conference, bringing together networking and high performance computing professionals to showcase the latest advances in wide area connectivity, fiber infrastructure, wireless technology, and network operations. This presentation provides an overview of SCinet: its networks, events, trends, and the committee responsible for building it.



  1. SCinet: The Annual Convergence of Advanced Networking and High Performance Computing. Steve Corbató, Internet2. MasterWorks track, 14 November 2001

  2. SC99 GNAP Demo Network, 15-18 November 1999, Portland, Oregon

  3. Outline • SCinet • Wide area connectivity • Fiber • Wireless • Infrastructure • Operations, Measurement, & Security • Events • Xnet, Bandwidth Challenge, SC Global • Trends • Q&A

  4. SCinet is 4 networks • Production commodity network • Ubiquitous wireless network • High-performance/availability exhibit floor network • Bleeding-edge testbed - Xnet

  5. SCinet is people (and employers): Basil Decina, Bill Iles, Bill Kramer, Bill Nickless, Bill Wing, Bob Stevens, Brad Pope, Brent Sweeny, Caren Litvanyi, Chris Wright, Chuck Fisher, Dave Koester, Davey Wheeler, David Mitchell, David Richardson, Debbie Montano, Dennis Duke, Doug Luce, Doug Nordwall, Eli Dart, Erik Plesset, Gayle Allen, Greg Goddard, Hal Edwards, Hoan Mai, James Patton, Janet Hull, Jeff Carrell, Jeff Mauth, Jerry Sobieski, Jim Rogers, John Dysert, John Jamison (JJ), Jon Dugan, Kevin Oberman, Kevin Walsh, Kim Anderson, Linda Winkler, Martin Swany, Marvin Drake, Matt Zekauskas, Paola Grosso, Patrick Dorn, Paul Daspit, Paul Love, Paul Reisinger, Rex Duncan, Rick Bagwell, Rick Mauer, Riki Kurihara, Rob Jaeger, Robert Riehl, Roland Gonzalez, Russ Wolf, Seth Viddal, Stanislav Shalunov, Steve Corbató, Steve Kapp, Steve Shultz, Steve Tenbrink, Thomas Hutton, Tim LeMaster, Tim Toole, Tom Kile, Tom Lehman, Tony Rimovsky, Tracey Wilson, Warren Birch, Will Murray, Derek Gassen, Paul Fernes, Steve Pollock

  6. SC2001 Leadership • Bill Wing, ORNL – chair • Jim Rogers, CSC – vice chair • Dennis Duke, FSU – incoming chair • Chuck Fisher, ORNL – hardware • Jeff Mauth, PNNL – fiber • Martin Swany, UTK – monitoring • Eli Dart, NERSC – security • Bill Nickless, ANL – routing • Tim Toole, SNL – wireless • David Koester, MITRE – Xnet • Jon Dugan, NCSA – net mgmt • Bill Kramer, NERSC – Bandwidth Challenge • Greg Goddard, UF – monitoring • Kevin Oberman, LBL – Denver fiber • Steve Corbató, Internet2 – WAN • Debbie Montano, Qwest – Denver connectivity • Linda Winkler, ANL – SC Global

  7. SCinet Committee process • Conference calls – biweekly → weekly • Planning meetings (x3) • Venue recon trips (fiber, wireless) • Staging (~3 weeks before SCxy) • Build (starts Monday before SCxy) • Booth drops (~36 hours before gala reception) • Operate network for ~6 days • Tear down (starts Thursday 4:01p) • Rest & do day job for four months and then start again…

  8. Staging

  9. Wide area connectivity • Denver: 15 Gbps • 2xOC-48c: Abilene (Denver) • 2xGigE: StarLight (Chicago) • 1xOC-48c: Pacific Northwest Gigapop (Seattle) • 2xOC-48c: ESnet (Sunnyvale & Chicago) • Level(3) provided wide area connectivity • Qwest provided local dark fiber
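
As a quick sanity check on the quoted total (my arithmetic, not the slides'), the listed circuits sum to roughly 15 Gbps once OC-48c links are counted at their 2.488 Gbps line rate. A minimal sketch:

```python
# Back-of-the-envelope check (not from the slides) that the listed
# circuits sum to roughly the quoted 15 Gbps of WAN capacity.
OC48C_GBPS = 2.488   # SONET OC-48c line rate
GIGE_GBPS = 1.0      # Gigabit Ethernet

links = {
    "Abilene (Denver), 2 x OC-48c": 2 * OC48C_GBPS,
    "StarLight (Chicago), 2 x GigE": 2 * GIGE_GBPS,
    "Pacific Northwest Gigapop (Seattle), 1 x OC-48c": 1 * OC48C_GBPS,
    "ESnet (Sunnyvale & Chicago), 2 x OC-48c": 2 * OC48C_GBPS,
}

for name, gbps in links.items():
    print(f"{name}: {gbps:.2f} Gbps")

total = sum(links.values())
print(f"Total: {total:.2f} Gbps (~15 Gbps as quoted)")
```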

  10. WAN Bandwidth trends • SC98 (Orlando): 200 Mbps • SC99 (Portland): 13 Gbps • SC2000 (Dallas): 10 Gbps • SC2001 (Denver): 15 Gbps • SC2002 (Baltimore): Nx10-Gbps λ’s?? • Increasing focus on BW utilization
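
Those figures imply a startling growth rate. A hedged back-of-the-envelope extrapolation (the derived annual factor and doubling time are mine, not the talk's):

```python
import math

# Growth implied by the quoted SCinet WAN bandwidth figures;
# the rate and doubling time below are derived, not from the slides.
bw_mbps = {1998: 200, 1999: 13_000, 2000: 10_000, 2001: 15_000}

years = 2001 - 1998
factor = bw_mbps[2001] / bw_mbps[1998]        # 75x over three years
annual = factor ** (1 / years)                # ~4.2x per year
doubling_months = 12 * math.log(2) / math.log(annual)

print(f"{factor:.0f}x growth over {years} years")
print(f"~{annual:.1f}x per year, doubling roughly every {doubling_months:.0f} months")
```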

  11. Abilene & SCxy • Escalating bandwidth • SC99 Portland: OC-12c SONET (622 Mbps) • SC2000 Dallas: OC-48c SONET (2.5 Gbps) • SC2001 Denver: 2xOC-48c SONET (5 Gbps) • SCxy transit connectivity offered to domestic & international R&E nets • Backbone MTU raised to 9K bytes • Traffic engineering for SC2001 • End-to-End Performance: GigaTCP testing • SC2002 Baltimore: 10-Gbps λ (planned)
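
The 9K-byte backbone MTU matters because standard TCP throughput on a long, slightly lossy path is bounded by segment size. A sketch using the well-known Mathis et al. bound, rate ≤ (MSS/RTT) · (1.22/√loss); the 70 ms RTT and 1e-5 loss rate below are illustrative assumptions, not SCinet measurements:

```python
import math

# Mathis et al. TCP throughput bound, used here to illustrate why a
# 9000-byte MTU helps wide-area TCP. RTT and loss are assumed values.
def mathis_bound_mbps(mtu_bytes, rtt_s, loss, header_bytes=40):
    mss_bits = (mtu_bytes - header_bytes) * 8   # strip IP + TCP headers
    return (mss_bits / rtt_s) * (1.22 / math.sqrt(loss)) / 1e6

for mtu in (1500, 9000):
    rate = mathis_bound_mbps(mtu, rtt_s=0.070, loss=1e-5)
    print(f"MTU {mtu}: ~{rate:.0f} Mbps achievable per flow")
```

With these assumptions a single flow goes from roughly 64 Mbps at a 1500-byte MTU to roughly 395 Mbps at 9000 bytes, a 6x improvement from framing alone.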

  12. Abilene traffic engineering – SC2001

  13. Fiber (Jeff Mauth) • 60+ miles of fiber deployed in exhibit hall • 0.3+ FTE-year of effort • ~1.5 fiber-miles/hour • 120 fiber drops (90% multimode) • Pirelli 24-strand MM fiber used since ’98 • Deployment custom-engineered to the venue selected for SCxy • ST fiber connectors standard • Will review choice for SC2002
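
Those three figures are mutually consistent, as a rough check shows (assuming about 2080 hours per FTE-year; the implied team size is derived, not from the slide):

```python
# Rough consistency check of the quoted fiber figures.
fiber_miles = 60
rate_miles_per_hour = 1.5
effort_fte_years = 0.3

crew_hours = fiber_miles / rate_miles_per_hour   # wall-clock hours of pulling
person_hours = effort_fte_years * 2080           # total labor expended
implied_team = person_hours / crew_hours         # people working in parallel

print(f"~{crew_hours:.0f} crew-hours at {rate_miles_per_hour} fiber-miles/hour")
print(f"~{person_hours:.0f} person-hours total, i.e. a team of ~{implied_team:.0f}")
```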

  14. Fiber timeline – SC2001 • 5 scouting trips • Tue 11/6 9p – gained access to 2/3 of hall • Thu 11/8 6p – gained access to rest of hall • Fri 11/9 a.m. – fiber done • Sun 11/11 a.m. – equipment patching • Sun 11/11 p.m. – booth drops start • wireless & HP Jornada • Mon 11/12 noon – drops complete • Mon 11/12 7p – gala opening (D-DAY) • DANGER: carpet layers (20-30 cuts this year)

  15. Wireless (Tom Hutton) • Significant 802.11b effort this year • 35 Cisco wireless access points (13 in exhibit hall) • One on DCC roof pointed at Embassy Suites • Wireless still requires a lot of wires & work • 5000’ of wiring in exhibition hall • Several site surveys over the year • Totally flat LAN (3.5 Gbps switched BW) • Wireless really helps show set-up • Booth drop teams, booth connectivity prior to fiber • Clients seen: 618 peak, 246 average
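
For scale, the quoted client counts work out to a modest per-AP load (assuming an even spread across access points, which real 802.11b cells never quite achieve):

```python
# Illustrative load arithmetic from the quoted wireless numbers;
# the even spread across APs is an assumption, not a measurement.
access_points = 35
peak_clients, avg_clients = 618, 246
raw_11b_mbps = 11.0   # 802.11b nominal rate per AP

print(f"Peak: ~{peak_clients / access_points:.0f} clients/AP; "
      f"average: ~{avg_clients / access_points:.0f} clients/AP")
print(f"Nominal shared radio capacity at peak: "
      f"~{raw_11b_mbps * 1000 / (peak_clients / access_points):.0f} kbps/client")
```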

  16. Infrastructure (Chuck Fisher) • SC98 • Core routing provided by traditional Cisco 7500 series routers • First "production" use of Gigabit Ethernet (only 1 customer drop requested) • Most booth service was 10Base-FL and 100Base-FX provided via FORE PowerHubs • Limited use of network monitoring and statistics

  17. An earlier topology…

  18. Infrastructure trends - II • SC99 • Core routing provided by Cisco GSR series routers • Concept of a routing core and a layer of L3 distribution switches adopted • Extensive use of DWDM hardware to provide WAN bandwidth • Xnet introduced as a showcase for "bleeding edge" hardware

  19. Infrastructure trends - III • SC2000 • Core routing provided by Cisco and Juniper • Increased focus on network monitoring and statistics • First Xnet demonstration of 10 Gigabit Ethernet • Bandwidth Challenge introduced to SC

  20. SCinet 2001 Network Topology

  21. Infrastructure trends - IV • SC2001 Contributing Hardware Vendors • Cisco • Juniper • Marconi • Nortel • Spirent • Force10 • Foundry • ONI • LuxN • Equivalent to a 3-5 building advanced campus network attached to major R&E backbones

  22. Operations • Servers • DNS, DHCP, NTP, Performance, beacons • Database • Network monitoring • Help desk • Trouble ticket system • Routing support (unicast, multicast, v6)
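
To make the operations list concrete, here is a minimal sketch of the kind of reachability poll a help desk might run against such servers; the hostnames and the use of simple TCP probes are hypothetical, not SCinet's actual tooling:

```python
import socket

# Minimal sketch of NOC-style service polling; hostnames are hypothetical.
SERVICES = [
    ("dns.scinet.example", 53),    # DNS (TCP/53 is a valid DNS transport)
    ("ntp.scinet.example", 123),   # NTP is UDP in practice; TCP connect is
                                   #   used here only as a crude liveness probe
    ("www.scinet.example", 80),    # web/performance pages
]

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in SERVICES:
    status = "up" if is_reachable(host, port) else "DOWN"
    print(f"{host}:{port} {status}")
```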

  23. Measurement and Security

  24. Security monitoring

  25. Xnet

  26. TeraGrid Distributed Backplane - NCSA, ANL, SDSC, Caltech [network diagram: StarLight international optical peering point (see www.startap.net); DTF backplane (4x λ: 40 Gbps) of multiple 10 GbE over Qwest and multiple 10 GbE over I-WIRE dark fiber; OC-48 (2.5 Gb/s, Abilene) via Indianapolis (Abilene NOC); sites include Starlight/NW Univ, UIC, Ill Inst of Tech, Univ of Chicago, ANL, NCSA/UIUC, Urbana, Los Angeles, San Diego, and multiple carrier hubs; solid lines in place and/or available by October 2001, dashed I-WIRE lines planned for summer 2002] Source: Charlie Catlett, Argonne

  27. Xnet

  28. Trends … or what we might see in Baltimore?

  29. Optical networking • Dense Wave Division Multiplexing (DWDM) • Current systems can support >160 10-Gbps λ’s (1.6 Tbps!) • Optical growth can overwhelm Moore’s Law (routers) • Costs scale dramatically with distance • Three possible scenarios for the future • Enhanced IP transport (higher BW and circuit multiplicity) • Fine-grained traffic engineering • p2p links between campuses, HPC centers, & Gigapops • Physical e2e switched circuits (a la ATM SVCs) • Evolution of optical switching will be critical • Don’t write off OEO
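
The headline number is simple arithmetic, and the "overwhelm Moore's Law" claim can be illustrated by comparing doubling periods (the 9- and 18-month figures below are commonly cited ballpark assumptions, not data from the talk):

```python
# Worked arithmetic for the slide's DWDM figure, plus a hedged
# growth-rate comparison; doubling periods are assumed ballparks.
lambdas, gbps_per_lambda = 160, 10
print(f"{lambdas} x {gbps_per_lambda} Gbps = "
      f"{lambdas * gbps_per_lambda / 1000:.1f} Tbps per fiber pair")

moore_doubling_months = 18   # assumed router/silicon capacity pace
dwdm_doubling_months = 9     # assumed optical capacity pace

years = 5
moore_growth = 2 ** (years * 12 / moore_doubling_months)
dwdm_growth = 2 ** (years * 12 / dwdm_doubling_months)
print(f"Over {years} years: routers ~{moore_growth:.0f}x, optics ~{dwdm_growth:.0f}x")
```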

  30. Future of Abilene • Extension of Qwest’s original commitment to Abilene for another 5 years – through 10/01/2006 • Originally expired March 2003 • Upgrade of Abilene backbone to optical transport capability - λ’s • 4x increase in the core backbone bandwidth • OC-48c SONET (2.5 Gbps) to 10-Gbps DWDM • Capability for flexible provisioning of 10-Gbps λ’s to support future point-to-point experimentation & other projects • Emphasis on v6 & network measurement capabilities

  31. SC2002/Baltimore crystal ball • Strong local networking community • MAX Gigapop (University of Maryland) • DARPA Supernet (ISI-East, NRL) • Dark fiber & network presences in region • Abilene is aiming for 10-Gbps λ connectivity • Increased focus on e2e performance & multicast reliability • More wireless (add 802.11a); less ATM? • 10 Gigabit Ethernet should be standardized • Optical switch in Xnet?

  32. Conclusion • SCinet is… … a diverse group of very committed and talented people and companies working very hard, under extreme time constraints and trying conditions, to make both the expected and the new and impossible in SCxy networking happen for one week in November, and then return to do it again the next year
