
The DOE UltraScience Network (aka UltraNet)


Presentation Transcript


  1. The DOE UltraScience Network (aka UltraNet) GNEW2004, March 15, 2004, Wing/Rao

  2. Talk Outline • What are we trying to do? • Why do it? • How (we all sort of know don’t we)? • How can we get there from here? • …Where will that be?

  3. DOE Science • DOE has bigger problems (drivers) … (and less money) than any other three-letter agency • But with it, we are trying to do things like: • Climate Modeling • Supernova modeling • Genomics (and proteomics)

  4. Climate Modeling [Figure: North American model at 400 km resolution vs. at 60 km resolution] The Rockies block the atmospheric circulation and are important genesis regions for the weather systems that form the North Atlantic Storm Track. They are poorly represented in low-resolution models.

  5. Type II Core-Collapse Supernova

  6. Computational Molecular Biology [Figure: a membrane with possible uses for bioremediation, simulated at increasing compute scales: 0.25 TF, molecular dynamics of a lipopolysaccharide (LPS); 11 TF, nanosecond molecular dynamics of the LPS membrane of Pseudomonas aeruginosa; 400 TF, QM/MM molecular dynamics of membrane plus mineral; energy transduction across membranes is still out of reach at 1.0 PF]

  7. [Figure: billions of bases in GenBank, 1982-2002] According to the GOLD database, there are 146 published genomes, 344 ongoing prokaryotic genome projects, and 243 ongoing eukaryotic genome projects.

  8. Growth of Proteomic Data vs. Sequence Data

  9. PetaScale Science • Climate modeling today => 50-100 TB data sets • Still adding carbon chemistry • Still refining resolution • You saw DOE Road Map table showing PB requirements in 2-3 years • Supernova modeling just going from 2 1/2 dimensions to 3D - this is a PB problem today • Genomics and Proteomics will dwarf these • …and then of course there is High Energy …but

  10. We run hard into network limits… • There is some VERY nice work being done on TCP • But that will just emphasize router delay • And expose congestion wherever a flow shares the path with other traffic • There is nice work being done to extend MPLS • But that does not eliminate jitter • And imposes additional overhead • And there is still the tyranny of layers
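A rough way to see why even improved TCP leaves a single flow far below a lambda's capacity is the well-known Mathis steady-state bound, rate ≈ MSS / (RTT · √p). The sketch below is illustrative only; the 70 ms round-trip time and the loss rates are assumptions, not UltraNet or ESnet measurements.

```python
# Mathis et al. steady-state bound for one TCP flow: rate ~ MSS / (RTT * sqrt(p)).
# RTT and loss rates are assumed, illustrative values.
from math import sqrt

MSS_BITS = 1460 * 8   # payload bits per segment, 1500-byte MTU
RTT_S = 0.070         # assumed cross-country round-trip time (seconds)

for loss in (1e-3, 1e-5, 1e-7):
    rate_bps = MSS_BITS / (RTT_S * sqrt(loss))
    print(f"loss={loss:.0e} -> ~{rate_bps / 1e9:.3f} Gbps per flow")
```

Even at a loss rate of 1e-7 the bound is only about 0.5 Gbps, consistent with the ~1 Gbps single-stream figures quoted later in the talk.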

  11. Jitter - ORNL / Atlanta

  12. Science Applications - Bit-Rate Budget [Figure: end-to-end bit-rate budget contrasting DWDM (one lambda, 40-100 Gbps), SONET (one channel: OC-48, OC-192, OC-768), and TCP/IP (one flow) passing through the ESnet site, security firewall, site LAN, and GigE host, with end-to-end rates in the 0.65-10 Gbps range; the end-to-end performance gap between the optical backbone and the host is the research opportunity.]

  13. What are we trying to do? • We are trying to break out of the limitations of conventional networks • TCP/IP • Firewalls and layers • Small-frame technologies • Take advantage of the real throughput available in optical networks today at the SONET layer • Develop the techniques needed within 2-3 years for DOE Peta-scale science • Use switched circuits as a solution we can start on today

  14. Restating the Problem - • Networks will soon offer 40 Gbps channels • Typical multi-stream throughput is limited to 5-6 Gbps, and programming multi-stream transfers is hard - hero efforts yield 9-10 Gbps • Typical single-stream throughput to a single application is ~1 Gbps, even on a tuned network - hero efforts yield 5-6 Gbps • Firewalls throw it away anyway
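The gap between channel rate and achievable throughput also shows up in the bandwidth-delay product: the amount of data a single stream must keep in flight to fill the pipe. A minimal sketch, assuming a 70 ms cross-country RTT (an illustrative value, not a measured one):

```python
# Bandwidth-delay product: bytes a single stream must keep in flight
# to fill a channel. The 70 ms RTT is an assumed cross-country value.
RTT_S = 0.070

for rate_gbps in (1, 10, 40):
    bdp_bytes = rate_gbps * 1e9 * RTT_S / 8
    print(f"{rate_gbps:>2} Gbps x 70 ms RTT -> ~{bdp_bytes / 2**20:.0f} MiB in flight")
```

Filling a 40 Gbps channel with one stream would require a window on the order of hundreds of MiB, far beyond typical host and firewall configurations.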

  15. Solution - • We won't have line-rate optical packet switching anytime soon • No photonic transistors • No photonic memory buffers • We can do optical circuit switching • Useless for most commercial networks • Doesn't aggregate for economy of scale • But bulk point-to-point purchases can be extremely attractive • A good match for DOE high-performance science applications, which require both bandwidth and precision • Petabyte file transmission in 2-3 years • Tera-scale interactive visualization ASAP • Remote real-time instrument control now
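For the petabyte file-transmission target, a back-of-envelope calculation shows why a dedicated circuit that actually sustains line rate matters. The sketch assumes the circuit delivers its full nominal rate end to end:

```python
# Transfer time for a 1 PB file at sustained line rate on a dedicated circuit.
PB_BITS = 1e15 * 8

for rate_gbps in (1, 10, 40):
    days = PB_BITS / (rate_gbps * 1e9) / 86400
    print(f"1 PB at {rate_gbps:>2} Gbps sustained: ~{days:.1f} days")
```

At today's typical ~1 Gbps single-stream rates a petabyte takes roughly three months; on a sustained 10-40 Gbps circuit it drops to days.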

  16. UltraNet Research Testbed • Build a sparse, lambda-switching, dedicated channel-provisioning testbed • Connect hubs close to DOE’s largest Science users (but let the user labs pay last-mile costs) • Provide an evolving matrix of switching capabilities • Separately fund research projects (e.g., high-performance protocols, control, visualization) that will exercise the network and directly support applications at the host institutions

  17. The Resources: • Off-hours capacity from ESnet • Probably 2 x OC-48 between Sunnyvale and Chicago on the northern route • Dedicated lambdas on NLR • Two 10G lambdas between Chicago and Sunnyvale • Possibly more in year two or three • Two dedicated lambdas on the ORNL Chicago-Atlanta connector

  18. Resources Cont. • A progression of switching technologies • Start with Ciena, Cisco, or Sycamore • Migrate to Calient or Glimmerglass all-optical switches, or a hybrid • Some local storage • Logistical Storage Depot (prior to local application development) • A progression of experimental point-to-point transport technologies (plus legacy) • Fibre Channel • InfiniBand • Migrate to the production ESnet environment

  19. Science UltraNet Phase I [Map: Phase I topology. Legend: DOE national labs, DOE university partners, research exchange points, access via production networks, research 10 Gbps lambdas. Sites shown: Seattle, PNNL, StarLight, NERSC, LBL, BNL, SLAC, ANL, Sunnyvale, FNAL, ORNL, JLab, CalTech, SOX, and CERN, with 2 x 10 Gbps research lambdas on the core links.]

  20. The Physical View from 50,000 Feet [Diagram: ESnet, NLR, the ORNL connector, and connectivity to CERN etc.]

  21. Network Overview: 10G LAN PHY [Diagram: the 2 x OC-192 Chicago-Atlanta and 2 x OC-192 Chicago-ORNL (UltraNet) routes, with 10GbE/OC-192 segments, optical line amplifiers (OLA 1-3), and SOADM conversion, passing through cities including Crawfordsville, Monon, Schneider, Indianapolis, Seymour, Louisville, Upton, Bowling Green, Nashville, Christiana, Stevenson, Chattanooga, Dalton, Acworth, Atlanta, Knoxville, Memphis, and Oak Ridge.]

  22. The Logical View

  23. Schedule • Oak Ridge Connector to be up in June • Fall-back circuits via NLR if necessary • Chicago-Sunnyvale paced by NLR • PNNL fiber schedule will pace their connection • Expect first user traffic in June

  24. Future Plans • ORNL is half the team building CHEETAH • Nagi Rao and Malathi Veeraraghavan are PIs • Funded by NSF • Will connect ORNL, Atlanta, NC State, possibly CUNY • Explicitly developing signaling protocols • Want to partner with HOPI • …and all of you
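As a concept sketch only (this is not the CHEETAH signaling protocol; the fields and names below are invented for illustration), a dedicated-circuit setup request has to carry at least the endpoints, the rate, and the schedule:

```python
# Hypothetical illustration of a dedicated-circuit setup request.
# This is NOT the CHEETAH protocol; field names are invented.
from dataclasses import dataclass

@dataclass
class CircuitRequest:
    src_site: str      # requesting endpoint
    dst_site: str      # far endpoint
    rate_gbps: float   # dedicated bandwidth requested
    start_utc: str     # requested start time
    duration_s: int    # how long the circuit is held before teardown

request = CircuitRequest("ORNL", "StarLight", 10.0, "2004-06-01T00:00:00Z", 3600)
print(request)
```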

  25. Thank You
