
The Drive Toward Dedicated IP Lightpipes for e-Science Applications

OSA’s 6th Annual Photonics & Telecommunications Executive Forum, Panel on "Back to the Future of Optical Communications: Fiber Optics Opportunities Outside the Telco Bubble," Los Angeles, CA, February 23, 2004.





Presentation Transcript


  1. The Drive Toward Dedicated IP Lightpipes for e-Science Applications
     OSA’s 6th Annual Photonics & Telecommunications Executive Forum, Panel on "Back to the Future of Optical Communications: Fiber Optics Opportunities Outside the Telco Bubble," Los Angeles, CA, February 23, 2004.
     Dr. Larry Smarr, Director, California Institute for Telecommunications and Information Technologies; Harry E. Gruber Professor, Dept. of Computer Science and Engineering, Jacobs School of Engineering, UCSD

  2. Components of Cyberinfrastructure-Enabled Science & Engineering
     NSF Report on Revolutionizing Science and Engineering through Cyberinfrastructure (Atkins Report): www.communitytechnology.org/nsf_ci_report/
     • High-performance computing for modeling, simulation, data processing/mining
     • Instruments for observation and characterization
     • Facilities for activation, manipulation and construction
     • Knowledge management institutions for collection building and curation of data, information, literature, digital objects
     [Diagram labels: Humans; Individual & Group Interfaces & Visualization; Collaboration Services; Global Connectivity; Physical World]

  3. CERN Geneva Large Hadron Collider Cyberinfrastructure Communications of the ACM, Volume 46, Issue 11 (November 2003)

  4. High Energy and Nuclear Physics Major Links: Bandwidth Roadmap (Scenario) in Gbps. Continuing the Trend: ~1000 Times Bandwidth Growth Per Decade; We are Rapidly Learning to Use Multi-Gbps Networks Dynamically
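The ~1000x-per-decade trend on this slide implies roughly a doubling of bandwidth each year. A quick back-of-the-envelope sketch (the 10 Gbps starting wavelength is an illustrative assumption, not a figure from the slide):

```python
# Implied compound annual growth for ~1000x bandwidth growth per decade.
growth_per_decade = 1000.0
annual_factor = growth_per_decade ** (1 / 10)
print(f"Implied annual growth factor: {annual_factor:.2f}x")  # ~2.00x

# Project a single 10 Gbps wavelength forward (hypothetical starting point).
capacity_gbps = 10.0
for year in range(5):
    capacity_gbps *= annual_factor
print(f"After 5 years: {capacity_gbps:.0f} Gbps")  # ~316 Gbps
```

So "1000x per decade" and "learning to use multi-Gbps networks dynamically" are two sides of the same trend: a factor of ~2 per year compounds to three orders of magnitude over ten years.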

  5. The OptIPuter Project: Removing Bandwidth as an Obstacle in Data-Intensive Sciences
     • NSF Large Information Technology Research Proposal
     • Cal-(IT)2 and UIC Lead Campuses (Larry Smarr, PI)
     • USC, SDSU, NW, Texas A&M, Univ. Amsterdam Partnering Campuses
     • Industrial Partners: IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, BigBangwidth
     • $13.5 Million Over Five Years [www.optiputer.net]
     • Optical IP Streams From Lab Clusters to Large Data Objects
     NIH Biomedical Informatics: http://ncmir.ucsd.edu/gallery.html; NSF EarthScope and ORION Research Network: siovizcenter.ucsd.edu/library/gallery/shoot1/index.shtml

  6. NSF’s ORION: Ocean Research Interactive Observatory Network (www.neptune.washington.edu). Cyberinfrastructure in Design Phase: Fiber Optic, Satellite, Wireless

  7. UCSD is Prototyping a Campus-Scale OptIPuter
     The UCSD OptIPuter Deployment: Juniper T320 (0.320 Tbps backplane bandwidth) and Chiaro Estara (6.4 Tbps backplane bandwidth, 20X); ½ mile to CENIC; dedicated fibers between sites link Linux clusters.
     Sites: SDSC, SDSC Annex, Preuss High School, JSOE (Engineering), CRCA, SOM (Medicine), 6th College, Phys. Sci-Keck, Collocation Node, Earth Sciences (SIO)
     Source: Phil Papadopoulos, SDSC; Greg Hidley, Cal-(IT)2

  8. Ultra-Resolution Displays Utilize Photonic Multicasting, Scaling to 100 Million Pixels. Glimmerglass switch used to multicast and direct a TeraVision stream from one tile to another on the GeoWall-2, driven by Linux graphics clusters
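To get a feel for the 100-million-pixel target, a rough tile count can be sketched. The per-tile resolution of 1600x1200 below is our own assumption (a common LCD panel resolution of the era), not a figure from the slide:

```python
import math

# Tiles needed to reach 100 million pixels, assuming each GeoWall-2
# tile is driven at 1600x1200 (assumed resolution, not from the slide).
target_pixels = 100_000_000
tile_pixels = 1600 * 1200  # 1.92 Mpixels per tile

tiles_needed = math.ceil(target_pixels / tile_pixels)
print(f"Tiles needed: {tiles_needed}")  # 53
```

At that scale, replicating a stream in the optical domain with a photonic switch, rather than copying it in software on every cluster node, is what keeps the multicast tractable.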

  9. States are Acquiring Their Own Dark Fiber Networks -- Illinois’s I-WIRE and Indiana’s I-LIGHT Source: Charlie Catlett, ANL

  10. Edge and Core OptIPuter Nodes
      UIC/EVL node (diagram labels): Int’l GE, 10GE; 16x10 GE; 16x1 GE; OMNInet 10GEs; 128x128 Calient; 64x64 Glimmerglass; I-WIRE OC-192; future 64-bit cluster; 16 dual-Xeon cluster (16x1 GE); Nat’l GE, 10GE. All processors also connected by GigE to routers.

  11. The OptIPuter Will Become a National-Scale Collaboratory in 2004
      Sites: NEPTUNE; Chicago OptIPuter (StarLight, NU, UIC); NASA Goddard; NASA Ames (in discussion); SoCal OptIPuter (USC, UCI, UCSD, SDSU).
      "National Lambda Rail" Partnership serves very high-end experimental and research applications: 4 x 10Gb wavelengths initially, capable of 40 x 10Gb wavelengths at buildout.
      Source: Tom West, CEO, NLR

  12. LambdaGrids Link the World: Global Lambda Integrated Facility (GLIF)
      Map labels: New York (MANLAN); Stockholm (NorthernLight); Amsterdam (NetherLight); Dwingeloo (ASTRON/JIVE); Chicago (StarLight); Tokyo (WIDE, APAN); London (UKLight); Geneva (CERN); Prague (CzechLight); Canada (CA*net4); links of 2.5 and 10 Gbit/s via IEEAF, SURFnet, NSF, and DWDM.
      Source: Kees Neggers, SURFnet

  13. LambdaGrid Control Plane: Radical Paradigm Shift
      OptIPuter: distributed device, dynamic services; visible and accessible resources, integrated as required by apps; unlimited functionality and flexibility.
      Traditional provider services: invisible, static resources, centralized management; invisible nodes and elements; hierarchical, centrally controlled, fairly static; limited functionality and flexibility.
      Source: Joe Mambretti, Oliver Yu, George Clapp

  14. See Nov 2003 CACM For Articles on OptIPuter Technologies
