
High-Performance Campus Cyberinfrastructure for Bridging End-User Laboratories to Data-Intensive Sources

This presentation by Larry Smarr describes a high-performance campus cyberinfrastructure for effectively connecting end-user laboratories to data-intensive sources. It covers the "OptIPlatform" end-to-end 10Gbps lightpath cloud, which links HD/4K video cameras, HD/4K telepresence, scientific instruments, and HPC end-user OptIPortals.



Presentation Transcript


  1. A High-Performance Campus-Scale Cyberinfrastructure for Effectively Bridging End-User Laboratories to Data-Intensive Sources. Presentation by Larry Smarr to the NSF Campus Bridging Workshop, April 7, 2010, University Place Conference Center, Indianapolis, IN. Philip Papadopoulos, SDSC; Larry Smarr, Calit2; University of California, San Diego.

  2. Academic Research "OptIPlatform" Cyberinfrastructure: An End-to-End 10Gbps Lightpath Cloud. Components: HD/4K video cams • HD/4K telepresence • instruments • HPC • end-user OptIPortal • 10G lightpaths • National LambdaRail • campus optical switch • data repositories & clusters • HD/4K video images.

  3. "Blueprint for the Digital University": Report of the UCSD Research Cyberinfrastructure Design Team, April 24, 2009. • Focus on data storage and data curation • These become the centralized components • Other common elements "plug in". research.ucsd.edu/documents/rcidt/RCIDTReportFinal2009.pdf

  4. Campus Bridging Preparations Needed to Accept CENIC CalREN Handoff to Campus Source: Jim Dolgonas, CENIC

  5. Current UCSD Prototype Optical Core: Bridging End-Users to CENIC L1, L2, L3 Services. Endpoints: • ≥ 60 endpoints at 10 GigE • ≥ 32 packet switched • ≥ 32 switched wavelengths • ≥ 300 connected endpoints. Approximately 0.5 Tbit/s arrives at the "optical" center of campus. Switching is a hybrid of packet, lambda, and circuit (OOO and packet switches: Lucent, Glimmerglass, Force10). Source: Phil Papadopoulos, SDSC/Calit2 (Quartzite PI, OptIPuter co-PI). Quartzite Network MRI #CNS-0421555; OptIPuter #ANI-0225642.
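As a back-of-the-envelope check on this slide's figures, the minimum endpoint count alone accounts for the quoted aggregate bandwidth. This sketch assumes the "≥ 60 endpoints at 10 GigE" are the links feeding the optical core; the endpoint count and link rate come from the slide, the rest is arithmetic.

```python
# Aggregate bandwidth implied by the slide's minimum endpoint count.
GBPS_PER_ENDPOINT = 10      # 10 GigE links, per the slide
MIN_10G_ENDPOINTS = 60      # ">= 60 endpoints at 10 GigE", per the slide

aggregate_gbps = MIN_10G_ENDPOINTS * GBPS_PER_ENDPOINT
aggregate_tbps = aggregate_gbps / 1000

print(f"Aggregate capacity: {aggregate_gbps} Gbit/s = {aggregate_tbps} Tbit/s")
# 600 Gbit/s, consistent with the "approximately 0.5 Tbit/s" quoted above
```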

  6. Calit2 Sunlight Optical Exchange Contains Quartzite 3-Level Switch (10:45 am, Feb. 21, 2008).

  7. UCSD Campus Investment in Fiber and Networks Enables High-Performance Campus Bridging CI. Connected resources: CENIC, NLR, I2 DCN • N × 10GbE • DataOasis (central storage) • Gordon (HPC system) • Cluster Condo • Triton (petadata analysis) • scientific instruments • digital data collections • campus lab cluster • OptIPortal tile display wall. Source: Philip Papadopoulos, SDSC, UCSD.

  8. Rapid Evolution of 10GbE Port Prices Makes Campus-Scale 10Gbps CI Affordable. • Port pricing is falling • Density is rising, dramatically • Cost of 10GbE approaching cluster HPC interconnects. Price points, 2005–2010: $80K/port, Chiaro (60 max) → $5K, Force 10 (40 max) → ~$1,000 (300+ max) → $500, then $400, Arista 48-port. Source: Philip Papadopoulos, SDSC, UCSD.
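The first and last price points on this slide imply a striking compound decline rate. This sketch uses only the slide's own numbers ($80K/port in 2005, $400/port in 2010); the compound-rate formula is standard arithmetic, not part of the presentation.

```python
# Decline rate of 10GbE port pricing implied by the slide's endpoints.
start_price, start_year = 80_000, 2005   # Chiaro, per the slide
end_price, end_year = 400, 2010          # Arista 48-port, per the slide

years = end_year - start_year
total_drop = start_price / end_price          # 200x cheaper overall
annual_factor = total_drop ** (1 / years)     # ~2.9x cheaper per year

print(f"{total_drop:.0f}x drop over {years} years "
      f"(~{annual_factor:.1f}x per year)")
```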
