
"High Performance Collaboration – The Jump to Light Speed"

Talk to a visiting team from Intel, Calit2@UCSD, June 25, 2006. Dr. Larry Smarr, Director, California Institute for Telecommunications and Information Technology; Harry E. Gruber Professor, Dept. of Computer Science and Engineering, Jacobs School of Engineering, UCSD.


Presentation Transcript


  1. "High Performance Collaboration – The Jump to Light Speed" Talk to a Visiting Team from Intel, Calit2@UCSD, June 25, 2006. Dr. Larry Smarr, Director, California Institute for Telecommunications and Information Technology; Harry E. Gruber Professor, Dept. of Computer Science and Engineering, Jacobs School of Engineering, UCSD

  2. From "Supercomputer-Centric" to "Supernetwork-Centric" Cyberinfrastructure. [Chart: bandwidth of NYSERNet research network backbones, from a Megabit/s T1 through Gigabit/s to Terabit/s with 32x10Gb "lambdas", plotted against computing speed in GFLOPS, from the 1 GFLOP Cray2 to the 60 TFLOP Altix.] Optical WAN research bandwidth has grown much faster than supercomputer speed! Network data source: Timothy Lance, President, NYSERNet
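A back-of-the-envelope check of the growth comparison on this slide. The T1 line rate of 1.544 Mb/s is an assumption on my part; the slide only labels the axis "T1".

```python
# Rough check of slide 2's claim that optical WAN research bandwidth
# grew much faster than supercomputer speed over the same era.
T1_BPS = 1.544e6          # T1 line, ~1.5 Mb/s (assumed rate)
LAMBDAS_BPS = 32 * 10e9   # 32 x 10 Gb/s "lambdas"
CRAY2_FLOPS = 1e9         # 1 GFLOP Cray2
ALTIX_FLOPS = 60e12       # 60 TFLOP Altix

bandwidth_growth = LAMBDAS_BPS / T1_BPS
compute_growth = ALTIX_FLOPS / CRAY2_FLOPS
print(f"bandwidth grew ~{bandwidth_growth:,.0f}x, compute ~{compute_growth:,.0f}x")
```

With these numbers, backbone bandwidth grew by a factor of roughly 200,000 versus roughly 60,000 for supercomputer speed, consistent with the slide's headline.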

  3. National Lambda Rail (NLR) and TeraGrid Provide the Cyberinfrastructure Backbone for U.S. Researchers. NSF's TeraGrid has a 4 x 10Gb lambda backbone; NLR's 4 x 10Gb lambdas are initially capable of 40 x 10Gb wavelengths at buildout. NLR links two dozen state and regional optical networks and international collaborators, and DOE, NSF, & NASA are using NLR. [Map: NLR nodes include Seattle, Portland, Boise, UC-TeraGrid, UIC/NW-Starlight (Chicago), Ogden/Salt Lake City, Cleveland, New York City, Denver, Pittsburgh, San Francisco, Washington, DC, Kansas City, Raleigh, Albuquerque, Tulsa, Los Angeles, Atlanta, San Diego, Phoenix, Dallas, Baton Rouge, Las Cruces/El Paso, Jacksonville, Pensacola, Houston, and San Antonio.]

  4. The OptIPuter Project – Creating High Resolution Portals Over Dedicated Optical Channels to Global Science Data
  • NSF Large Information Technology Research proposal
  • Calit2 (UCSD, UCI) and UIC lead campuses – Larry Smarr, PI
  • Partnering campuses: SDSC, USC, SDSU, NCSA, NW, TA&M, UvA, SARA, NASA Goddard, KISTI, AIST, CRC (Canada), CICESE (Mexico)
  • Industrial partners: IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent
  • $13.5 million over five years – now in the fourth year
  • NIH Biomedical Informatics Research Network; NSF EarthScope and ORION

  5. OptIPuter Software Architecture – a Service-Oriented Architecture Integrating Lambdas Into the Grid. [Layer diagram: distributed applications / web services; visualization and telescience (SAGE, JuxtaView); data services (Vol-a-Tile, LambdaRAM); Distributed Virtual Computer (DVC) API, runtime library, and configuration; DVC services – communication, job scheduling, and core services (resource identify/acquire, namespace management, security management, high speed communication, storage services, RobuStore); PIN/PDC discovery and control; transports – Globus GSI, XIO, GRAM, GTP, XCP, UDT, CEP, LambdaStream, RBUDP; IP over lambdas.] Source: Andrew Chien, UCSD
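The slide names the DVC layers but not their interfaces. A purely hypothetical sketch of the layering idea follows: an application asks a Distributed Virtual Computer layer to acquire endpoints and a dedicated light path, then schedules work over them. Every class and method name here is invented for illustration; the real DVC API differs.

```python
# Hypothetical sketch of the DVC layering from slide 5 (names invented):
# applications acquire resources and dedicated lambdas through a
# virtual-computer abstraction instead of managing the network directly.
from dataclasses import dataclass, field

@dataclass
class LightPath:
    endpoint_a: str
    endpoint_b: str
    gbps: int = 10               # one dedicated 10 Gb/s wavelength

@dataclass
class DistributedVirtualComputer:
    nodes: list = field(default_factory=list)
    paths: list = field(default_factory=list)

    def acquire_node(self, name: str) -> str:
        # stands in for the "resource identify/acquire" core service
        self.nodes.append(name)
        return name

    def acquire_lambda(self, a: str, b: str) -> LightPath:
        # stands in for setting up a dedicated optical channel
        path = LightPath(a, b)
        self.paths.append(path)
        return path

    def schedule(self, job: str, node: str) -> str:
        # stands in for DVC job scheduling (stub)
        return f"{job} scheduled on {node}"

dvc = DistributedVirtualComputer()
viz = dvc.acquire_node("optiportal.ucsd")
dvc.acquire_node("storage.uic")
dvc.acquire_lambda("ucsd", "uic")
print(dvc.schedule("JuxtaView render", viz))
```

The point of the abstraction, as the slide's stack suggests, is that applications program against DVC services while the runtime maps them onto lambdas and transports such as GTP, UDT, or LambdaStream.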

  6. OptIPuter Scalable Adaptive Graphics Environment (SAGE) Allows Integration of HD Streams. OptIPortal – termination device for the OptIPuter global backplane. Photo: David Lee, NCMIR, UCSD

  7. OptIPortal – Termination Device for the OptIPuter Global Backplane
  • 20 dual-CPU nodes, 20 24" monitors, ~$50,000
  • 1/4 teraflop, 5 terabytes of storage, 45 megapixels – nice PC!
  • Scalable Adaptive Graphics Environment (SAGE), Jason Leigh, EVL-UIC
  Source: Phil Papadopoulos, SDSC, Calit2
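The 45-megapixel figure on this slide checks out against the panel count, assuming each 24" monitor is a 1920x1200 panel (the per-panel resolution is my assumption; the slide gives only the total).

```python
# Sanity check of slide 7's OptIPortal pixel count, assuming each of the
# twenty 24" monitors is a 1920x1200 panel (assumed; not on the slide).
monitors = 20
width, height = 1920, 1200
megapixels = monitors * width * height / 1e6
print(f"{megapixels:.1f} megapixels across {monitors} monitors")
```

Twenty such panels give about 46 megapixels, matching the slide's "45 Mega Pixels" to within rounding.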

  8. The New Optical Core of the UCSD Campus-Scale Testbed: Evaluating Packet Routing versus Lambda Switching. Goals by 2007: >= 50 endpoints at 10 GigE; >= 32 packet switched; >= 32 switched wavelengths; >= 300 connected endpoints. Approximately 0.5 Tbit/s arrives at the "optical" center of campus. Switching will be a hybrid combination of packet, lambda, and circuit – OOO and packet switches (Lucent, Glimmerglass, Force10) already in place. Funded by an NSF MRI grant.
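The slide's aggregate figure follows directly from its endpoint goal: 50 endpoints at 10 Gigabit Ethernet each is 0.5 Tbit/s into the optical center of campus.

```python
# Slide 8's numbers are self-consistent: >= 50 endpoints at 10 GigE
# yields roughly 0.5 Tbit/s aggregate at the campus optical core.
endpoints = 50
gige_per_endpoint = 10                      # Gb/s per endpoint
total_tbps = endpoints * gige_per_endpoint / 1000
print(f"{total_tbps} Tbit/s aggregate")     # 0.5 Tbit/s
```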

  9. Calit2/SDSC Proposal to Create a UC Cyberinfrastructure of OptIPuter "On-Ramps" to TeraGrid Resources. OptIPuter + CalREN-XD + TeraGrid = "OptiGrid", creating a critical mass of end users on a secure LambdaGrid. [Map: UC Davis, UC Berkeley, UC San Francisco, UC Merced, UC Santa Cruz, UC Los Angeles, UC Riverside, UC Santa Barbara, UC Irvine, UC San Diego.] Source: Fran Berman, SDSC; Larry Smarr, Calit2

  10. Creating a North American Superhighway for High Performance Collaboration Next Step: Adding Mexico to Canada’s CANARIE and the U.S. National Lambda Rail

  11. Countries are Aggressively Creating Gigabit Services: Interactive Access to CAMERA Data System. [GLIF map, www.glif.is; GLIF created in Reykjavik, Iceland, 2003.] Visualization courtesy of Bob Patterson, NCSA.

  12. First Remote Interactive High Definition Video Exploration of Deep Sea Vents – a Canadian-U.S. collaboration. Source: John Delaney & Deborah Kelley, UWash

  13. PI Larry Smarr

  14. Marine Genome Sequencing Project – Measuring the Genetic Diversity of Ocean Microbes. CAMERA will include all Sorcerer II metagenomic data.

  15. Calit2's Direct Access Core Architecture Will Create a Next-Generation Metagenomics Server. [Diagram: the CAMERA complex + web services couples a web portal to a database farm, a flat-file server farm, a dedicated compute farm (1,000 CPUs), and a local cluster user environment over a 10 GigE fabric, with direct-access lambda connections to the TeraGrid backplane (10,000s of CPUs).] Source: Phil Papadopoulos, SDSC, Calit2

  16. Analysis Data Sets, Data Services, Tools, and Workflows
  • Genomic and metagenomic data: assemblies of metagenomic data (e.g., GOS, JGI CSP); annotations
  • "All-against-all" alignments of ORFs, updated periodically
  • Gene clusters and associated data: profiles, multiple-sequence alignments, HMMs, phylogenies, peptide sequences
  • Data services: "raw" and specialized analysis data; rich query facilities
  • Tools and workflows: navigate and sift raw and analysis data; publish workflows and develop new ones; prioritize features via dialogue with the community
  Source: Saul Kravitz, Director of Software Engineering, J. Craig Venter Institute
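The "all-against-all" alignments mentioned on this slide scale quadratically with the number of ORFs, which is why they are precomputed and updated periodically rather than run on demand. A small illustration (the example ORF counts are mine, not from the slide):

```python
# All-against-all comparison of n sequences means n*(n-1)/2 unordered
# pairs, so the work grows quadratically with collection size.
def all_against_all_pairs(n: int) -> int:
    return n * (n - 1) // 2

for n in (1_000, 1_000_000):  # illustrative ORF counts
    print(f"{n:>9,} ORFs -> {all_against_all_pairs(n):,} pairwise alignments")
```

A thousand ORFs already require about half a million pairwise alignments; a million ORFs require about half a trillion, which motivates serving the precomputed results through the data services layer.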

  17. Calit2 and the Venter Institute Will Combine Telepresence with Remote Interactive Analysis. [Diagram labels: 25 miles; Venter Institute; OptIPuter visualized data; HDTV over lambda.] Live demonstration of 21st-century national-scale team science.

  18. Calit2 Works with CENIC to Provide the California Optical Core for CineGrid
  • Calit2's CineGrid team is working with the cinema industry in LA and SF
  • Prototype of CineGrid: Calit2 UCI and Calit2 UCSD; extending the SoCal OptIPuter to the USC School of Cinema-Television
  • Partnering with SFSU's Institute for Next Generation Internet; UCB digital archive of films; discussions with CITRIS
  • In addition, 1Gb and 10Gb connections to: Seattle, Asia, Australia, New Zealand; Chicago, Europe, Russia, China; Tijuana, Rosarita Beach, Ensenada

  19. First Trans-Pacific Super High Definition Telepresence Meeting in the New Calit2 Digital Cinema Auditorium – Lays the Technical Basis for Global Digital Cinema. [Photo labels: Keio University President Anzai; UCSD Chancellor Fox; Sony; NTT; SGI.]
