The OptIPuter Project – Removing Bandwidth as an Obstacle In Data Intensive Sciences

Opening Remarks

OptIPuter Team Meeting

University of California, San Diego

February 6, 2003

Dr. Larry Smarr

Director, California Institute for Telecommunications and Information Technologies

Harry E. Gruber Professor,

Dept. of Computer Science and Engineering

Jacobs School of Engineering, UCSD

The Move to Data-Intensive Science & Engineering: e-Science Community Resources

Sloan Digital Sky Survey

ALMA

LHC

ATLAS

Why Optical Networks Are Emerging as the 21st Century Driver for the Grid

Scientific American, January 2001

Parallel Lambdas Will Drive This Decade the Way Parallel Processors Drove the 1990s

A LambdaGrid Will Be the Backbone for an e-Science Network

[Figure: layered LambdaGrid architecture. Apps and middleware run on clusters; a control plane dynamically allocates lightpaths across the switch fabrics, with physical-layer monitoring underneath.]

Source: Joe Mambretti, NU

The Biomedical Informatics Research Network: a Multi-Scale Brain Imaging Federated Repository

BIRN Test-beds: Multiscale Mouse Models of Disease, Human Brain Morphometrics, and FIRST BIRN (a 10-site project for fMRIs of Schizophrenics)

NIH Plans to Expand to Other Organs and Many Laboratories

GEON’s Data Grid Team Has Strong Overlap with BIRN and OptIPuter
  • Learning From The BIRN Project
    • The GEON Grid:
      • Heterogeneous Networks, Compute Nodes, Storage
    • Deploy Grid And Cluster Software Across GEON
    • Peer-to-Peer Information Fabric for Sharing:
      • Data, Tools, And Compute Resources

NSF ITR Grant: $11.25M, 2002–2007

Two Science “Testbeds”

Broad Range Of Geoscience Data Sets

Source: Chaitan Baru, SDSC, Cal-(IT)2


NSF’s EarthScope

Rollout Over 14 Years Starting With Existing Broadband Stations

Data Intensive Scientific Applications Require Experimental Optical Networks
  • Large Data Challenges in Neuro and Earth Sciences
    • Each Data Object is 3D and Gigabytes
    • Data are Generated and Stored in Distributed Archives
    • Research is Carried Out on Federated Repository
  • Requirements
    • Computing Requirements → PC Clusters
    • Communications → Dedicated Lambdas Over Fiber
    • Data → Large Peer-to-Peer Lambda-Attached Storage
    • Visualization → Collaborative Volume Algorithms
  • Response
    • OptIPuter Research Project
OptIPuter Inspiration: Node of a 2009 PetaFLOPS Supercomputer

[Figure: node diagram. Two VLIW/RISC cores (40 GFLOPS each at 10 GHz), each with an 8 MB second-level cache, share highly interleaved DRAM (16 GB in 64/256 MB banks, plus a 4 GB highly interleaved store) over a coherent crossbar with 24-byte-wide, 240 GB/s paths; a multi-lambda optical network interface supplies 5 Terabits/s off the node.]

Updated From Steve Wallach, Supercomputing 2000 Keynote

Global Architecture of a 2009 COTS PetaFLOPS System

[Figure: 64 multi-die multi-processor boxes (128 die per box, 4 CPUs per die) interconnected through a central all-optical switch; link lengths measured in meters keep delays near 50 nanoseconds; I/O systems become Grid-enabled over the LAN/WAN.]

Source: Steve Wallach, Supercomputing 2000 Keynote
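
The headline number follows directly from the counts on these two slides; a quick arithmetic check (assuming all 64 boxes shown on the switch are populated):

```python
# Sanity check of the 2009 COTS PetaFLOPS arithmetic from the Wallach
# slides (assumption: all 64 boxes on the all-optical switch are populated).
boxes = 64            # boxes attached to the all-optical switch
die_per_box = 128
cpus_per_die = 4
gflops_per_cpu = 40   # one VLIW/RISC core at 10 GHz

total_gflops = boxes * die_per_box * cpus_per_die * gflops_per_cpu
print(f"{total_gflops / 1e6:.2f} PFLOPS")  # -> 1.31 PFLOPS
```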

From SuperComputers to SuperNetworks: Changing the Grid Design Point
  • The TeraGrid is Optimized for Computing
    • 1024 IA-64 Node Linux Cluster
    • Assume 1 GigE per Node = 1 Terabit/s I/O
    • Grid Optical Connection 4x10Gig Lambdas = 40 Gigabit/s
    • Optical Connections are Only 4% Bisection Bandwidth
  • The OptIPuter is Optimized for Bandwidth
    • 32 IA-64 Node Linux Cluster
    • Assume 1 GigE per Processor = 32 Gigabit/s I/O
    • Grid Optical Connection 4x10GigE = 40 Gigabit/s
    • Optical Connections are Over 100% Bisection Bandwidth (see the arithmetic sketch below)
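
Both percentages come straight out of this arithmetic; a one-function sketch of the comparison (all figures are the slide's, not measurements):

```python
# Ratio of external optical bandwidth to aggregate cluster I/O for the two
# designs above; all figures are taken from the slide.
def bisection_ratio(nodes: int, gbps_per_node: float, optical_gbps: float) -> float:
    return optical_gbps / (nodes * gbps_per_node)

print(f"TeraGrid:  {bisection_ratio(1024, 1, 40):.0%}")  # -> 4%
print(f"OptIPuter: {bisection_ratio(32, 1, 40):.0%}")    # -> 125%
```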
Convergence of Networking Fabrics
  • Today's Computer Room
    • Router For External Communications (WAN)
    • Ethernet Switch For Internal Networking (LAN)
    • Fibre Channel For Internal Networked Storage (SAN)
  • Tomorrow's Grid Room
    • A Unified Architecture Of LAN/WAN/SAN Switching
    • More Cost Effective
      • One Network Element vs. Many
    • One Sphere of Scalability
    • ALL Resources are GRID Enabled
      • Layer 3 Switching and Addressing Throughout

Source: Steve Wallach, Chiaro Networks

The UCSD OptIPuter Deployment

The OptIPuter Experimental UCSD Campus Optical Network

[Figure: map of the experimental campus optical network, about ½ mile across. Phase I (Fall '02) and Phase II (2003) fiber converges on a collocation point ("Node M") housing the Chiaro router, reaching SDSC and the SDSC Annex, JSOE (Engineering), CRCA (Arts), SOM (Medicine), the undergraduate Sixth College, Chemistry, Phys. Sci.–Keck, SIO (Earth Sciences), and Preuss High School; a production router provides the connection toward CENIC.]

Source: Phil Papadopoulos, SDSC; Greg Hidley, Cal-(IT)2

Metro Optically Linked Visualization Walls with Industrial Partners Set Stage for Federal Grant
  • Driven by SensorNets Data
    • Real Time Seismic
    • Environmental Monitoring
    • Distributed Collaboration
    • Emergency Response
  • Linked UCSD and SDSU
    • Dedication March 4, 2002

Linking Control Rooms at UCSD and SDSU over 44 Miles of Cox Fiber

Partners: Cox, Panoram, SAIC, SGI, IBM, TeraBurst Networks, SD Telecom Council

National Light Rail: Serving Very High-End Experimental and Research Applications
  • Extension of CalREN-XD Dark Fiber Network
    • Serves Network Researchers in California Research Institutions
      • Four UC Institutes, USC/ISI, Stanford and CalTech
    • 10Gb Wavelengths (OC-192c or 10G LANPHY)
    • Dark Fiber
    • Point-Point, Point-MultiPoint 1G Ethernet Possible
  • NLR is a Dark Fiber National Footprint
    • 4 × 10Gb Wavelengths Initially
    • Capable of 40 10Gb Wavelengths at Build-Out
    • Partnership model

John Silvester, Dave Reese, Tom West-CENIC

National Light Rail Footprint Layer 1 Topology

[Figure: Layer 1 topology map. Fiber routes connect 15808 terminal, regen, or OADM sites (OpAmp sites not shown) at SEA, POR, SAC, SVL, FRE, LAX, SDG, PHO, OGD, DEN, KAN, DAL, STR, WAL, NAS, STH, JAC, ATL, RAL, CHI, CLE, PIT, WDC, NYC, and BOS.]

John Silvester, Dave Reese, Tom West-CENIC

Amplified Collaboration Environments

  • Collaborative Passive Stereo Display
  • Collaborative Tiled Display
  • AccessGrid Multisite Video Conferencing
  • Collaborative Touch Screen Whiteboard
  • Wireless Laptops & Tablet PCs To Steer The Displays

Source: Jason Leigh

The OptIPuter 2003

Experimental Network

Wide Array of Vendors

OptIPuter Software Research
  • Near-Term Goals:
    • Build Software To Support Applications With Traditional Models
      • High Speed IP Protocol Variations (RBUDP, SABUL, …)
      • Switch Control Software For DWDM Management And Dynamic Setup
      • Distributed Configuration Management For OptIPuter Systems
  • Long-Term Goals:
    • System Model Which Supports:
      • Grid
      • Single System
      • Multi-System Views
    • Architectures Which Can:
      • Harness High Speed DWDM
      • Exploit Flexible Dispersion Of Data And Computation
    • New Communication Abstractions & Data Services
      • Make Lambda-Based Communication Easily Usable
      • Use DWDM to Enable Uniform Performance View Of Storage

Source: Andrew Chien, UCSD

Photonic Data Services & OptIPuter

6. Data Intensive Applications (UCI)
5a. Storage (UCSD) / 5b. Data Services – SOAP, DWTP (UIC/LAC)
4. Transport – TCP, UDP, SABUL, … (USC, UIC)
3. IP
2. Photonic Path Services – ODIN, THOR, … (NW)
1. Physical

Source: Robert Grossman, UIC/LAC

OptIPuter is Exploring Quanta as a High Performance Middleware
  • Quanta Is A High-Performance Networking Toolkit / API
  • Quanta Uses Reliable Blast UDP:
    • Assumes An Over-Provisioned Or Dedicated Network
    • Excellent For Photonic Networks
    • Don't Try This On The Commodity Internet!
      • It Is Fast!
      • It Is Very Predictable
      • We Provide An Equation To Predict Performance
    • It Is Most Suited For Transferring Very Large Payloads
  • RBUDP, SABUL, and Tsunami Are All Similar Protocols That Use UDP For Bulk Data Transfer (a sender sketch follows the source note below)

Source: Jason Leigh, UIC
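
A minimal sketch of the Reliable Blast UDP idea, using hypothetical helper names rather than the actual Quanta API: blast the payload as numbered UDP datagrams with no per-datagram acknowledgments, then use a TCP side channel to learn which datagrams were lost and re-blast only those.

```python
# Sketch of an RBUDP-style sender (hypothetical, not the Quanta API).
# A matching receiver must listen on udp_port for data and on tcp_port
# for the control channel, replying with a bitmap of missing datagrams.
import socket
import struct

CHUNK = 1400  # payload bytes per datagram (assumed MTU-sized)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        part = sock.recv(n - len(buf))
        if not part:
            raise ConnectionError("control channel closed")
        buf += part
    return buf

def rbudp_send(payload: bytes, host: str, udp_port: int, tcp_port: int) -> None:
    chunks = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tcp = socket.create_connection((host, tcp_port))
    try:
        tcp.sendall(struct.pack("!I", len(chunks)))  # announce datagram count
        pending = set(range(len(chunks)))
        while pending:
            # Blast phase: fire datagrams back-to-back with no per-packet
            # ACKs; sane only on a dedicated or over-provisioned path.
            for seq in sorted(pending):
                udp.sendto(struct.pack("!I", seq) + chunks[seq], (host, udp_port))
            tcp.sendall(b"DONE")
            # Recovery phase: receiver reports losses as a bitmap over TCP
            # (bit set = datagram missing); re-blast only those.
            bitmap = _recv_exact(tcp, (len(chunks) + 7) // 8)
            pending = {s for s in pending if bitmap[s // 8] & (1 << (s % 8))}
    finally:
        udp.close()
        tcp.close()
```

Because the blast phase never backs off, throughput is roughly the sending rate scaled by the fraction of datagrams delivered on the first pass, which is why performance is so predictable on a dedicated lambda and so antisocial on the shared Internet.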

XCP Is A New Congestion Control Scheme Which Is Good for Gigabit Flows

  • Better Than TCP:
    • Almost Never Drops Packets
    • Converges To Available Bandwidth Very Quickly, ~1 Round-Trip Time
    • Fair Over Large Variations In Flow Bandwidth and RTT
  • Supports Existing TCP Semantics:
    • Replaces Only Congestion Control; Reliability Unchanged
    • No Change To Application/Network API
  • Status:
    • To Date: Simulations and a SIGCOMM Paper (MIT). See Dina Katabi, Mark Handley, and Charles Rohrs, "Congestion Control for High Bandwidth-Delay Product Networks," ACM SIGCOMM 2002, August 2002. http://ana.lcs.mit.edu/dina/XCP/
    • Current: Developing a Protocol Implementation and Extending Simulations (ISI)

Source: Aaron Falk, Joe Bannister, ISI USC
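
For a sense of how lightweight the scheme is, here is a sketch of XCP's router-side efficiency controller as described in the paper cited above (the constants are the paper's; the names and units are my own):

```python
# Router-side efficiency controller from XCP (Katabi et al., SIGCOMM 2002):
# once per average RTT, compute aggregate feedback from spare bandwidth and
# the persistent queue, then divide it among the packets seen that interval.
ALPHA, BETA = 0.4, 0.226  # stability constants from the paper

def aggregate_feedback(capacity_Bps: float, input_rate_Bps: float,
                       persistent_queue_B: float, avg_rtt_s: float) -> float:
    """Bytes of rate change to hand out over the next control interval.

    Positive feedback pushes senders toward the spare capacity;
    negative feedback drains the standing queue.
    """
    spare_Bps = capacity_Bps - input_rate_Bps
    return ALPHA * avg_rtt_s * spare_Bps - BETA * persistent_queue_B
```

Per-packet shares of this aggregate are written into an XCP congestion header and echoed back by the receiver, which is why senders can converge on the available bandwidth in roughly one round trip.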

Multi-Lambda Security Research
  • Security Frequently Defined Through Three Measures:
    • Integrity, Confidentiality, And Reliability (”Uptime”)
  • Can These Measures Be Enhanced By Routing Transmissions Over Multiple Lambdas Of Light?
  • Can Confidentiality Be Improved By Dividing The Transmission Over Multiple Lambdas And Using “Cheap” Encryption?
  • Can Integrity Be Ensured Or Reliability Be Improved Through Sending Redundant Transmissions And Comparing?

Source: Goodrich, Karin
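
To make the confidentiality question concrete, an illustrative sketch (an example construction, not an OptIPuter design): XOR secret sharing splits a message across n lambdas so that a tap on any single lambda sees only uniformly random bytes, with no heavyweight encryption required.

```python
# Illustrative XOR secret sharing across n lambdas (example only, not an
# OptIPuter protocol): any single share is uniformly random, so tapping one
# lambda reveals nothing; all n shares are required to reconstruct.
import os
from typing import List

def split_shares(message: bytes, n: int) -> List[bytes]:
    shares = [os.urandom(len(message)) for _ in range(n - 1)]
    last = message
    for s in shares:                      # XOR the random pads into the message
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]                # one share per lambda

def combine_shares(shares: List[bytes]) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

assert combine_shares(split_shares(b"federated brain volume", 4)) == \
    b"federated brain volume"
```

Sending a duplicate set of shares over disjoint lambdas and comparing the reconstructions would address the redundancy-based integrity and reliability questions in the same spirit.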

Research on Developing an Integrated Control Plane

[Figure: multiple user data planes (megabit streams, gigabit streams, bursty traffic, tera/peta streams) mapped onto optical burst switching, lambda inverse multiplexing, logical label switching, and optical lambda switching, all coordinated by an integrated control plane.]

Source: Oliver Yu, UIC

OptIPuter Transforms Individual Laboratory Visualization, Computation, & Analysis Facilities

[Figure: application montage. Fast polygon and volume rendering with stereographics + GeoWall = 3D applications:
  • Anatomy – Visible Human Project (NLM, Brooks AFB); SDSC Volume Explorer
  • Neuroscience – Dave Nadeau, SDSC, BIRN; SDSC Volume Explorer
  • Earth Science – GeoFusion GeoMatrix Toolkit
  • Underground Earth Science – Rob Mellors and Eric Frost, SDSU; SDSC Volume Explorer
  • The Preuss School UCSD OptIPuter Facility]