This document outlines the initial stages of the OptIPuter Network, focusing on the deployment of OptIPuter nodes, including clusters, storage, and visualization capabilities. It details the connectivity between nodes through optical routers and switches, the planned phases of network deployments from 2002-2004, and anticipated upgrades for enhanced performance. The expansion aims to interconnect multiple OptIPuter sites and establish a dedicated research network while adhering to financial and operational requirements.
OptIPuter Networks: Overview of Initial Stages, Including OptIPuter Nodes, OptIPuter Networks, and OptIPuter Expansion
OptIPuter All Hands Meeting, February 6-7, UCSD
Compute + Data + Viz Grid: Building Block
(Figure: node components interconnected by a commodity GigE switch)
• A generic OptIPuter node is one or more of:
  • Cluster: 16 – 128 nodes (160 GF – 1.2 TF)
  • Storage: 0.5 TB – 10 TB
  • Visualization: desktop, wall, immersive
  • Specialized data source/sink instruments
• Connected to other nodes via the OptIPuter Network (see the sketch below)
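To make the building-block idea concrete, here is a minimal Python sketch (not from the original slides) that models a node as some combination of cluster, storage, visualization, and instrument resources behind a commodity GigE switch. The class, field names, and example node are hypothetical; only the capacity ranges come from the bullets above.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the OptIPuter building block: a node is one or more
# of cluster, storage, and visualization resources behind a commodity GigE switch.

@dataclass
class OptIPuterNode:
    name: str
    cluster_nodes: int = 0        # 16-128 compute nodes (160 GF - 1.2 TF)
    storage_tb: float = 0.0       # 0.5-10 TB of disk
    viz: List[str] = field(default_factory=list)          # e.g. "desktop", "wall", "immersive"
    instruments: List[str] = field(default_factory=list)  # specialized data sources/sinks

    def roles(self) -> List[str]:
        """Which building-block roles this node provides."""
        roles = []
        if self.cluster_nodes:
            roles.append("compute")
        if self.storage_tb:
            roles.append("storage")
        if self.viz:
            roles.append("visualization")
        if self.instruments:
            roles.append("instrument")
        return roles

# Example (hypothetical): a node combining a 64-way cluster, 2 TB of disk, and a display wall.
node = OptIPuterNode("example node", cluster_nodes=64, storage_tb=2.0, viz=["wall"])
print(node.roles())  # ['compute', 'storage', 'visualization']
```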
Nodes Connected by Optical Routers or Switches
(Figure: UCSD OptIPuter topology – site switches at SDSC, SIO, SDSU, Chemistry, Preuss, 6th College, Engineering, Arts, and the School of Medicine, plus a high-end DB server, all connected through a central Chiaro router)
Supported traffic patterns (see the sketch below):
• Cluster – Disk
• Disk – Disk
• Viz – Disk
• DB – Cluster
• Cluster – Cluster
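A rough sketch, assuming the star topology shown in the figure: every inter-site flow crosses its local site switch and the central Chiaro router, and the supported flows are the endpoint pairings listed above. The site list comes from the figure labels; the function and constant names are illustrative only.

```python
# Hypothetical model of the star topology: site switches hang off a central
# Chiaro router, and traffic flows are pairings of endpoint types.

SITE_SWITCHES = ["SDSC", "SIO", "SDSU", "Chemistry", "Preuss", "6th College",
                 "Engineering", "Arts", "School of Medicine"]

# Endpoint pairings the network is meant to carry (from the bullet list above).
TRAFFIC_PATTERNS = [
    ("cluster", "disk"),
    ("disk", "disk"),
    ("viz", "disk"),
    ("db", "cluster"),
    ("cluster", "cluster"),
]

def path(src_site: str, dst_site: str) -> list:
    """Inter-site flows traverse the local switches and the central router."""
    if src_site == dst_site:
        return [src_site]
    return [src_site, "Chiaro router", dst_site]

print(path("SIO", "SDSC"))  # ['SIO', 'Chiaro router', 'SDSC']
```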
Initial UCSD OptIPuter Deployment
(Figure: campus map at roughly ½-mile scale showing the central hub/router, Phase I (Fall/Winter 2002-03) and Phase II (2003) links to SDSC, SDSC Annex, Preuss, JSOE, CRCA, SOM, 6th College, Phys. Sci – Keck, Node M, and SIO, and a connection to other OptIPuter sites)
UCSD OptIPuter Network Expansions
• Initial anticipated network deployments (Fall 2002, Winter 2003)
  • 4 GigE connections (using 4 fiber pairs) to high-performance nodes
  • 1 GigE connection (using 1 fiber pair) to other nodes
  • Each node's OPPP (OptIPuter Point of Presence) is initially a local GigE switch with 4 bondable GigE (GBIC) uplinks
  • These node uplinks connect to a central GigE aggregation switch
  • This forms a separate research network (not connected to the campus network)
• During 2003, upgrade the OptIPuter network as required (see the capacity sketch after this list) …
  • N x 10 GigE (over existing fiber pairs?, N could be up to 4) … and/or
  • Add lambda gear to conserve fiber (supporting 1 or 10 GigE x N lambdas)
  • Add management to support dynamic lambda allocation
  • Add an edge (of OptIPuter) router for connecting to campus or remote OptIPuter sites
• Late 2003 and 2004
  • Interconnect multiple OptIPuter sites (NU, EVL, USC/ISI, UCI, …)
  • Expand network/lambda management to span sites
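The capacity figures in the plan can be tied together with a short back-of-the-envelope sketch. The 4 bondable GigE uplinks per OPPP and the N x 10 GigE upgrade option (N up to 4) come from the slide; the function names and the assumption that links aggregate linearly are hypothetical.

```python
# Back-of-the-envelope sketch of planned OPPP uplink capacities (hypothetical helper names).

GIGE_GBPS = 1.0
TEN_GIGE_GBPS = 10.0

def initial_oppp_capacity_gbps(bonded_gige_links: int = 4) -> float:
    """Initial deployment: up to 4 bonded GigE uplinks over separate fiber pairs."""
    return bonded_gige_links * GIGE_GBPS

def upgraded_capacity_gbps(n_links: int = 4) -> float:
    """2003 upgrade option: N x 10 GigE, possibly carried on lambdas to conserve fiber."""
    return n_links * TEN_GIGE_GBPS

print(initial_oppp_capacity_gbps())   # 4.0 Gb/s for a high-performance node initially
print(upgraded_capacity_gbps(4))      # 40.0 Gb/s after a full N x 10 GigE upgrade
```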
NLR Footprint and Layer 1 Topology
(Figure: national fiber-route map; legend marks 15808 terminal, regen, or OADM sites, with OpAmp sites not shown; nodes include SEA, POR, SAC, SVL, FRE, LAX, SDG, PHO, OLG, DAL, DEN, OGD, KAN, CHI, STR, NAS, ATL, RAL, WAL, PIT, CLE, WDC, NYC, and BOS)
New OptIPuter Participants and Nodes
• Research requirements
  • Contribute to existing OptIPuter goals as defined in the OptIPuter Proposal and the NSF Statement of Work
  • Involve collaboration with existing OptIPuter PI and Co-PIs
  • Expand the OptIPuter project to address new areas of research, supported by new proposals and resources:
    • Providing new application drivers
    • Providing integration with new infrastructure
    • Expanding the optical network testbed
• Technical and operational requirements
  • Network connectivity
    • Dedicated research fiber/lambda path to the OptIPuter network
    • Appropriate network electronics to support connectivity
  • Ability to work within an experimental network environment
• Financial requirements
  • Impose no financial burden on the existing grant
  • Develop support for hardware, manpower, and connectivity to the OptIPuter network