Hybrid Packet-Optical Infrastructure

Presentation Transcript



Hybrid Packet-Optical Infrastructure

Tom DeFanti, Maxine Brown, Joe Mambretti & Linda Winkler



The Hard Problems

  • New protocols needed: the Internet is not designed for single large-scale users

  • Circuits are not scalable, but neither are routers

  • All intelligence has to be on the edge

  • Tuning compute, data, visualization and networking across clusters to achieve even a single order-of-magnitude improvement is non-trivial

  • Security at 10Gb line speed


Knowing the User’s Bandwidth Requirements

  • Class A: needs full Internet routing

  • Class B: needs VPN services and/or full Internet routing

  • Class C: needs very fat pipes for a limited number of Virtual Organizations

[Chart: bandwidth consumed vs. number of users, spanning DSL through GigE LAN rates; class A covers many users with modest flows, class C a few users with very large flows]

Source: Cees de Laat, UvA
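The A/B/C taxonomy above can be sketched as a simple classifier. This is an illustrative sketch only: the function name and the Mbps thresholds are hypothetical, chosen to match the chart's DSL-to-GigE-LAN range, not figures from the talk.

```python
# Hypothetical mapping from a user's sustained bandwidth need to the
# A/B/C service classes on the slide. Thresholds are illustrative only.

def service_class(sustained_mbps: float) -> str:
    """Map a sustained bandwidth need (Mbps) to a service class."""
    if sustained_mbps < 100:      # DSL-to-campus-LAN range: many such users
        return "A: full Internet routing"
    if sustained_mbps < 1000:     # up to roughly GigE LAN rates
        return "B: VPN services and/or full Internet routing"
    return "C: dedicated fat pipes for a few Virtual Organizations"

for need in (5, 300, 10_000):
    print(need, "Mbps ->", service_class(need))
```

The point of the chart is that class C users are few but dominate consumed bandwidth, which is why they justify dedicated lambdas rather than routed service.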



We Can Now Build Lambda Grids!

  • Importance for Applications

    • To create high-performance trials of new technologies that support application-dictated software toolkits, middleware, computing and networking

    • To provide known and knowable characteristics with deterministic and repeatable behavior on a persistent basis, while encouraging experimentation with innovative concepts

    • It isn’t science if you can’t repeat it!


OMNInet Testbed Experiments: MEMS-Based Dynamic Lambda Switching

[Diagram: performance monitoring & control of MEMS-based dynamic lambda switching, with MREN used as an out-of-band control channel]
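The key idea in the testbed is that the control traffic travels out of band (over MREN, a routed network) while the data plane stays all-optical: the MEMS switch simply steers a wavelength from an input port to an output port. A minimal sketch of that cross-connect model, assuming a toy `MEMSSwitch` class (the class and its `connect` method are hypothetical names, not a real switch API):

```python
# Toy model of an NxN MEMS optical cross-connect: provisioning a
# lambda is just recording an input-port -> output-port mapping;
# no O-E-O conversion touches the data plane.

class MEMSSwitch:
    """Illustrative model of an NxN MEMS optical cross-connect."""

    def __init__(self, ports: int):
        self.ports = ports
        self.crossconnects = {}            # input port -> output port

    def connect(self, in_port: int, out_port: int) -> None:
        """Steer the lambda on in_port to out_port."""
        if not (0 <= in_port < self.ports and 0 <= out_port < self.ports):
            raise ValueError("port out of range")
        if out_port in self.crossconnects.values():
            raise RuntimeError("output port already in use")
        self.crossconnects[in_port] = out_port

# e.g. sized like the 128x128 Calient switch mentioned later in the deck
switch = MEMSSwitch(ports=128)
switch.connect(in_port=3, out_port=17)     # one dynamic lambda set up
print(switch.crossconnects)                # {3: 17}
```

In the real testbed the equivalent of `connect` is issued over the separate control channel, which is why a routed network like MREN suffices even when the provisioned lambdas carry 10 Gb flows.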


Metro Lambda Grid (I-WIRE and OMNInet: An Advanced Photonic Metro Testbed, Joint Project with iCAIR et al.)

[Map: Chicago metro area, including UIC]



Illinois’ I-WIRE: A US$7.5M State-Funded Lambda Grid

Source: Charlie Catlett, ANL


Illinois’ I-WIRE: Distributed Cluster Computing

[Map: clusters at StarLight, Argonne, UIC/EVL, UIUC CS and NCSA]

  • Research Areas

    • Latency-Tolerant Algorithms

    • Interaction of SAN/LAN/WAN Technologies

    • Clusters

    • Displays/VR

    • Collaboration

    • Rendering

    • Applications

    • Data Mining

Source: Charlie Catlett



TeraGrid @ StarLight

TeraGrid, an NSF-funded Major Research Equipment initiative, has its Illinois hub located at StarLight.


20 TF Linux TeraGrid

[Network diagram: the four TeraGrid sites interconnected through StarLight (Extreme Black Diamond switch) and Juniper M40/M160 routers, with OC-3/OC-12/OC-48 links to vBNS, Abilene, MREN, ESnet, HSCC, Calren and NTON; Myrinet Clos spines and FibreChannel switches within sites; and bundles of 32x 1GbE, 64x/32x Myrinet and 32x/8x FibreChannel. Hubs use 16 or 32 quad-processor McKinley servers (64p or 128p @ 4GF, 8 or 12 GB memory/server).]

  • Argonne: 64 nodes, 1 TF, 0.25 TB memory, 25 TB disk

  • Caltech: 32 nodes, 0.5 TF, 0.4 TB memory, 86 TB disk

  • NCSA: 500 nodes, 8 TF, 4 TB memory, 240 TB disk

  • SDSC: 256 nodes, 4.1 TF, 2 TB memory, 225 TB disk

[Other systems shown: 574p IA-32 Chiba City, 256p HP X-Class, 128p HP V2500, 128p Origin, 1500p Origin, 1176p IBM SP Blue Horizon, Sun Starcat and E10K, 1024p IA-32, 320p IA-64, 92p IA-32, HPSS and UniTree archives, and HR display & VR facilities]

Source: Rick Stevens, 12/2001
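The per-site figures on the slide can be tallied directly. A small sketch, using only the numbers quoted above (note the slide's "20 TF" headline is the planned total; the four sites listed sum to less):

```python
# Tally the per-site TeraGrid figures quoted on the slide
# (Argonne, Caltech, NCSA, SDSC).

sites = {
    "Argonne": {"nodes": 64,  "tf": 1.0, "mem_tb": 0.25, "disk_tb": 25},
    "Caltech": {"nodes": 32,  "tf": 0.5, "mem_tb": 0.4,  "disk_tb": 86},
    "NCSA":    {"nodes": 500, "tf": 8.0, "mem_tb": 4.0,  "disk_tb": 240},
    "SDSC":    {"nodes": 256, "tf": 4.1, "mem_tb": 2.0,  "disk_tb": 225},
}

totals = {key: sum(site[key] for site in sites.values())
          for key in ("nodes", "tf", "mem_tb", "disk_tb")}

# 852 nodes, ~13.6 TF, ~6.65 TB memory, 576 TB disk across the four sites
print(totals)
```

The gap between the ~13.6 TF listed and the 20 TF headline reflects capacity still to be deployed when the slide was made (December 2001).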



The US National Lambda Rail: The Cost of a Couple of University Buildings

Source: John Silvester, Dave Reese, Tom West, CENIC



USAWaves Over AT&T’s Next Generation Network


CA*net 4 Physical Architecture

[Map: a large-channel WDM system spanning Canada, with OBGP switches and PoPs at St. John’s, Charlottetown, Halifax, Fredericton, Montreal, Ottawa, Toronto, Winnipeg, Regina, Calgary and Vancouver, plus links to Seattle, Chicago, New York, Los Angeles, Miami and Europe. Services are delivered as dedicated wavelengths or SONET channels, with an optional Layer 3 aggregation service.]



SURFnet5

  • Partners: BT and Cisco

  • 15 PoPs connected by thirty 10 Gbit/s lambdas

  • Dual stack IPv4 and IPv6

  • 500,000 users

  • 84 institutes connected at Gbit/s level

Source: Kees Neggers



StarLight in Chicago: A 1GigE and 10GigE Exchange

Operational since summer 2001, StarLight is a 1GigE and 10GigE switch/router facility for high-performance access to participating networks.

StarLight is also equipped as an optical switching facility for wavelengths.

www.startap.net/starlight/NETWORKS/

Abbott Hall, on Northwestern University’s downtown Chicago campus



StarLight US and International Networks as of August 2003

  • Abilene 10Gb

  • ESnet (DOE)

  • DREN (DOD)

  • NREN (NASA)

  • AMPATH (South America)

  • CA*net4 (Canada)

  • SURFnet (Netherlands)

  • CERN/DataTAG

  • TransPAC/APAN (Asia)

  • NaukaNET (Russia)

  • ASnet (Taiwan)

  • Others via STAR TAP OC-12 and Abilene transit

  • See http://loadrunner.uits.iu.edu/mrtg-monitors/starlight/ for statistics on usage



StarLight is

  • StarLight is a Gigabit Ethernet and 10 Gigabit Ethernet exchange for R&E production networks (Force10)

  • And a GigE lambda exchange for experimental networks in the US, Canada, Europe, Asia and South America

  • And a 1 Gb and 10 Gb MEMS-switched research network hub

  • And the Chicago host of the NSF DTFnet, a 4x10Gb network for the TeraGrid, with DTF/ETF links to Abilene; NLR, USAWaves and others coming

  • A colocation space: 66 racks for networking, computing, data management and visualization support equipment

  • Using fiber and circuits from SBC, Qwest, AT&T, Global Crossing, T-Systems, Looking Glass, RCN and I-WIRE


[Diagram: 2xGigE circuits from CANARIE and 2xGigE circuits from SURFnet linking NetherLight to StarLight]

DataTAG Project: Major 2.5 Gbps Circuits Between Europe & USA

[Diagram: CERN connected by the DataTAG transatlantic circuit to StarLight in Chicago (with Abilene, ESnet, MREN and STAR TAP) and New York, and in Europe to GEANT, SURFnet (NL), SuperJANET4 (UK), ATRIUM/VTHD and INRIA (FR), and GARR-B (IT)]



What is TransLight?

  • TransLight is a global-scale experimental networking initiative to support prototypes of the most aggressive e-science applications.

  • TransLight consists of dozens of provisioned Gigabit Ethernet (GigE) circuits among North America, Europe and Asia via StarLight in Chicago and NetherLight in Amsterdam.

  • Some 10Gb circuits are also available and schedulable.


TransLight Lambdas

  • European lambdas to US

    • 8 GigEs Amsterdam-Chicago

    • 8 GigEs London-Chicago

  • Canadian lambdas to US

    • 8 GigEs Chicago-Canada-NYC

    • 8 GigEs Chicago-Canada-Seattle

  • US lambdas to Europe

    • 4 GigEs Chicago-Amsterdam

    • 3 GigEs Chicago-CERN

  • European lambdas

    • 8 GigEs Amsterdam-CERN

    • 2 GigEs Prague-Amsterdam

    • 2 GigEs Stockholm-Amsterdam

    • 8 GigEs London-Amsterdam

  • TransPAC lambda

    • 1 GigE Chicago-Tokyo

  • IEEAF lambdas

    • 8 GigEs NYC-Amsterdam

    • 8 GigEs Seattle-Tokyo
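The circuit counts above are easy to aggregate. A short sketch that tallies the provisioned GigE circuits per group, using only the figures listed on the slide:

```python
# Tally the provisioned TransLight GigE circuits, grouped as on the
# slide; each entry is (circuit count, endpoints).

lambdas = {
    "Europe to US": [(8, "Amsterdam-Chicago"), (8, "London-Chicago")],
    "Canada to US": [(8, "Chicago-Canada-NYC"), (8, "Chicago-Canada-Seattle")],
    "US to Europe": [(4, "Chicago-Amsterdam"), (3, "Chicago-CERN")],
    "Intra-Europe": [(8, "Amsterdam-CERN"), (2, "Prague-Amsterdam"),
                     (2, "Stockholm-Amsterdam"), (8, "London-Amsterdam")],
    "TransPAC":     [(1, "Chicago-Tokyo")],
    "IEEAF":        [(8, "NYC-Amsterdam"), (8, "Seattle-Tokyo")],
}

for group, circuits in lambdas.items():
    print(f"{group}: {sum(n for n, _ in circuits)} GigE circuits")

total = sum(n for group in lambdas.values() for n, _ in group)
print("Total:", total)   # Total: 76
```

Seventy-six provisioned GigE circuits is what "dozens of provisioned Gigabit Ethernet circuits" on the previous slide amounts to.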


TransLight Optical/Electronic Switches at StarLight and NetherLight: An OptIPuter Prototype

[Diagram: a 128x128 Calient MEMS optical switch at StarLight and a 64x64 Calient MEMS optical switch at NetherLight, with the two 15454s linked by 2xOC-192. Each side has a router, separate control and data planes, 10 GigE trunks, and 2/8/16/N GigE connections to 8-, 16- and N-processor clusters.]


What is the Global Lambda Integrated Facility (GLIF)?

  • GLIF is a global-scale experimental facility being designed to support advanced applications and to develop new technologies.

  • The GLIF is a multi-organizational international partnership.

  • The GLIF will be based on leading-edge optical technologies.

  • Last meeting: August in Iceland, following the NORDUnet conference; the third year of the Global Lambda Workshops.



Bring Us Your Lambdas!

  • Please bring your lambdas to StarLight, NetherLight, CERN, UKLight, CzechLight, NorthernLight, …

  • Build a hub like StarLight

  • Propose an application or network experiment

  • See www.startap.net/starlight and www.startap.net/translight



Thank You!

  • StarLight planning, research, collaborations, and outreach efforts are made possible, in major part, by funding from:

    • National Science Foundation (NSF) awards ANI-9980480, ANI-9730202, EIA-9802090, EIA-9871058, ANI-0225642, and EIA-0115809

    • NSF Partnerships for Advanced Computational Infrastructure (PACI) cooperative agreement ACI-9619019 to NCSA

    • State of Illinois I-WIRE Program, and major UIC cost sharing

    • Northwestern University for providing space, engineering and management

  • NSF/CISE/ANIR and DoE/Argonne National Laboratory for StarLight and I-WIRE network engineering and design

  • NSF/CISE/ACIR and NCSA/SDSC for DTF/TeraGrid/ETF opportunities

  • UCAID/Abilene for Internet2 and ITN/GTRN transit; IU for the GlobalNOC

  • CA*net4 for North American transport

  • Bill St. Arnaud of CANARIE, Kees Neggers of SURFnet, Olivier Martin of CERN and Harvey Newman of CalTech for networking leadership

  • Larry Smarr of Cal-(IT)2 for I-WIRE and OptIPuter leadership

