


Internet2 QBone: Building a Testbed for IP Differentiated Services

TERENA-NORDUnet

Networking Conference 1999

June 8th, 1999

Lund, Sweden

Ben Teitelbaum <[email protected]>



Internet2 Dogma: There is a circularity between advanced networks and advanced apps

[Diagram: network engineering enables networked applications; networked applications motivate network engineering]



QBone Dogma, Article 1: The inverse apps ↔ networking circularity has applied to QoS

[Diagram: lack of network QoS inhibited QoS-needy applications; lack of QoS-needy applications prevented network QoS]



QBone Dogma, Article 2: Work with the neediest apps, build a testbed, and turn the arrows around!

[Diagram: network QoS enables QoS-needy applications; QoS-needy applications motivate network QoS]



Internet2 QBone Initiative

  • Build interdomain testbed infrastructure

    • Balance networking research with providing a service

    • Experiment and improve understanding of DiffServ

    • Iterate and improve testbed design

  • Support intradomain & interdomain deployment

  • Lead and follow IETF standards work

    • Some parts of DiffServ architecture cooked; others far from it

    • Our experience will inform standards process

  • Openness of R&E community gives us an edge

    • We can live with somewhat flaky infrastructure

    • We are open to sharing implementation experiences and measurement data


Internet2 QoS Working Group

Osama Aboul-Magd (Nortel)

Andy Adamson (Michigan)

Grenville Armitage (Lucent)

Steve Blake (Torrent)

Scott Bradner (Harvard)

Scott Brim (Newbridge)

Larry Conrad (Florida State)

John Coulter (CA*net2)

Chuck Song (MCI/vBNS)

Fred Baker / Larry Dunn (Cisco)

Rüdiger Geib (Deutsche Telekom)

Terry Gray (U Washington)

Jim Grisham (NYSERNet)

Roch Guerin (Penn)

Susan Hares (Merit)

Joseph Lappa (CMU)

Jay Kistler (FORE)

Klara Nahrstedt (UIC)

Kathleen Nichols (IETF coordination)

Ken Pierce (3Com)

John Sikora (AT&T Labs)

Ben Teitelbaum (chair)

John Wroclawski (MIT)

plus liaisons with all MOU partners




Internet2 Applications

  • Qualitative and quantitative improvements in how we conduct research, teaching, and learning

  • Require advanced networks

  • Examples:

    • Interactive research collaboration and instruction

    • Real-time access to remote scientific instruments

    • Large-scale, multi-site computation and database processing

    • Shared virtual reality



Big Problem #1: Understanding Application Requirements

  • What services do tomorrow’s applications need?

  • Range of poorly-understood needs

    • Both intolerant apps (e.g. tele-immersion) and tolerant apps (e.g. large FTPs, desktop video conferencing) are important

    • Many apps need absolute, per-flow QoS assurances

    • Adaptive apps may require a minimum level of QoS, but can exploit additional network resources if available

    • Some institutions/users want multiple classes of best-effort service (CoS) with relative precedence levels

[Chart: “Different App Needs” — relative utility (bad, good, better) for intolerant, adaptive, and tolerant applications]

  • Need better understanding through experience



Big Problem #2: Scalability

  • Convergence of flows on the core means:

    • Large numbers of flows through each router

    • High forwarding rate requirements

  • Need to support QoS end-to-end, but keep per-flow state & packet forwarding overhead out of the core (see the marking sketch below)

[Diagram: flows from many edge networks converge on core routers — lots of flows here!]
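DiffServ addresses this by classifying and marking packets at the network edge, so core routers forward on the aggregate DiffServ code point (DSCP) alone with no per-flow state. A minimal sketch of host/edge marking in Python, assuming a Linux-style socket API and a hypothetical destination address; the EF code point it uses (DSCP 46) is the one this deck settles on later:

```python
import socket

# EF (Expedited Forwarding) is DSCP 46; the DSCP occupies the upper six bits
# of the old TOS byte, so the byte value is 46 << 2 == 0xB8.
EF_TOS = 46 << 2

# Mark every datagram sent on this socket with the EF code point.  Core
# routers then only need to match the code point, not the individual flow.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
sock.sendto(b"premium probe", ("192.0.2.1", 9999))  # hypothetical destination
```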



Big Problem #3: Interoperability

... between separately administered and designed clouds (campus networks, GigaPoPs, backbone networks such as the vBNS and Abilene) ...

... and between multiple implementations of network elements ...

... is crucial if we are to provide end-to-end QoS.



DiffServ for Internet2

  • July 1997 - February 1998

    • WG struggled to understand needs of advanced applications / realities of QoS engineering

    • Frustrations with RSVP gave birth to IETF DiffServ

  • May 1998

    • WG recommends EF/Premium DiffServ focus for I2 QoS

    • First Internet2 Joint Applications/Engineering Workshop, Santa Clara, CA (report on web site)

  • October 1998

    • QBone initiative launched


[Map: Initial QIG, 11 February 1999 (actual connectivity and participating networks may vary as deployment progresses) — connected networks and sites include Abilene, vBNS, CA*Net2, ESNet, MREN/STAR TAP, other NGIXs, MAGPI, Texas GP, NYSERNet, NCNI, Merit, PSC, NREN, CRC, ARDNOC, RISQ, SingAREN, APAN, SURFNet, CTIT, NTU, NUS, KDD Labs, Korea, NWU, iCAIR, EVL, UMN, IU, ANL, Ames and other NASA labs, LBNL and other DOE labs, UNB, UBC, UMass, UNC, CMU, UMich, TAMU, Duke, NCSU, and UPenn]



QBone BB Group

  • Open group chartered to recommend BB solutions for the QBone

  • Led by Sue Hares (Merit Networks)

  • Six R&E proto-BBs:

    • Merit

    • BCIT

    • UCLA

    • Telia / Luleå University of Technology

    • Globus Scheduler

    • LBNL Clipper

  • Extensive participation from corporate partners

  • QBone BB requirements draft on web site

  • Prototype inter-BB signaling protocol due soon



QBone Milestones 1998 - 1999

  • Sep 25th - Call for participation

  • Oct 30th - WG recommends initial QIG participants

  • Dec 1st - 1st QIG / QBone BB meeting (Evanston)

  • Jan 1st - WG makes major push on architecture draft

  • Jan 26th - 2nd QIG / QBone BB Meeting (RTP)

  • Mar 7th - Measurement sub-group drafts QMA

  • Mar 9th - 3rd QIG / QBone BB Meeting (Las Cruces)

  • May 21st - WG opens QIG

  • June 8th - Open QBone interop BOF (Pittsburgh)

  • June 11th - QBone Architecture draft in “last-call”



QBone Architecture (10km view)

  • IETF “Diff” (EF PHB) + QBone “Serv” (QPS)

  • QBone Premium Service

    • Idea: converge on Jacobson’s VLL “Premium” service

    • Well-defined SLS:

      • Peak rate R & “Service MTU” M implying a token bucket meter

      • Near-zero loss

      • Low jitter

        • Delay variation due to queuing effects should be no greater than the transmission time of one service-MTU-sized packet (see the sketch at the end of this slide)

        • All bets are off if the reserved interdomain route flaps

  • Plus important value-adds:

    • Integrated measurement/dissemination infrastructure

    • Experimentation with pre-standards inter-domain bandwidth brokering and signaling
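The jitter clause above reduces to a simple calculation: the tolerated queuing-induced delay variation equals the time needed to transmit one service-MTU-sized packet. A minimal sketch; the 1500-byte MTU and OC-3 (~155 Mb/s) rate are illustrative assumptions, not QBone-mandated values:

```python
def qps_jitter_bound_seconds(service_mtu_bytes: int, link_rate_bps: float) -> float:
    """QPS jitter target: delay variation due to queuing should not exceed
    the transmission time of one service-MTU-sized packet."""
    return service_mtu_bytes * 8 / link_rate_bps

# Illustrative numbers: a 1500-byte service MTU on a ~155 Mb/s (OC-3) link
# allows roughly 77 microseconds of queuing-induced delay variation.
print(qps_jitter_bound_seconds(1500, 155e6))  # ~7.7e-05 seconds
```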



Why Premium First?

[Diagram: DiffServ service flavors — Premium, Assured, Olympic (CoS)]

  • Simplest absolute service to understand

  • Strongest flavor of DiffServ

    • Could support our most demanding applications

    • Less demanding applications should work fine on emerging high-performance BE infrastructure

  • Explore other PHBs (AF) later

  • To understand this, consider “typical” Internet2 performance:

    • I2 networks are largely uncongested

    • Jitter and loss still occur

    • Route flaps to the commodity Internet still occur



Typical 1999 Internet2 Performance

East Coast University to West Coast DOE Lab

[Chart: minimum, 50th-percentile, and 90th-percentile delay]


QBone Measurement Architecture 1

  • Collection: metrics for both EF and BE traffic

    • Active metrics (computation sketched at the end of this slide)

      • One-way delay variation

      • One-way loss

      • Traceroutes

      • e.g. IPPM Surveyors

    • Passive metrics

      • Load

      • Discards (suggested)

      • Link bandwidths (suggested)

      • EF reservation load

      • e.g. OCxMon, RTFM, MIBs

[Diagram: three QBone domains; active measurement (AM) nodes and passive measurement (PM) nodes sit at boundary routers, collecting active measurements and MIB-based statistics along intra-domain and inter-domain Premium paths]

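As a rough illustration of the active metrics listed above, the sketch below derives one-way loss and delay variation from timestamped probes, loosely in the spirit of an IPPM Surveyor; the probe format and the minimum-delay baseline are assumptions for illustration, and synchronized clocks are taken for granted:

```python
def one_way_metrics(probes):
    """probes: list of (seq, send_time, recv_time); recv_time is None if lost.
    Sender and receiver clocks are assumed synchronized (e.g. GPS-disciplined)."""
    delays = [rt - st for _, st, rt in probes if rt is not None]
    loss_fraction = sum(1 for _, _, rt in probes if rt is None) / len(probes)
    # Delay variation is measured here against the minimum observed delay;
    # other definitions (e.g. consecutive-packet differences) are also common.
    base = min(delays) if delays else 0.0
    delay_variation = [d - base for d in delays]
    return {"loss_fraction": loss_fraction, "delay_variation": delay_variation}

# Three probes: two delivered (30 ms and 34 ms one-way delay), one lost.
print(one_way_metrics([(1, 0.000, 0.030), (2, 1.000, 1.034), (3, 2.000, None)]))
```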



QBone Measurement Architecture 2

  • Dissemination

    • HTTP, even for raw data

    • real-time + archived measurements

    • Canonical names for:

      • Metrics

      • Domains

    • Standard metric aggregations:

      • Mostly 5-minute aggregations

    • Standard URL name space for:

      • MRTG-style plots

      • Raw ASCII data

      • http://<root_URL>/<source_domain>/<dest_domain>/<first_hop>/<date>/<type>.<aggregation>.{html | gif | txt}
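A small sketch of the URL naming scheme above; the helper function and all concrete values (root URL, domain names, metric name) are hypothetical illustrations of the template, not real QBone hosts or canonical names:

```python
def qbone_measurement_url(root_url, source_domain, dest_domain, first_hop,
                          date, metric, aggregation, fmt="txt"):
    """Build a URL following the slide's template:
    http://<root_URL>/<source_domain>/<dest_domain>/<first_hop>/<date>/<type>.<aggregation>.{html|gif|txt}
    """
    return (f"http://{root_url}/{source_domain}/{dest_domain}/{first_hop}/"
            f"{date}/{metric}.{aggregation}.{fmt}")

# Hypothetical example: a 5-minute one-way-loss aggregate between two domains.
print(qbone_measurement_url("measurements.example.edu", "abilene", "vbns",
                            "ngix-east", "19990608", "loss", "5min", "txt"))
```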



Starting Simply

[Diagram: hosts (H) attached to two GigaPoPs]

  • Intradomain

    • Van Jacobson’s campus example

    • At least 10Mbps everywhere

    • “Count to ten” admissions control with no topological knowledge (sketched after this list)

  • Interdomain

    • Could we do something similar in the early QBone?

    • Problem: worst-case downstream provisioning starts to look pretty bad even with a small initial participant set.
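A minimal sketch of “count to ten”-style admission control with no topological knowledge: admit Premium reservations as long as the total admitted peak rate stays under a fixed, conservatively provisioned cap, regardless of where the traffic actually goes. The 10 Mb/s cap is an illustrative assumption, not a QBone parameter:

```python
class CountToTenAdmission:
    """Admit reservations until a fixed aggregate cap is reached; no knowledge
    of topology or of where the reserved traffic converges."""

    def __init__(self, cap_bps: float = 10e6):  # illustrative 10 Mb/s cap
        self.cap_bps = cap_bps
        self.admitted_bps = 0.0

    def request(self, peak_rate_bps: float) -> bool:
        """Admit the reservation iff it still fits under the cap."""
        if self.admitted_bps + peak_rate_bps <= self.cap_bps:
            self.admitted_bps += peak_rate_bps
            return True
        return False

ac = CountToTenAdmission()
print(ac.request(3e6), ac.request(4e6), ac.request(5e6))  # True True False
```

The interdomain problem noted above is exactly the weakness of this scheme: with no topology, the worst case is that every admitted reservation converges on the same downstream link, so provisioning has to be very conservative.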



Generic Internet2 Topology

[Diagram: campuses (C) attach to GigaPoPs; GigaPoPs attach to the Abilene and vBNS backbones, which interconnect at NGIXs and reach ESNet, NREN, international networks, and others]



Phase 0 Demand Assessment

[Diagram: the same generic topology (campuses, GigaPoPs, Abilene, vBNS, NGIXs, ESNet, NREN, STARTAP, and international networks), shown for Phase 0 demand assessment]



Phase 0 Deployment Planning

  • Converge on a consensus reservation matrix

  • Reservations will be static for the duration of the phase

  • Reservation = {S, D, R, M, TR}

    • S = source

    • D = dest

    • R = peak rate

    • M = service MTU

    • TR = inter-domain traceroute

  • S and D are at campus network demarcs

  • All bets are off if routing between S and D changes

  • All SLSs are still bilateral, but Internet2 engineering will facilitate convergence
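A minimal sketch of the reservation tuple above as a data structure; the field types and example values are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Phase0Reservation:
    """Static Phase 0 reservation {S, D, R, M, TR} as defined on this slide."""
    source: str            # S: source (campus network demarc)
    dest: str              # D: destination (campus network demarc)
    peak_rate_bps: float   # R: peak rate
    service_mtu: int       # M: service MTU, in bytes
    traceroute: List[str]  # TR: inter-domain traceroute the SLS assumes

# Hypothetical example; all names are made up.
resv = Phase0Reservation(source="campus-a-demarc", dest="campus-b-demarc",
                         peak_rate_bps=2e6, service_mtu=1500,
                         traceroute=["gigapop-a", "abilene", "gigapop-b"])
```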



Phase 0 Demand Matrix

[Diagram: Phase 0 demand matrix — each entry gives the maximum EF load R to be offered from one source (“from here”) to one destination (“… to here”); the matrix implies each campus’s policer config and EF ingress load]
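One plausible reading of the matrix, sketched below: summing a source’s row suggests how much EF its campus policer should admit, and summing a destination’s column suggests the EF ingress load it should expect. Both this interpretation and the numbers are assumptions for illustration:

```python
from collections import defaultdict

# demand[(source, dest)] = maximum EF load R offered from source to dest (b/s).
demand = {("campus-a", "campus-b"): 2e6,
          ("campus-a", "campus-c"): 1e6,
          ("campus-c", "campus-b"): 3e6}

policer_config = defaultdict(float)  # total EF a source may inject (row sum)
ingress_load = defaultdict(float)    # total EF a destination may see (column sum)
for (src, dst), rate in demand.items():
    policer_config[src] += rate
    ingress_load[dst] += rate

print(dict(policer_config))  # {'campus-a': 3000000.0, 'campus-c': 3000000.0}
print(dict(ingress_load))    # {'campus-b': 5000000.0, 'campus-c': 1000000.0}
```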



Coming Attractions...

  • Jun 99: QBone Architecture in last call

  • Jun 99: QBone BB Advisory Council will recommend a prototype inter-BB protocol

  • Jun/Jul 99: “Phase 0” rollout planning

  • Aug/Sep 99: Interdisciplinary QBone workshop

  • Fall 99: QBone Connect-a-thon (“QCon”) event

  • Fall 99: “Phase 0”



For more information...

  • QBone home page:http://www.internet2.edu/qbone

  • Internet2 QoS Working Group home page:http://www.internet2.edu/qos/wg

