Engineering for QoS and the limits of service differentiation

IWQoS, June 2000. Jim Roberts ([email protected])
The central role of QoS
  • quality of service: transparency, response time, accessibility
  • feasible technology: network engineering, provisioning, routing, ...
  • service model: resource sharing, priorities, ...
  • a viable business model

Engineering for QoS: a probabilistic point of view
  • statistical characterization of traffic
    • notions of expected demand and random processes
    • for packets, bursts, flows, aggregates
  • QoS in statistical terms
    • transparency: Pr [packet loss], mean delay, Pr [delay > x], ...
    • response time: E [response time], ...
    • accessibility: Pr [blocking], ...
  • QoS engineering, based on a three-way relationship between demand, capacity and performance

Outline
  • traffic characteristics
  • QoS engineering for streaming flows
  • QoS engineering for elastic traffic
  • service differentiation
Internet traffic is self-similar
  • a self-similar process
    • variability at all time scales
  • due to:
    • infinite variance of flow sizes
    • TCP-induced burstiness
  • a practical consequence
    • difficult to characterize a traffic aggregate

[Figure: Ethernet traffic trace, Bellcore 1989]

Traffic on a US backbone link (Thompson et al, 1997)
  • traffic intensity is predictable ...
  • ... and stationary in the busy hour

Traffic on a French backbone link
  • traffic intensity is predictable ...
  • ... and stationary in the busy hour

[Figure: traffic intensity by day of week (Tue-Mon) and time of day (12h, 18h, 00h, 06h)]

IP flows
  • a flow = one instance of a given application
    • a "continuous flow" of packets
    • basically two kinds of flow: streaming and elastic
  • streaming flows
    • audio and video, real time and playback
    • rate and duration are intrinsic characteristics
    • not rate adaptive (an assumption)
    • QoS ⇒ negligible loss, delay, jitter
  • elastic flows
    • digital documents (Web pages, files, ...)
    • rate and duration are measures of performance
    • QoS ⇒ adequate throughput (response time)
Flow traffic characteristics
  • streaming flows
    • constant or variable rate
      • compressed audio (O(10^3 b/s)) and video (O(10^6 b/s))
    • highly variable duration
    • a Poisson flow arrival process (?)
  • elastic flows
    • infinite variance size distribution
    • rate adaptive
    • a Poisson flow arrival process (??)

[Figure: variable rate video trace]

Modelling traffic demand
  • stream traffic demand
    • arrival rate × bit rate × duration
  • elastic traffic demand
    • arrival rate × size
  • a stationary process in the "busy hour"
    • eg, Poisson flow arrivals, independent flow sizes

[Figure: traffic demand (Mbit/s) over the time of day, with a roughly constant busy-hour level]

Outline
  • traffic characteristics
  • QoS engineering for streaming flows
  • QoS engineering for elastic traffic
  • service differentiation
Open loop control for streaming traffic
  • a "traffic contract"
    • QoS guarantees rely on traffic descriptors + admission control + policing
  • time scale decomposition for performance analysis
    • packet scale
    • burst scale
    • flow scale

[Figure: traffic contracts are enforced at the user-network and network-network interfaces]

Packet scale: a superposition of constant rate flows
  • constant rate flows
    • packet size / inter-packet interval = flow rate
    • maximum packet size = MTU
  • buffer size for negligible overflow?
    • over all phase alignments...
    • ...assuming independence between flows
  • worst case assumptions:
    • many low rate flows
    • MTU-sized packets
  • ⇒ buffer sizing for the M/D/1 queue of MTU-sized packets
    • Pr [queue > x] ~ C e^(-rx)

[Figure: log Pr [saturation] against buffer size; curves approach the M/D/1 bound as the number of flows and the packet size increase]
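The exponential tail above gives a direct dimensioning rule: for a target saturation probability ε, the buffer must satisfy C e^(-rx) ≤ ε, ie x ≥ ln(C/ε)/r. A minimal sketch (the tail constant and decay rate are illustrative placeholders, not values from the talk):

```python
import math

def buffer_for_target(eps, tail_const, decay_rate):
    """Smallest buffer x with tail_const * exp(-decay_rate * x) <= eps."""
    return math.log(tail_const / eps) / decay_rate

# eg: tail constant 1, decay rate 0.5 per packet, target 1e-9
x = buffer_for_target(1e-9, 1.0, 0.5)
```

Halving the decay rate doubles the required buffer, which is why the conservative M/D/1 worst case is the one to dimension for.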

The "negligible jitter conjecture"
  • constant rate flows acquire jitter
    • notably in multiplexer queues
  • conjecture:
    • if all flows are initially CBR, and in all queues Σ flow rates < service rate,
    • then they never acquire sufficient jitter to become worse for performance than a Poisson stream of MTU packets
  • ⇒ M/D/1 buffer sizing remains conservative
Burst scale: fluid queueing models
  • assume flows have an instantaneous rate
    • eg, the rate of on/off sources
  • bufferless or buffered multiplexing?
    • bufferless: Pr [arrival rate > service rate] < ε
    • buffered: E [arrival rate] < service rate

[Figure: the arrival rate process seen at the packet and burst time scales]
Buffered multiplexing performance: impact of burst parameters

[Figure: log Pr [saturation] against buffer size, starting from Pr [rate overload] at buffer size 0; the decay is slower for longer bursts, for more variable burst lengths, and under long range dependence, faster for shorter, less variable, short range dependent bursts]

Choice of token bucket parameters?
  • the token bucket is a virtual queue
    • service rate r
    • buffer size b
  • non-conformance depends on
    • burst size and variability
    • and on long range dependence
  • a difficult choice for conformance
    • r >> mean rate...
    • ...or b very large

[Figure: a token bucket of rate r and size b, and the non-conformance probability as a function of the bucket size]
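Non-conformance can be estimated by simulating the virtual queue directly. A minimal sketch with a slotted on/off source (all rates, sizes and probabilities are illustrative assumptions, not parameters from the talk):

```python
import random

def nonconformance(rate_r, size_b, on_rate, p_on, n_slots=100_000, seed=1):
    """Fraction of non-conformant packets for a token bucket (r, b).

    The bucket is modelled as a virtual queue drained at rate_r per slot;
    each slot the source emits on_rate units with probability p_on, and an
    arrival that would push the backlog past size_b is non-conformant.
    """
    rng = random.Random(seed)
    backlog = 0.0
    sent = nonconf = 0
    for _ in range(n_slots):
        backlog = max(0.0, backlog - rate_r)   # tokens replenished
        if rng.random() < p_on:
            sent += 1
            if backlog + on_rate > size_b:
                nonconf += 1                    # would overflow the bucket
            else:
                backlog += on_rate
    return nonconf / max(sent, 1)
```

Pushing r well above the mean rate, or making b very large, drives the fraction to zero: exactly the awkward trade-off the slide notes.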

Bufferless multiplexing, alias "rate envelope multiplexing"
  • provisioning and/or admission control ensure Pr [Λt > C] < ε
  • performance depends only on the stationary rate distribution
    • loss rate ≈ E [(Λt − C)+] / E [Λt]
  • insensitivity to self-similarity

[Figure: the combined input rate Λt varies over time around the output rate C]
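The fluid loss formula depends only on the stationary distribution of the combined input rate. As an illustration (assuming, purely for this sketch, a Gaussian aggregate rate, which is reasonable for many independent flows), the loss rate has a closed form:

```python
import math

def phi(z):   # standard normal density
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):   # standard normal distribution function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def fluid_loss_rate(mean_rate, std_rate, capacity):
    """Loss rate E[(L - C)+] / E[L] for a Gaussian aggregate rate L."""
    z = (capacity - mean_rate) / std_rate
    excess = std_rate * (phi(z) - z * (1 - Phi(z)))  # E[(L - C)+]
    return excess / mean_rate
```

Raising capacity a few standard deviations above the mean rate makes the loss rate negligible, which is the provisioning rule the slide implies.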

Efficiency of bufferless multiplexing
  • small amplitude of rate variations ...
    • peak rate << link rate (eg, 1%)
  • ... or low utilisation
    • overall mean rate << link rate
  • we may have both in an integrated network
    • priority to streaming traffic
    • residue shared by elastic flows
Flow scale: admission control
  • accept a new flow only if transparency is preserved
    • given the flow traffic descriptor
    • and the current link status
  • no satisfactory solution for buffered multiplexing
    • (we do not consider deterministic guarantees)
    • unpredictable statistical performance
  • measurement-based control for bufferless multiplexing
    • given the flow peak rate
    • and the current measured rate (instantaneous rate, mean, variance, ...)
  • an uncritical decision threshold if streaming traffic is light
    • in an integrated network
Provisioning for negligible blocking
  • "classical" teletraffic theory; assume
    • Poisson flow arrivals, rate λ
    • constant rate per flow r
    • mean duration 1/μ
    • ⇒ mean demand A = (λ/μ) r bit/s
  • blocking probability for capacity C
    • B = E(C/r, A/r)
    • E(m,a) is Erlang's formula:
      • E(m,a) = (a^m / m!) / Σ_{k=0..m} (a^k / k!)
    • ⇒ scale economies
  • generalizations exist:
    • for different rates
    • for variable rates

[Figure: utilization ρ = a/m for E(m,a) = 0.01 rises from about 0.2 toward 0.8 as m grows from 0 to 100, illustrating the scale economies]
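Erlang's formula is most conveniently evaluated with its standard recursion, which also makes the scale-economies point easy to check numerically (a sketch, not code from the talk):

```python
def erlang_b(m, a):
    """Erlang's loss formula E(m, a), via the stable recursion
    E_k = a E_{k-1} / (k + a E_{k-1}), starting from E_0 = 1."""
    e = 1.0
    for k in range(1, m + 1):
        e = a * e / (k + a * e)
    return e

def max_offered_load(m, b_target=0.01):
    """Largest offered load a (erlangs) with E(m, a) <= b_target, by bisection."""
    lo, hi = 0.0, 2.0 * m
    for _ in range(60):
        mid = (lo + hi) / 2
        if erlang_b(m, mid) <= b_target:
            lo = mid
        else:
            hi = mid
    return lo
```

At 1% blocking, max_offered_load(10)/10 is well below max_offered_load(100)/100: the utilization achievable at a given grade of service grows with link size.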
Outline
  • traffic characteristics
  • QoS engineering for streaming flows
  • QoS engineering for elastic traffic
  • service differentiation
Closed loop control for elastic traffic
  • reactive control
    • end-to-end protocols (eg, TCP)
    • queue management
  • time scale decomposition for performance analysis
    • packet scale
    • flow scale
Packet scale: bandwidth and loss rate
  • a multi-fractal arrival process
    • but loss and bandwidth are related by TCP congestion avoidance (cf. Padhye et al.)
    • thus p = B^(-1)(bandwidth share): the loss rate depends on the bandwidth share

[Figure: TCP congestion avoidance throughput B(p) as a function of the loss rate p]
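A concrete B(p) is the widely used square-root approximation of TCP congestion-avoidance throughput, B(p) ≈ (MSS/RTT)·√(3/(2p)), a simplification of the Padhye et al. model (the constant depends on the TCP variant and on delayed ACKs). Inverting it expresses the slide's point that the loss rate is determined by the bandwidth share:

```python
import math

def tcp_throughput(p, mss=1460.0, rtt=0.1):
    """Square-root law: TCP throughput (bytes/s) at loss probability p."""
    return (mss / rtt) * math.sqrt(3.0 / (2.0 * p))

def implied_loss(bandwidth, mss=1460.0, rtt=0.1):
    """p = B^{-1}(bandwidth): loss rate at which TCP runs at that rate."""
    return 1.5 * (mss / (rtt * bandwidth)) ** 2
```

The MSS and RTT defaults are illustrative; the inversion is exact for this simplified law.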

Packet scale: bandwidth sharing
  • reactive control (TCP, scheduling) shares bottleneck bandwidth unequally
    • depending on RTT, protocol implementation, etc.
    • and on differentiated services parameters
  • optimal sharing in a network: objectives and algorithms...
    • max-min fairness, proportional fairness, maximal utility, ...
  • ... but response time depends more on the traffic process than on the static sharing algorithm!

[Figure: example of a linear network, with route 0 crossing every link and routes 1 to L each crossing one link]
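Among the sharing objectives listed, max-min fairness has a simple constructive algorithm (progressive filling): raise all flow rates together, freezing flows as their links saturate. A sketch for a linear-network example (capacities and routes below are illustrative):

```python
def max_min_shares(capacity, routes):
    """Max-min fair rates by progressive filling.

    capacity: {link: capacity}, routes: {flow: list of links}."""
    rate = {f: 0.0 for f in routes}
    residual = dict(capacity)
    active = set(routes)
    while active:
        # fair increment on each link among still-active flows crossing it
        counts = {l: sum(1 for f in active if l in routes[f]) for l in residual}
        inc = min(residual[l] / counts[l] for l in residual if counts[l] > 0)
        for f in active:
            rate[f] += inc
        for l in residual:
            residual[l] -= inc * counts[l]
        saturated = {l for l in residual if counts[l] > 0 and residual[l] < 1e-12}
        active = {f for f in active if not saturated & set(routes[f])}
    return rate
```

For a two-link example with a long route crossing both links, the long route and the flow on the tighter link share its capacity, while the flow on the looser link takes what remains.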

Flow scale: performance of a bottleneck link
  • assume perfect fair shares
    • link rate C, n elastic flows
    • each flow served at rate C/n
  • assume Poisson flow arrivals
    • ⇒ an M/G/1 processor sharing queue
    • load ρ = arrival rate × mean size / C
  • performance insensitive to the size distribution
    • Pr [n transfers] = ρ^n (1 − ρ)
    • E [response time] = size / (C(1 − ρ))
  • instability if ρ > 1
    • ie, unbounded response time
    • stabilized by aborted transfers...
    • ... or by admission control

[Figure: per-flow throughput falls from C to 0 as the load ρ goes from 0 to 1]
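The geometric queue-length law and the linear response-time formula above are one-liners, and it is easy to sanity-check that the mean number of flows in progress comes out as ρ/(1 − ρ) (a sketch, not code from the talk):

```python
def ps_prob_n(rho, n):
    """Stationary Pr[n transfers in progress] for M/G/1-PS, rho < 1."""
    return (rho ** n) * (1 - rho)

def ps_mean_response(size, capacity, rho):
    """E[response time | flow size] = size / (C (1 - rho))."""
    return size / (capacity * (1 - rho))
```

Note the insensitivity: only the load ρ enters, not the shape of the size distribution.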

Generalizations of the PS model
  • non-Poisson arrivals
    • Poisson session arrivals
    • Bernoulli feedback: after each transfer a session returns for another (after a think time) with probability p, or leaves with probability 1 − p
  • discriminatory processor sharing
    • weight φi for class i flows
    • per-flow service rate proportional to φi
  • rate limitations (same for all flows)
    • maximum rate per flow (eg, access rate)
    • minimum rate per flow (by admission control)

[Figure: Poisson session arrivals feed a processor sharing queue; completed transfers re-enter via an infinite server "think time" with probability p, or end with probability 1 − p]
Admission control can be useful ...

... to prevent disasters at sea!
Admission control can also be useful for IP flows
  • improve the efficiency of TCP
    • reduce retransmission overhead ...
    • ... by maintaining throughput
  • prevent instability
    • due to overload (ρ > 1)...
    • ...and retransmissions
  • avoid aborted transfers
    • user impatience
    • "broken connections"
  • a means for service differentiation...
Choosing an admission control threshold
  • N = the maximum number of flows admitted
    • negligible blocking when ρ < 1, maintained quality when ρ > 1
  • an M/G/1/N processor sharing system
    • minimum bandwidth = C/N
    • Pr [blocking] = ρ^N (1 − ρ) / (1 − ρ^(N+1)) ≈ 1 − 1/ρ, for ρ > 1
  • an uncritical choice of threshold
    • eg, a minimum bandwidth of 1% of link capacity (N = 100)

[Figure: blocking probability and E [response time]/size against N, for ρ = 0.9 and ρ = 1.5; at ρ = 0.9 both stay low, at ρ = 1.5 blocking absorbs the overload while response times remain bounded]
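The M/G/1/N-PS blocking probability and its overload limit can be checked directly; note the removable singularity at ρ = 1, where the formula reduces to 1/(N+1):

```python
def ps_blocking(rho, n_max):
    """Pr[blocking] of M/G/1/N processor sharing (by PASTA):
    rho^N (1 - rho) / (1 - rho^(N+1)), with the rho = 1 limit 1/(N+1)."""
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (n_max + 1)
    return rho ** n_max * (1 - rho) / (1 - rho ** (n_max + 1))
```

For ρ = 1.5 and N = 100 the blocking is ≈ 1 − 1/ρ = 1/3, matching the overload approximation; for ρ = 0.9 it is negligible, which is why the exact choice of N is uncritical.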
Impact of access rate on backbone sharing
  • TCP throughput is limited by the access rate...
    • modem, DSL, cable
  • ... and by server performance
  • ⇒ the backbone link is a bottleneck only if saturated!
    • ie, if ρ > 1

[Figure: access links (rate << C) feeding a backbone link of rate C; per-flow throughput stays close to the access rate until ρ approaches 1]
Provisioning for negligible blocking for elastic flows
  • "elastic" teletraffic theory; assume
    • Poisson flow arrivals, rate λ
    • mean size s
  • blocking probability for capacity C
    • utilization ρ = λs/C
    • m = admission control limit
    • B(ρ,m) = ρ^m (1 − ρ) / (1 − ρ^(m+1))
  • impact of access rate
    • C / access rate = m
    • B(ρ,m) ≈ E(m, ρm)

[Figure: utilization ρ for B = 0.01 against m, compared with the Erlang curve E(m, ρm)]
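The slide compares elastic blocking B(ρ, m) with Erlang's formula E(m, ρm), where m = C / access rate. A sketch evaluating both side by side (the example values are illustrative):

```python
def elastic_blocking(rho, m):
    """B(rho, m) = rho^m (1 - rho) / (1 - rho^(m+1)), for rho != 1."""
    return rho ** m * (1 - rho) / (1 - rho ** (m + 1))

def erlang_b(m, a):
    """Erlang's formula via the recursion E_k = a E_{k-1} / (k + a E_{k-1})."""
    e = 1.0
    for k in range(1, m + 1):
        e = a * e / (k + a * e)
    return e

# eg m = 10 "access-rate slots" at utilization 0.5:
b_elastic = elastic_blocking(0.5, 10)
b_erlang = erlang_b(10, 0.5 * 10)
```

In this example the elastic (shared) system blocks far less than the rigid circuit model at the same utilization, reflecting the gain from sharing capacity.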

Outline
  • traffic characteristics
  • QoS engineering for streaming flows
  • QoS engineering for elastic traffic
  • service differentiation
Service differentiation
  • discriminating between stream and elastic flows
    • transparency for streaming flows
    • response time for elastic flows
  • discriminating between stream flows
    • different delay and loss requirements
    • ... or the best quality for all?
  • discriminating between elastic flows
    • different response time requirements
    • ... but how?
Integrating streaming and elastic traffic
  • priority to packets of streaming flows
    • low utilization ⇒ negligible loss and delay
  • elastic flows use all remaining capacity
    • better response times
    • per-flow fair queueing (?)
  • to prevent overload
    • flow based admission control...
    • ...and adaptive routing
  • an identical admission criterion for streaming and elastic flows
    • available rate > R
Differentiation for stream traffic
  • different delays?
    • priority queues, WFQ, ...
    • but what guarantees?
  • different loss?
    • different utilization (CBQ, ...)
    • "spatial queue priority"
      • partial buffer sharing, push-out
  • or negligible loss and delay for all
    • elastic-stream integration ...
    • ... and low stream utilization

Differentiation for elastic traffic
  • different utilization
    • separate pipes
    • class based queueing
  • different per-flow shares
    • WFQ
    • impact of RTT, ...
  • discrimination in overload
    • impact of aborts (?)
    • or by admission control

[Figure: per-flow throughput against load ρ for 1st, 2nd and 3rd class flows; all classes are limited by the access rate at low load, and the curves separate as ρ approaches 1]

Different accessibility
  • block class 1 when 100 flows are in progress, block class 2 when N2 flows are in progress
  • in underload: both classes have negligible blocking (B1 ≈ B2 ≈ 0)
  • in overload: discrimination is effective
    • if ρ1 < 1 < ρ1 + ρ2: B1 ≈ 0, B2 ≈ (ρ1 + ρ2 − 1)/ρ2
    • if 1 < ρ1: B1 ≈ (ρ1 − 1)/ρ1, B2 ≈ 1

[Figure: blocking probabilities B1 and B2 against N2: for ρ1 = ρ2 = 0.4 both ≈ 0; for ρ1 = ρ2 = 0.6, B1 ≈ 0 and B2 ≈ .33; for ρ1 = ρ2 = 1.2, B1 ≈ .17 and B2 ≈ 1]
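With exponential flow sizes, the two-threshold system reduces to a birth-death chain on the total number of flows in progress, since processor sharing serves at the full link rate whatever the class mix. A sketch of that reduction (my construction, not code from the talk) reproduces the overload approximations above:

```python
def two_class_blocking(rho1, rho2, n2, n_max=100):
    """Blocking (B1, B2) for a PS link with per-class admission thresholds.

    Total flows n form a birth-death chain: both classes arrive while
    n < n2, only class 1 while n2 <= n < n_max; the total departure rate
    is constant for n >= 1.  Blocking probabilities follow by PASTA."""
    p = [1.0]                       # unnormalized stationary probabilities
    for n in range(n_max):
        up = (rho1 + rho2) if n < n2 else rho1
        p.append(p[-1] * up)
    z = sum(p)
    b1 = p[n_max] / z               # class 1 blocked only at the hard cap
    b2 = sum(p[n2:]) / z            # class 2 blocked whenever n >= n2
    return b1, b2
```

For ρ1 = ρ2 = 0.6 and N2 = 80 this gives B1 ≈ 0 and B2 ≈ (ρ1 + ρ2 − 1)/ρ2 = 1/3, as on the slide.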
Service differentiation and pricing
  • different QoS requires different prices...
    • or users will always choose the best
  • ...but streaming and elastic applications are qualitatively different
    • choose the streaming class for transparency
    • choose the elastic class for throughput
  • ⇒ no need for streaming/elastic price differentiation
  • different prices exploit different "willingness to pay"...
    • bringing greater economic efficiency
  • ...but QoS is not stable or predictable
    • it depends on route, time of day, ...
    • and on factors outside network control: access, server, other networks, ...
  • ⇒ network QoS is not a sound basis for price discrimination
Pricing to pay for the network
  • fix a price per byte
    • to cover the cost of infrastructure and operation
  • estimate demand
    • at that price
  • provision the network to handle that demand
    • with excellent quality of service
  • the optimal price ⇒ revenue = cost

[Figure: demand and provisioned capacity over the time of day; revenue ($$$) covers the cost of capacity]

Outline
  • traffic characteristics
  • QoS engineering for streaming flows
  • QoS engineering for elastic traffic
  • service differentiation
  • conclusions
Conclusions
  • a statistical characterization of demand
    • a stationary random process in the busy period
    • a flow level characterization (streaming and elastic flows)
  • transparency for streaming flows
    • rate envelope ("bufferless") multiplexing
    • the "negligible jitter conjecture"
  • response time for elastic flows
    • a "processor sharing" flow scale model
    • instability in overload (ie, E [demand] > capacity)
  • service differentiation
    • distinguish streaming and elastic classes
    • limited scope for within-class differentiation
    • flow admission control in case of overload