
IWQoS

June 2000

Engineering for QoS and the limits of service differentiation

Jim Roberts

(james.roberts@francetelecom.fr)

feasible technology

- quality of service
- transparency
- response time
- accessibility

- service model
- resource sharing
- priorities,...

- network engineering
- provisioning
- routing,...

a viable business model

- statistical characterization of traffic
- notions of expected demand and random processes
- for packets, bursts, flows, aggregates

- QoS in statistical terms
- transparency: Pr [packet loss], mean delay, Pr [delay > x],...
- response time: E [response time],...
- accessibility: Pr [blocking],...

- QoS engineering, based on a three-way relationship between demand, capacity and performance

- traffic characteristics
- QoS engineering for streaming flows
- QoS engineering for elastic traffic
- service differentiation

- a self-similar process
- variability at all time scales

- due to:
- infinite variance of flow size
- TCP induced burstiness

- a practical consequence
- difficult to characterize a traffic aggregate

Ethernet traffic, Bellcore 1989

- traffic intensity is predictable ...
- ... and stationary in the busy hour

[figure: traffic intensity by day of week (tue-mon) and by time of day]

- a flow = one instance of a given application
- a "continuous flow" of packets
- basically two kinds of flow, streaming and elastic

- streaming flows
- audio and video, real time and playback
- rate and duration are intrinsic characteristics
- not rate adaptive (an assumption)
- QoS: negligible loss, delay, jitter

- elastic flows
- digital documents (Web pages, files, ...)
- rate and duration are measures of performance
- QoS: adequate throughput (response time)

[figure: variable rate video]

- streaming flows
- constant or variable rate
- compressed audio (O[10^3 bps]) and video (O[10^6 bps])
- highly variable duration
- a Poisson flow arrival process (?)

- elastic flows
- infinite variance size distribution
- rate adaptive
- a Poisson flow arrival process (??)

- stream traffic demand
- arrival rate × bit rate × duration

- elastic traffic demand
- arrival rate × size

- a stationary process in the "busy hour"
- eg, Poisson flow arrivals, independent flow sizes

[figure: traffic demand (Mbit/s) vs time of day, showing the busy hour]
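As a worked example of the two demand definitions above (all numbers are illustrative, not from the talk):

```python
# Stream demand = flow arrival rate x bit rate x mean duration
stream_demand = 0.5 * 2e6 * 300      # 0.5 flows/s x 2 Mbit/s x 300 s -> bit/s

# Elastic demand = flow arrival rate x mean size
elastic_demand = 50 * 4e6            # 50 flows/s x 4 Mbit mean size -> bit/s
```

Both work out in bit/s, so stream and elastic demand can be compared directly when dimensioning a link.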

- traffic characteristics
- QoS engineering for streaming flows
- QoS engineering for elastic traffic
- service differentiation

- a "traffic contract"
- QoS guarantees rely on
- traffic descriptors + admission control + policing

- time scale decomposition for performance analysis
- packet scale
- burst scale
- flow scale

[figure: traffic contract enforced at the user-network and network-network interfaces]

- constant rate flows
- packet size/inter-packet interval = flow rate
- maximum packet size = MTU

- buffer size for negligible overflow?
- over all phase alignments...
- ...assuming independence between flows

- worst case assumptions:
- many low rate flows
- MTU-sized packets

- buffer sizing for the M/D_MTU/1 queue
- Pr [queue > x] ~ C e^(-r x)

[figure: log Pr [saturation] vs buffer size; curves approach the M/D_MTU/1 bound as the number of flows and the packet size increase]
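The exponential tail above converts a target saturation probability directly into a buffer size; a minimal sketch, where the decay rate and prefactor are illustrative values, not from the talk:

```python
import math

def buffer_for_overflow(eps, decay_rate, prefactor=1.0):
    """Smallest x such that prefactor * exp(-decay_rate * x) <= eps,
    using the asymptotic tail Pr [queue > x] ~ C e^(-r x)."""
    return max(0.0, math.log(prefactor / eps) / decay_rate)

# Illustrative: decay rate 0.1 per packet, target Pr [saturation] = 1e-9
buf = buffer_for_overflow(1e-9, 0.1)   # a little over 200 packets
```

The logarithmic dependence on eps is the practical point: tightening the loss target by an order of magnitude adds only a fixed increment of buffer.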

- constant rate flows acquire jitter
- notably in multiplexer queues

- conjecture:
- if all flows are initially CBR and, in all queues, Σ flow rates < service rate
- they never acquire sufficient jitter to become worse for performance than a Poisson stream of MTU packets

- M/D_MTU/1 buffer sizing remains conservative

- assume flows have an instantaneous rate
- eg, rate of on/off sources

[figure: packet arrival rate over time, showing bursts at the burst time scale]

- bufferless or buffered multiplexing?
- bufferless: Pr [arrival rate > service rate] < ε
- buffered: E [arrival rate] < service rate

[figure: log Pr [saturation] vs buffer size, starting from Pr [rate overload] at buffer size 0; the decay is slower for longer bursts, more variable burst lengths, and long range dependence]

- the token bucket is a virtual queue
- service rate r
- buffer size b

- non-conformance depends on
- burst size and variability
- and long range dependence

- a difficult choice for conformance
- r >> mean rate...
- ...or b very large

[figure: non-conformance probability vs bucket size b for the token bucket (r, b) viewed as a virtual queue]

- provisioning and/or admission control to ensure Pr [Λt > C] < ε
- performance depends only on the stationary rate distribution
- loss rate = E [(Λt - C)+] / E [Λt]

- insensitivity to self-similarity

[figure: flows with combined input rate Λt multiplexed onto an output of rate C]
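For a concrete instance of the bufferless (rate envelope) formulas, take N independent on/off flows of peak rate p and activity a, so the combined rate is p times a binomial variable; a sketch with illustrative parameter values:

```python
from math import comb

def overload_prob(n, peak, act, capacity):
    """Pr [combined rate > C], the rate-envelope admission criterion."""
    return sum(comb(n, k) * act**k * (1 - act)**(n - k)
               for k in range(n + 1) if k * peak > capacity)

def bufferless_loss(n, peak, act, capacity):
    """Loss rate E [(rate - C)+] / E [rate] for rate = peak * Binomial(n, act)."""
    mean = n * act * peak
    excess = sum(comb(n, k) * act**k * (1 - act)**(n - k) * (k * peak - capacity)
                 for k in range(n + 1) if k * peak > capacity)
    return excess / mean

# Illustrative: 100 flows, peak 1 Mbit/s, activity 0.4, link 60 Mbit/s
loss = bufferless_loss(100, 1.0, 0.4, 60.0)
```

With mean rate 40 Mbit/s on a 60 Mbit/s link, both the overload probability and the loss rate are tiny, and both depend only on the stationary rate distribution, as the insensitivity bullet states.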

- small amplitude of rate variations ...
- peak rate << link rate (eg, 1%)

- ... or low utilisation
- overall mean rate << link rate

- we may have both in an integrated network
- priority to streaming traffic
- residue shared by elastic flows

- accept new flow only if transparency preserved
- given flow traffic descriptor
- current link status

- no satisfactory solution for buffered multiplexing
- (we do not consider deterministic guarantees)
- unpredictable statistical performance

- measurement-based control for bufferless multiplexing
- given flow peak rate
- current measured rate (instantaneous rate, mean, variance,...)


- uncritical decision threshold if streaming traffic is light
- in an integrated network

[figure: achievable utilization ρ = a/m at blocking E(m,a) = 0.01, as a function of m (0 to 100)]

- "classical" teletraffic theory; assume
- Poisson arrivals, rate λ
- constant rate per flow r
- mean duration 1/μ
- mean demand, A = (λ/μ) r bit/s

- blocking probability for capacity C
- B = E(C/r, A/r)
- E(m,a) is Erlang's formula:
- E(m,a) = (a^m / m!) / (Σ_{k=0..m} a^k / k!)

- scale economies
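Erlang's formula is normally evaluated with the stable recursion E(0,a) = 1, E(m,a) = a·E(m-1,a) / (m + a·E(m-1,a)), which avoids the factorials; a minimal sketch:

```python
def erlang_b(m, a):
    """Blocking probability E(m, a): m circuits, offered load a erlangs."""
    e = 1.0
    for k in range(1, m + 1):
        e = a * e / (k + a * e)   # stable Erlang-B recursion
    return e

# Scale economies: at 1% blocking, 10 circuits carry roughly 4.5 erlangs
# (45% utilization) while 100 circuits carry roughly 84 (84% utilization)
b_small = erlang_b(10, 4.5)
b_large = erlang_b(100, 84.0)
```

The two evaluations illustrate the "scale economies" bullet: the utilization achievable at a fixed blocking target grows with system size.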


- generalizations exist:
- for different rates
- for variable rates

- traffic characteristics
- QoS engineering for streaming flows
- QoS engineering for elastic traffic
- service differentiation

- reactive control
- end-to-end protocols (eg, TCP)
- queue management

- time scale decomposition for performance analysis
- packet scale
- flow scale

- a multi-fractal arrival process
- but loss and bandwidth are related by TCP (cf. Padhye et al.)
- thus, p = B^-1(bandwidth share): ie, the loss rate depends on the bandwidth share

[figure: TCP congestion avoidance; throughput B(p) as a function of loss rate p]
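The B(p) relation referenced above is commonly taken to be the Padhye et al. closed-form approximation for TCP Reno throughput; a sketch of that published formula, with illustrative parameter values (MSS, RTT, RTO):

```python
from math import sqrt

def tcp_throughput(p, mss=1460.0, rtt=0.1, rto=0.4, b=2):
    """Approximate TCP throughput (bytes/s) as a function of loss rate p,
    per the Padhye et al. (1998) formula; b = packets acked per ACK."""
    denom = (rtt * sqrt(2 * b * p / 3)
             + rto * min(1.0, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
    return mss / denom
```

Since the denominator grows with p, throughput falls monotonically as loss rises; inverting this curve gives the loss rate a flow must see to hold a given bandwidth share.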

- reactive control (TCP, scheduling) shares bottleneck bandwidth unequally
- depending on RTT, protocol implementation, etc.
- and differentiated services parameters

- optimal sharing in a network: objectives and algorithms...
- max-min fairness, proportional fairness, maximal utility,...

- ... but response time depends more on the traffic process than on the static sharing algorithm!

[figure: a linear network; route 0 crosses every link, routes 1 to L each use a single link]

- assume perfect fair shares
- link rate C, n elastic flows
- each flow served at rate C/n

- assume Poisson flow arrivals
- an M/G/1 processor sharing queue
- load ρ = arrival rate × size / C

- performance insensitive to the size distribution
- Pr [n transfers] = ρ^n (1-ρ)
- E [response time] = size / (C(1-ρ))

- instability if ρ > 1
- i.e., unbounded response time
- stabilized by aborted transfers...
- ... or by admission control

[figure: a link of capacity C with fair shares modelled as a processor sharing queue; expected per-flow throughput falls from C toward 0 as ρ goes from 0 to 1]
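The insensitivity claim can be checked numerically: simulate M/G/1 processor sharing with two size distributions of equal mean and compare the measured mean response time with size / (C(1-ρ)). A rough sketch (arrival rate, sizes and seed are illustrative):

```python
import random

def simulate_ps(n_flows, lam, capacity, draw_size, seed=1):
    """Event-driven M/G/1 processor sharing: with n flows in progress,
    each is served at rate capacity / n. Returns mean response time."""
    rng = random.Random(seed)
    t, next_arr, arrived = 0.0, 0.0, 0
    remaining, start, times = {}, {}, []
    while len(times) < n_flows:
        n = len(remaining)
        if n:
            rate = capacity / n
            first = min(remaining, key=remaining.get)
            t_done = t + remaining[first] / rate
        else:
            t_done = float("inf")
        if arrived < n_flows and next_arr <= t_done:
            for i in remaining:              # serve until the next arrival
                remaining[i] -= (next_arr - t) * rate
            t = next_arr
            remaining[arrived] = draw_size(rng)
            start[arrived] = t
            arrived += 1
            next_arr = t + rng.expovariate(lam)
        else:                                # serve until a completion
            for i in remaining:
                remaining[i] -= (t_done - t) * rate
            t = t_done
            del remaining[first]
            times.append(t - start[first])
    return sum(times) / len(times)

# rho = lam * E[size] / C = 0.7; theory: E[T] = 1 / (1 - 0.7) ~ 3.33
exp_mean = simulate_ps(20000, 0.7, 1.0, lambda r: r.expovariate(1.0))
det_mean = simulate_ps(20000, 0.7, 1.0, lambda r: 1.0)
```

Exponential and deterministic sizes give essentially the same mean response time, as the insensitivity property predicts; pushing lam above 1.0 makes the backlog (and response times) grow without bound, the instability noted above.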

[figure: Poisson session arrivals feed a processor sharing queue (transfer); with probability p a flow returns via an infinite server (think time), with probability 1-p the session ends]

- non-Poisson arrivals
- Poisson sessions
- Bernoulli feedback

- discriminatory processor sharing
- weight φi for class i flows
- service rate proportional to φi

- rate limitations (same for all flows)
- maximum rate per flow (eg, access rate)
- minimum rate per flow (by admission control)
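The discriminatory processor sharing rule above can be written down directly; a minimal sketch (class names, weights and capacity are illustrative):

```python
def dps_rates(flows, weights, capacity):
    """Discriminatory processor sharing: a class-c flow is served at
    rate capacity * weights[c] / (sum of weights of all active flows)."""
    total = sum(weights[c] for c in flows)
    return [capacity * weights[c] / total for c in flows]

# Two class-1 flows (weight 1) and one class-2 flow (weight 2) on C = 1
rates = dps_rates(["class1", "class1", "class2"], {"class1": 1.0, "class2": 2.0}, 1.0)
```

Here the class-2 flow gets twice the rate of each class-1 flow, and the rates always sum to the link capacity.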

... to prevent disasters at sea!

- improve efficiency of TCP
- reduce retransmissions overhead ...
- ... by maintaining throughput

- prevent instability
- due to overload (ρ > 1)...
- ...and retransmissions

- avoid aborted transfers
- user impatience
- "broken connections"

- a means for service differentiation...

[figure: blocking probability and E [Response time]/size vs N, for ρ = 0.9 and ρ = 1.5]

- N = the maximum number of flows admitted
- negligible blocking when ρ < 1, maintain quality when ρ > 1

- M/G/1/N processor sharing system
- min bandwidth = C/N
- Pr [blocking] = ρ^N (1-ρ)/(1-ρ^(N+1)) ≈ 1 - 1/ρ, for ρ > 1
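A quick numerical check of the M/G/1/N blocking formula at N = 100:

```python
def ps_blocking(rho, n):
    """M/G/1/N processor-sharing blocking: rho^N (1-rho) / (1-rho^(N+1))."""
    if rho == 1.0:
        return 1.0 / (n + 1)     # limiting value at rho = 1
    return rho**n * (1 - rho) / (1 - rho**(n + 1))

b_under = ps_blocking(0.9, 100)   # negligible when rho < 1
b_over = ps_blocking(1.5, 100)    # ~ 1 - 1/1.5 = 1/3 when rho > 1
```

This is the "uncritical threshold" point: at N = 100 the control is invisible in underload yet caps the damage in overload.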


- uncritical choice of threshold
- eg, 1% of link capacity (N=100)

[figure: access links (rate << C) feeding a backbone link (rate C); per-flow throughput stays near the access rate until ρ approaches 1]

- TCP throughput is limited by access rate...
- modem, DSL, cable

- ... and by server performance

- backbone link is a bottleneck only if saturated!
- ie, if ρ > 1

[figure: achievable utilization ρ at blocking B = 0.01, as a function of m (0 to 100)]

- "elastic" teletraffic theory; assume
- Poisson arrivals, rate λ
- mean size s

- blocking probability for capacity C
- utilization ρ = λs/C
- m = admission control limit
- B(ρ,m) = ρ^m (1-ρ)/(1-ρ^(m+1))

- impact of access rate
- C/access rate = m
- compare B(ρ,m) with E(m, ρm)

[figure: achievable utilization ρ at B = 0.01 vs m, with the Erlang curve E(m, ρm) for comparison]

- traffic characteristics
- QoS engineering for streaming flows
- QoS engineering for elastic traffic
- service differentiation

- discriminating between stream and elastic flows
- transparency for streaming flows
- response time for elastic flows

- discriminating between stream flows
- different delay and loss requirements
- ... or the best quality for all?

- discriminating between elastic flows
- different response time requirements
- ... but how?

- priority to packets of streaming flows
- low utilization ⇒ negligible loss and delay

- elastic flows use all remaining capacity
- better response times
- per flow fair queueing (?)

- to prevent overload
- flow based admission control...
- ...and adaptive routing

- an identical admission criterion for streaming and elastic flows
- available rate > R

- different delays?
- priority queues, WFQ, ...
- but what guarantees?

- different loss?
- different utilization (CBQ, ...)
- "spatial queue priority"
- partial buffer sharing, push out

- or negligible loss and delay for all
- elastic-stream integration ...
- ... and low stream utilization

- different utilization
- separate pipes
- class based queuing

- different per flow shares
- WFQ
- impact of RTT,...

- discrimination in overload
- impact of aborts (?)
- or by admission control

[figure: per-flow throughput vs ρ for 1st, 2nd and 3rd class flows, bounded by the access rate and link capacity C]

- block class 1 when 100 flows in progress
- block class 2 when N2 flows in progress
- in underload: both classes have negligible blocking (B1 ≈ B2 ≈ 0)
- in overload: discrimination is effective
- if ρ1 < 1 < ρ1 + ρ2, B1 ≈ 0, B2 ≈ (ρ1+ρ2-1)/ρ2
- if 1 < ρ1, B1 ≈ (ρ1-1)/ρ1, B2 ≈ 1

[figure: B1 and B2 vs N2, for ρ1 = ρ2 = 0.4 (B1 ≈ B2 ≈ 0), ρ1 = ρ2 = 0.6 (B1 ≈ 0, B2 ≈ .33) and ρ1 = ρ2 = 1.2 (B1 ≈ .17, B2 ≈ 1)]
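The overload limits quoted above follow from flow conservation: the link carries at most its capacity, class 1 (blocked at the higher threshold) is carried first up to load min(ρ1, 1), and class 2 carries whatever remains. A numeric check of those limiting formulas:

```python
def overload_blocking(rho1, rho2):
    """Limiting blocking (B1, B2) for two-class admission control where
    class 1 is effectively admitted first, by flow conservation."""
    if rho1 + rho2 <= 1:
        return 0.0, 0.0                        # underload: no blocking
    if rho1 < 1:
        return 0.0, (rho1 + rho2 - 1) / rho2   # class 2 absorbs the overload
    return (rho1 - 1) / rho1, 1.0              # even class 1 is overloaded

b1, b2 = overload_blocking(0.6, 0.6)   # b2 ~ 1/3, matching the curves
```

The three branches reproduce the three regimes on the plots: negligible blocking at ρ1 = ρ2 = 0.4, B2 ≈ .33 at ρ1 = ρ2 = 0.6, and B1 ≈ .17 with B2 ≈ 1 at ρ1 = ρ2 = 1.2.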

- different QoS requires different prices...
- or users will always choose the best

- ...but streaming and elastic applications are qualitatively different
- choose streaming class for transparency
- choose elastic class for throughput

- no need for streaming/elastic price differentiation
- different prices exploit different "willingness to pay"...
- bringing greater economic efficiency

- ...but QoS is not stable or predictable
- depends on route, time of day,...
- and on factors outside network control: access, server, other networks,...

- network QoS is not a sound basis for price discrimination

- fix a price per byte
- to cover the cost of infrastructure and operation

- estimate demand
- at that price

- provision network to handle that demand
- with excellent quality of service

[figure: demand and provisioned capacity vs time of day, with capacity priced to cover cost ($$$)]

[figure: revenue ($$$) vs price; the optimal price is where revenue = cost]

- traffic characteristics
- QoS engineering for streaming flows
- QoS engineering for elastic traffic
- service differentiation
- conclusions

- a statistical characterization of demand
- a stationary random process in the busy period
- a flow level characterization (streaming and elastic flows)

- transparency for streaming flows
- rate envelope ("bufferless") multiplexing
- the "negligible jitter conjecture"

- response time for elastic flows
- a "processor sharing" flow scale model
- instability in overload (i.e., E [demand] > capacity)

- service differentiation
- distinguish streaming and elastic classes
- limited scope for within-class differentiation
- flow admission control in case of overload