Quality of Service (QoS)
The problems caused by FIFO queues in routers

Acknowledgement: slides from Nick McKeown, Professor of Electrical Engineering and Computer Science, Stanford University ([email protected], http://www.stanford.edu/~nickm)
Outline: Fairness; Delay Guarantees
[Figure: flows from A (10 Mb/s access link) and B (100 Mb/s access link) share router R1's 1.1 Mb/s output link toward C; each flow is marked 0.55 Mb/s. A "flow" here is e.g. an HTTP flow with a given (IP SA, IP DA, TCP SP, TCP DP).]
What is the "fair" allocation: (0.55 Mb/s, 0.55 Mb/s) or (0.1 Mb/s, 1 Mb/s)?
[Figure: a second example: A (10 Mb/s access) and B (100 Mb/s access) again share router R1's 1.1 Mb/s output link, but B's flow continues to destination D over a 0.2 Mb/s link, while A's flow goes to C.]
What is the "fair" allocation?
N flows share a link of rate C. Flow f wishes to send at rate W(f) and is allocated rate R(f). Example: C = 1 and four flows arrive at R1 with demands W(f1) = 0.1, W(f2) = 0.5, W(f3) = 10, W(f4) = 5.

Round 1: set R(f1) = 0.1 (f1's demand is below the equal share of 1/4, so it is fully satisfied).
Round 2: set R(f2) = 0.9/3 = 0.3 (the leftover 0.9, split among the three remaining flows, caps f2 at 0.3 < 0.5).
Round 3: set R(f4) = 0.6/2 = 0.3.
Round 4: set R(f3) = 0.3/1 = 0.3.
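The rounds above implement max-min fairness by "progressive filling": flows demanding less than the current equal share are satisfied in full, and the leftover capacity is re-split among the rest. A sketch in Python (the function name and data layout are my own):

```python
def max_min_fair(capacity, demands):
    """Max-min fair allocation by progressive filling.

    demands: {flow: desired_rate}. Returns {flow: allocated_rate}.
    """
    alloc = {}
    remaining = dict(demands)          # flows not yet allocated
    cap = capacity
    while remaining:
        share = cap / len(remaining)
        # Flows that demand no more than the equal share are fixed.
        satisfied = {f: w for f, w in remaining.items() if w <= share}
        if not satisfied:
            # Every remaining flow wants more than the share: split equally.
            for f in remaining:
                alloc[f] = share
            break
        for f, w in satisfied.items():
            alloc[f] = w
            cap -= w
            del remaining[f]
    return alloc
```

Running it on the slide's example, `max_min_fair(1.0, {'f1': 0.1, 'f2': 0.5, 'f3': 10, 'f4': 5})` yields 0.1 for f1 and 0.3 for each of the others, matching the four rounds.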
[Figure: packets arriving at R1 are classified into per-flow queues (Flow 1 … Flow N), then scheduled bit-by-bit round robin onto the output link of rate C.]
Likewise, flows can be allocated different rates by servicing a different number of bits for each flow during each round.
Applying those rates at R1 (R(f1) = 0.1, R(f2) = 0.3, R(f3) = 0.3, R(f4) = 0.3, with C = 1), the order of service for the four queues is:
… f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, …
This idealized bit-by-bit weighted scheme is also called "Generalized Processor Sharing (GPS)".
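Generating that service pattern from integer weights is mechanical; a minimal sketch (names are my own):

```python
def service_order(bit_weights, rounds=1):
    """Bit-by-bit weighted round robin: in each round, serve
    w bits from each flow, where the weights are proportional
    to the allocated rates (0.1 : 0.3 : 0.3 : 0.3 -> 1 : 3 : 3 : 3)."""
    order = []
    for _ in range(rounds):
        for flow, w in bit_weights.items():
            order.extend([flow] * w)
    return order

# One round with weights 1:3:3:3 reproduces the slide's pattern:
# f1, f2, f2, f2, f3, f3, f3, f4, f4, f4
```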
Problem: we need to serve a whole packet at a time.
Solution: serve packets in the order in which they would finish under the bit-by-bit scheme, i.e. sorted by their bit-by-bit finish times.
Theorem: packet p will depart before Fp + TRANSPmax, where Fp is p's finish time under bit-by-bit service and TRANSPmax is the maximum time to transmit one packet.
Also called "Packetized Generalized Processor Sharing (PGPS)".
Worked example (four queues A, B, C, D; weights 1:1:1:1; one bit served per active queue per round):
At time 0 the queues hold A1 = 4, B1 = 3, C1 = 1, C2 = 1, D1 = 1, D2 = 2 (sizes in bits); packets A2 = 2 and C3 = 2 arrive during round 1.

Round 1: D1 and C1 depart at R = 1; A2 and C3 arrive.
Round 2: C2 departs at R = 2.
Round 3: D2 and B1 depart at R = 3.
Round 4: C3 and A1 depart at R = 4.
Round 6: A2 departs at R = 6.

Departure order for packet-by-packet WFQ: sort packets by the round in which they finish under the bit-by-bit scheme.
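The rounds above can be reproduced with a small simulation of equal-weight bit-by-bit round robin; a sketch, with a data layout of my own choosing:

```python
from collections import deque

def bit_by_bit_rounds(initial, arrivals):
    """Simulate equal-weight bit-by-bit round robin.

    initial:  {queue: [(pkt_name, size_bits), ...]} at time 0.
    arrivals: {round_number: {queue: [(pkt_name, size_bits), ...]}},
              packets added just before that round begins (consumed).
    Returns {pkt_name: finish_round}.
    """
    queues = {q: deque(pkts) for q, pkts in initial.items()}
    remaining = {name: size for pkts in initial.values() for name, size in pkts}
    finish = {}
    rnd = 0
    while any(queues.values()) or arrivals:
        rnd += 1
        for q, pkts in arrivals.pop(rnd, {}).items():
            queues[q].extend(pkts)
            remaining.update(pkts)
        for q in sorted(queues):           # one bit per active queue
            if queues[q]:
                name, _ = queues[q][0]
                remaining[name] -= 1
                if remaining[name] == 0:   # packet fully served
                    finish[name] = rnd
                    queues[q].popleft()
    return finish

finish = bit_by_bit_rounds(
    {"A": [("A1", 4)], "B": [("B1", 3)],
     "C": [("C1", 1), ("C2", 1)], "D": [("D1", 1), ("D2", 2)]},
    {2: {"A": [("A2", 2)], "C": [("C3", 2)]}},
)
# D1, C1 finish in round 1; C2 in 2; D2, B1 in 3; C3, A1 in 4; A2 in 6.
wfq_order = sorted(finish, key=lambda p: finish[p])  # packet-by-packet WFQ order
```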
Outline: Fairness; Delay Guarantees
Example: a video source at 10 frames/s, 600×500 bytes per frame ≈ 24 Mb/s.
[Figure: the 24 Mb/s stream travels from source to destination over links of 45, 45, 24, 45, 100, and 100 Mb/s; a cumulative-bytes vs. time plot shows the total end-to-end delay between the send and receive curves.]
[Figure: probability distribution of delay/latency, marking the minimum delay and the 99th-percentile delay, e.g. 200 ms.]
[Figure: cumulative-bytes curves at the source and destination of the 24 Mb/s stream. The receiver sets a playback point delayed behind the arrival curve; the gap is absorbed by a playback (latency) buffer, trading added delay for immunity to delay variation along the 45/45/24/45/100/100 Mb/s path.]
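One way to see the playback point numerically: given per-frame arrival times, compute the smallest startup delay that avoids buffer underrun under constant-rate playback. A simplified fluid sketch (the function and variable names are my own):

```python
def min_playback_delay(arrival_times, frame_interval):
    """Smallest startup delay d such that frame i, played back at
    time d + i * frame_interval, has already arrived.
    Assumes the source emits one frame every frame_interval seconds
    starting at time 0, and arrival_times[i] is frame i's arrival."""
    return max(t - i * frame_interval for i, t in enumerate(arrival_times))

# Frames sent every 0.1 s; jittery arrivals need ~0.15 s of startup delay.
d = min_playback_delay([0.1, 0.25, 0.31], frame_interval=0.1)
```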
Model of FIFO router queue (without QoS). Assume continuous time, bit-by-bit fluid flows for a moment.
[Figure: cumulative-bytes plot of arrivals A(t) and departures D(t) for a router queue served at rate R. The backlog is B(t) = A(t) − D(t), the vertical gap between the curves; the FIFO delay d(t) is the horizontal gap.]
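A minimal discrete-time rendering of this fluid model (a sketch under my own assumptions: constant service rate, fixed time step):

```python
def simulate_fifo(arrival_increments, rate, dt):
    """Fluid FIFO queue served at constant rate.

    arrival_increments[k] = bytes arriving in step k of length dt.
    Returns the backlog B(t) = A(t) - D(t) after each step.
    """
    A = D = 0.0
    backlog = []
    for a in arrival_increments:
        A += a
        # Serve at most rate*dt per step, never getting ahead of arrivals.
        D = min(A, D + rate * dt)
        backlog.append(A - D)
    return backlog
```

A burst of 2 units into a rate-1 server drains over two steps, which is exactly the vertical-gap picture on the slide.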
Model of WFQ router queues (with QoS):
[Figure: the aggregate arrival A(t) at the router is classified into per-flow arrivals A1(t), A2(t), …, AN(t); each flow f is queued separately and served at its guaranteed rate R(f1), R(f2), …, R(fN), producing the departure process D(t).]
Key idea: in general we don't know the arrival process, so let's constrain it.
[Figure: cumulative-bytes plot of a flow's arrivals A1(t) and departures D1(t) when served at rate R(f1).]
Constrain arrivals so that the number of bytes that can arrive in any period of length t is bounded by σ + ρt, i.e. A1(τ + t) − A1(τ) ≤ σ + ρt for all τ, t ≥ 0. This is called "(σ, ρ) regulation".
[Figure: for a (σ, ρ)-regulated flow served at constant rate R(f1) ≥ ρ, the arrival envelope σ + ρt and the service line of slope R(f1) bound the maximum backlog Bmax and the maximum delay dmax.]
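Those bounds can be read directly off the envelope; a short derivation under the figure's assumptions (a single queue drained at constant rate R(f1) ≥ ρ):

```latex
B(t) \;\le\; (\sigma + \rho t) - R(f_1)\,t \;\le\; \sigma
\quad (\text{since } R(f_1) \ge \rho, \text{ the gap is largest at } t = 0^+),
\qquad
d_{\max} \;=\; \frac{B_{\max}}{R(f_1)} \;=\; \frac{\sigma}{R(f_1)}.
```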
Theorem [Parekh, Gallager '93]: if flows are leaky-bucket constrained, and routers use WFQ, then end-to-end delay guarantees are possible.
[Figure: token bucket regulator. Tokens arrive at rate ρ into a bucket of size σ; each departing byte (or packet) consumes one token, and packets wait in a packet buffer when the bucket is empty.]
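A sketch of the regulator in code (the interface and names are my own; a real shaper would also queue non-conforming packets rather than just reject them):

```python
class TokenBucket:
    """Token-bucket regulator: tokens accrue at `rate` (bytes/s)
    up to `size` (bytes); sending n bytes consumes n tokens,
    so the output is (size, rate)-constrained."""

    def __init__(self, rate, size):
        self.rate, self.size = rate, size
        self.tokens = size        # bucket starts full
        self.last = 0.0           # time of last update

    def try_send(self, nbytes, now):
        # Accrue tokens for the elapsed time, capped at bucket size.
        self.tokens = min(self.size, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True           # conforming: transmit now
        return False              # non-conforming: hold in packet buffer
```

For example, with `rate=100` and `size=50`, a 50-byte burst passes immediately, but the next 10 bytes must wait until enough tokens have accrued.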
[Figure: a variable-bit-rate compressed source shaped by a token bucket (rate ρ, bucket size σ) before entering a network link of rate C; three bytes-vs-time plots show the raw source, the (σ, ρ) envelope, and the shaped output.]
"Can the network accept the call and provide the QoS?"
The network needs to know the TSpec (the traffic specification), the RSpec (the reservation specification), and the path followed by the packets.
So the sender periodically sends the TSpec to the whole multicast group ("PATH messages").
To initiate a new reservation, a receiver sends messages to reserve resources "up" the multicast tree ("RESV messages").
[Figure: PATH messages propagate from the sender down the multicast tree through routers R1, R2, R3, and R4 to the receivers.]
Path messages are sent periodically to replenish soft-state.
[Figure: RESV messages propagate from the receivers back up the multicast tree through R4, R3, R2, and R1 toward the sender.]
RESV messages are sent periodically to replenish soft-state.
RSVP supports many styles of RESV merging; the jury is still out as to which scheme, if any, makes the most sense.