
Probabilistic Packet Scheduling (PPS)

Ming Zhang, Randy Wang, Larry Peterson, Arvind Krishnamurthy

Department of Computer Science

Princeton University



Motivation – Lottery Scheduling

[Figure: lottery-scheduling example; the OS assigns 900 and 300 tickets to processes P1 and P2; processes P3, P4, and P5 hold 1000, 500, and 1000 tickets]

  • OS defines currency and assigns lottery tickets to processes

  • Processes proportionally divide CPU cycles

  • A process can make local CPU allocation decisions



Proportional Bandwidth Allocation

  • Router defines currency in tickets/s and assigns tickets to its inputs

  • Link maintains currency exchange rate

  • Bandwidth at the bottleneck is proportional to ticket share (see the sketch after this list)

  • Local bandwidth allocation decision and isolation
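
The proportionality can be made concrete with a small sketch (not from the slides): a bottleneck link divides its capacity among its inputs in proportion to their ticket rates. The function name, the 5 Mb/s bottleneck, and the ticket values below are illustrative assumptions.

    # Hypothetical helper: split a bottleneck link's capacity in
    # proportion to each input's ticket rate (tickets/s).
    def proportional_share(capacity_bps, ticket_rates):
        total = sum(ticket_rates.values())
        return {src: capacity_bps * rate / total
                for src, rate in ticket_rates.items()}

    # Assumed example: three sources holding 1000, 500, and 1000 t/s
    # share a 5 Mb/s bottleneck.
    shares = proportional_share(5_000_000, {"A": 1000, "B": 500, "C": 1000})
    # shares == {"A": 2_000_000.0, "B": 1_000_000.0, "C": 2_000_000.0}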

[Figure: example topology with nodes S0–S6; sources A, B, and C attach at 10 Mb/s with ticket rates of 1000 t/s, 500 t/s, and 1000 t/s; link labels include 2 Mb/s, 1 Mb/s, 900 t/s, 300 t/s, and 2000 t/s]



Algorithm in Brief

  • The TCP source tags tickets on each packet

  • Each router runs a variant of RED to decide whether to drop or accept a packet

  • Packets are relabeled at each link according to the currency exchange rate



Ticket Tagging

  • OutTktRate – t/s assigned to a TCP source

  • AvgRate – average throughput of a flow

  • Tag OutTktRate / AvgRate onto each packet

  • Tickets on a packet are inversely proportional to the flow's average throughput (see the sketch below)
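
The following is a minimal sketch of the tagging step, assuming the flow's average throughput is tracked with an exponentially weighted moving average; the class name, field names, and EWMA weight are illustrative, not taken from the paper.

    ALPHA = 0.125  # assumed EWMA weight for the throughput estimate

    class TaggingSource:
        """Illustrative sender-side state for ticket tagging."""
        def __init__(self, out_tkt_rate):
            self.out_tkt_rate = out_tkt_rate  # OutTktRate: tickets/s assigned to this source
            self.avg_rate = None              # AvgRate: average throughput (e.g. bytes/s)

        def update_avg_rate(self, measured_rate):
            # Smooth the measured sending rate into AvgRate.
            if self.avg_rate is None:
                self.avg_rate = measured_rate
            else:
                self.avg_rate = (1 - ALPHA) * self.avg_rate + ALPHA * measured_rate

        def tag(self, packet):
            # Tag OutTktRate / AvgRate onto the packet: the slower the
            # flow, the more tickets each of its packets carries.
            packet.tickets = self.out_tkt_rate / self.avg_rate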



Ticket-based RED (TRED)

  • InTkt is the number of tickets on an incoming packet; ExpectTkt is the number of tickets that “should” be on an incoming packet

  • ExpectTkt is computed as the average number of tickets across all incoming packets

  • Bottlenecked flows put approximately ExpectTkt tickets on their packets

    If MinThresh < AvgQLen < MaxThresh:
        compute probability p the same as in RED
        p' = p * (ExpectTkt / InTkt)^3
        drop the packet with probability p'
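
A runnable sketch of the drop decision above; the base probability p is the usual RED linear ramp between the thresholds, and max_p is an assumed RED parameter. ExpectTkt would be maintained as the running average of tickets on arriving packets, per the bullets above.

    import random

    def tred_drop(avg_qlen, in_tkt, expect_tkt, min_thresh, max_thresh, max_p=0.1):
        """Return True if the incoming packet should be dropped (TRED sketch)."""
        if avg_qlen <= min_thresh:
            return False                # queue is short: always accept
        if avg_qlen >= max_thresh:
            return True                 # queue is long: always drop
        # Base RED probability, linear between MinThresh and MaxThresh.
        p = max_p * (avg_qlen - min_thresh) / (max_thresh - min_thresh)
        # Bias by ticket share: packets carrying fewer tickets than
        # expected are dropped more aggressively.
        p_biased = p * (expect_tkt / in_tkt) ** 3
        return random.random() < min(p_biased, 1.0)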



Exchange Rate

  • A multi-hop flow may go through many routers

  • Different routers have their own currencies

  • Convert tickets between different currencies

  • Exchange rate at each link

    XRate = OutTktRate / InTktRate

  • Relabel packets according to exchange rate

    OutTkt = InTkt * XRate
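
A one-function sketch of the relabeling step; InTktRate is assumed here to be the measured aggregate ticket arrival rate at the link, and the names are illustrative.

    def relabel(packet, out_tkt_rate, in_tkt_rate):
        # XRate = OutTktRate / InTktRate: the link's exchange rate between
        # the upstream currency and this router's currency.
        xrate = out_tkt_rate / in_tkt_rate
        # OutTkt = InTkt * XRate: rewrite the tickets carried by the packet.
        packet.tickets = packet.tickets * xrate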



Receiver-based Algorithm

  • Control bandwidth allocation at the receiver

  • AckOutTktRate – t/s assigned to an output

  • Tagging and relabeling of ACKs are similar

  • Compute OutTktRate from the tickets on ACKs (see the sketch after this list)

    OutTktRate = AckInTktRate
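
A sketch of the receiver-based variant, under the same illustrative assumptions as the sender-based code above: the receiver tags ACKs, routers relabel them like data packets, and the sender adopts the ticket rate observed on returning ACKs as its OutTktRate.

    def tag_ack(ack, ack_out_tkt_rate, avg_ack_rate):
        # Receiver-side tagging of an ACK, mirroring data-packet tagging.
        ack.tickets = ack_out_tkt_rate / avg_ack_rate

    def update_out_tkt_rate(sender, ack_in_tkt_rate):
        # OutTktRate = AckInTktRate: the sender's ticket budget becomes the
        # ticket rate carried by the ACKs it receives.
        sender.out_tkt_rate = ack_in_tkt_rate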



One-Hop Configuration

[Figure: one-hop topology; senders and receivers numbered 1–30 on either side of routers P and Q, which are joined by a 4.65 Mb/s, 26 ms link; ticket rates include 100 t/s, 200 t/s, and 3000 t/s]

  • Simulations are run in NS-2

  • Sender-based and receiver-based



One-Hop Results



Multi-Hop Configuration

[Figure: multi-hop topology with routers P1–P4; senders A1–A10 and B1–B10; receivers S1–S20; 1.65 Mb/s, 26 ms links; ticket rates include 100 t/s, 200 t/s, 1000 t/s, 5500 t/s, and 11000 t/s]



Multi-Hop Results



Fairly Share Unused Bandwidth



Multiple Bottlenecks Configuration

[Figure: topology with routers A–D and end hosts S0–S9; link capacities include 10 Mb/s, 5 Mb/s, 3 Mb/s, 1.2 Mb/s, and one marked “? Mb/s”; ticket assignments of 2000, 3600, 1000, 1400, 1200, and 700]



Multiple Bottlenecks Results



Comparison with CSFQ



One-Hop Configuration

[Figure: the same one-hop topology as before (routers P and Q joined by a 4.65 Mb/s, 26 ms link, senders and receivers numbered 1–30, ticket rates of 100 t/s, 200 t/s, and 3000 t/s)]



One-Hop Results



Related Work

  • WFQ, IntServ

  • DiffServ

  • CSFQ

  • User-share differentiation



Conclusion and Future Work

  • Proportional bandwidth allocation

  • A modified RED algorithm (TRED), no per-flow state, scalable

  • Routers make local bandwidth allocation decisions and provide isolation

  • Sender-based and receiver-based

  • Experiment with more realistic traffic loads and more complex topologies

