
Finishing Flows Quickly with Preemptive Scheduling



Presentation Transcript


  1. Finishing Flows Quickly with Preemptive Scheduling Presenter: Gong Lu

  2. Authors • Chi-Yao Hong • Ph.D., Computer Science, UIUC, 09-14 • Co-advised by Matthew Caesar and Brighten Godfrey • Research interests: • Protocol design • Network measurement • Security

  3. Authors (cont.) • Matthew Caesar • Assistant Professor @ UIUC • Ph.D., Computer Science, U.C. Berkeley • Philip Brighten Godfrey • Assistant Professor @ UIUC • Ph.D., Computer Science, U.C. Berkeley

  4. Introduction • Datacenter applications • Minimize flow completion time • Meet soft real-time deadlines • Existing work: TCP, RCP, ICTCP, DCTCP, … • Approximates fair sharing • Far from optimal

  5. Example

  6. Centralized Algorithm • The maximal sending rate of flow i • The expected flow transmission time of flow i
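The centralized algorithm on this slide can be sketched as a greedy allocator: flows are visited in criticality order (earliest deadline first is assumed here for concreteness), and each flow is granted the smallest residual capacity along its path, i.e. its maximal sending rate; flows that receive zero are effectively paused. All names below are illustrative, not the paper's exact notation.

```python
def centralized_schedule(flows, link_capacity):
    """Greedy preemptive allocation sketch.

    flows: list of (flow_id, path_links, deadline)
    link_capacity: {link: capacity}
    Returns {flow_id: granted rate}; a rate of 0 means the flow is paused.
    """
    residual = dict(link_capacity)        # remaining capacity on each link
    rates = {}
    # Visit flows in criticality order (here: earliest deadline first).
    for fid, path, deadline in sorted(flows, key=lambda f: f[2]):
        # Maximal sending rate = smallest residual capacity on the path.
        rate = min(residual[link] for link in path)
        rates[fid] = rate
        for link in path:
            residual[link] -= rate        # reserve capacity along the path
    return rates
```

With two flows sharing one 10-unit link, the more critical flow takes the whole link and the other is paused, which is exactly the preemptive behavior the slide contrasts with fair sharing.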

  7. Problem • The centralized algorithm is unrealistic • It assumes complete visibility of the network • It assumes zero-delay communication with devices • It introduces a single point of failure and significant overhead for senders interacting with the centralized coordinator

  8. The Solution • Fully distributed implementation • Sender • Receiver • Switch • Propagate flow information via explicit feedback in packet headers • When the feedback reaches the receiver, it is returned to the sender in an ACK packet
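The flow information propagated in packet headers can be pictured as a small record; slide 9 lists the variables it carries. The field names below are illustrative placeholders, not the paper's symbols.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SchedulingHeader:
    """Illustrative per-packet scheduling header (see slide 9 for the
    variables; names here are placeholders, not the paper's notation)."""
    rate: float                # sender's current sending rate
    pauseby: Optional[str]     # switch (if any) that paused the flow
    deadline: Optional[float]  # flow deadline (optional)
    expected_time: float       # expected flow transmission time
    probe_interval: float      # inter-probing time
    rtt: float                 # measured round-trip time
```

Switches read and modify these fields in flight, and the receiver echoes them back in the ACK, which is how the distributed design replaces the centralized coordinator.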

  9. PDQ Sender • Maintains several state variables: • Its current sending rate • The switches (if any) that have paused the flow • The flow deadline (optional) • The expected flow transmission time • The inter-probing time • The measured round-trip time

  10. PDQ Sender (cont.) • Sends packets at its current sending rate • If the rate is zero (paused), instead sends a probe packet once per inter-probing interval • Attaches a scheduling header to each packet • Remaining fields are set from its currently maintained variables • When an ACK packet arrives: • Updates the sending rate from the feedback • Updates the expected transmission time from the remaining flow size • Updates the RTT from the packet arrival time • Remaining fields are copied from the header
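The sender-side update on ACK arrival can be sketched as below; the state is kept as a plain dict and the field names are illustrative, not the paper's.

```python
def on_ack(state, feedback_rate, remaining_bytes, ack_time):
    """Sketch of the sender's ACK handling (slide 10).

    state: dict with "rate", "expected_time", "rtt", "last_send_time"
    feedback_rate: rate granted by the switches, echoed via the ACK
    """
    state["rate"] = feedback_rate                       # update rate from feedback
    if state["rate"] > 0:
        # Expected transmission time from the remaining flow size.
        state["expected_time"] = remaining_bytes / state["rate"]
    # Measured round-trip time from the packet arrival time.
    state["rtt"] = ack_time - state["last_send_time"]
    return state
```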

  11. PDQ Receiver • Copies the scheduling header from each data packet to its corresponding ACK • Reduces the rate field if it exceeds the receiver's processing capacity • To avoid buffer overrun at the receiver
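The receiver's role is deliberately thin; it can be sketched in a few lines (the header is modeled as a dict with an assumed "rate" field):

```python
def on_data_packet(header, receiver_capacity):
    """Receiver behavior sketch (slide 11): echo the scheduling header into
    the ACK, capping the rate at the receiver's processing capacity to
    avoid buffer overrun."""
    ack = dict(header)                                   # copy header into the ACK
    ack["rate"] = min(ack["rate"], receiver_capacity)    # cap at processing capacity
    return ack
```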

  12. PDQ Switch • Maintains per-flow state on each link (the variables carried in the scheduling header) • Only stores the most critical flows • Uses RCP for less critical flows on the leftover bandwidth • RCP does not require per-flow state • This partially shifts from optimizing completion time towards traditional fair sharing
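"Only store the most critical flows" amounts to keeping a bounded top-k set; a minimal sketch, assuming criticality is earliest deadline (flow records and field names are illustrative):

```python
import heapq

def most_critical(flow_states, k):
    """Keep per-flow state only for the k most critical flows, here taken
    to be those with the smallest deadlines. Remaining flows would fall
    back to RCP-style fair sharing, which needs no per-flow state."""
    return heapq.nsmallest(k, flow_states, key=lambda f: f["deadline"])
```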

  13. PDQ Switch (cont.) • Decides whether to accept or pause each flow • A flow is accepted if all switches along the path accept it • A flow is paused if any switch pauses it • Flow acceptance: • On the forward path, the switch computes the available bandwidth based on the flow's criticality and updates the rate and pauseby fields • On the reverse path, if a switch sees an empty pauseby field in the header, it records the flow as accepted in its state
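The path-wide accept/pause rule can be sketched by running the header through each switch in turn: every switch caps the rate at its available bandwidth, and the first switch with nothing to give stamps itself into the pauseby field. An empty pauseby after the full traversal means every switch accepted. Names and structure are illustrative.

```python
def forward_pass(rate, pauseby, switch_id, available_bw):
    """One switch's forward-path handling: grant at most the available
    bandwidth; pause the flow (stamp pauseby) if nothing is available."""
    if pauseby is not None:
        return 0.0, pauseby          # already paused upstream; pass through
    granted = min(rate, available_bw)
    if granted <= 0:
        return 0.0, switch_id        # this switch pauses the flow
    return granted, None

def path_decision(requested_rate, switches):
    """Run the header along the path (switches: [(id, available_bw), ...]).
    Returns (granted rate, pauseby); pauseby of None means accepted."""
    rate, pauseby = requested_rate, None
    for sid, bw in switches:
        rate, pauseby = forward_pass(rate, pauseby, sid, bw)
    return rate, pauseby
```

This mirrors the slide's rule: acceptance requires every switch to accept, while a single pause anywhere on the path overrides.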

  14. Several Optimizations • Early start • Provides seamless flow switching • Early termination • Terminates flows that are unable to meet their deadlines • Dampening • Avoids frequent flow switching • Suppressed probing • Limits bandwidth consumed by paused senders' probes
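Of these, early termination reduces to a simple feasibility test: even at its maximal rate, can the flow still finish before its deadline? A minimal sketch (parameter names are illustrative):

```python
def should_terminate(remaining_bytes, max_rate, time_to_deadline):
    """Early termination check (slide 14): give up on a flow whose best-case
    finish time already exceeds its remaining time to deadline."""
    return remaining_bytes / max_rate > time_to_deadline
```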

  15. Evaluation

  16. Evaluation (cont.)

  17. Evaluation (cont.)

  18. Evaluation (cont.)

  19. Evaluation (cont.)

  20. Evaluation (cont.)

  21. Conclusion • PDQ completes flows quickly and meets flow deadlines • PDQ provides a distributed algorithm that approximates a range of scheduling disciplines • PDQ shows significant advantages over existing schemes in extensive packet-level and flow-level simulations

  22. References
