
Packet Scheduling (The rest of the dueling bandwidth story)


Presentation Transcript


  1. Packet Scheduling (The rest of the dueling bandwidth story)

  2. Lab 9: Configuring a Linux Router • Set NICs in 10 Mbps full-duplex mode • Enable IPv4 forwarding • Manually configure routing tables • Install tcp_sink and udp_sink • Generate traffic from tcp_gen and udp_gen • TCP/UDP traffic flow measurements
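A minimal sketch of the corresponding commands (interface names, addresses, and subnets below are assumptions for illustration; the lab handout gives the actual values):
• ethtool -s eth0 speed 10 duplex full autoneg off (force the first NIC into 10 Mbps full-duplex mode)
• ethtool -s eth1 speed 10 duplex full autoneg off (same for the second NIC)
• echo 1 > /proc/sys/net/ipv4/ip_forward (enable IPv4 forwarding on the router)
• route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.1.1 (example static route on a traffic-generating host, pointing the far subnet at the router)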

  3. Lab 9 Results • What is the major issue? • What impact did TCP’s flow control have? • What impact did UDP’s flow control (or lack thereof) have? • What implications does this have for today’s Internet?

  4. Lab 9 (first part): Conclusions • TCP’s flow control mechanisms back off in the presence of UDP congestion • UDP’s lack of flow control mechanisms can cause link starvation for TCP flows • TCP application performance (e-mail, web, FTP) can be degraded significantly by UDP traffic on the same shared link

  5. Lab 9 (first part): Conclusions (cont.) • UDP is the preferred protocol for most multimedia applications. Why? • Future challenge for the Internet community: • Will multimedia applications on the Internet impair the performance of mainstay TCP applications? • How can the industry manage this new Internet traffic without stifling the growth of new applications?

  6. Lab 9 (Second part): Strict Priority Scheduling • Our first attempt to solve the problem of TCP and UDP interaction: Priority Scheduling • Modify the Linux source code • Implemented a strict priority scheduler • Priority based on the layer 4 protocol • Give TCP priority over UDP, which has no flow control • Generate traffic from tcp_gen and udp_gen • TCP/UDP traffic flow measurements
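The lab implemented the strict priority scheduler by modifying the kernel source; as a rough sketch only, a comparable strict-priority setup can also be built from user space with the stock prio qdisc and u32 filters (interface name and band assignment here are assumptions, not the lab's method):
• tc qdisc add dev eth1 root handle 1: prio bands 3 (three priority bands; band 1:1 is always served first)
• tc filter add dev eth1 parent 1:0 protocol ip u32 match ip protocol 6 0xff flowid 1:1 (protocol 6 = TCP, sent to the highest-priority band)
• tc filter add dev eth1 parent 1:0 protocol ip u32 match ip protocol 17 0xff flowid 1:2 (protocol 17 = UDP, sent to a lower-priority band)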

  7. Lab 9 (Second part): Conclusions • TCP’s flow control mechanism is “greedy,” but “timid.” • Strict priority scheduling removes the “timid” aspects. TCP greedily consumes all available bandwidth. • We have not solved the problem. We have just shifted it from UDP to TCP.

  8. The “Real” Solution: Fair Scheduling

  9. Introduction • What is scheduling? • Advantages of scheduling • Scheduling “wish list” • Scheduling Policies • Generalized Processor Sharing (GPS) • Packetized GPS Algorithms • Stochastic Fair Queuing (SFQ) and Class Based Queuing (CBQ)

  10. Motivation for Scheduling • TCP application performance degraded significantly by UDP traffic on the same shared link • Different versions of TCP may not co-exist fairly (ex: TCP Reno vs. TCP Vegas) • Quality of Service (QoS) requirements for next generation Internet • Most important: Finishes the story about TCP and UDP traffic mixtures (email and web versus video teleconferencing and Voice over IP)

  11. What is Scheduling? • Sharing of bandwidth always results in contention • A scheduling discipline resolves contention: Which packet should be serviced next? • Future networks will need the capability to share resources fairly and provide performance guarantees • Implications for QoS?

  12. Where does scheduling occur? • Anywhere contention may occur • At every layer of the protocol stack • Discussion will focus on MAC/network layer scheduling – at the output queues of switches and routers

  13. Advantages of Scheduling • Differentiation - different users can have different QoS over the same network • Performance Isolation - behavior of each flow or class is independent of all other network traffic • QoS Resource Allocation - with respect to bandwidth, delay, and loss characteristics • Fair Resource Allocation - includes both short and long term fairness

  14. Scheduling “Wish List” • An ideal scheduling discipline… • Is amenable to high speed implementation • Achieves (weighted) fairness • Supports multiple QoS classes • Provides performance bounds • Allows easy admission control decisions • Does such an algorithm exist that can satisfy all these requirements?

  15. Requirement 1: High Speed Implementation • Scheduler must make a decision once every few microseconds • Should be implementable in hardware. Critical constraint: VLSI area available. • Should be scalable and efficient in software. Critical constraint: Order of growth per flow or class.

  16. Requirement 2: Fairness • Scheduling discipline allocates a scarce resource • Fairness is defined both on a short term and long term basis • Fairness is evaluated according to the max-min criterion

  17. Max-Min Fairness Criteria • Each connection gets no more bandwidth than what it needs • Excess bandwidth, if any, is shared equally • Example: Generalized Processor Sharing (GPS) scheduler managing three flows with equal priority
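A small worked example (numbers are illustrative, not taken from the lab): three flows share a 10 Mb/s link and demand 2, 4, and 10 Mb/s. An equal split would give each flow about 3.3 Mb/s, but flow 1 only needs 2 Mb/s, so its unused share is redistributed: flows 2 and 3 each receive (10 - 2) / 2 = 4 Mb/s. The max-min fair allocation is therefore 2, 4, and 4 Mb/s: no flow gets more than it asked for, and the leftover bandwidth is shared equally.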

  18. Benefits of Fairness • Fair schedulers provide protection • Bandwidth gobbling applications are kept in check • Automatic isolation of heavy traffic flows • Fairness is a global (Internet level) objective, while scheduling is local (router or switch level) • Global fairness guarantees are beyond the scope of the course (go to grad school :>)

  19. Scheduling Policies 1) First Come First Serve (FCFS) • Packets queued in FCFS order • No fairness • Most widely adopted scheme in today’s Internet 2) Strict Priority • Multiple queues with different priorities • Packets in a given queue are served only when all higher priority queues are empty 3) Generalized Processor Sharing (GPS)

  20. Generalized Processor Sharing • Idealized fair queuing approach based on a fluid model of network traffic • Divides the link of bandwidth B into a discrete number of channels • Each channel has bandwidth bi where: B = b1 + b2 + b3 + … • Extremely simple in concept • Impossible to implement in practice. Why?

  21. Shortcomings of GPS • Reason 1: Inaccurate traffic model • Underlying model of networks is fluid-based or continuous • Actual network traffic consists of discrete units (packets) • Impossible to divide link indefinitely

  22. Shortcomings of GPS • Reason 2: Transmission is serial • GPS depicts a parallel division of link usage • Actual networks transmit bits serially • “Sending more bits” really means increasing the transmission rate

  23. Packetized GPS • Packetized version of GPS • Attempts to approximate the behavior of GPS as closely as possible • All schemes hereafter fall under this category

  24. Packetized GPS Algorithms • Weighted Fair Queuing (WFQ) • Weighted Round Robin (WRR) • Deficit Round Robin (DRR) • Stochastic Fair Queuing (SFQ) • Class Based Queuing (CBQ) • Many, many others…

  25. Weighted Fair Queuing • Computes the finish time of each packet under GPS • Packets are tagged with their finish times • The packet with the smallest finish time across queues is serviced first • Not scalable due to the overhead of computing the ideal GPS schedule
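In standard WFQ notation (assumed here, not taken from the slides), the finish tag of the k-th packet of flow i, with length L and weight w, is F(i,k) = max(F(i,k-1), V(a)) + L / w, where V(a) is the GPS virtual time at the packet's arrival. The scheduler always transmits the queued packet with the smallest tag; maintaining V(a), the simulated GPS clock, is the expensive part that limits scalability.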

  26. WFQ: An Example • Three flows A, B, and C, read left to right • Assume all packets are the same size • Example weights: A=1, B=2, C=3 • Divide each packet’s finish time by its flow’s weight • The result is a weighted fair share of service • (Figure: packet transmission order under WFQ)

  27. Weighted Round Robin • Simplest approximation of GPS • Queues serviced in round robin fashion, proportional to assigned weights • Max-min fair over long time scales • May cause short term unfairness • A fixed Tx schedule (flows A, B, C): C C C B B A A B C

  28. Deficit Round Robin (DRR) • Handles varying size packets • Each queue begins with zero credits or quanta • A flow transmits a packet only when it has accumulated enough quanta; the used quanta are then subtracted • A queue not served during a round accumulates a weighted number of quanta • The use of quanta permits DRR to fairly serve packets of varying size
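A worked example with illustrative numbers: suppose a queue's quantum is 500 bytes and a 1200-byte packet sits at its head. After one round the queue's credit is 500 bytes (too little to send), after two rounds 1000 bytes, and after three rounds 1500 bytes, so the packet is transmitted and the remaining 300 bytes of credit carry over to the next round. This bookkeeping is what keeps DRR fair even when packet sizes differ.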

  29. Stochastic Fair Queuing* • Traffic divided into a large number of FIFO queues serviced in a round robin fashion • Uses a “stochastic” rather than fixed allocation of flows to queues: a hashing algorithm decides which queue a flow is placed in • Prevents unfair bandwidth usage by any one flow • Frequent recalculation of the hash is necessary to ensure fairness • Extremely simple to configure in Linux

  30. Class Based Queuing* • A framework for organizing hierarchical link sharing • Link divided into different traffic classes • Each class can have its own scheduling algorithm, providing enormous flexibility • Classes can borrow spare capacity from a parent class • Most difficult scheduling discipline to configure in Linux

  31. CBQ: An Example

  32. Some results from a previous semester’s final lab • Covered SFQ and CBQ • Same experimental setup as Lab 9 • SFQ and CBQ are already built into version 2.4.7-10 and higher of the Linux kernel • No modification of the source code required • Repeat TCP and UDP traffic measurements to determine the impact of each scheduling discipline

  33. Overview (cont.) 1) Do TCP and UDP flows share the link fairly in the experiment? 2) What are the relative advantages and disadvantages of SFQ vs. CBQ? How does each one meet the 5 requirements of the scheduling “wish list”?

  34. Overview (cont.) 3) Are these scheduling disciplines scalable to the complexity required to handle real Internet traffic? 4) How can these scheduling algorithms be used to provide QoS guarantees in tomorrow’s Internet? What might this architecture look like?

  35. How we turned on SFQ • cd /usr/src/linux-2.4.18-14 • make oldconfig • This command will save all of the options that are currently built into the kernel to a file (.config). This allows you to keep the current options you have selected and add to them, rather than erase the options you have previously turned on. • cp .config /root (y to overwrite) • make clean • make mrproper • make xconfig • Click “Load Configuration from file”; in Enter filename, type /root/.config • We need to turn on several options. • In the main menu, select Networking Options. Scroll down and select QoS and/or Fair Queuing. Select <y> for every option in this menu. This will enable every available queuing discipline that is built into the Linux Kernel. Click on OK. Click Main Menu. Click Save and Exit. Click OK. • make dep • make bzImage • Completed the remaining steps we did in lab 9 to compile the kernel
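Collected in one place, the command sequence from this slide is roughly as follows (the kernel source path is the one used in the lab; the xconfig menu selections are described above):
• cd /usr/src/linux-2.4.18-14
• make oldconfig (refresh .config with the options currently built into the kernel)
• cp .config /root (answer y to overwrite; keeps a copy that survives make mrproper)
• make clean
• make mrproper
• make xconfig (load /root/.config, then enable every option under Networking Options -> QoS and/or Fair Queuing)
• make dep
• make bzImage (then finish with the same kernel install steps as in Lab 9)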

  36. How we turned on fair queuing • Open an xterm window and type: • tc qdisc add dev eth1 root sfq perturb 5 • This line enables SFQ and installs it on the interface eth1, which is connected to your destination. The command tc sets up traffic control in the router. The word qdisc stands for queuing discipline. The value perturb 5 indicates that the hashing scheme used by SFQ is reconfigured once every 5 seconds. In general, the smaller the perturb value, the better the division of bandwidth between TCP and UDP. To change the perturb value to a different value (e.g., 6), type the following: • tc qdisc del dev eth1 root sfq perturb 5 • then • tc qdisc add dev eth1 root sfq perturb 6 • Now, type the following command: • tc -s -d qdisc ls • This should return a string of text similar to the following: • qdisc sfq 800c: dev eth1 quantum 1514b limit 128p flows 128/1024 perturb 5sec • Sent 4812 bytes 62 pkts (dropped 0, overlimits 0) • The number 800c is the automatically assigned handle number. Limit means that up to 128 packets can wait in this queue. There are 1024 hash buckets available for accounting, of which 128 can be active at a time (no more than 128 packets can be queued!). Once every 5 seconds, the hashes are reconfigured.

  37. Stochastic Fair Queuing • Enabled SFQ and set the perturb value to 5, which means the hashing scheme used by SFQ is reconfigured once every 5 seconds

  38. Measured Results

  39. How we turned on CBQ • tc qdisc add dev eth1 root handle 1: cbq bandwidth 10Mbit allot 1514 cell 8 avpkt 1024 mpu 64 • This line enables CBQ and installs it on the interface eth1, which is connected to your destination. • The command tc sets up anything related to the traffic controller in a router. • The word qdisc stands for queuing discipline. • Generally, the classes in CBQ are constructed into a tree structure, starting from the root and its direct descendants. A descendant is a parent if it has its own direct descendants. Each parent can originate a CBQ with a certain amount of bandwidth available for its direct descendants. Each descendant class is identified by a class identification with the syntax handle x. In this case, the root handle 1:0 means that this CBQ is located at the root, and the classids of the direct descendant classes of the root have the form 1:x (e.g., 1:1, 1:2, 1:3). • The bandwidth 10Mbit is the maximum available bandwidth for this CBQ. • allot is a parameter that is used by the link sharing scheduler. • A cell value of 8 indicates that packet transmission time will be measured in units of 8 bytes. • mpu represents the minimum number of bytes counted for a packet. Packets smaller than mpu are counted as mpu bytes; mpu is usually set to 64 because the minimum packet size on Ethernet-like interfaces is 64 bytes.

  40. How we turned on CBQ • tc class add dev eth1 parent 1:0 classid 1:1 cbq bandwidth 10Mbit rate 10Mbit allot 1514 cell 8 avpkt 1024 mpu 64 maxburst 40 • tc class add dev eth1 parent 1:1 classid 1:2 cbq bandwidth 10Mbit rate 5Mbit allot 1514 cell 8 avpkt 1024 mpu 64 maxburst 40 • tc class add dev eth1 parent 1:1 classid 1:3 cbq bandwidth 10Mbit rate 5Mbit allot 1514 cell 8 avpkt 1024 mpu 64 maxburst 40 • First, we define a direct descendant class of 1:0, whose classid is 1:1. Then, we define two direct descendant classes of 1:1, whose classids are 1:2 (for TCP traffic) and 1:3 (for UDP traffic). • tc class add is the command used to define a class. • parent defines the parent class. • bandwidth 10Mbit represents the maximum bandwidth available to the class. • rate 5Mbit is the bandwidth guaranteed for the class. • For each class, we enable the “bandwidth borrowing” option, in which a descendant class is allowed to borrow available bandwidth from its parent. • In CBQ, a class can send at most maxburst back-to-back packets, so the rate of a class is proportional to maxburst: rate = packetsize * maxburst * 8 / (kernel clock speed)

  41. How we turned on CBQ • Type the following commands: • tc filter add dev eth1 parent 1:0 protocol ip u32 match ip protocol 6 0xff flowid 1:2 • tc filter add dev eth1 parent 1:0 protocol ip u32 match ip protocol 17 0xff flowid 1:3 • tc filter add is a command that installs a filter for IP packets passing through a device. • flowid represents the classid with which the filter is associated. If the IP protocol number in the IP header of a packet is equal to 6 (TCP), the packet belongs to class 1:2. If the IP protocol number in the IP header of a packet is equal to 17 (UDP), the packet belongs to class 1:3.
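To verify that packets are really being assigned to the intended classes, the per-class counters can be inspected (standard tc usage; the byte and packet counts will of course differ from run to run):
• tc -s class show dev eth1 (prints statistics, including bytes and packets sent, for classes 1:1, 1:2, and 1:3)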

  42. Class Based Queuing • Can define separate classes for different applications and then treat them equally (or unequally if desired) • Here CBQ was enabled with each class assigned a 5 Mb/s rate

  43. Measured Results

  44. Conclusions • Class Based Queuing allocates bandwidth better than any other approach we have used, including SFQ. • Neither type of traffic gets more than 5 Mb/s (unless there is no traffic in the other class, in which case more than 5 Mb/s will be allowed).
