
  1. COM 360

  2. Chapter 6 Congestion Control and Resource Allocation

  3. Allocating Resources • How do we effectively allocate resources among a collection of competing users? • These resources include the bandwidth of the links and the buffers on the routers or switches where packets are queued awaiting transmission. • Packets contend at a router for the use of a link. • When too many packets are queued waiting for the same link, the queue overflows and packets are dropped. When this happens often, the network is said to be congested. • Most networks provide congestion-control mechanisms.

  4. Allocating Resources • Congestion control and resource allocation are two sides of the same coin: • If a network actively allocates resources, for example by scheduling a virtual circuit, then congestion may be avoided. • Allocating network resources is difficult because the resources are distributed throughout the network. • On the other hand, hosts can send as much data as they want and the network can recover from congestion if it occurs. This is the easier approach, but it can be disruptive. • Thus congestion control and resource allocation involve both hosts and network elements, such as routers, as well as queuing algorithms.

  5. Issues in Resource Allocation • Resource allocation is complex and is partially implemented in routers or switches and partially in the transport protocol running on the end hosts. • End systems use signaling protocols to convey their resource requirements to network nodes, which reply with information about availability.

  6. Terminology • Resource allocation is the process by which network elements try to meet the competing demands that applications have for network resources. • Congestion control describes the effort network nodes make to respond to overload conditions. • Flow control involves keeping a fast sender from overflowing a slow receiver. • Congestion control is intended to keep many senders from sending too much data into the network when resources are lacking at some point.

  7. Network Model • Packet-switched network. • The problem is the same for routers or switches on a network or an internet. • The source may have sufficient capacity to send a packet on its outgoing link, but an intermediate link may have heavy traffic. • For example, two high-speed links may feed into one low-speed link, as seen in the next diagram.

  8. Congestion in a Packet Switched Network

  9. Congestion Control • Congestion control is not the same as routing, and routing around a congested link does not always solve the problem. • In the previous example, it is not possible to route around the router and this congested router is referred to as a bottleneck.

  10. Connectionless Flows • In the Internet model, IP provides a connectionless datagram delivery service and TCP implements an end-to-end connection abstraction. • Datagrams are switched independently, but usually a stream between a particular pair of hosts flows through a particular set of routers. • The idea of a flow – a sequence of packets following the same route – is an important abstraction in congestion control.

  11. Connectionless Flows • Flows can be defined host-to-host or process-to-process. • A flow is similar to a channel, but a flow is visible to routers inside the network, whereas a channel is an end-to-end abstraction. • A flow can be implicitly defined or explicitly established, like a connection.

  12. Multiple Flows Multiple flows passing through a set of routers.

  13. Taxonomy • Resource allocation mechanisms can be characterized as: • Router-Centric versus Host-Centric • Reservation-Based versus Feedback-Based • Window-Based versus Rate-Based

  14. Router-Centric vs. Host-Centric • In a router-centric design, each router takes responsibility for deciding when packets are forwarded, selecting which packets are dropped, and informing the hosts that generate the traffic how many packets they are allowed to send. • In a host-centric design, the end hosts observe network conditions and adjust their behavior accordingly. • These two approaches are not mutually exclusive.

  15. Reservation-Based versus Feedback-Based • Resource allocation mechanisms are sometimes classified according to whether they use reservations or feedback. • In a reservation-based system, the end host asks the network for a certain capacity at the time a flow is established. The router allocates enough resources, or rejects the flow. • In a feedback-based system, the end hosts begin sending data and adjust their sending rate according to the feedback they receive.

  16. Window-Based versus Rate-Based • Both flow-control and resource allocation mechanisms need a way to express to the sender how much data it can transmit. They do this with a window or a rate. • In a window-based transport, such as TCP, the receiver advertises a window to the sender. This limits how much data can be sent – a form of flow control. • A rate can also be used to control the sender's behavior. The receiver says it can process a certain number of bits per second, and the sender adheres to this rate.
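
The window-based limit described above can be sketched as a small function (a minimal illustration; the variable names `last_byte_sent` and `last_byte_acked` are assumptions for this sketch, not TCP's actual implementation):

```python
def usable_window(advertised_window, last_byte_sent, last_byte_acked):
    """Bytes the sender may still transmit under window-based flow control.

    The receiver's advertised window caps the amount of unacknowledged
    data allowed in flight; subtracting the data already outstanding
    gives the usable window.
    """
    in_flight = last_byte_sent - last_byte_acked
    return max(0, advertised_window - in_flight)

# Receiver advertised 8192 bytes; 5000 bytes are in flight,
# so 3192 more bytes may be sent.
print(usable_window(8192, 15000, 10000))
```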

  17. Evaluation Criteria • How does a network effectively and fairly allocate its resources? • These are the two criteria by which we can evaluate whether a resource allocation mechanism is a good one or not.

  18. Effective Resource Allocation • Consider the two principal network metrics: throughput and delay (latency). • It may appear that increasing throughput means reducing delay, but that is not always the case. • One way to increase throughput is to allow as many packets into the network as possible, driving utilization up toward 100%. • But increasing the number of packets increases the length of the queues, which means packets are delayed longer in the network.

  19. Power of a Network • The power of the network describes this relationship between throughput and delay: • Power = Throughput / Delay • This is based on an M/M/1 queue (one server and Markovian distributions of packet arrival and service). • This assumes infinite queues, but real networks have finite buffers and occasionally drop packets. • The objective is to maximize this ratio, which is a function of the load on the network. • Ideally, the resource allocation mechanism operates at the peak of this curve.
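
For an M/M/1 queue, average delay is 1 / (service_rate − arrival_rate), so power works out to arrival_rate × (service_rate − arrival_rate), which peaks at half the service rate. A minimal sketch under those standard queueing assumptions:

```python
def mm1_power(arrival_rate, service_rate):
    """Power = throughput / delay for an M/M/1 queue.

    Throughput is the offered load (the arrival rate, while the queue
    is stable) and average delay is 1 / (service_rate - arrival_rate).
    """
    if arrival_rate >= service_rate:
        return 0.0  # unstable: the queue (and delay) grows without bound
    delay = 1.0 / (service_rate - arrival_rate)
    return arrival_rate / delay

# Power peaks when the load is half the service rate:
for load in (3, 5, 7):
    print(load, mm1_power(load, 10))
```

Running this shows power rising to its maximum at a load of 5 (half of the service rate 10) and falling again beyond it, matching the curve on the next slide.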

  20. Power Curve

  21. Effective Resource Allocation • Ideally, we want to avoid the throughput going to zero because the system is thrashing. • We want a system that is stable, where packets continue to get through the network even when the network is operating under heavy load. • If the mechanism is not stable, the network may experience congestion collapse.

  22. Fair Resource Allocation • Fairness presumes that a fair or equal share of the bandwidth is allocated to each flow. • Raj Jain has proposed a metric to quantify the fairness of a congestion-control mechanism. • (See formula p. 461) • Should we consider the length of the paths being compared? • What is fair when one four-hop flow is compared with three one-hop flows?
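
Jain's fairness index for flow throughputs x1…xn is (Σxi)² / (n · Σxi²). It is 1 when all flows receive equal throughput and approaches 1/n when one flow gets everything. A small sketch:

```python
def jain_fairness(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).

    Returns a value in (0, 1]; 1.0 means perfectly equal shares.
    """
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(jain_fairness([1, 1, 1, 1]))  # equal shares
print(jain_fairness([4, 0, 0, 0]))  # one flow takes everything
```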

  23. Fairness One four-hop flow competing with three one-hop flows.

  24. Queuing Disciplines • Each router must implement some queuing algorithm that governs how packets are buffered while waiting to be transmitted. • The queuing algorithm allocates both bandwidth (which packets get transmitted) and buffer space (which packets get discarded). • It also directly affects delay or latency by determining how long a packet waits to be transmitted. • Two common queuing algorithms are FIFO and Fair Queuing (FQ).

  25. FIFO • FIFO – first in, first out – the first packet into the router is the first to be transmitted. • Since the amount of buffer space is finite, if a packet arrives and the buffer is full, the router discards it. • This is sometimes called tail drop, since packets that arrive at the tail end of the FIFO are dropped.
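
FIFO with tail drop can be sketched in a few lines (a minimal illustration with an abstract packet type and a capacity counted in packets rather than bytes):

```python
from collections import deque

class FIFOQueue:
    """FIFO queue with tail drop: arrivals to a full buffer are discarded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            return False  # buffer full: tail drop
        self.buffer.append(packet)
        return True

    def dequeue(self):
        # Transmit the packet that has waited longest, if any.
        return self.buffer.popleft() if self.buffer else None
```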

  26. FIFO a) FIFO queuing b) tail drop at a FIFO queue

  27. FIFO and Priority • FIFO is the simplest algorithm and is the most widely used in Internet routers today. • A simple variation is a priority queue. The idea is to mark each packet with a priority (in the IP Type of Service (TOS) field). • The routers implement multiple FIFO queues, one for each priority class, and transmit from the highest-priority nonempty queue first. • This can cause starvation, when low-priority packets never get serviced. • It is often used to give routing update packets the highest priority.

  28. Fair Queuing (FQ) • Fair queuing maintains a separate queue for each flow currently being handled by the router. • The router services those queues in round-robin order, giving each nonempty queue a turn. • Since the traffic sources do not know the state of the router, this must still be used in conjunction with a congestion-control mechanism.

  29. Fair Queuing (FQ) A separate queue is maintained for each flow.

  30. Fair Queuing Example • Packets with earlier finishing times are sent first. • Sending of a packet already in progress is completed. In (a), the algorithm selects both packets from flow 1 to be transmitted, because of their earlier finishing times. In (b), the router has already begun to send a packet from flow 2 when the packet from flow 1 arrives. [Figure: flows 1 and 2 feeding an output port; finish times F = 8 and F = 10 in (a); F = 2, F = 5, and F = 10 in (b).]
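
The finish times in the example come from simulating bit-by-bit round robin: each packet's finish number is F_i = max(F_{i-1}, A_i) + P_i, where F_{i-1} is the previous packet's finish number on the same flow, A_i the arrival time, and P_i the packet length. A sketch with made-up packet sizes:

```python
def finish_time(prev_finish, arrival, length):
    """Finish number for fair queuing: F_i = max(F_{i-1}, A_i) + P_i."""
    return max(prev_finish, arrival) + length

# Each tuple is (flow, arrival_time, length); values are illustrative.
packets = [("f1", 0, 100), ("f1", 0, 100), ("f2", 0, 150)]

last_finish = {}   # most recent finish number per flow
tagged = []
for flow, arrival, length in packets:
    f = finish_time(last_finish.get(flow, 0), arrival, length)
    last_finish[flow] = f
    tagged.append((f, flow))

# Transmit in increasing finish-number order.
order = [flow for f, flow in sorted(tagged)]
print(order)
```

Here flow f1's packets get finish numbers 100 and 200 while f2's single packet gets 150, so the transmission order interleaves the flows rather than letting f1 send back-to-back.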

  31. TCP Congestion Control • TCP sends packets into the network without a reservation and then reacts to observable events that occur. • TCP assumes FIFO queues, but also works with FQ. • TCP is said to be self-clocking, since it uses the ACKs to pace the transmission of packets. • It also maintains variables such as CongestionWindow and MaxWindow and increases and decreases the window size.

  32. Additive Increase – one packet is added to the window during each RTT. [Figure: packets in transit between source and destination.]

  33. TCP Sawtooth Pattern Typical TCP sawtooth pattern of continually increasing and decreasing the window as a function of time: the window grows additively and is cut back sharply when loss is detected, rather than changing by one packet at a time as in pure additive increase.
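
The sawtooth comes from additive increase / multiplicative decrease (AIMD): each ACK grows the window by roughly one segment per RTT, and each timeout halves it. A minimal sketch in bytes (the MSS value is an assumption for illustration):

```python
MSS = 1460  # maximum segment size in bytes (assumed for this sketch)

def on_ack(cwnd):
    """Additive increase: add about one MSS per RTT, applied per ACK.

    Each ACK grows the window by MSS * (MSS / cwnd), so after a full
    window of ACKs the window has grown by roughly one MSS.
    """
    return cwnd + MSS * MSS / cwnd

def on_timeout(cwnd):
    """Multiplicative decrease: halve the window, but never below one MSS."""
    return max(MSS, cwnd / 2)
```

Alternating runs of `on_ack` with occasional `on_timeout` calls trace out exactly the sawtooth shown in the figure.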

  34. Slow Start TCP provides another mechanism, used to increase the congestion window rapidly from a cold start: the source sends one packet, then two, then four, and so on, doubling the window each RTT. [Figure: packets in transit between source and destination.]
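
Because the window doubles each round trip during slow start, its size after a given number of RTTs is easy to compute (a sketch in units of packets):

```python
def slow_start_window(initial_cwnd, rtts):
    """Window (in packets) after `rtts` round trips of slow start.

    The window doubles every RTT, since each ACK adds one packet
    and a full window of ACKs arrives per round trip.
    """
    return initial_cwnd * (2 ** rtts)

# Starting from 1 packet: 1, 2, 4, 8, ... per RTT.
print([slow_start_window(1, r) for r in range(4)])
```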

  35. Behavior of TCP Congestion Control • Blue line: the value of CongestionWindow over time. • Bullets at top: timeouts. • Hash marks at top: times when each packet is transmitted. • Vertical bars: times when a packet that is eventually retransmitted was first transmitted.

  36. Fast Retransmit and Fast Recovery • Fast retransmit was added to TCP to trigger a retransmit sooner than the regular timeout mechanism. • When a data packet is received, the receiver sends an ACK. When a packet arrives out of order, it cannot be acknowledged, because the earlier packet has not yet arrived, so TCP sends the same ACK it sent last time – a duplicate ACK. • When the sender receives duplicate ACKs, it knows that a packet is missing and retransmits it. • TCP waits for three duplicate ACKs before retransmitting.
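
The sender-side duplicate-ACK counting described above can be sketched as follows (a minimal illustration; real TCP also updates the congestion window during fast recovery):

```python
DUP_ACK_THRESHOLD = 3  # duplicates needed before fast retransmit fires

class FastRetransmit:
    """Count duplicate ACKs and signal a retransmit on the third."""

    def __init__(self):
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_no):
        if ack_no == self.last_ack:
            self.dup_count += 1
            if self.dup_count == DUP_ACK_THRESHOLD:
                return "retransmit"  # resend the segment after ack_no
        else:
            # New cumulative ACK: reset the duplicate counter.
            self.last_ack = ack_no
            self.dup_count = 0
        return None
```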

  37. Fast Retransmit Fast retransmit based on duplicate ACKs

  38. TCP with Fast Retransmit • Blue line: the value of CongestionWindow over time. • Bullets at top: timeouts. • Hash marks at top: times when each packet is transmitted. • Vertical bars: times when a packet that is eventually retransmitted was first transmitted.

  39. Congestion Avoidance Mechanisms • TCP's strategy is to control congestion once it happens, as opposed to avoiding congestion in the first place. • TCP repeatedly increases the load on the network to find the point at which congestion occurs, then backs off from this point. (It finds the available bandwidth.) • An alternative is to predict when congestion is about to happen and to reduce the rate at which hosts send packets just before packets start being discarded – this is congestion avoidance.

  40. Congestion Avoidance Mechanisms • Three different avoidance mechanisms put additional functionality into the router to anticipate congestion: • DECbit – splits responsibility between the router and the end nodes. The router sets a bit if the average queue length is at least 1 when a packet arrives. • Random Early Detection (RED) – each router monitors its own queue length and implicitly notifies the source of congestion by dropping packets early. • Source-based congestion avoidance – attempts to avoid congestion from the end nodes by watching for a sign from the network that some router's queue is building up.

  41. Average Queue Length Computing average queue length at router.

  42. Weighted Average Queue Length
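
RED bases its decisions on a weighted running average of the instantaneous queue length, AvgLen = (1 − Weight) × AvgLen + Weight × SampleLen, so that short bursts do not trigger drops. A sketch (the default weight of 0.002 is a commonly cited value, assumed here):

```python
def update_avg_len(avg_len, sample_len, weight=0.002):
    """Exponentially weighted moving average of the queue length.

    AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen.
    A small weight smooths out transient bursts in queue occupancy.
    """
    return (1 - weight) * avg_len + weight * sample_len
```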

  43. RED Thresholds on a FIFO Queue If the average queue length is smaller than the lower threshold, no action is taken. If it is larger than the upper (max) threshold, the packet is dropped. If it is between the two thresholds, the packet is dropped with some probability P.

  44. Drop Probability function for RED
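
The drop probability rises linearly between the two thresholds: TempP = MaxP × (AvgLen − MinThreshold) / (MaxThreshold − MinThreshold), and the final probability P = TempP / (1 − count × TempP) increases with the number of packets queued since the last drop, spreading drops out over time. A sketch (the MaxP default of 0.02 is a commonly cited value, assumed here):

```python
def red_drop_prob(avg_len, min_th, max_th, max_p=0.02, count=0):
    """RED drop probability for an arriving packet.

    Below min_th nothing is dropped; above max_th every packet is
    dropped; in between the probability rises linearly, scaled by
    `count` (packets enqueued since the last drop) so that drops are
    spread out rather than clustered.
    """
    if avg_len < min_th:
        return 0.0
    if avg_len >= max_th:
        return 1.0
    temp_p = max_p * (avg_len - min_th) / (max_th - min_th)
    return temp_p / (1 - count * temp_p)
```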

  45. Source-Based Congestion Avoidance Congestion window vs. observed throughput rate. Top: congestion window; middle: observed throughput; bottom: buffer space taken up at the router.

  46. TCP Vegas Congestion Avoidance Mechanism
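
TCP Vegas compares an expected rate (CongestionWindow / BaseRTT, the rate with no queuing) against the actual measured rate (CongestionWindow / SampleRTT). The difference, scaled by BaseRTT, estimates how many extra packets are sitting in router queues; Vegas tries to keep that number between two thresholds, alpha and beta. A sketch with the window measured in packets (the alpha/beta defaults are illustrative assumptions):

```python
def vegas_adjust(cwnd, base_rtt, sample_rtt, alpha=1.0, beta=3.0):
    """One TCP Vegas window adjustment, in packets.

    Diff = (ExpectedRate - ActualRate) * BaseRTT estimates the number
    of this flow's packets queued in the network.  Vegas increases the
    window when Diff < alpha and decreases it when Diff > beta.
    """
    expected = cwnd / base_rtt      # rate if nothing were queued
    actual = cwnd / sample_rtt      # rate actually observed
    diff = (expected - actual) * base_rtt
    if diff < alpha:
        return cwnd + 1  # linear increase
    if diff > beta:
        return cwnd - 1  # linear decrease
    return cwnd          # within the target band: leave it alone
```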

  47. Quality of Service • Packet-switched networks have promised support for multimedia applications, which combine audio, video, and data. • One obstacle to this has been the need for higher-bandwidth links. • Improvements in coding and the increasing speed of links are bringing this about.

  48. Real-Time Applications • Real-time applications are sensitive to the timeliness of data delivery – they need assurance from the network that the data will arrive on time. • Non-real-time applications use retransmission to be sure data arrives correctly, but this only adds to the delay. • Timely delivery must be provided by the network itself (the routers) and not just the hosts.

  49. Quality of Service • Applications that are happy with best-effort service should also be able to use the new service model that provides timing assurances. • This implies that the network will treat some packets differently. • A network that can provide different levels of service is said to support Quality of Service (QoS).

  50. Application Requirements • Divide applications into real-time and non-real-time, or “traditional data,” applications. • Non-real-time applications (like telnet, FTP, email, web browsing) are also called “elastic,” since they are able to stretch in the face of increased delay. • They do not become unusable with increased delay… (users just become frustrated!)
