
Data Communication and Networks



Presentation Transcript


  1. Data Communication and Networks, Lecture 7: Congestion in Data Networks. October 16, 2003. Joseph Conron, Computer Science Department, New York University. jconron@cs.nyu.edu

  2. Principles of Congestion Control • Congestion, informally: “too many sources sending too much data too fast for network to handle” • different from flow control! • manifestations: lost packets (buffer overflow at routers), long delays (queueing in router buffers) • a top-10 problem!

  3. What Is Congestion? • Congestion occurs when the number of packets being transmitted through the network approaches the packet-handling capacity of the network • Congestion control aims to keep the number of packets below the level at which performance falls off dramatically • A data network is a network of queues • Generally, 80% utilization is critical • Finite queues mean data may be lost
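A short numerical illustration of why queueing delay blows up as utilization climbs toward 100% (this uses the standard M/M/1 mean-delay formula as a stand-in for a router queue; the single-queue model and the function name are our assumptions, not from the slides):

```python
# M/M/1 mean time in system: T = S / (1 - rho), where S is the mean
# service time and rho is the utilization. A simplified single-queue
# model, used here only to show why ~80% utilization is a practical
# ceiling for a network of queues.

def mm1_delay(service_time: float, utilization: float) -> float:
    """Mean time a packet spends in an M/M/1 queue (waiting + service)."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

if __name__ == "__main__":
    for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
        # Delay expressed as a multiple of the service time.
        print(f"utilization {rho:.2f}: delay = {mm1_delay(1.0, rho):5.1f}x")
```

At 50% load a packet sees 2x the service time; at 80% it is already 5x, and at 99% it is about 100x, which is why the slide calls 80% critical.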

  4. Queues at a Node

  5. Effects of Congestion • Arriving packets are stored at input buffers • Routing decision is made • Packet moves to output buffer • Packets queued for output are transmitted as fast as possible • Statistical time division multiplexing • If packets arrive too fast to be routed, or to be output, buffers will fill • Can use flow control • Can discard packets • Can propagate congestion through the network
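The buffer-filling behaviour described above can be seen in a toy discrete-time simulation of a single output queue (the function, its parameters, and the traffic model are all illustrative assumptions, not from the lecture):

```python
import random

def simulate_queue(n_sources: int, p_send: float, buffer_size: int,
                   ticks: int, seed: int = 42) -> tuple[int, int]:
    """One output link with a finite buffer. Each tick, every source
    sends a packet with probability p_send; the link transmits at most
    one queued packet per tick; arrivals beyond buffer_size are dropped.
    Returns (forwarded, dropped)."""
    rng = random.Random(seed)
    queue = forwarded = dropped = 0
    for _ in range(ticks):
        for _ in range(n_sources):
            if rng.random() < p_send:
                if queue < buffer_size:
                    queue += 1
                else:
                    dropped += 1          # finite queue: data is lost
        if queue:
            queue -= 1                    # serve one packet per tick
            forwarded += 1
    return forwarded, dropped

if __name__ == "__main__":
    # Offered load = 4 * 0.3 = 1.2 packets/tick against capacity 1.0:
    # the queue stays full and the excess load is discarded.
    print(simulate_queue(n_sources=4, p_send=0.3, buffer_size=20, ticks=5000))
```

With offered load above the link rate, no buffer size is large enough: the queue saturates and discards begin, exactly the "buffers will fill" case above.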

  6. Interaction of Queues

  7. Ideal Network Utilization

  8. Causes/costs of congestion: scenario 1 • two senders, two receivers • one router, infinite buffers • no retransmission • large delays when congested • maximum achievable throughput

  9. Practical Performance • Ideal assumes infinite buffers and no overhead • Buffers are finite • Overheads occur in exchanging congestion control messages

  10. Effects of Congestion - No Control

  11. Causes/costs of congestion: scenario 2 • one router, finite buffers • sender retransmission of lost packet

  12. Causes/costs of congestion: scenario 2 • always: λin = λout (goodput) • “perfect” retransmission only when loss: λ'in > λout • retransmission of delayed (not lost) packet makes λ'in larger (than the perfect case) for the same λout • “costs” of congestion: more work (retransmissions) for a given “goodput”; unneeded retransmissions: link carries multiple copies of a packet

  13. Causes/costs of congestion: scenario 3 • four senders • multihop paths • timeout/retransmit • Q: what happens as λin and λ'in increase?

  14. Causes/costs of congestion: scenario 3 • Another “cost” of congestion: when a packet is dropped, any “upstream” transmission capacity used for that packet was wasted!

  15. Mechanisms for Congestion Control

  16. Backpressure • If a node becomes congested it can slow down or halt the flow of packets from other nodes • May mean that other nodes have to apply control on incoming packet rates • Propagates back to the source • Can restrict to the logical connections generating the most traffic in a VC system • Backpressure is of limited utility: • Used in connection-oriented networks that allow hop-by-hop congestion control (e.g. X.25) • Not used in ATM

  17. Choke Packet • Control packet • Generated at congested node • Sent to source node • e.g. ICMP source quench • From router or destination • Source cuts back until it receives no more source quench messages • Sent for every discarded packet, or in anticipation of discard • Rather crude mechanism

  18. Implicit Congestion Signaling • Transmission delay may increase with congestion • Packet may be discarded • Source can detect these as implicit indications of congestion • Useful on connectionless (datagram) networks • e.g. IP based • (TCP includes congestion and flow control - see chapter 17) • Used in frame relay LAPF

  19. Explicit Congestion Signaling • Network alerts end systems of increasing congestion • End systems take steps to reduce offered load • Backwards • Congestion avoidance in opposite direction to packet required (node receiving packet must slow sending) • Forwards • Congestion avoidance in same direction as packet required (node receiving packet must tell peer to slow down) • Used in ATM by ABR Service

  20. Explicit Signaling Approaches • Binary • Bit in packet indicates to receiver that congestion exists - reduce send rate. • Credit Based • Sender is given “credit” for number of octets or packets it may send before it must stop and wait for additional credit. • Rate Based • Sender may transmit at a rate up to some limit. • Rate can be reduced by control message.

  21. Traffic Shaping • Smooth out traffic flow and reduce cell clumping • Used in rate based schemes. • Token bucket algorithm.

  22. Token Bucket for Traffic Shaping
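A minimal sketch of the token bucket algorithm from the slides above (the class shape and parameter names are our own): tokens accumulate at a fixed rate up to the bucket capacity, and a packet may be sent only if enough tokens are available, which caps the sustained rate while permitting bounded bursts.

```python
import time

class TokenBucket:
    """Token bucket shaper: `rate` tokens accrue per second up to
    `capacity`; sending a packet of n octets consumes n tokens."""

    def __init__(self, rate: float, capacity: float) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # bucket starts full
        self.last = time.monotonic()

    def allow(self, n: float) -> bool:
        """Return True (and consume n tokens) if an n-octet packet may
        be sent now; otherwise False (the caller queues or drops it)."""
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if n <= self.tokens:
            self.tokens -= n
            return True
        return False
```

A bucket of capacity C and rate r lets at most C + r·t octets through in any interval of length t, which is the smoothing bound that rate-based schemes rely on.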

  23. Approaches towards congestion control • Two broad approaches: • End-end congestion control: no explicit feedback from the network; congestion inferred from end-system observed loss and delay; the approach taken by TCP. This is an IMPLICIT approach. • Network-assisted congestion control: routers provide feedback to end systems, either a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM) or an explicit rate at which the sender should send. This is an EXPLICIT approach.

  24. Case study: ATM ABR congestion control • ABR (available bit rate): “elastic service” • if sender's path “underloaded”: sender should use available bandwidth • if sender's path congested: sender throttled to minimum guaranteed rate • RM (resource management) cells: sent by sender, interspersed with data cells • bits in RM cell set by switches (“network-assisted”) • NI bit: no increase in rate (mild congestion) • CI bit: congestion indication • RM cells returned to sender by receiver, with bits intact

  25. Case study: ATM ABR congestion control (EXPLICIT) • two-byte ER (explicit rate) field in RM cell • a congested switch may lower the ER value in the cell • sender's send rate is thus the minimum supportable rate on the path • EFCI bit in data cells: set to 1 in a congested switch • if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

  26. ABR Cell Rate Feedback Rules • IF CI == 1: reduce ACR to a value >= MCR • ELSE IF NI == 0: increase ACR to a value <= PCR • IF ACR > ER: set ACR = max(ER, MCR) • ACR = Allowed Cell Rate, MCR = Minimum Cell Rate, PCR = Peak Cell Rate, ER = Explicit Rate
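The feedback rules above can be written out as a function. The slide leaves the increase and decrease step sizes open; the RIF/RDF-style factors below (additive increase by RIF × PCR, multiplicative decrease by a fraction RDF of ACR) follow the ATM Forum traffic-management convention and are assumptions here, as is the function name.

```python
def update_acr(acr: float, mcr: float, pcr: float, er: float,
               ci: bool, ni: bool,
               rif: float = 1 / 16, rdf: float = 1 / 16) -> float:
    """Apply the ABR cell-rate feedback rules to the Allowed Cell Rate."""
    if ci:                                  # CI == 1: congestion indicated
        acr = max(mcr, acr - acr * rdf)     # reduce, but never below MCR
    elif not ni:                            # NI == 0: increase permitted
        acr = min(pcr, acr + rif * pcr)     # increase, but never above PCR
    if acr > er:                            # honour the explicit rate
        acr = max(er, mcr)
    return acr
```

For example, starting from ACR = 100 with MCR = 10, PCR = 200, ER = 500: a returned RM cell with CI set reduces ACR to 93.75, while one with CI = 0 and NI = 0 raises it to 112.5.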

  27. Implicit Case Study: TCP Congestion Control • end-end control (no network assistance) • transmission rate limited by congestion window size, Congwin, over segments • w segments, each with MSS bytes, sent in one RTT: throughput = (w × MSS) / RTT bytes/sec
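The throughput formula on this slide is easy to evaluate directly (the function name and the example numbers are ours):

```python
def tcp_throughput(w: int, mss: int, rtt: float) -> float:
    """Steady-state throughput in bytes/sec for a window of w segments
    of MSS bytes each, sent once per round-trip time: (w * MSS) / RTT."""
    return w * mss / rtt

if __name__ == "__main__":
    # 10 segments of 1460 bytes per 250 ms round trip:
    print(tcp_throughput(10, 1460, 0.25), "bytes/sec")  # 58400.0 bytes/sec
```

Note that for a fixed window, throughput falls as RTT grows: the same formula explains why long-delay paths need larger windows to fill the pipe.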

  28. TCP congestion control: • two “phases”: slow start, congestion avoidance • important variables: Congwin; threshold: defines the boundary between the slow start phase and the congestion avoidance phase • “probing” for usable bandwidth: ideally, transmit as fast as possible (Congwin as large as possible) without loss • increase Congwin until loss (congestion) • on loss: decrease Congwin, then begin probing (increasing) again

  29. TCP Slowstart • Slowstart algorithm: initialize: Congwin = 1; for (each segment ACKed) Congwin++; until (loss event OR Congwin > threshold) • exponential increase (per RTT) in window size: one segment, two segments, four segments in successive RTTs (not so slow!) • loss event: timeout (Tahoe TCP) or three duplicate ACKs (Reno TCP) • [Figure: Host A sends one, two, then four segments to Host B in successive RTTs]

  30. TCP Congestion Avoidance • Congestion avoidance: /* slowstart is over */ /* Congwin > threshold */ Until (loss event) { every w segments ACKed: Congwin++ } • on loss event: threshold = Congwin/2; Congwin = 1; perform slowstart¹ • ¹ TCP Reno skips slowstart (fast recovery) after three duplicate ACKs
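The two phases can be traced in a few lines (a simplified Tahoe-style model: pure doubling below threshold, +1 per RTT above it, restart at 1 on loss; the function and its parameters are illustrative):

```python
def tcp_window_trace(threshold, rtts, loss_at=()):
    """Return Congwin at the start of each RTT. Slow start doubles the
    window each RTT while Congwin < threshold; congestion avoidance adds
    1 per RTT; on a loss, threshold = Congwin/2 and Congwin resets to 1
    (Tahoe behaviour: Reno's fast recovery is not modelled)."""
    congwin = 1
    trace = []
    for rtt in range(rtts):
        trace.append(congwin)
        if rtt in loss_at:
            threshold = max(1, congwin // 2)
            congwin = 1                 # Tahoe: back to slow start
        elif congwin < threshold:
            congwin *= 2                # slow start: exponential growth
        else:
            congwin += 1                # congestion avoidance: linear
    return trace

if __name__ == "__main__":
    print(tcp_window_trace(8, 8))               # [1, 2, 4, 8, 9, 10, 11, 12]
    print(tcp_window_trace(8, 8, loss_at={5}))  # [1, 2, 4, 8, 9, 10, 1, 2]
```

The trace makes the sawtooth visible: exponential growth to the threshold, a linear climb, then a collapse to 1 with a halved threshold after each loss.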

  31. AIMD • TCP congestion avoidance: AIMD (Additive Increase, Multiplicative Decrease) • increase window by 1 per RTT • decrease window by a factor of 2 on a loss event

  32. TCP Fairness • Fairness goal: if N TCP sessions share the same bottleneck link, each should get 1/N of the link capacity • [Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R]

  33. Why is TCP fair? • Two competing sessions: additive increase gives a slope of 1 as throughput increases; multiplicative decrease reduces throughput proportionally • loss: decrease window by a factor of 2; congestion avoidance: additive increase • [Figure: Connection 1 throughput vs. Connection 2 throughput oscillating along the equal-bandwidth-share line toward a fair split of R]
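The convergence argument on this slide can be checked numerically: with synchronized losses, additive increase preserves the gap between two flows while each multiplicative decrease halves it, so the flows drift toward an equal share (the model and the names below are our simplification):

```python
def aimd_fairness(w1: float, w2: float, capacity: float,
                  rounds: int) -> tuple[float, float]:
    """Two AIMD flows on one bottleneck: each adds 1 per round; when
    their combined rate exceeds capacity, both halve (a synchronized
    loss event). Returns the final (w1, w2)."""
    for _ in range(rounds):
        w1 += 1.0                       # additive increase (both flows)
        w2 += 1.0
        if w1 + w2 > capacity:          # bottleneck overflows: both lose
            w1 /= 2.0                   # multiplicative decrease
            w2 /= 2.0
    return w1, w2

if __name__ == "__main__":
    # Start very unfair (10 vs 90 on a capacity-100 link):
    print(aimd_fairness(10.0, 90.0, 100.0, 200))
```

Starting from a 10/90 split, the gap of 80 is halved on every loss event, so after a couple of hundred rounds the two flows are within a fraction of a unit of each other.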
