
Transport layer








  1. Transport layer UDP/TCP

  2. Transport Protocols Provide End-to-End Data Transfer Services
  • Create a logical communication channel between two communicating parties
  • A user does not need to worry about how to route his/her packets through a network.
  • May provide various transport services:
  • Error control? (packet dropping, duplication, corruption)
  • In-sequence delivery?
  • Preserve message boundary?
  • Out-of-band delivery?
  • Flow control?
  • Congestion control?
  • Examples: UDP and TCP

  3. Using Port Numbers Is Needed to Identify a Communicating Party on a Machine
  • A communicating party may be an application program such as ftp, telnet, or www running on a machine.
  • Using IP addresses only allows us to forward and deliver packets to a machine.
  • To further identify which application program should receive a packet, we need to use port numbers in addition to IP addresses.
  • Example: a web server uses TCP port 80.

  4. (Figure: a communicating party is identified by the combination of an IP address and a port number.)
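To make the (IP address, port number) pairing on slides 3-4 concrete, here is a minimal Python sketch; the port 8080 and the wildcard address are arbitrary illustration values, not from the slides.

```python
# Minimal sketch: a communicating party on a machine is identified by the pair
# (IP address, port number). Port 8080 is an arbitrary illustration value.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 8080))   # the IP address selects the machine/interface,
server.listen(1)                 # the port number selects the application on it
print("server endpoint:", server.getsockname())   # ('0.0.0.0', 8080)

# A web server, for example, conventionally listens on TCP port 80 (slide 3).
server.close()
```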

  5. UDP Protocol
  • Connectionless
  • Before sending a UDP packet, we do not need to set up a connection first.
  • "Got or lost" service: each datagram either arrives or is lost.
  • No error, flow, or congestion control
  • No packet resequencing
  • Preserves message boundaries
  • Primary applications:
  • Multimedia streaming applications such as video conferencing and video playback
  • NFS (because there is no need to set up a connection before sending a request/reply)
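As a small illustration of UDP's connectionless, message-oriented service, the following Python sketch sends two datagrams over the loopback interface; the port 9999 and the payloads are arbitrary choices.

```python
# Minimal UDP sketch: connectionless, and message boundaries are preserved.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# No connection setup: just send datagrams; each is delivered (or lost) as a unit.
sender.sendto(b"frame-1", ("127.0.0.1", 9999))
sender.sendto(b"frame-2", ("127.0.0.1", 9999))

# Each recvfrom() returns exactly one datagram -- the boundary is preserved.
for _ in range(2):
    data, addr = receiver.recvfrom(2048)
    print("got", data, "from", addr)

sender.close()
receiver.close()
```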

  6. TCP Protocol
  • Connection-oriented
  • Before sending a TCP packet, we need to set up a TCP connection first.
  • A TCP connection is a full-duplex logical communication channel. The sending and receiving nodes need to be configured, but the routers on the connection path need not be.
  • Not the same as a virtual circuit or a physical circuit.
  • Provides a reliable, in-sequence byte-stream delivery service
  • With error, flow, and congestion control
  • Can resequence arrived packets
  • Does not preserve message boundaries
  • Applications: used almost everywhere (e.g., www, telnet, ftp, etc.)

  7. UDP/TCP Header Formats (Figure: the TCP and UDP header layouts.)

  8. Berkeley Sockets Are the Most Popular Transport Primitives (Figure: a client sends a request to a server and receives a reply.)
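A minimal Berkeley-sockets request/reply sketch in Python, matching the client/server exchange on slide 8; the port 7000, the request text, and the use of a background thread for the server are illustrative choices, not part of the slides.

```python
# Minimal request/reply sketch over TCP sockets (client/server, slide 8).
import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 7000))
srv.listen(1)

def server():
    conn, _ = srv.accept()                # block until a client connects
    request = conn.recv(1024)             # read the request
    conn.sendall(b"reply to " + request)  # send back a reply
    conn.close()

t = threading.Thread(target=server); t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 7000))          # TCP connection is set up first
cli.sendall(b"GET /index.html")           # request
print(cli.recv(1024))                     # reply: b'reply to GET /index.html'
cli.close()
t.join(); srv.close()
```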

  9. TCP Connection Setup and Termination: one 3-way handshake to set up a connection, and two 2-way handshakes (one per direction) to close a full-duplex connection.
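Purely as an illustration of the exchange described above (the slide's figure is not reproduced here), the segments can be written out as follows; the assumption that the client initiates both setup and teardown is mine, not the slide's.

```python
# Illustrative sketch of TCP setup and teardown segments as (sender, flags) pairs.
SETUP = [                 # one 3-way handshake
    ("client", "SYN"),
    ("server", "SYN+ACK"),
    ("client", "ACK"),
]
TEARDOWN = [              # two 2-way handshakes, one per direction of the channel
    ("client", "FIN"),
    ("server", "ACK"),
    ("server", "FIN"),
    ("client", "ACK"),
]

for who, flags in SETUP + TEARDOWN:
    print(f"{who:>6} -> {flags}")
```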

  10. A Small Data Transfer Spends Most of Its Time on Connection Setup and Termination
  • According to measurement results, 70% of Internet traffic is WWW traffic.
  • And the average web page is only about 4 KB (less than three 1500-byte Ethernet packets).
  • Therefore, if the network is not congested, most of a user's waiting time is spent on TCP connection setup and termination. (e.g., the RTT of a TCP connection to the U.S. is about 200 ms, so (1.5 + 2) * 200 = 700 ms is needed for connection setup and termination)
  • The transmission time of the 4 KB web page is negligible by comparison (e.g., 3 * 1.2 = 3.6 ms on a 10 Mbps Ethernet).
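The slide's arithmetic can be checked with a few lines of Python; the numbers are the slide's own example values.

```python
# Reproducing the numbers on slide 10.
rtt_s = 0.200                          # RTT of a TCP connection to the U.S.
setup_teardown_rtts = 1.5 + 2          # setup (1.5 RTT) + termination (2 RTT)
print(f"setup + teardown: {setup_teardown_rtts * rtt_s * 1e3:.0f} ms")   # 700 ms

# A ~4 KB page fits in three 1500-byte Ethernet frames.
frame_time_ms = 1500 * 8 / 10e6 * 1e3  # one frame on 10 Mbps Ethernet = 1.2 ms
print(f"transmission time: {3 * frame_time_ms:.1f} ms")                  # 3.6 ms
```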

  11. TCP State Diagram

  12. TCP Uses Sliding Windows to Implement Error, Flow, and Congestion Controls

  13. Using Sliding Windows to Implement Error, Flow, and Congestion Controls
  • Error control
  • Do not advance the sending sliding window until the expected ACK comes back.
  • Flow control
  • Keep the size of the sending sliding window no larger than the receiving node's available buffer size.
  • Congestion control
  • Based on the bandwidth currently available in the network, dynamically adjust the size of the sending sliding window.
  • The instantaneous throughput of a TCP connection is (sending sliding window size / round-trip time).
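A one-line worked example of the throughput formula in the last bullet; the window size and RTT below are arbitrary illustration values.

```python
# Worked example of: instantaneous throughput = sending window size / RTT.
window_bytes = 64 * 1024               # illustrative 64 KB sending window
rtt_s = 0.100                          # illustrative 100 ms round-trip time
throughput_bps = window_bytes * 8 / rtt_s
print(f"instantaneous throughput ~ {throughput_bps / 1e6:.1f} Mbit/s")   # ~5.2 Mbit/s
```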

  14. TCP Uses a Cumulative Acknowledgement Scheme
  • Sending back an ACK(n) means that all data packets with sequence numbers up to n have been correctly received.
  • Another option is an individual acknowledgement scheme, in which ACK(n) only means that the data packet with sequence number n has been correctly received.
  • Advantage: it tolerates lost ACK packets, because a later cumulative ACK covers for an earlier lost one.
  • We use ACKs to detect data packet losses. Should we then use ACK-ACKs to detect ACK packet losses? With cumulative ACKs we do not need to.
  • Disadvantage: if a packet is lost, the ACK number sent back to the sender cannot advance even though newer data packets have been received. This causes the sending window to stall, no more new data packets can be sent out, and TCP throughput is poor.
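A minimal sketch of how a receiver computes a cumulative ACK and why a single hole freezes it; the packet numbers are illustrative.

```python
# Cumulative acknowledgements (slide 14): the receiver ACKs the highest sequence
# number up to which *everything* has arrived, so one hole freezes the ACK number
# even as later packets keep arriving.
def cumulative_ack(received):
    """Return the largest n such that packets 1..n have all been received."""
    n = 0
    while (n + 1) in received:
        n += 1
    return n

received = {1, 2, 3, 5, 6, 7}              # packet 4 was lost
print("ACK =", cumulative_ack(received))   # ACK = 3, despite 5..7 having arrived
```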

  15. A TCP Connection Has Five Timers
  • Retransmit timer
  • If a data packet is lost, it needs to be resent.
  • Persist timer
  • 500 ms. If a window-open update from the receiver is lost, the sender and receiver may deadlock, so the sender sends a probe if the connection has been idle for 500 ms.
  • Keepalive timer
  • 2 hours. Checks whether the other communicating party is still alive. Useful for a server to release resources used by dead clients.
  • 2MSL (twice the maximum segment lifetime) time-wait timer
  • A few minutes. The closing side may need to resend the final ACK. It also prevents a new connection using the same port/IP address pair from accepting packets that belong to a previous one.
  • Delayed ACK timer
  • 200 ms, hoping to piggyback the ACK on outgoing data to save network bandwidth.

  16. RTT Estimation Is Important for Correctly Setting the Retransmit Timer
  • If the estimated RTT is too large
  • When a data packet is lost, we wait too long before resending it, increasing delay and decreasing throughput.
  • If the estimated RTT is too small
  • We may prematurely resend data packets that are not lost but just have not reached the receiver yet.
  • This wastes network bandwidth.
  • It also exacerbates the current congestion (note: if a data packet is lost, most likely the network is congested right now and the packet was dropped due to buffer overflow).
  • So the current TCP RTT estimation is quite conservative.
  • On most operating systems, the value used for the retransmit timer cannot be less than 1 second!

  17. Karn's Algorithm Filters Out Invalid RTT Samples
  • If a data packet is resent and then a corresponding ACK is received, there are two possibilities that the TCP sender cannot distinguish: the ACK may be for the original transmission or for the retransmission.
  • Karn's algorithm therefore says that we should not use the RTT measured for a retransmitted data packet to estimate the RTT.

  18. Exponential Average of RTT Samples and Its Deviation
  • Estimated smoothed RTT: srtt(k+1) = (1 - g) * srtt(k) + g * rtt(k+1), where rtt(k+1) is the (k+1)'th RTT sample and g = 0.125.
  • Estimated smoothed RTT deviation: rttvar(k+1) = (1 - h) * rttvar(k) + h * |rtt(k+1) - srtt(k)|, where h = 0.25.
  • The value that should be used for the retransmit timer: RTO = srtt + f * rttvar, where f = 4.
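A minimal Python sketch of the estimator on slide 18, with Karn's rule from slide 17 applied by simply skipping samples measured over retransmitted packets. The constants g, h, and f are the slide's values; the function name, initial estimates, and sample sequence are illustrative assumptions.

```python
# Exponential averaging of RTT samples (slide 18) with Karn's filter (slide 17).
G, H, F = 0.125, 0.25, 4

def update_rto(srtt, rttvar, sample, retransmitted=False):
    """Return (srtt, rttvar, rto) after one RTT measurement (seconds)."""
    if retransmitted:
        # Karn's rule: an RTT measured over a retransmitted packet is ambiguous
        # (the ACK may match the original or the retransmission), so ignore it.
        return srtt, rttvar, srtt + F * rttvar
    err = sample - srtt
    srtt = srtt + G * err                       # (1-g)*srtt + g*sample
    rttvar = rttvar + H * (abs(err) - rttvar)   # (1-h)*rttvar + h*|sample - srtt|
    rto = srtt + F * rttvar                     # value loaded into the retransmit timer
    return srtt, rttvar, rto

srtt, rttvar = 0.2, 0.05                        # illustrative starting estimates
for s in [0.21, 0.19, 0.40, 0.22]:
    srtt, rttvar, rto = update_rto(srtt, rttvar, s)
    print(f"sample={s:.2f}  srtt={srtt:.3f}  rttvar={rttvar:.3f}  rto={rto:.3f}")
```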

  19. Exponential Retransmit Timeout Backoff
  • The timeout calculated from srtt(k) is used when sending new data packets.
  • For retransmitted data packets, the timeout used is srtt(k) * 2^n if this is the nth time the same packet is resent.
  • This shares the same spirit as Ethernet's MAC exponential backoff.
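The backoff itself is just a doubling per retransmission; a tiny sketch, assuming a 1-second base timeout (the OS lower bound mentioned on slide 16):

```python
# Exponential retransmit-timeout backoff (slide 19): each retransmission of the
# same packet doubles the timeout. The 1-second base value is illustrative.
base_rto = 1.0
for n in range(5):
    print(f"retransmission {n}: timeout = {base_rto * 2 ** n:.0f} s")
```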

  20. TCP Congestion Control Is Window-Based
  • The purpose of congestion control is to decrease a traffic source's sending rate when the network is congested, and to increase it when the network is not congested.
  • TCP uses packet drops as the signal of network congestion.
  • Congestion control can be either window-based or rate-based.
  • Window-based: control the maximum number of outstanding data packets per RTT (that is, the size of the congestion window, cwnd).
  • Rate-based: directly control the sending rate.

  21. TCP's Congestion Window Uses Additive Increase and Multiplicative Decrease
  • When there is no congestion (no packet drops), the congestion window cw increases by a constant value per RTT to probe for more available bandwidth. (For TCP the constant is one packet.)
  • When a packet is dropped, cw is cut in half: cw = cw/2.
  • Research results show that only AIMD makes the network stable; MIMD, AIAD, and MIAD do not work.
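A minimal sketch of the AIMD window update in units of packets; the initial window and the drop pattern are illustrative.

```python
# AIMD congestion window update (slide 21), in units of packets.
def aimd_step(cwnd, packet_dropped):
    if packet_dropped:
        return max(cwnd / 2.0, 1.0)   # multiplicative decrease: cw = cw/2
    return cwnd + 1.0                 # additive increase: +1 packet per RTT

cwnd = 10.0
for rtt, dropped in enumerate([False, False, False, True, False, False]):
    cwnd = aimd_step(cwnd, dropped)
    print(f"RTT {rtt}: cwnd = {cwnd:.1f} packets")
```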

  22. To Probe Available Bandwidth, TCP Congestion Control Itself Causes Packet Drops
  • Suppose the bottleneck router's buffer can store N packets, the links on the path of a TCP connection can store 0 packets, and there is only one greedy TCP connection using the bottleneck link's bandwidth. Then the packet drop rate induced by TCP congestion control at the router is 1 / (N/2 + (N/2+1) + (N/2+2) + ... + (N/2 + N/2)).
  (Figure: the CW sawtooth oscillates between N/2 and N; a drop occurs each time CW reaches N.)
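The drop-rate formula can be evaluated directly; the buffer sizes below are arbitrary illustration values.

```python
# Drop rate induced by one greedy TCP flow when the bottleneck buffer holds N
# packets (slide 22): one drop per sawtooth cycle, during which CW climbs from
# N/2 up to N, sending N/2 + (N/2+1) + ... + N packets in total.
def drop_rate(n):
    packets_per_cycle = sum(range(n // 2, n + 1))
    return 1.0 / packets_per_cycle

for n in (8, 16, 64):                     # illustrative buffer sizes
    print(f"N = {n:3d}: drop rate = {drop_rate(n):.4f}")
```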

  23. The Packet Drop Rate Increases with the Number of Competing TCP Connections
  • Using the previous example, suppose now there are M competing TCP connections sharing the bottleneck link's bandwidth (and the bottleneck router's buffer space). The packet drop rate then increases as follows:
  • Let B = N/M.
  • The packet drop rate becomes 1 / (B/2 + (B/2+1) + (B/2+2) + ... + (B/2 + B/2)).
  • This suggests that as the number of TCP users in the Internet increases, the packet drop rates on routers will increase as well unless link bandwidth and buffering are increased so that more packets can be held in the network.
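Extending the previous sketch to M competing connections with B = N/M shows the per-flow drop rate growing with M; the total buffer size N = 64 is an arbitrary illustration value.

```python
# With M TCP connections sharing the bottleneck (slide 23), each one effectively
# sees a buffer share of B = N/M, so its drop rate grows with M.
def drop_rate_per_flow(n, m):
    b = max(n // m, 2)                    # each flow's share of the buffer
    return 1.0 / sum(range(b // 2, b + 1))

N = 64                                    # illustrative total buffer size
for m in (1, 2, 4, 8):
    print(f"M = {m}: per-flow drop rate = {drop_rate_per_flow(N, m):.4f}")
```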

  24. TCP Congestion Control's Slow Start and Congestion Avoidance Phases
  • The goals of congestion control are:
  • High link utilization
  • When the network has available bandwidth, we want to use it immediately.
  • Slow start: double the CW every RTT.
  • Implementation: increase the CW by one packet whenever an ACK is received.
  • Low packet drop rate
  • When the network is stable, traffic sources should adjust their sending rates only gently, so that their aggregate sending rate matches the available bandwidth.
  • Congestion avoidance: increase the CW by one packet every RTT.
  • Implementation: increase the CW by one packet once ACKs for CW packets have been received.

  25. Slow Start and Congestion Avoidance (Figure): the threshold (ssthresh) is set to one half of the CW at the moment packets are dropped; slow start is used while the CW is below the threshold, and congestion avoidance above it.
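A minimal sketch of the two phases and the threshold rule, expressed per received ACK as in slide 24's implementation bullets; the initial ssthresh and the number of simulated RTTs are illustrative, and the restart-from-one-packet behaviour after a drop is an assumption in the style of early (Tahoe-like) TCP rather than something stated on the slides.

```python
# Slow start and congestion avoidance (slides 24-25), growth per received ACK.
def on_ack(cwnd, ssthresh):
    if cwnd < ssthresh:
        return cwnd + 1.0          # slow start: +1 packet per ACK (doubles per RTT)
    return cwnd + 1.0 / cwnd       # congestion avoidance: ~+1 packet per RTT overall

def on_drop(cwnd):
    ssthresh = cwnd / 2.0          # threshold = half the CW at the time of the drop
    return 1.0, ssthresh           # assumed: restart from a one-packet window

cwnd, ssthresh = 1.0, 16.0
for rtt in range(8):
    for _ in range(int(cwnd)):     # roughly one ACK per packet in flight this RTT
        cwnd = on_ack(cwnd, ssthresh)
    print(f"RTT {rtt}: cwnd ~ {cwnd:.1f} (ssthresh = {ssthresh})")

cwnd, ssthresh = on_drop(cwnd)
print(f"after a drop: cwnd = {cwnd}, ssthresh = {ssthresh:.1f}")
```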

  26. TCP Uses Self-Clocking to Increase Its Congestion Window and Send Out Its Packets
  • Because a new data packet is sent out when an ACK is received, the sender can send its packets at the rate of the bottleneck link's bandwidth.

  27. TCP Retransmit Timeout (RTO) Will Resend Lost Data Packets
  • If a data packet is dropped, it needs to be retransmitted.
  • The retransmit timer will expire if the corresponding ACK does not come back soon.
  • Most operating systems such as BSD, Linux, and Windows enforce a lower bound (1 second) on the retransmit timer value.
  • If every dropped packet needs to be resent by RTO, TCP throughput will be very poor.

  28. TCP Fast Retransmit and Recovery Quickly Resend Lost Packets
  • If the sender receives three duplicate ACK packets from the receiver, it cuts the CW in half and immediately resends the lost data packet pointed to by these duplicate ACKs.
  • This scheme greatly improves a TCP connection's throughput.
  • Problems:
  • Packet reordering in the network may unnecessarily trigger Fast Retransmit (cutting the CW in half), causing poor throughput.
  • Small TCP transfers cannot compete with long TCP transfers. A small transfer's CW cannot grow to a large value before the transfer finishes, so it is hard for it to receive 3 duplicate ACKs; most of the time an RTO is needed instead.
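A minimal sketch of the fast-retransmit trigger: the third duplicate cumulative ACK causes an immediate retransmission and halves the CW. The ACK stream below assumes segment 4 is lost while segments 5-7 still arrive, so the cumulative ACK sticks at 3.

```python
# Fast retransmit trigger (slide 28): three duplicate ACKs for the same number
# cause an immediate retransmission of the next segment and halve cwnd.
def on_acks(acks, cwnd):
    last, dup = None, 0
    for ack in acks:
        if ack == last:
            dup += 1
            if dup == 3:                  # third duplicate ACK
                print(f"fast retransmit of segment {ack + 1}; cwnd {cwnd} -> {cwnd / 2:.0f}")
                cwnd /= 2
                dup = 0
        else:
            last, dup = ack, 0
    return cwnd

# Sender's view when segment 4 is lost but 5, 6 and 7 arrive.
on_acks([1, 2, 3, 3, 3, 3], cwnd=16)
```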

  29. (Figure: CW over time under TCP Fast Retransmit and Recovery versus TCP Retransmit Timeout; with RTO, each drop stalls the connection for more than 1 second before the CW recovers.)

  30. Where Does the Time Go When You Download a Web Page?
  • The signal propagation delay from your machine to the server (RTT)
  • No way to improve it unless you use proxy servers.
  • Packet transmission time spent on each link
  • Dump your 56 Kbps modem and subscribe to 1.5 Mbps ADSL.
  • Queueing delay at each router
  • Download between 2 and 6 am in the early morning.
  • TCP connection setup (1.5 RTTs) and termination (2 RTTs)
  • Use proxy servers if available.
  • TCP slow start and congestion avoidance
  • Use proxy servers if available.
  • TCP retransmit timeout (> 1 second)
  • Download between 2 and 6 am in the early morning.
  • TCP exponential retransmit timer backoff
  • You had better press the "reload" button.

  31. The Congestion Collapse Problem of the Internet
  • UDP traffic sources do not use any congestion control.
  • The bad news is that multimedia streaming applications, which use UDP to transport audio/video, are becoming more and more popular.
  • Even if all traffic sources use TCP, the packet drop rates on routers caused by TCP congestion control increase with the number of TCP users.
  • Because the average web page is so small (only 4-8 KB), most small transfers finish before TCP congestion control has a chance to take effect.

  32. The Unfairness Problem of the Internet
  • UDP traffic (e.g., RealPlayer) competes with TCP traffic (e.g., FTP).
  • UDP does not use any congestion control.
  • Long TCP transfers (e.g., downloading IE 5.0 from MS) compete with short TCP transfers (e.g., downloading a 4 KB web page).
  • TCP fast retransmit does not work well for small transfers because the CW cannot grow to a large value.
  • A site with a large number of TCP connections competes with a site with a small number of TCP connections.
  • TCP exhibits per-flow fairness. That is, if N TCP connections compete for the available bandwidth, each connection will roughly achieve 1/N of the bandwidth.

  33. TCP Problems
  • Poor performance on lossy wireless links.
  • Every packet loss (even one caused by packet corruption) is assumed to be a drop due to congestion.
  • TCP congestion control is then unnecessarily triggered, and performance is unnecessarily poor.
  • The main problem is that TCP cannot distinguish between congestion losses and corruption losses; it couples error control and congestion control together.
  • TCP cannot be used to regulate the sending rate of a UDP packet stream, which is commonly used by multimedia streaming applications such as RealPlayer.
  • The generated traffic is too bursty. Bursty traffic is hard to manage because it can cause massive packet drops at routers.
