TCP/IP Over Satellites


  1. TCP/IP Over Satellites Kofi Weusi-Puryear Sridhar Dhulipala Sanjay Jain Tejaswi Redkar SJSU, CMPE 206, Spring 1998

  2. Why TCP/IP over satellite? • Ease of installation, remote accessibility, and low cost of running/maintenance. • Useful for areas or countries that have little terrestrial infrastructure. • Mobility of the devices; ease of adding/removing subsystems. • Applications: • Internet connectivity. • Telecom industries (pagers, cell phones, wireless modems, ...). • Defense

  3. Disadvantages: • Long delay, especially in geosynchronous systems • High probability of bit errors. Objectives: • Highlight TCP/IP performance issues with improvements/suggestions • The following mechanisms (current and proposed) are discussed: • Path-Maximum Transmission Unit (MTU) discovery and Forward Error Correction (FEC) • Selective Acknowledgements • Congestion control using the slow start, congestion avoidance, fast retransmit and fast recovery algorithms • TCP window size.

  4. Path-MTU discovery • A non-TCP mechanism, uses the Internet Control Message Protocol (ICMP). • MTU: the limit on the maximum datagram size for a network medium. • Path-MTU discovery: finds the maximum packet size that can cross a network path without IP fragmentation. • Allows TCP to use the largest possible packet size without incurring the cost of fragmentation and reassembly. • Disadvantage: may cause a long pause before TCP is able to start sending data. • In practice, not very time consuming, because a small set of common MTU values covers most media.
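The discovery procedure is simple to picture: the sender transmits probes with the Don't Fragment bit set and steps down through a table of common MTU plateaus whenever an ICMP "fragmentation needed" error comes back. The sketch below simulates only that logic; the plateau table and the path_mtu helper are illustrative assumptions, not real ICMP traffic.

```python
# Illustrative simulation of Path-MTU discovery logic (hypothetical helper, no real ICMP).
COMMON_MTUS = [9000, 4352, 1500, 1006, 576]    # assumed plateau table of common media MTUs

def path_mtu(link_mtus, first_hop_mtu):
    """Probe with DF set, stepping down through common plateaus until a probe fits every link."""
    candidate = first_hop_mtu
    while True:
        if candidate <= min(link_mtus):         # probe fits: no "fragmentation needed" error
            return candidate
        # an ICMP "fragmentation needed" error arrives: try the next smaller plateau
        candidate = next(m for m in COMMON_MTUS if m < candidate)

print(path_mtu([1500, 4352, 576], first_hop_mtu=1500))   # -> 576
```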

  5. Forward Error Correction (FEC) • Sender adds repair data to a media stream so packet loss can be repaired by the receiver. • Repair data can be media-specific or media-independent. • Media-specific: employs knowledge of the media compression scheme. + Low latency: only a single-packet delay is added, suitable for interactive applications that require low end-to-end delays. - Complex encoding/decoding slows sender and receiver. • Media-independent: + Uses parity-based FEC; simplified encoding/decoding. - High latency, since repair data covers a block of packets and the receiver must wait for the whole block (plus parity) before a loss can be repaired.
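As a rough illustration of the media-independent case, the sketch below builds a single parity packet as the XOR of a block of equal-length data packets and uses it to rebuild one lost packet. The block size and packet contents are made up for the example; it also shows why latency grows, since nothing can be repaired until the block and its parity have all arrived.

```python
# Minimal sketch of media-independent parity FEC (single-loss repair per block assumed).
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(block):
    """Parity packet = XOR of all packets in the block (equal-length packets assumed)."""
    return reduce(xor_bytes, block)

def recover(received, parity):
    """Rebuild one missing packet (marked None) by XOR-ing the parity with the survivors."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) != 1:
        return received                          # parity FEC can repair at most one loss
    repaired = reduce(xor_bytes, [p for p in received if p is not None], parity)
    received[missing[0]] = repaired
    return received

block = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(block)
print(recover([b"AAAA", None, b"CCCC"], parity))  # the lost b"BBBB" is rebuilt
```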

  6. Selective Acknowledgements (SACKs) • Recently approved extension to TCP. • Makes it possible for TCP to acknowledge data received out of order. • Previously TCP could only acknowledge data received in order, which could lead to needless retransmissions: when a segment is lost, the sender may end up retransmitting data the receiver has already buffered. • Improves the efficiency of TCP retransmissions, since only the segments that were actually lost need to be resent. • Helps TCP better estimate the available path bandwidth during a period of successive losses and avoid falling back to slow start.
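Conceptually, the receiver keeps track of the non-contiguous byte ranges it has buffered and reports them as SACK blocks alongside the normal cumulative ACK. The sketch below is a simplified illustration, not kernel TCP code; real SACK blocks carry 32-bit sequence numbers inside a TCP option, and the sample ranges are invented.

```python
# Sketch: coalescing out-of-order received byte ranges into the SACK blocks a receiver
# could advertise (illustrative only).
def sack_blocks(received_segments):
    """Merge received (start, end) byte ranges into contiguous blocks."""
    blocks = []
    for start, end in sorted(received_segments):
        if blocks and start <= blocks[-1][1]:
            blocks[-1] = (blocks[-1][0], max(blocks[-1][1], end))
        else:
            blocks.append((start, end))
    return blocks

# Bytes 1000-1499 and 2000-2999 arrived; 1500-1999 was lost in transit:
print(sack_blocks([(2000, 2500), (2500, 3000), (1000, 1500)]))
# -> [(1000, 1500), (2000, 3000)]  The hole at 1500 triggers duplicate ACKs, while the
# SACK option tells the sender that 2000-3000 need not be retransmitted.
```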

  7. TCP Congestion Control Mechanisms • Without congestion control, a TCP sender would start a connection by injecting multiple segments into the network, up to the window size advertised by the receiver. • Problem with this approach? • Congestion at an intermediate router (which causes packets to be discarded) can reduce the throughput of a TCP connection drastically. • To avoid generating an inappropriate amount of network traffic, TCP employs four congestion control mechanisms: slow start, congestion avoidance, fast retransmit and fast recovery. • TCP uses two variables to accomplish congestion control. • The first is the congestion window (cwnd): the amount of data the sender may inject into the network. • The second is the slow start threshold (ssthresh), which is used to decide which algorithm is applied.

  8. Slow Start Algorithm • When a host begins sending data on the network, it is required to use the slow start algorithm at the beginning of the transfer. • Slow start begins by initializing cwnd to 1 segment (i.e., the segment size announced by the other end, or the default, typically 536 or 512 bytes). • This forces TCP to transmit one segment and wait for the corresponding ACK. • For each ACK that is received, the value of cwnd is increased by 1 segment. • For example, after the first ACK is received cwnd will be 2 segments and the sender will be allowed to transmit 2 data packets. • This continues until either cwnd reaches the maximum window size or packet loss is detected.

  9. Congestion Avoidance • The congestion avoidance algorithm deals with packet loss that occurs due to congestion of the network. • In congestion avoidance mode, cwnd is increased by at most one segment per round-trip time until the maximum window size is reached. • When congestion occurs (indicated by a timeout or the reception of duplicate ACKs), one-half of the current window size is saved in ssthresh. • If cwnd <= ssthresh, TCP is in slow start; otherwise TCP is performing congestion avoidance. • Congestion avoidance and slow start are independent algorithms with different objectives, but in practice they are always implemented together. The next slide shows how the algorithms work together.

  10. Slow Start and Congestion Avoidance combined implementation.
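A coarse per-RTT simulation makes the combined behavior concrete: cwnd grows exponentially while it is at or below ssthresh, grows linearly above it, and a loss halves ssthresh and restarts slow start. The sketch below counts the window in segments and injects a single timeout by hand; it is an illustration under those simplifying assumptions, not a real TCP implementation.

```python
# Illustrative per-RTT evolution of cwnd under slow start + congestion avoidance.
def simulate(rtts, loss_at=None, max_window=64):
    cwnd, ssthresh = 1, max_window
    history = []
    for rtt in range(rtts):
        if rtt == loss_at:                       # timeout: congestion is assumed
            ssthresh = max(cwnd // 2, 2)         # save half the current window
            cwnd = 1                             # restart with slow start
        elif cwnd <= ssthresh:
            cwnd = min(cwnd * 2, max_window)     # slow start: roughly doubles every RTT
        else:
            cwnd = min(cwnd + 1, max_window)     # congestion avoidance: +1 segment per RTT
        history.append(cwnd)
    return history

print(simulate(12, loss_at=4))
# -> [2, 4, 8, 16, 1, 2, 4, 8, 16, 17, 18, 19]
# exponential growth, loss, restart from 1, exponential up to ssthresh, then linear growth
```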

  11. Slow Start and Congestion Avoidance (Summary) • TCP reacts to packet loss with slow start and congestion avoidance because it assumes the loss is caused by network congestion. • This works well for shared terrestrial networks, but not for wireless links (for example, satellite links), where the loss may instead be caused by bit errors; the appropriate response to bit errors is simply retransmission. • Therefore slow start and congestion avoidance do not enhance TCP performance over satellite links, but they remain necessary to guard against congestion collapse.

  12. Fast Retransmit and Fast Recovery • Found in many Unix variants of TCP for several years. • Not documented as a Standards Track TCP mechanism until Stevens in 1997 (RFC 2001). • TCP Reno does use Fast Retransmit and Fast Recovery.

  13. Fast Retransmit • TCP's default mechanism for detecting dropped segments is the retransmission timeout (RTO), which is based on observations of the Round-Trip Time (RTT). • TCP ACKs always acknowledge the highest in-order segment that has arrived. If a segment arrives out of order, the ACK it triggers is for the highest in-order segment rather than for the segment that just arrived. Thus the receiver sends duplicate ACKs when a segment is lost.

  14. Fast Retransmit (continued) • The fast retransmit algorithm uses these duplicate ACKs to detect lost segments. • If 3 duplicate ACKs arrive at the data originator, TCP assumes that a segment has been lost and retransmits the missing segment without waiting for the RTO to expire. • Thus Fast Retransmit reduces the time it takes a TCP sender to detect (and react to) a single dropped segment.

  15. Fast Recovery • After a segment is resent using fast retransmit, the fast recovery algorithm is used to adjust the congestion window. • Fast recovery halves the sending rate and enters the congestion avoidance phase immediately, skipping slow start. • By skipping slow start, fast recovery avoids wasting satellite pipe bandwidth.
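A minimal sender-side sketch of the two algorithms together: the third duplicate ACK triggers a retransmission of the missing segment, ssthresh is set to half the current window, and cwnd resumes from ssthresh rather than from one segment. The class, numbers, and ACK stream below are invented for illustration; real fast recovery also inflates the window by the number of duplicate ACKs, which is omitted here.

```python
# Illustrative duplicate-ACK state machine behind fast retransmit / fast recovery.
DUP_ACK_THRESHOLD = 3

class Sender:
    def __init__(self):
        self.cwnd, self.ssthresh = 16, 64
        self.last_ack, self.dup_acks = 0, 0

    def on_ack(self, ack):
        if ack == self.last_ack:                     # duplicate ACK: receiver saw a hole
            self.dup_acks += 1
            if self.dup_acks == DUP_ACK_THRESHOLD:
                self.retransmit(ack)                 # fast retransmit the missing segment
                self.ssthresh = max(self.cwnd // 2, 2)
                self.cwnd = self.ssthresh            # fast recovery: halve rate, skip slow start
        else:
            self.last_ack, self.dup_acks = ack, 0    # new data ACKed: normal progress

    def retransmit(self, seq):
        print(f"fast retransmit of segment starting at {seq}")

s = Sender()
for ack in [1000, 2000, 2000, 2000, 2000]:           # three duplicates of ACK 2000
    s.on_ack(ack)
print(s.cwnd, s.ssthresh)                            # window halved to 8, no slow start
```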

  16. Fast Retransmit and Recovery (continued) • When multiple segments are lost in a given window of data, only one of them will be resent using fast retransmit. The rest of the dropped segments must wait for the RTO to expire, which causes TCP to revert to slow start. • TCP knows that packets are still flowing through the network when it retransmits because of duplicate ACKs. However, when resending a packet due to the expiration of the RTO, TCP cannot infer anything about the state of the network and therefore must proceed conservatively, using the slow start algorithm.

  17. Delay and Window Size • One of the limiting factors for throughput is the delay of a connection. • TCP is designed as a sliding window protocol; hence a fixed maximum amount of data (the "receive window size") is in flight between sender and receiver at any time. • As a consequence, the maximum throughput is limited by the Round Trip Time (RTT) of the link: throughput(max) = receive buffer size / round trip time. Rearranging this to round trip time x throughput <= receive buffer size gives the condition involving the "delay bandwidth product" (DBP) that is often used to characterize connections.

  18. Delay and Window Size (examples) • For a terrestrial connection with an 8 KByte window size and 10 ms RTT this gives throughput(max) = 8 KBytes / 10 ms ~ 800,000 Bytes/s. • For a satellite connection with a 64 KByte window size (the maximum in standard TCP) and 560 ms RTT it follows: throughput(max) = 64 KBytes / 560 ms ~ 115,000 Bytes/s. • As a result, at least a 100 KByte window is needed to fully utilize a T1 link (1.544 Mbps) with 560 ms RTT.
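The arithmetic above can be checked directly. The helper below assumes, as the slide's round numbers suggest, that 1 KByte = 1000 bytes; with 1024-byte KBytes the satellite figure comes out closer to 117,000 Bytes/s. The function names are just for this example.

```python
# Quick check of the delay/window figures (assuming 1 KByte = 1000 bytes).
def max_throughput(window_bytes, rtt_s):
    return window_bytes / rtt_s              # bytes/s sustainable with a given window

def required_window(link_bps, rtt_s):
    return link_bps / 8 * rtt_s              # bytes in flight needed to fill the link

print(max_throughput(8_000, 0.010))          # terrestrial: 800,000 B/s
print(max_throughput(64_000, 0.560))         # satellite: ~114,000 B/s (slide rounds to 115,000)
print(required_window(1_544_000, 0.560))     # T1 over GEO: ~108,000 B, i.e. more than 100 KBytes
```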

  19. Solution to the Window Size Limitation: Window Scale Option • The window scale extension expands the definition of the TCP window to 32 bits and then uses a scale factor to carry this 32-bit value in the 16-bit Window field of the TCP header (SEG.WND in RFC-793). • The scale factor is carried in a new TCP option, Window Scale. This option is sent only in a SYN segment (a segment with the SYN bit on), hence the window scale is fixed in each direction when a connection is opened. • The maximum receive window, and therefore the scale factor, is determined by the maximum receive buffer space. • In a typical modern implementation, this maximum buffer space is set by default but can be overridden by a user program before a TCP connection is opened. • This determines the scale factor, and therefore no new user interface is needed for window scaling.
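As a rough sketch of how the scale factor falls out of the buffer size: the connection picks the smallest shift that lets the buffer fit in the 16-bit Window field, and that shift is what the Window Scale option carries in the SYN. The function below illustrates only that calculation, under the assumption of a 14-bit maximum shift; it is not an implementation of the option itself.

```python
# Sketch: choosing a window scale factor for a large receive buffer
# (effective window advertised = 16-bit Window field << scale).
def choose_scale(receive_buffer_bytes, max_field=65535, max_scale=14):
    scale = 0
    while (receive_buffer_bytes >> scale) > max_field and scale < max_scale:
        scale += 1
    return scale

buf = 256 * 1024                              # a 256 KByte receive buffer
scale = choose_scale(buf)
print(scale, buf >> scale)                    # -> 3 32768: field 32768 shifted by 3 covers 262144 bytes
```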

  20. Round-Trip Time Measurement (RTTM) • Accurate and current RTT estimates are necessary to adapt to changing traffic conditions and to avoid an instability known as "congestion collapse" in a busy network. • When the window size is tens or hundreds of packets, the RTT estimator may be seriously in error, resulting in spurious retransmissions. • It is vitally important to use the RTTM mechanism with big windows; otherwise, the door is opened to some dangerous instabilities due to aliasing. Furthermore, the option is probably useful for all TCP's, since it simplifies the sender.
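With a timestamp echoed on every ACK, the sender gets an RTT sample per segment instead of one per window, and can feed those samples into the usual smoothed-RTT estimator. The sketch below shows standard exponential smoothing over a few hypothetical GEO-link samples; the constants, sample values, and the crude RTO bound printed at the end are illustrative assumptions, not the slide's algorithm.

```python
# Illustrative smoothed-RTT update fed by per-segment timestamp samples.
def update_rtt(srtt, rttvar, sample, alpha=1/8, beta=1/4):
    if srtt is None:                                 # first sample initializes the estimator
        return sample, sample / 2
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - sample)
    srtt = (1 - alpha) * srtt + alpha * sample
    return srtt, rttvar

srtt, rttvar = None, None
for sample in [0.560, 0.580, 0.540, 0.900]:          # hypothetical GEO-link RTT samples (seconds)
    srtt, rttvar = update_rtt(srtt, rttvar, sample)
    print(round(srtt, 3), round(srtt + 4 * rttvar, 3))   # smoothed RTT and a crude RTO bound
```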

  21. PAWS: Protect Against Wrapped Sequence Numbers • The PAWS mechanism is a simple mechanism to reject old duplicate segments that might corrupt an open TCP connection. • PAWS operates within a single TCP connection, using state that is saved in the connection control block. • PAWS uses the same TCP Timestamps option as the RTTM mechanism and assumes that every received TCP segment (including data and ACK segments) contains a timestamp SEG.TSval whose values are monotone non-decreasing in time. • The basic idea is that a segment can be discarded as an old duplicate if it is received with a timestamp SEG.TSval less than some timestamp recently received on this connection.
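The core test is tiny: remember the most recent in-window timestamp and discard any segment whose timestamp is older. The sketch below shows only that comparison; a real implementation uses 32-bit timestamps compared with modular arithmetic and follows the full update rules for ts_recent, which this simplification glosses over, and the Connection class is purely illustrative.

```python
# Sketch of the basic PAWS check (simplified; no 32-bit wraparound handling).
class Connection:
    def __init__(self):
        self.ts_recent = 0                      # most recent in-window timestamp seen

    def accept(self, seg_tsval):
        if seg_tsval < self.ts_recent:          # older timestamp: treat as an old duplicate
            return False                        # discard the segment
        self.ts_recent = seg_tsval
        return True

conn = Connection()
print(conn.accept(1000))   # True  - timestamp advances
print(conn.accept(1005))   # True
print(conn.accept(990))    # False - old duplicate rejected
```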

  22. Conclusions • Problems: • Long delay, especially in geosynchronous systems • Inefficient use of the satellite channel • High probability of bit errors
