
Transport Layer


Presentation Transcript


  1. Transport Layer: Moving Segments

  2. Transport Layer Protocols
  • Provide a logical communication link between processes running on different hosts, as if they were directly connected
  • Implemented in end systems, not in network routers
  • (Possibly) break messages into smaller units, adding a transport-layer header to create a segment
  • Two protocols: TCP and UDP

  3. An Analogy
  • Ann, her brothers and sisters on the West Coast; Bill and family on the East Coast
  • Application messages = letters in envelopes
  • Processes = cousins
  • Hosts (end systems) = houses
  • Transport-layer protocol = Ann and Bill
  • Network-layer protocol = the postal service

  4. UDP
  • User Datagram Protocol
  • Provides an unreliable, connectionless service to a process
  • Provides integrity checking by including error-detection fields in the segment header

  5. TCP
  • Transmission Control Protocol
  • Provides reliable data transfer, using flow control, sequence numbers, acknowledgments, and timers
  • Provides congestion control
  • Provides integrity through error checking

  6. Multiplexing and Demultiplexing
  • Multiplexing is the job of gathering data chunks from the application's sockets and wrapping each chunk in a transport header to create segments
  • Demultiplexing is delivering the data in an arriving segment (the segment minus its transport header) to the correct socket

  7. Segment Identification
  • UDP: a socket is identified by the destination IP address and destination port number
  • TCP: a socket is identified by the full four-tuple of source IP, source port, destination IP, and destination port
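
  As a rough illustration of these keys, a minimal Python sketch follows; the lookup tables and the deliver() call are hypothetical stand-ins, not real kernel data structures.

      # Sketch of transport-layer demultiplexing keys (hypothetical tables).
      udp_sockets = {}   # key: (dest IP, dest port) -> socket
      tcp_sockets = {}   # key: (source IP, source port, dest IP, dest port) -> socket

      def demux_udp(dst_ip, dst_port, payload):
          # Every sender that targets this (IP, port) reaches the same UDP socket.
          udp_sockets[(dst_ip, dst_port)].deliver(payload)

      def demux_tcp(src_ip, src_port, dst_ip, dst_port, payload):
          # Two clients using the same server port land on different sockets,
          # because the source fields are part of the key.
          tcp_sockets[(src_ip, src_port, dst_ip, dst_port)].deliver(payload)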

  8. TCP Handshake
  • The server application has a "welcome socket" that waits for connection requests
  • The client generates a connection-establishment request (which includes the client's source IP and port)
  • The server creates a new socket dedicated to that client
  • Both sides allocate resources for the connection
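
  The welcome-socket behaviour maps directly onto the standard Python socket API. In the sketch below, the port number (12000) is an arbitrary example, and the server simply echoes one message back on the per-client socket.

      import socket

      # The server's "welcome" (listening) socket.
      welcome = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      welcome.bind(('', 12000))
      welcome.listen()

      # accept() blocks until a client's connection request completes the handshake,
      # then returns a brand-new socket dedicated to that one client; the welcome
      # socket stays open and keeps waiting for further clients.
      conn, client_addr = welcome.accept()
      data = conn.recv(2048)
      conn.sendall(data.upper())   # echo something back over the per-client socket
      conn.close()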

  9. UDP
  • Defined in RFC 768
  • Does about as little as a transport protocol can do
  • Attaches source and destination port numbers and passes the segment to the network layer
  • No handshaking before a segment is sent
  • DNS uses UDP
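
  A minimal sketch of that "no handshake" behaviour using Python's socket module; the address and port are examples only, and the receiver side is shown in comments as it would run on the other host.

      import socket

      # Sender side: no handshake -- the very first thing on the wire is the datagram.
      client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      client.sendto(b'hello', ('203.0.113.9', 12000))   # example address and port

      # Receiver side (on the other host): bind to the port and read. recvfrom()
      # also reports the sender's (IP, port) so a reply can be addressed without
      # keeping any connection state.
      #
      #   server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      #   server.bind(('', 12000))
      #   msg, client_addr = server.recvfrom(2048)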

  10. Why Use UDP?
  • Finer application-level control over what data is sent, and when
  • No connection establishment (so no setup delay)
  • No connection state to maintain
  • Only 8 bytes of header overhead per segment
  • Data that arrives out of order can simply be discarded by the application
  • The downside: the lack of congestion control can lead to high loss rates when the network is busy

  11. UDP Checksum
  • For error detection only; it cannot fix errors
  • The sender adds the segment's 16-bit words, with wrap-around of any carry
  • Takes the 1's complement of the sum (inverts every bit)
  • Sends this value in the checksum field
  • The receiver adds all the 16-bit words, including the checksum; the result should be all ones (1111111111111111)
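
  A small sketch of that computation (the Internet checksum of RFC 768 / RFC 1071); the four-byte segment at the end is just an arbitrary test value.

      def internet_checksum(data: bytes) -> int:
          """16-bit one's-complement sum with end-around carry, then inverted."""
          if len(data) % 2:
              data += b'\x00'                            # pad odd-length data
          total = 0
          for i in range(0, len(data), 2):
              total += (data[i] << 8) | data[i + 1]      # add each 16-bit word
              total = (total & 0xFFFF) + (total >> 16)   # wrap the carry around
          return ~total & 0xFFFF                         # 1's complement of the sum

      # Receiver check: summing everything including the checksum gives 0xFFFF
      # (all ones), so complementing that sum gives 0.
      segment = b'\x12\x34\x56\x78'
      csum = internet_checksum(segment)
      assert internet_checksum(segment + csum.to_bytes(2, 'big')) == 0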

  12. Principles of Reliable Data Transfer
  • No transferred data bits are corrupted
  • All data is delivered in the order sent
  • This gets complicated because the lower layer (IP) is a best-effort delivery service with no guarantees

  13. Stop-and-Wait Protocol
  • The sender sends one packet and waits
  • The receiver gets the packet and checks it for accuracy
  • The receiver sends an acknowledgment back
  • If the sender times out, it presumes the packet (or its ACK) was lost and resends the packet
  • A sequence number identifies packets that are sent or resent, so duplicates can be detected
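
  A sketch of a stop-and-wait sender over UDP under those rules, with an alternating one-bit sequence number; the one-second timeout and the one-byte header format are illustrative choices, not part of the slide.

      import socket

      TIMEOUT = 1.0          # seconds; an illustrative value

      def stop_and_wait_send(sock: socket.socket, dest, packets):
          """One packet in flight at a time; resend on timeout or wrong ACK."""
          sock.settimeout(TIMEOUT)
          seq = 0
          for payload in packets:
              pkt = bytes([seq]) + payload          # 1-byte sequence-number header
              while True:
                  sock.sendto(pkt, dest)
                  try:
                      ack, _ = sock.recvfrom(2048)
                      if ack and ack[0] == seq:     # ACK echoes the sequence number
                          break                     # move on to the next packet
                  except socket.timeout:
                      pass                          # presume loss and resend
              seq ^= 1                              # alternate 0 / 1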

  14. A Little Math
  • West Coast to East Coast transfer; RTT = 30 ms; channel rate of 1 Gbps; packet size of 1,000 bytes (8,000 bits)
  • Time needed to transmit the packet is 8 microseconds
  • 15.008 ms for the packet to reach the East Coast
  • The ACK arrives back at the sender after 30.008 ms
  • Utilization is 0.00027; effective throughput is about 267 kbps
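
  The same arithmetic, spelled out (assuming the channel rate is 1 Gbps):

      # Reproducing the slide's numbers.
      RTT = 30e-3                  # round-trip time, seconds
      R   = 1e9                    # link rate, bits per second
      L   = 8000                   # packet size, bits

      d_trans     = L / R                       # 8e-06 s: 8 microseconds to transmit
      utilization = d_trans / (RTT + d_trans)   # ~0.000267
      throughput  = L / (RTT + d_trans)         # ~266,600 bps, i.e. about 267 kbps
      print(d_trans, utilization, throughput)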

  15. Pipelining
  • The range of sequence numbers must increase
  • Packets may have to be buffered on both sides of the link
  • Error recovery uses either Go-Back-N or Selective Repeat

  16. Go-Back-N (GBN)
  • The sender is allowed to transmit multiple packets, but is constrained to have no more than some maximum number, N, un-ACKed at any time
  • N is the window size, and GBN is a sliding-window protocol
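
  A sketch of the sender-side window logic implied here; udt_send() and refuse_data() are hypothetical stand-ins for the unreliable channel and for pushing back on the application, and the timer management a real sender needs is omitted.

      N = 4                     # window size
      base = 0                  # oldest un-ACKed sequence number
      next_seq = 0              # next sequence number to use
      unacked = {}              # seq -> payload, kept until cumulatively ACKed

      def udt_send(seq, payload):
          print('sending', seq, payload)           # stand-in for the channel

      def refuse_data(payload):
          print('window full, try later:', payload)

      def send(payload):
          global next_seq
          if next_seq < base + N:                  # room in the window?
              unacked[next_seq] = payload
              udt_send(next_seq, payload)
              next_seq += 1
          else:
              refuse_data(payload)

      def on_ack(ack_num):                         # cumulative ACK: everything <= ack_num
          global base
          base = ack_num + 1
          for seq in [s for s in unacked if s <= ack_num]:
              del unacked[seq]

      def on_timeout():
          for seq in range(base, next_seq):        # "go back N": resend all un-ACKed
              udt_send(seq, unacked[seq])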

  17. Selective Repeat
  • Avoids unnecessary retransmissions by having the sender retransmit only those packets it suspects were lost or corrupted
  • The big difference from GBN is that the receiver buffers (keeps) out-of-order packets rather than discarding them
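
  A sketch of the receiver side under those rules, with hypothetical send_ack() and deliver() stand-ins: packets inside the window are ACKed individually and buffered, and a contiguous run is handed up as soon as it exists.

      N = 4                         # window size
      rcv_base = 0                  # lowest sequence number not yet delivered
      buffered = {}                 # out-of-order packets held back, seq -> payload

      def send_ack(seq):
          print('ACK', seq)

      def deliver(payload):
          print('to application:', payload)

      def on_packet(seq, payload):
          global rcv_base
          if rcv_base <= seq < rcv_base + N:
              send_ack(seq)                    # ACK this specific packet
              buffered[seq] = payload
              while rcv_base in buffered:      # deliver any contiguous run
                  deliver(buffered.pop(rcv_base))
                  rcv_base += 1
          elif rcv_base - N <= seq < rcv_base:
              send_ack(seq)                    # duplicate: re-ACK so the sender can advance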

  18. TCP Connection Setup
  • The client first sends a TCP segment (SYN)
  • The server responds with a second segment (SYN-ACK)
  • The client responds with a third segment, which can optionally carry message data
  • The connection is point-to-point, not one-to-many
  • The connection is full duplex

  19. TCP Timer
  • We need a way to know when data has been lost
  • We can measure the round-trip time to set the timeout
  • A timer expiration could be due to congestion in the network, so...
  • On a timeout, the timeout value is doubled for the next interval, and it returns to its original value once an ACK is received
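
  A sketch of that timeout rule; the base value and the resend_oldest_unacked() helper are illustrative stand-ins, and a real TCP sender derives the base timeout from its RTT measurements.

      BASE_TIMEOUT = 1.0        # seconds; illustrative
      timeout = BASE_TIMEOUT

      def resend_oldest_unacked():
          print('retransmitting oldest un-ACKed segment')

      def on_timeout():
          global timeout
          timeout *= 2             # back off: the expiry may be due to congestion
          resend_oldest_unacked()

      def on_ack():
          global timeout
          timeout = BASE_TIMEOUT   # an ACK arrived: return to the normal value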

  20. Flow Control
  • The receiver has a receive buffer; if the application is slow to read, the buffer can overflow
  • The receiver sends the current value of its receive window to the sender with each ACK
  • The sender makes sure the amount of un-ACKed data does not exceed the receive window
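
  A sketch of the sender-side check, with simple byte counters standing in for TCP's real sequence-number bookkeeping; the initial window value is arbitrary.

      last_byte_sent  = 0
      last_byte_acked = 0
      rcv_window      = 65535          # updated from each incoming ACK

      def can_send(nbytes):
          in_flight = last_byte_sent - last_byte_acked
          return in_flight + nbytes <= rcv_window   # never exceed the receive window

      def on_ack(ack_num, advertised_window):
          global last_byte_acked, rcv_window
          last_byte_acked = ack_num
          rcv_window = advertised_window   # how much buffer space the receiver has left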

  21. Closing a Connection
  • The client issues a close command (a segment with FIN = 1)
  • The server sends an ACK
  • The server sends its own close command (FIN = 1)
  • The client sends an ACK
  • Resources on both sides are then deallocated
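
  In socket terms, the client's FIN is triggered by closing (or shutting down) its side of the connection; a small Python sketch, with example.com standing in for some server:

      import socket

      # Client side: shutting down the write half makes TCP send the FIN.
      sock = socket.create_connection(('example.com', 80))   # example server
      sock.shutdown(socket.SHUT_WR)   # "nothing more to send" -> FIN goes out
      sock.close()                    # local resources released

      # Server side: the client's FIN shows up as recv() returning b'':
      #   data = conn.recv(2048)
      #   if data == b'':             # peer closed its side
      #       conn.close()            # send our own FIN; resources deallocated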

  22. Congestion Control
  • In theory, as the supply (feed) rate increases, output increases up to the limit of the output line and then levels off
  • Also in theory, as the feed rate increases, delay grows exponentially
  • As the feed rate grows further, routers start losing packets, forcing retransmissions
  • TCP has to infer that this loss and delay indicate congestion

  23. TCP Congestion Control
  • Additive increase, multiplicative decrease (AIMD)
  • Slow start
  • Reaction to timeout events

  24. Speed Control
  • The congestion window is the amount of data "in the pipeline"
  • On congestion (a lost packet), the window is halved for each occurrence
  • On ACKs, the window is increased by a set amount (one maximum segment size, MSS)

  25. Slow Start
  • Start at one MSS: send one packet
  • Double that value each round trip as ACKs come back: send two, then four, then ...
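
  A combined sketch of slow start and the AIMD rules from the last two slides; the MSS and threshold values are illustrative, and real TCP distinguishes timeouts from triple duplicate ACKs, which is omitted here.

      MSS = 1460                 # bytes; a typical maximum segment size
      cwnd = 1 * MSS             # slow start begins with one segment
      ssthresh = 64 * 1024       # illustrative threshold for leaving slow start

      def on_ack():
          global cwnd
          if cwnd < ssthresh:
              cwnd += MSS                    # slow start: doubles cwnd once per RTT
          else:
              cwnd += MSS * MSS // cwnd      # additive increase: about +1 MSS per RTT

      def on_loss():
          global cwnd, ssthresh
          ssthresh = cwnd // 2
          cwnd = max(cwnd // 2, MSS)         # multiplicative decrease: halve the window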
