
Presentation Transcript


  1. School of Computing Science Simon Fraser University CMPT 771/471: Internet Architecture and Protocols Transport Layer Instructor: Dr. Mohamed Hefeeda

2. Review of Basic Networking Concepts
• Internet structure
• Protocol layering and encapsulation
• Internet services and socket programming
• Network layer: network types (circuit switching, packet switching); addressing, forwarding, routing
• Transport layer: reliability, congestion and flow control; TCP, UDP
• Link layer: multiple access protocols; Ethernet

3. Transport services and protocols
• provide logical communication between application processes running on different hosts
• transport protocols run in end systems
• send side: breaks app messages into segments, passes them to the network layer
• receive side: reassembles segments into messages, passes them to the application layer
• more than one transport protocol is available to apps; the Internet offers TCP and UDP
(Figure: protocol stacks on two end hosts and several routers, showing the logical end-to-end transport path)

4. Transport vs. network layer
• network layer: logical communication between hosts
• transport layer: logical communication between processes; relies on, and enhances, network-layer services
• Household analogy: 12 kids sending letters to 12 kids — processes = kids, app messages = letters in envelopes, hosts = houses, transport protocol = Ann and Bill, network-layer protocol = postal service

5. Multiplexing/demultiplexing
• Multiplexing at the sending host: gathering data from multiple sockets, enveloping data with a header (later used for demultiplexing)
• Demultiplexing at the receiving host: delivering received segments to the correct socket
(Figure: three hosts with processes P1–P4 bound to sockets above the transport/network/link/physical layers)

6. Connectionless demux
• A UDP socket is identified by the two-tuple (dst IP, dst port)
• Datagrams with different source IPs and/or source ports are directed to the same socket
(Figure: clients A and B send to server C on destination port 6428; segments with source ports 9157 and 5775 all arrive at the same UDP socket)

7. Connection-oriented demux
• A TCP socket is identified by the 4-tuple: (src IP, src Port, dst IP, dst Port)
• The receiving host uses all four values to direct each segment to the appropriate socket
(Figure: clients A and B each connect to server C on port 80; each connection maps to its own socket and process on the server)
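To make the 4-tuple demultiplexing concrete, here is a minimal Python sketch (not from the slides): a server accepts clients on one port, and each accepted connection is a distinct socket that the kernel demultiplexes to by (src IP, src port, dst IP, dst port). The port number 8080 and the threading structure are arbitrary choices for illustration.

# Minimal sketch (illustrative): a TCP server demultiplexes by the 4-tuple,
# so two clients hitting the same port get distinct connection sockets.
import socket
import threading

def handle(conn, addr):
    # Each accepted connection is a separate socket object; incoming segments
    # are demultiplexed to it by (src IP, src port, dst IP, dst port).
    data = conn.recv(1024)
    conn.sendall(data)
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 8080))        # port 8080 is an arbitrary example
server.listen()
while True:
    conn, addr = server.accept()      # new socket per client 4-tuple
    threading.Thread(target=handle, args=(conn, addr)).start()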

8. UDP: User Datagram Protocol [RFC 768]
• "no frills," "bare bones" Internet transport protocol
• "best effort" service; UDP segments may be lost or delivered out of order to the app
• connectionless: no handshaking between UDP sender and receiver; each UDP segment is handled independently of the others
• Why is there a UDP?
• no connection establishment (which can add delay)
• simple: no connection state at sender or receiver
• small segment header
• no congestion control: UDP can blast away as fast as desired

9. UDP segment format
• often used for streaming multimedia apps (loss tolerant, rate sensitive)
• other UDP uses: DNS, SNMP
• reliable transfer over UDP: add reliability at the application layer (application-specific error recovery!)
• segment format (32-bit words): source port #, dest port #, length (in bytes of the UDP segment, including header), checksum, then application data (message)
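A minimal UDP sketch (illustrative, not part of the slides) showing the connectionless behavior described above: no handshake, and every datagram sent to the bound port arrives on the same socket regardless of its source. Port 9999 and the loopback address are arbitrary.

# Minimal UDP sketch: no connection setup, each datagram handled independently.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("0.0.0.0", 9999))                 # arbitrary example port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 9999))     # no handshake, just send

data, src = receiver.recvfrom(2048)
print(src, data)   # all datagrams to port 9999 arrive on this one socket,
                   # regardless of source IP/port (connectionless demux)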

10. Reliable data transfer
• important in application, transport, and link layers
• top-10 list of important networking topics!
• the characteristics of the unreliable channel determine the complexity of the reliable data transfer protocol (rdt)

11. Pipelined (Sliding Window) Protocols
• Pipelining: sender allows multiple "in-flight", yet-to-be-acknowledged packets
• range of sequence numbers must be increased
• buffering at sender and/or receiver
• Two generic forms of pipelined protocols: Go-Back-N and Selective Repeat

12. Go-Back-N
Sender:
• k-bit sequence number in packet header
• "window" of up to N consecutive unACKed packets allowed
• ACK(n): ACKs all packets up to and including sequence number n — cumulative ACK; may receive duplicate ACKs (see receiver)
• timer for each in-flight packet
• timeout(n): retransmit packet n and all higher-sequence-number packets in the window, i.e., go back to n
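The following Python sketch illustrates the Go-Back-N sender rules above under common simplifying assumptions (a single timer covering the oldest unACKed packet, cumulative ACKs). Names such as udt_send, start_timer, and stop_timer are placeholders for the unreliable channel and timer facilities, not real APIs.

# Simplified Go-Back-N sender sketch (illustrative).
class GBNSender:
    def __init__(self, N, udt_send, start_timer, stop_timer):
        self.N = N
        self.base = 0            # oldest unACKed sequence number
        self.nextseq = 0         # next sequence number to use
        self.buffer = {}         # seq # -> packet, kept until ACKed
        self.udt_send = udt_send
        self.start_timer = start_timer
        self.stop_timer = stop_timer

    def send(self, data):
        if self.nextseq >= self.base + self.N:
            return False                       # window full: refuse data
        pkt = (self.nextseq, data)
        self.buffer[self.nextseq] = pkt
        self.udt_send(pkt)
        if self.base == self.nextseq:          # timer covers oldest unACKed pkt
            self.start_timer()
        self.nextseq += 1
        return True

    def on_ack(self, n):                       # cumulative ACK: all pkts <= n
        if n < self.base:
            return                             # old/duplicate ACK, ignore
        for s in range(self.base, n + 1):
            self.buffer.pop(s, None)
        self.base = n + 1
        if self.base == self.nextseq:
            self.stop_timer()
        else:
            self.start_timer()

    def on_timeout(self):                      # go back to base: resend window
        self.start_timer()
        for s in range(self.base, self.nextseq):
            self.udt_send(self.buffer[s])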

13. GBN in action
(Figure: sender with window size N = 4; packet 2 is lost, so on timeout the sender goes back to 2 and retransmits packet 2 and all later packets in the window)

14. Go-Back-N (cont'd)
• Do you see potential problems with GBN?
• Consider high-speed links with long delays (called large bandwidth-delay product pipes)
• GBN can fill that pipe by using a large N → many unACKed packets can be in the pipe
• A single lost packet can cause retransmission of a huge number (up to N) of packets → waste of bandwidth
• Solutions??

15. Selective Repeat
• receiver individually acknowledges correctly received packets; buffers packets, as needed, for eventual in-order delivery to the upper layer
• sender only resends packets for which an ACK was not received; sender keeps a timer for each unACKed packet
• sender window: N consecutive sequence numbers; again limits the sequence numbers of sent, unACKed packets
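A matching receiver-side sketch for Selective Repeat (illustrative; send_ack and deliver are placeholder callbacks): each in-window packet is ACKed individually, out-of-order packets are buffered, and delivery to the upper layer happens in order.

# Selective-repeat receiver sketch.
class SRReceiver:
    def __init__(self, N, send_ack, deliver):
        self.N = N
        self.rcv_base = 0        # smallest sequence number not yet delivered
        self.buffer = {}         # out-of-order packets awaiting delivery
        self.send_ack = send_ack
        self.deliver = deliver

    def on_packet(self, seq, data):
        if self.rcv_base <= seq < self.rcv_base + self.N:
            self.send_ack(seq)               # ACK this packet individually
            self.buffer[seq] = data
            # deliver any now-contiguous packets to the upper layer
            while self.rcv_base in self.buffer:
                self.deliver(self.buffer.pop(self.rcv_base))
                self.rcv_base += 1
        elif self.rcv_base - self.N <= seq < self.rcv_base:
            self.send_ack(seq)               # duplicate: re-ACK so sender can advance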

  16. Selective repeat: sender, receiver windows

17. TCP: Overview (RFCs 793, 1122, 1323, 2018, 2581)
• point-to-point: one sender, one receiver
• reliable, in-order byte stream: no "message boundaries"
• pipelined: TCP congestion and flow control set the window size
• send & receive buffers
• full-duplex data: bi-directional data flow in the same connection; MSS: maximum segment size
• connection-oriented: handshaking (exchange of control messages) initializes sender and receiver state before data exchange
• flow controlled: sender will not overwhelm the receiver

18. TCP segment structure
• header fields (32-bit words): source port #, dest port #, sequence number, acknowledgement number, header length, unused bits, flags (URG, ACK, PSH, RST, SYN, FIN), receive window, checksum, urgent data pointer, options (variable length), then application data (variable length)
• sequence/acknowledgement numbers count bytes of data (not segments!)
• URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection establishment (setup, teardown commands)
• receive window: # bytes receiver is willing to accept
• Internet checksum (as in UDP)

19. TCP reliable data transfer
• TCP creates an rdt service on top of IP's unreliable service
• pipelined segments, cumulative ACKs; TCP uses a single retransmission timer
• retransmissions are triggered by timeout events and duplicate ACKs
• initially consider a simplified TCP sender: ignore duplicate ACKs, ignore flow control and congestion control

20. TCP sender events
• data received from app: create a segment whose sequence number is the byte-stream number of the first data byte in the segment; start the timer if not already running (think of the timer as covering the oldest unACKed segment); expiration interval: TimeOutInterval
• timeout: retransmit the segment that caused the timeout; restart the timer
• ACK received: if it acknowledges previously unACKed segments, update what is known to be ACKed and start the timer if there are still outstanding segments

21. TCP sender (simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
    switch(event)

    event: data received from application above
        create TCP segment with sequence number NextSeqNum
        if (timer currently not running)
            start timer
        pass segment to IP
        NextSeqNum = NextSeqNum + length(data)

    event: timer timeout
        retransmit not-yet-acknowledged segment with smallest sequence number
        start timer

    event: ACK received, with ACK field value of y
        if (y > SendBase) {
            SendBase = y
            if (there are currently not-yet-acknowledged segments)
                start timer
        }
} /* end of loop forever */

22. TCP: retransmission scenarios
(Figures: two Host A / Host B timelines)
• Lost ACK scenario: A sends Seq=92 with 8 bytes of data; B's ACK=100 is lost; A's timer for Seq=92 expires and it retransmits Seq=92; B re-ACKs 100.
• Premature timeout scenario: A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); the ACKs are delayed, so A's timer for Seq=92 expires and it retransmits Seq=92 even though nothing was lost; the cumulative ACK=120 then advances SendBase from 100 to 120.

23. TCP retransmission scenarios (more)
(Figure: cumulative ACK scenario — A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); ACK=100 is lost, but ACK=120 arrives before the timeout, so no retransmission is needed and SendBase advances to 120.)

24. TCP Round Trip Time and Timeout
• Q: how to set the TCP timeout value?
• If the timeout is too short: premature timeouts → unnecessary retransmissions; if too long: slow reaction to segment loss
• Base it on the Round Trip Time (RTT), but RTT itself varies with time, so we need to estimate the current RTT
• RTT estimation: SampleRTT is the measured time from segment transmission until ACK receipt (ignore retransmissions)
• SampleRTT will vary; we want an estimated RTT that is "smoother": average several recent measurements, not just the current SampleRTT

25. TCP Round Trip Time and Timeout (cont'd)
EstimatedRTT = (1 - α) * EstimatedRTT + α * SampleRTT
• exponential weighted moving average
• influence of a past sample decreases exponentially fast
• typical value: α = 0.125

  26. Example RTT estimation:

27. TCP Round Trip Time and Timeout: setting the timeout
• TimeoutInterval = EstimatedRTT plus a safety margin; a large variation in EstimatedRTT calls for a larger safety margin
• first estimate how much SampleRTT deviates from EstimatedRTT:
  DevRTT = (1 - β) * DevRTT + β * |SampleRTT - EstimatedRTT|   (typically, β = 0.25)
• then set the timeout interval:
  TimeoutInterval = EstimatedRTT + 4 * DevRTT
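The two EWMA updates and the timeout rule fit in a few lines of Python. The constants are the typical values from the slides; the function name and the example numbers are just for illustration.

# EWMA RTT estimator as on slides 25-27 (sketch).
ALPHA, BETA = 0.125, 0.25

def update_timeout(sample_rtt, est_rtt, dev_rtt):
    est_rtt = (1 - ALPHA) * est_rtt + ALPHA * sample_rtt
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - est_rtt)
    timeout = est_rtt + 4 * dev_rtt
    return est_rtt, dev_rtt, timeout

# Example: a single 120 ms sample against a 100 ms estimate (times in seconds)
est, dev, to = update_timeout(0.120, 0.100, 0.010)
print(round(est, 4), round(dev, 4), round(to, 4))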

28. Fast Retransmit
• the timeout period is often relatively long: long delay before resending a lost packet
• detect lost segments via duplicate ACKs: the sender often sends many segments back-to-back, so if a segment is lost there will likely be many duplicate ACKs
• if the sender receives 3 duplicate ACKs for the same data, it assumes that the segment after the ACKed data was lost
• fast retransmit: resend the segment before the timer expires

29. TCP Connection Management: opening (3-way handshake)
• Step 1: client host sends a TCP SYN segment to the server; specifies the client's initial sequence number; carries no data
• Step 2: server host receives the SYN and replies with a SYNACK segment; server allocates buffers; specifies the server's initial sequence number
• Step 3: client receives the SYNACK and replies with an ACK segment, which may contain data
(Figure: connection request SYN=1, seq=client_isn; connection granted SYN=1, seq=server_isn, ack=client_isn+1; then SYN=0, seq=client_isn+1, ack=server_isn+1)
• Q: How would a hacker exploit the TCP 3-way handshake to bring a server down? A: a SYN flood DoS attack
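A tiny sketch of the sequence/ack arithmetic in the handshake above; the initial sequence numbers 1000 and 5000 are arbitrary example values chosen for illustration.

# 3-way handshake sequence/ack arithmetic (slide 29), with made-up ISNs.
client_isn, server_isn = 1000, 5000

syn    = {"SYN": 1, "seq": client_isn}                              # step 1: client -> server
synack = {"SYN": 1, "seq": server_isn, "ack": client_isn + 1}       # step 2: server -> client
ack    = {"SYN": 0, "seq": client_isn + 1, "ack": server_isn + 1}   # step 3: client -> server (may carry data)

for name, seg in [("SYN", syn), ("SYNACK", synack), ("ACK", ack)]:
    print(name, seg)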

30. TCP Connection Management: closing
• Step 1: client end system sends a TCP FIN segment to the server
• Step 2: server receives the FIN, replies with an ACK; closes the connection and sends its own FIN
• Step 3: client receives the FIN, replies with an ACK; enters a "timed wait" state, during which it may need to re-send the ACK for received FINs
• Step 4: server receives the ACK; connection closed
(Figure: client/server timeline of the FIN and ACK segments, with the client's timed-wait interval before both sides are closed)

31. TCP Connection Management (cont'd)
(Figures: TCP server lifecycle and TCP client lifecycle state diagrams)

32. TCP Flow Control
• the receive side of a TCP connection has a receive buffer
• the app process may be slow at reading from the buffer
• flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast
• a speed-matching service: matching the send rate to the receiving app's drain rate

33. TCP Flow Control: how it works
• (suppose the TCP receiver discards out-of-order segments)
• spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
• the receiver advertises the spare room by including the value of RcvWindow in segments
• the sender limits unACKed data to RcvWindow; this guarantees the receive buffer doesn't overflow
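The advertised-window formula above, written as a one-line Python function (a direct transcription of the slide's formula; the example byte counts are made up).

# RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead), from slide 33.
def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    # spare room the receiver can still absorb
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

print(rcv_window(rcv_buffer=65536, last_byte_rcvd=48000, last_byte_read=40000))  # 57536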

34. Congestion Control
• congestion: sources send too much data for the network to handle; different from flow control, which is end-to-end
• congestion results in:
• lost packets (buffer overflow at routers)
• more work (retransmissions) for a given "goodput"
• long delays (queueing in router buffers)
• premature (unneeded) retransmissions
• waste of upstream links' capacity: a packet traverses several links, then is dropped at a congested router

35. Approaches towards congestion control
Two broad approaches:
• End-to-end congestion control: no explicit feedback from the network; congestion is inferred from end-system-observed loss and delay; approach taken by TCP
• Network-assisted congestion control: routers provide feedback to end systems — either a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM) or an explicit rate the sender should send at

36. TCP congestion control: Approach
• approach: probe for usable bandwidth in the network
• increase the transmission rate until loss occurs, then decrease
• additive increase, multiplicative decrease (AIMD)
(Figure: "saw tooth" behavior of the rate (CongWin) over time — probing for bandwidth)

37. TCP Congestion Control
• the sender keeps a new variable, the Congestion Window (CongWin), and limits its unACKed bytes to:
  LastByteSent - LastByteAcked ≤ min{CongWin, RcvWin}
• for our discussion: assume RcvWin is large enough
• roughly, what is the sending rate as a function of CongWin? Ignoring loss and transmission delay:
  Rate = CongWin / RTT (bytes/sec)
• so rate and CongWin are somewhat synonymous

38. TCP Congestion Control (cont'd)
• congestion occurs at routers (inside the network), and routers do not provide any feedback to TCP
• how can TCP infer congestion? From its symptoms: a timeout or duplicate ACKs
• define a loss event ≡ a timeout or 3 duplicate ACKs; TCP decreases its CongWin (rate) after a loss event
• the TCP congestion control algorithm has three components: AIMD (additive increase, multiplicative decrease), slow start, and the reaction to timeout events

39. AIMD
• additive increase (congestion avoidance phase): increase CongWin by 1 MSS every RTT until loss is detected
• TCP increases CongWin by MSS x (MSS/CongWin) for every ACK received
• Example: MSS = 1,460 bytes and CongWin = 14,600 bytes; with every ACK, CongWin is increased by 146 bytes
• multiplicative decrease: cut CongWin in half after a loss
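The AIMD rules can be written as two small update functions. The MSS value and starting CongWin reproduce the slide's example; everything else is an illustrative sketch, not real TCP code.

# AIMD sketch matching slide 39: per-ACK additive increase of
# MSS * (MSS / CongWin), multiplicative decrease on loss.
MSS = 1460

def on_ack(congwin):
    return congwin + MSS * (MSS / congwin)   # ~ +1 MSS per RTT overall

def on_loss(congwin):
    return congwin / 2                       # multiplicative decrease

cw = 14600.0
cw = on_ack(cw)     # slide's example: 14600 + 146 = 14746 bytes
print(cw)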

40. TCP Slow Start
• when a connection begins, CongWin = 1 MSS
• example: MSS = 500 bytes & RTT = 200 msec → initial rate = CongWin/RTT = 20 kbps
• the available bandwidth may be >> MSS/RTT, so it is desirable to quickly ramp up to a respectable rate
• slow start: when the connection begins, increase the rate exponentially fast until the first loss event
• how? double CongWin every RTT, by incrementing CongWin by 1 MSS for every ACK received

41. TCP Slow Start (cont'd)
• increment CongWin by 1 MSS for every ACK
• summary: the initial rate is slow but ramps up exponentially fast
(Figure: Host A sends one segment, then two, then four — doubling each RTT)

42. Reaction to a Loss event
• TCP Tahoe (old):
  Threshold = CongWin / 2
  set CongWin = 1
  slow start till Threshold, then additive increase  // congestion avoidance
• TCP Reno (most current TCP implementations):
  if 3 dup ACKs:                    // fast retransmit
    Threshold = CongWin / 2
    set CongWin = Threshold         // fast recovery
    additive increase
  else (timeout):
    same as TCP Tahoe

43. Reaction to a Loss event (cont'd)
• why differentiate between 3 dup ACKs and a timeout?
• 3 dup ACKs indicate the network is capable of delivering some segments
• a timeout indicates a "more alarming" congestion scenario
(Figure: CongWin over time, showing the different reductions after 3 dup ACKs vs. a timeout)

44. TCP Congestion Control: Summary
• Initially: Threshold is set to a large value (65 Kbytes) so it has no effect; CongWin = 1 MSS
• Slow Start (SS): CongWin grows exponentially until a loss event occurs (timeout or 3 dup ACKs) or Threshold is reached
• Congestion Avoidance (CA): CongWin grows linearly
• On 3 duplicate ACKs: Threshold = CongWin/2; CongWin = Threshold; continue in CA
• On timeout: Threshold = CongWin/2; CongWin = 1 MSS; SS till Threshold
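Putting slides 40 through 44 together, here is a compact Python sketch (in units of MSS, tracking only the window adjustments) of how a Reno-style congestion window might evolve; the class and method names are invented for illustration.

# Reno-style window dynamics sketch (units: MSS).
class RenoWindow:
    def __init__(self, threshold=64.0):
        self.congwin = 1.0            # connection starts at 1 MSS
        self.threshold = threshold    # initially large, so it has no effect

    def on_ack(self):
        if self.congwin < self.threshold:
            self.congwin += 1.0                   # slow start: +1 MSS per ACK (doubles per RTT)
        else:
            self.congwin += 1.0 / self.congwin    # congestion avoidance: ~ +1 MSS per RTT

    def on_triple_dup_ack(self):                  # fast retransmit / fast recovery
        self.threshold = self.congwin / 2
        self.congwin = self.threshold

    def on_timeout(self):                         # "more alarming": back to slow start
        self.threshold = self.congwin / 2
        self.congwin = 1.0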

45. TCP Throughput Analysis
• Understand the fundamental relationship between packet loss probability, RTT, and TCP performance (throughput)
• We present a simple model, with several assumptions, yet it still provides useful insights
• See Ch 5 of [HJ04] for a summary of more detailed models with references to the original papers

46. TCP Throughput Analysis (cont'd)
• Any TCP model must capture:
• Window dynamics (internal and deterministic): controlled internally by the TCP algorithms; depends on the particular flavor of TCP; we assume TCP Reno (the most common)
• Packet loss process (external and uncertain): models the aggregate of network conditions at all nodes in the TCP connection path; typically modeled as a stochastic process with probability p that a packet loss occurs; TCP responds by reducing its window size
• We usually analyze the steady state: ignore the slow start phase (transient), although many connections finish within slow start because they send only a few kilobytes

47. Notation
• X(t): throughput at time t (transmission rate)
• W(t): window size at time t
• RTT: round trip time
  X(t) = W(t) / RTT
• What does the above equation implicitly assume? That increasing X(t) has negligible effect on the queuing delay in the network, so RTT remains constant

48. Simple (Periodic) Model
• Packet losses occur with constant probability p
• The TCP window starts at W/2, grows to W, then halves, and this repeats forever
• W(t) packets are transmitted each RTT; W(t+1) = W(t) + 1 each round until a loss occurs
(Figure: window sawtooth between W/2 and W over time, measured in RTTs; each period ends when a loss occurs)

49. Simple (Periodic) Model (cont'd)
• Compute the steady-state throughput as a function of the average loss probability p.

50. Simple (Periodic) Model (cont'd)
• T: period between detecting packet losses → T = RTT * W / 2
• Now, find W as a function of p. How? Compute the number of packets sent during a period and equate it to 1/p (the size of the shaded area under the sawtooth):
  (W/2) * (W/2 + W) / 2 = 1/p  →  W = sqrt(8 / (3p))
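For completeness, carrying the slide's two relations one step further gives the familiar throughput formula, under the same periodic-loss assumptions (packets per period = 1/p, period length = RTT * W/2):

\[
T \;=\; \frac{1/p}{\mathrm{RTT}\cdot W/2}
  \;=\; \frac{2}{p\,W\,\mathrm{RTT}}
  \;\overset{W=\sqrt{8/(3p)}}{=}\;
  \sqrt{\tfrac{3}{2}}\;\frac{1}{\mathrm{RTT}\,\sqrt{p}}
  \;\approx\; \frac{1.22}{\mathrm{RTT}\,\sqrt{p}}
  \quad\text{(packets/sec)}
\]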
