Lecture 7


TCP Connection Management

  • Opening a TCP Connection

  • Closing a TCP Connection

  • Special Scenarios

  • State Diagram


TCP Connection Management

Recall: TCP sender and receiver establish a “connection” before exchanging data segments

  • initialize TCP variables:

    • seq. #s

    • buffers, flow control info (e.g. RcvWindow)

  • client: connection initiator

    Socket clientSocket = new Socket("hostname", portNumber);

  • server: contacted by client

    Socket connectionSocket = welcomeSocket.accept();
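The two calls above can be exercised end-to-end on the loopback interface. A minimal sketch (hypothetical helper, not from the slides): the client-side `Socket` constructor drives the three-way handshake, and `welcomeSocket.accept()` returns once a handshake has completed.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ConnectDemo {
    // Connect a client to a local server socket; port 0 lets the OS
    // pick a free port. Returns true if the handshake completed.
    static boolean loopbackConnect() {
        try (ServerSocket welcomeSocket = new ServerSocket(0);
             Socket clientSocket =
                 new Socket("127.0.0.1", welcomeSocket.getLocalPort());
             Socket connectionSocket = welcomeSocket.accept()) {
            return clientSocket.isConnected() && connectionSocket.isConnected();
        } catch (IOException e) {
            return false;
        }
    }
}
```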

Three-Way Handshake

Step 1: client host sends TCP SYN segment to server

  • specifies initial seq #

  • no data

    Step 2: server host receives SYN, replies with SYNACK segment

  • server allocates buffers

  • specifies server initial seq. #

    Step 3: client receives SYNACK, replies with ACK segment, which may contain data
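The three steps above can be sketched as a tiny client-side state machine (a simplified, hypothetical model; the state names follow the TCP state diagram discussed later):

```java
public class HandshakeStates {
    enum State { CLOSED, SYN_SENT, ESTABLISHED }

    // CLOSED --connect (send SYN)--> SYN_SENT
    // SYN_SENT --recv SYNACK (send ACK)--> ESTABLISHED
    static State clientStep(State s, String event) {
        if (s == State.CLOSED && event.equals("connect")) return State.SYN_SENT;
        if (s == State.SYN_SENT && event.equals("recv SYNACK")) return State.ESTABLISHED;
        return s;  // ignore events that don't apply in this state
    }
}
```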

Closing a connection:

client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server

Step 2: server receives FIN, replies with ACK. Closes connection, sends FIN.

Step 3: client receives FIN, replies with ACK.

  • Enters “timed wait” - will respond with ACK to received FINs

    Step 4: server receives ACK. Connection closed.

    Note: with a small modification, can handle simultaneous FINs.












TCP Client Life Cycle


TCP Server Life Cycle


Why is a Two-Way Handshake not enough?

When aida initiates the data transfer (starting with SeqNo = 15322112355), mng will reject all data.


TCP Connection Termination

  • Each end of the data flow must be shut down independently (“half-close”)

  • If one end is done, it sends a FIN segment. This means that no more data will be sent

  • Four steps involved:

    (1) X sends a FIN to Y (active close)

    (2) Y ACKs the FIN,

    (at this time: Y can still send data to X)

    (3) and Y sends a FIN to X (passive close)

    (4) X ACKs the FIN.


Connection termination with tcpdump

1 mng.poly.edu.telnet > aida.poly.edu.1121: F 172488734:172488734(0) ack 1031880221 win 8733

2 aida.poly.edu.1121 > mng.poly.edu.telnet: . ack 172488735 win 17484

3 aida.poly.edu.1121 > mng.poly.edu.telnet: F 1031880221:1031880221(0) ack 172488735 win 17520

4 mng.poly.edu.telnet > aida.poly.edu.1121: . ack 1031880222 win 8733


TCP Connection Termination


TCP States


TCP States in “Normal” Connection Lifetime

TCP State Transition Diagram: Opening a Connection

TCP State Transition Diagram: Closing a Connection


Principles of Congestion Control


  • Congestion, informally: “too many sources sending too much data too fast for the network to handle”

  • different from flow control!

  • manifestations:

    • lost packets (buffer overflow at routers)

    • long delays (queueing in router buffers)

  • a top-10 problem!



  • (a) A fast network feeding a low capacity receiver.

  • (b) A slow network feeding a high-capacity receiver.


The Causes and the "Costs" of Congestion

  • Scenario 1: Two senders, a router with infinite buffers

  • two senders, two receivers

  • one router,

    infinite buffers

  • no retransmission

  • large delays when congested

  • maximum achievable throughput




Scenario 2: Two senders, a router with finite buffers

  • one router, finite buffers

  • sender retransmission of lost packet



















Causes/Costs of Congestion: Scenario 2

  • always: λin = λout (goodput)

  • “perfect” retransmission only when loss: λ'in > λout

  • retransmission of delayed (not lost) packets makes λ'in larger (than the perfect case) for the same λout



“Costs” of congestion:

  • more work (retrans) for given “goodput”

  • unneeded retransmissions: link carries multiple copies of pkt


Scenario 3: Four senders, routers with finite buffers, and multihop paths

  • four senders

  • multihop paths

  • timeout/retransmit



  • Another “cost” of congestion:

  • when a packet is dropped, any upstream transmission capacity used for that packet was wasted


Approaches Toward Congestion Control

Two broad approaches towards congestion control:

  • End-end congestion control:

    • no explicit feedback from network

    • congestion inferred from end-system observed loss, delay

    • approach taken by TCP

  • Network-assisted congestion control:

    • routers provide feedback to end systems

    • – single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)


    • – explicit rate at which the sender should send


Case study: ATM ABR congestion control

  • ABR: available bit rate:

    • “elastic service”

    • if sender’s path “underloaded”:

      • – sender should use available bandwidth

  • if sender’s path congested:

    • – sender throttled to minimum guaranteed rate

  • RM (resource management) cells:

    • sent by sender, interspersed with data cells

    • bits in RM cell set by switches (“network-assisted”)

      • NI bit: no increase in rate (mild congestion)

      • CI bit: congestion indication

  • RM cells returned to sender by receiver, with bits intact

Case study: ATM ABR congestion control

    • two-byte ER (explicit rate) field in RM cell

      • – congested switch may lower ER value in cell

      • – the sender's send rate is thus the minimum supportable rate on the path

    • EFCI bit in data cells: set to 1 in congested switch

      • – if data cell preceding RM cell has EFCI set, receiver sets CI bit in returned RM cell

TCP AIMD


    How does sender perceive congestion?

    • loss event = timeout or 3 duplicate ACKs

    • TCP sender reduces rate (CongWin) after loss event

      three mechanisms:

      • AIMD

      • slow start

      • conservative after timeout events

    TCP AIMD


    • multiplicative decrease: cut CongWin in half after loss event

    • additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

    Long-lived TCP connection
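The two AIMD rules can be sketched as a toy window-update function (a hypothetical model, not real TCP code; window measured in MSS units, with 'L' marking an RTT that ends in a loss event and '.' a loss-free RTT):

```java
public class Aimd {
    // additive increase: +1 MSS per loss-free RTT (probing)
    static double increase(double congWin) { return congWin + 1.0; }

    // multiplicative decrease: halve CongWin after a loss event
    static double decrease(double congWin) { return Math.max(1.0, congWin / 2.0); }

    // window after a sequence of RTTs: '.' = no loss, 'L' = loss event
    static double run(double congWin, String events) {
        for (char e : events.toCharArray())
            congWin = (e == 'L') ? decrease(congWin) : increase(congWin);
        return congWin;
    }
}
```

Running a long event string produces the familiar sawtooth of the long-lived-connection figure.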


    TCP Slow Start

    • When connection begins, CongWin = 1 MSS

      • Example: MSS = 500 bytes & RTT = 200 msec

      • initial rate = 20 kbps

    • available bandwidth may be >> MSS/RTT

      • desirable to quickly ramp up to respectable rate

    • When connection begins, increase rate exponentially fast until first loss event


    TCP Slow Start (more)

    • When connection begins, increase rate exponentially until first loss event:

      • double CongWin every RTT

      • done by incrementing CongWin for every ACK received

    • Summary: initial rate is slow but ramps up exponentially fast

    [Diagram: slow start — Host A sends one segment to Host B, then two segments, then four, doubling every RTT]
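Doubling per RTT falls out of the per-ACK increment: each RTT a window of W segments is sent, W ACKs return, and each ACK adds 1 MSS, so W grows to 2W. A small sketch (hypothetical, window in MSS):

```java
public class SlowStart {
    // window (in MSS) after r RTTs, starting from 1 MSS
    static int windowAfterRtts(int rtts) {
        int congWin = 1;
        for (int r = 0; r < rtts; r++) {
            int acks = congWin;                           // one ACK per segment sent
            for (int a = 0; a < acks; a++) congWin += 1;  // +1 MSS per ACK received
        }
        return congWin;
    }
}
```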




Refinement

    • After 3 dup ACKs:

      • CongWin is cut in half

      • window then grows linearly

    • But after timeout event:

      • CongWin instead set to 1 MSS;

      • window then grows exponentially to a threshold, then grows linearly

    • 3 dup ACKs indicate the network is capable of delivering some segments

    • timeout before 3 dup ACKs is “more alarming”


    Refinement (more)

    When should the exponential increase switch to linear?

    A: When CongWin gets to 1/2 of its value before timeout.


    • Variable Threshold

    • At loss event, Threshold is set to 1/2 of CongWin just before loss event


    Summary: TCP Congestion Control

    • When CongWin is below Threshold, sender in slow-start phase, window grows exponentially.

    • When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly.

    • When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold.

    • When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.
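The four summary rules can be collected into one small state update (a hypothetical sketch, window and threshold in whole MSS units; the initial threshold of 64 MSS is illustrative):

```java
public class CongestionControl {
    int congWin = 1;       // congestion window, in MSS
    int threshold = 64;    // slow-start threshold, in MSS (illustrative)

    // one loss-free RTT: exponential growth below threshold, linear above
    void onRttWithoutLoss() {
        if (congWin < threshold) congWin *= 2;   // slow start
        else congWin += 1;                       // congestion avoidance
    }

    // triple duplicate ACK: Threshold = CongWin/2, CongWin = Threshold
    void onTripleDupAck() {
        threshold = congWin / 2;
        congWin = threshold;
    }

    // timeout: Threshold = CongWin/2, restart from 1 MSS in slow start
    void onTimeout() {
        threshold = congWin / 2;
        congWin = 1;
    }
}
```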


    TCP throughput

    • What's the average throughput of TCP as a function of window size and RTT?

      • Ignore slow start

    • Let W be the window size when loss occurs.

    • When window is W, throughput is W/RTT

    • Just after loss, window drops to W/2, throughput to W/(2·RTT).

    • Average throughput: 0.75 W/RTT
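A quick sketch of that estimate (hypothetical helper; W in segments, MSS in bytes, result in bits per second — the window oscillates between W/2 and W, so the mean is about 0.75 W/RTT):

```java
public class TcpThroughput {
    // average throughput in bits/s for a sawtooth between W/2 and W
    static double avgThroughputBps(int w, int mssBytes, double rttSec) {
        return 0.75 * w * mssBytes * 8 / rttSec;
    }
}
```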


    TCP Futures

    • Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput

    • Requires window size W = 83,333 in-flight segments

    • Throughput in terms of loss rate L: throughput = (1.22 · MSS) / (RTT · √L)

    ➜ to reach 10 Gbps requires L = 2·10⁻¹⁰ — Wow!

    New versions of TCP for high-speed networks are needed
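The slide's numbers can be checked from the standard loss-rate formula throughput ≈ 1.22·MSS/(RTT·√L) (hypothetical helper functions; MSS in bytes, throughput in bits per second):

```java
public class TcpFutures {
    // in-flight window (segments) needed for a target throughput
    static long windowNeeded(double throughputBps, double rttSec, int mssBytes) {
        return Math.round(throughputBps * rttSec / (mssBytes * 8));
    }

    // loss rate L implied by throughput = 1.22 * MSS / (RTT * sqrt(L))
    static double lossRateNeeded(double throughputBps, double rttSec, int mssBytes) {
        double sqrtL = 1.22 * mssBytes * 8 / (rttSec * throughputBps);
        return sqrtL * sqrtL;
    }
}
```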

    TCP Fairness

    • Fairness goal: if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K


    Why is TCP fair?

    Two competing sessions:

    • Additive increase gives a slope of 1, as throughput increases

    • multiplicative decrease decreases throughput proportionally


    Fairness (more)

    Fairness and UDP

    • Multimedia apps often do not use TCP

      • do not want rate throttled by congestion control

    • Instead use UDP:

      • pump audio/video at constant rate, tolerate packet loss

        Research area: TCP friendly

        Fairness and parallel TCP connections

    • nothing prevents an app from opening parallel connections between 2 hosts.

    • Web browsers do this

    • Example: link of rate R supporting 9 connections;

      • new app asking for 1 TCP connection gets rate R/10

      • new app asking for 11 TCP connections gets R/2!
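The arithmetic generalizes: with n existing connections and k new ones on the same bottleneck, the new app gets about k/(n+k) of the link (a simplification that assumes perfect per-connection fairness):

```java
public class ParallelFairness {
    // fraction of the bottleneck link an app gets by opening 'opened'
    // connections alongside 'existing' ones
    static double shareOfLink(int existing, int opened) {
        return (double) opened / (existing + opened);
    }
}
```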

Delay Modeling

    Q: How long does it take to receive an object from a Web server after sending a request?

    Ignoring congestion, delay is influenced by:

    • TCP connection establishment

    • data transmission delay

    • slow start

      Notation, assumptions:

    • Assume one link between client and server of rate R

    • S: MSS (bits)

    • O: object size (bits)

    • no retransmissions (no loss, no corruption)

      Window size:

    • First assume: fixed congestion window, W segments

    • Then dynamic window, modeling slow start


    Fixed congestion window (1)

    First case:

    WS/R > RTT + S/R: ACK for first segment in window returns before window’s worth of data sent

    delay = 2RTT + O/R


    Fixed congestion window (2)

    Second case:

    • WS/R < RTT + S/R: wait for ACK after sending a window's worth of data

    • delay = 2RTT + O/R + (K − 1)[S/R + RTT − WS/R], where K = O/(WS) is the number of windows that cover the object
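Both cases collapse into one formula, since the stall term is zero whenever WS/R ≥ RTT + S/R (first case). A sketch (hypothetical helper; O and S in bits, R in bits/s, times in seconds):

```java
public class FixedWindowDelay {
    // delay for an O-bit object with fixed window of W segments of S bits
    static double delay(double O, double R, double S, int W, double rtt) {
        int K = (int) Math.ceil(O / (W * S));              // windows covering object
        double stall = Math.max(0.0, S / R + rtt - W * S / R); // per-window stall
        return 2 * rtt + O / R + (K - 1) * stall;
    }
}
```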


    TCP Delay Modeling: Slow Start (1)

    Now suppose the window grows according to slow start.

    Will show that the delay for one object is:

    delay = 2RTT + O/R + P(RTT + S/R) − (2^P − 1)S/R

    where P = min{K − 1, Q} is the number of times TCP idles at the server:

    - Q is the number of times the server would idle if the object were of infinite size,

    - K is the number of windows that cover the object.


    TCP Delay Modeling: Slow Start (2)

    • Delay components:

    • 2 RTT for connection establishment and request

    • O/R to transmit object

    • time server idles due to slow start

    • Server idles: P = min{K − 1, Q} times

    • Example:

    • O/S = 15 segments

    • K = 4 windows

    • Q = 2

    • P = min{K − 1, Q} = 2

    • Server idles P=2 times
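A sketch computing K, Q, and the slow-start delay (hypothetical helpers; the Q formula assumes the standard slow-start model above). With S/R = 10 ms, RTT = 20 ms, and O/S = 15 it reproduces this example: K = 4, Q = 2, P = 2.

```java
public class SlowStartDelay {
    // K: number of slow-start windows (1, 2, 4, ...) needed to cover O bits
    static int K(double O, double S) {
        int k = 0;
        double covered = 0;
        while (covered < O) { covered += Math.pow(2, k) * S; k++; }
        return k;
    }

    // Q: number of idle periods if the object were infinitely large
    static int Q(double R, double S, double rtt) {
        return (int) Math.ceil(Math.log(1 + rtt * R / S) / Math.log(2));
    }

    // delay = 2RTT + O/R + P(RTT + S/R) - (2^P - 1)S/R, P = min{K-1, Q}
    static double delay(double O, double R, double S, double rtt) {
        int P = Math.min(K(O, S) - 1, Q(R, S, rtt));
        return 2 * rtt + O / R + P * (rtt + S / R)
               - (Math.pow(2, P) - 1) * S / R;
    }
}
```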


    TCP Delay Modeling (3)


    TCP Delay Modeling (4)

    • Recall K = number of windows that cover object

    • How do we calculate K? K is the smallest k such that 2⁰ + 2¹ + … + 2^(k−1) ≥ O/S, i.e. K = ⌈log₂(O/S + 1)⌉

    Calculation of Q, the number of idles for an infinite-size object, is similar


    HTTP Modeling

    • Assume Web page consists of:

      • 1 base HTML page (of size O bits)

      • M images (each of size O bits)

    • Non-persistent HTTP:

      • M+1 TCP connections in series

      • Response time = (M+1)O/R + (M+1)2RTT + sum of idle times

    • Persistent HTTP:

      • 2 RTT to request and receive base HTML file

      • 1 RTT to request and receive M images

      • Response time = (M+1)O/R + 3RTT + sum of idle times

    • Non-persistent HTTP with X parallel connections

      • Suppose M/X is an integer.

      • 1 TCP connection for base file

      • M/X sets of parallel connections for images.

      • Response time = (M+1)O/R + (M/X + 1)2RTT + sum of idle times
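The three response-time formulas above, with the idle times omitted, as a sketch (hypothetical helper; O in bits, R in bits/s, times in seconds):

```java
public class HttpModel {
    // non-persistent: M+1 connections in series, 2 RTTs each
    static double nonPersistent(int M, double O, double R, double rtt) {
        return (M + 1) * O / R + (M + 1) * 2 * rtt;
    }

    // persistent: 2 RTTs for the base file, 1 RTT for the images
    static double persistent(int M, double O, double R, double rtt) {
        return (M + 1) * O / R + 3 * rtt;
    }

    // non-persistent with X parallel connections: M/X + 1 rounds of 2 RTTs
    static double parallel(int M, int X, double O, double R, double rtt) {
        return (M + 1) * O / R + ((double) M / X + 1) * 2 * rtt;
    }
}
```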


    HTTP Response time (in seconds)

    • RTT = 100 msec, O = 5 Kbytes, M=10 and X=5

    For low bandwidth, connection & response time are dominated by transmission time.

    Persistent connections give only a minor improvement over parallel connections.



    HTTP Response time (in seconds)

    • RTT = 1 sec, O = 5 Kbytes, M = 10 and X = 5

    For larger RTT, response time is dominated by TCP establishment & slow start delays. Persistent connections now give an important improvement, particularly in high delay·bandwidth networks.
