Outline

  • Principles of congestion control

  • TCP congestion control


Principles of congestion control

Congestion:

informally: “too many sources sending too much data too fast for network to handle”

different from flow control!

manifestations:

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

a top-10 problem!



Approaches towards congestion control

End-end congestion control:

no explicit feedback from network

congestion inferred from end-system observed loss, delay

approach taken by TCP

Network-assisted congestion control:

routers provide feedback to end systems

single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)

explicit rate sender should send at



TCP Congestion Control

end-end control (no network assistance)

sender limits transmission:

LastByteSent − LastByteAcked ≤ CongWin

Roughly, rate = CongWin/RTT bytes/sec

CongWin is dynamic, a function of perceived network congestion

How does sender perceive congestion?

loss event = timeout or 3 duplicate ACKs

TCP sender reduces rate (CongWin) after loss event

three mechanisms:

AIMD

slow start

conservative after timeout events



TCP AIMD

multiplicative decrease: cut CongWin in half after loss event


additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

(figure: CongWin sawtooth over time for a long-lived TCP connection)
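The AIMD sawtooth can be sketched in a few lines of Python. This is a toy model only (window in MSS units, losses injected at chosen rounds), not a real TCP implementation:

```python
# Toy AIMD model: CongWin grows by 1 MSS per RTT (additive increase)
# and is halved at each injected loss event (multiplicative decrease).
def aimd(rounds, loss_rounds, cwnd=1.0):
    history = []
    for r in range(rounds):
        history.append(cwnd)
        if r in loss_rounds:
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease
        else:
            cwnd += 1.0                # additive increase: +1 MSS per RTT
    return history

h = aimd(10, loss_rounds={5})
print(h)  # sawtooth: climbs to 6, halves to 3, climbs again
```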


TCP Slow Start

When connection begins, CongWin = 1 MSS

Example: MSS = 500 bytes & RTT = 200 msec

initial rate = 20 kbps

available bandwidth may be >> MSS/RTT

desirable to quickly ramp up to respectable rate
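The slide's numbers can be checked directly (MSS and RTT values are the ones given above):

```python
# Initial slow-start rate: one MSS per RTT.
MSS_bytes = 500
RTT_sec = 0.200
rate_bps = MSS_bytes * 8 / RTT_sec
print(rate_bps)  # 20000.0 bps = 20 kbps, typically far below available bandwidth
```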


  • When connection begins, increase rate exponentially fast until first loss event


TCP Slow Start (more)

When connection begins, increase rate exponentially until first loss event:

double CongWin every RTT

done by incrementing CongWin for every ACK received

Summary: initial rate is slow but ramps up exponentially fast

(figure: Host A sends one segment, then two, then four, doubling every RTT as ACKs from Host B arrive)


Refinement

After 3 dup ACKs:

CongWin is cut in half

window then grows linearly

But after timeout event:

CongWin instead set to 1 MSS;

window then grows exponentially to a threshold, then grows linearly


Philosophy:

  • 3 dup ACKs indicates network capable of delivering some segments

  • timeout before 3 dup ACKs is “more alarming”


Refinement (more)

Q: When should the exponential increase switch to linear?

A: When CongWin gets to 1/2 of its value before timeout.

Implementation:

Variable Threshold

At loss event, Threshold is set to 1/2 of CongWin just before loss event



Summary: TCP Congestion Control

  • When CongWin is below Threshold, sender in slow-start phase, window grows exponentially.

  • When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly.

  • When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold.

  • When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.
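The four summary rules above can be sketched as a state-update function. The names `cong_win` and `threshold` are illustrative, not from any real TCP stack:

```python
MSS = 1

def on_ack(cong_win, threshold):
    """Per-RTT growth: exponential below Threshold, linear above."""
    if cong_win < threshold:
        return cong_win * 2          # slow-start phase
    return cong_win + MSS            # congestion-avoidance phase

def on_triple_dup_ack(cong_win):
    threshold = cong_win // 2        # Threshold set to CongWin/2
    return threshold, threshold      # CongWin set to Threshold

def on_timeout(cong_win):
    threshold = cong_win // 2        # Threshold set to CongWin/2
    return MSS, threshold            # CongWin set back to 1 MSS
```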



TCP Throughput

  • What’s the average throughput of TCP as a function of window size and RTT?

    • Ignore slow start

  • Let W be the window size when loss occurs.

  • When window is W, throughput is W/RTT

  • Just after loss, window drops to W/2, throughput to W/(2·RTT).

  • Average throughput: 0.75 W/RTT
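The 0.75·W/RTT figure follows because the window ramps linearly between W/2 and W, so its average is 3W/4. A quick check with illustrative values (W and RTT below are made up):

```python
# Window oscillates linearly between W/2 and W, so average window = 3W/4.
W, RTT = 80.0, 0.1
avg_window = (W / 2 + W) / 2           # mean of a linear ramp from W/2 to W
avg_throughput = avg_window / RTT
print(avg_throughput, 0.75 * W / RTT)  # both give the same value
```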


TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

(figure: TCP connections 1 and 2 share a bottleneck router of capacity R)


Why Is TCP Fair?

Two competing sessions:

Additive increase gives slope of 1 as throughput increases

multiplicative decrease decreases throughput proportionally

(figure: connection 1 throughput vs connection 2 throughput, both axes up to R; congestion avoidance moves the operating point along a 45° line, each loss halves both throughputs, so the point converges to the equal-bandwidth-share line)

Fairness (more)

Fairness and UDP

Multimedia apps often do not use TCP

do not want rate throttled by congestion control

Instead use UDP:

pump audio/video at constant rate, tolerate packet loss

Research area: TCP friendly

Fairness and parallel TCP connections

nothing prevents app from opening parallel connections between 2 hosts.

Web browsers do this

Example: link of rate R supporting 9 connections;

new app asks for 1 TCP, gets rate R/10

new app asks for 11 TCPs, gets R/2!
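The arithmetic behind this example, assuming every connection ends up with an equal per-connection share of R:

```python
# With 9 existing connections, a new app's total share is
# (its connections) / (total connections) of the link rate R.
R = 1.0
existing = 9
shares = {new: new * R / (existing + new) for new in (1, 11)}
print(shares)  # 1 connection -> R/10; 11 connections -> 11R/20, about R/2
```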



Midterm Review

In class, 3:30-5:00 pm, Mon. 2/9

Closed Book

One 8.5” by 11” sheet of paper permitted (single side)


Lecture 1

  • Internet Architecture

  • Network Protocols

  • Network Edge

  • A taxonomy of communication networks


A Taxonomy of Communication Networks

  • The fundamental question: how is data transferred through net (including edge & core)?

  • Communication networks can be classified based on how the nodes exchange information:

Communication Networks
  • Switched Communication Network
    • Circuit-Switched Communication Network (TDM, FDM)
    • Packet-Switched Communication Network (Datagram Network, Virtual Circuit Network)
  • Broadcast Communication Network


Packet Switching: Statistical Multiplexing

Sequence of A & B packets does not have a fixed pattern: statistical multiplexing.

In TDM each host gets same slot in revolving TDM frame.

(figure: hosts A–E on a 10 Mbps Ethernet share a 1.5 Mbps output link; a queue of packets waits for the output link)


Packet Switching versus Circuit Switching

Circuit Switching:

Network resources (e.g., bandwidth) divided into “pieces” for allocation

Resource piece idle if not used by owning call (no sharing)

NOT efficient !

Packet Switching:

Great for bursty data

Excessive congestion: packet delay and loss

protocols needed for reliable data transfer, congestion control



Datagram Packet Switching

  • Each packet is independently switched

    • Each packet header contains destination address which determines next hop

    • Routes may change during session

  • No resources are pre-allocated (reserved) in advance

  • Example: IP networks


Virtual-Circuit Packet Switching

  • Hybrid of circuit switching and packet switching

    • All packets from one packet stream are sent along a pre-established path (= virtual circuit)

    • Each packet carries tag (virtual circuit ID), tag determines next hop

  • Guarantees in-sequence delivery of packets

  • However, packets from different virtual circuits may be interleaved


Lecture 2

  • Network access and physical media

  • Internet structure and ISPs

  • Delay & loss in packet-switched networks

  • Protocol layers, service models


Internet Structure: Network of Networks

“Tier-3” ISPs and local ISPs

last hop (“access”) network (closest to end systems)

Tier-3: Turkish Telecom, Minnesota Regional Network

Local and tier-3 ISPs are customers of higher-tier ISPs, connecting them to the rest of the Internet.

(figure: many local and tier-3 ISPs attach to tier-2 ISPs, which in turn attach to tier-1 ISPs and NAPs)


Four Sources of Packet Delay

1. processing:

check bit errors

determine output link

(figure: per-router delay components between hosts A and B: processing, queueing, transmission, propagation)

  • 2. queueing

    • time waiting at output link for transmission

    • depends on congestion level of router


Delay in Packet-Switched Networks

3. Transmission delay:

R=link bandwidth (bps)

L=packet length (bits)

time to send bits into link = L/R

4. Propagation delay:

d = length of physical link

s = propagation speed in medium (~2×10^8 m/sec)

propagation delay = d/s


Note: s and R are very different quantities!
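Putting the two formulas together for one hop makes the difference concrete. The packet size, link rate, and link length below are illustrative, not from the slides:

```python
L = 1000 * 8          # packet length: 1000 bytes, in bits
R = 1.5e6             # link bandwidth: 1.5 Mbps
d = 3000e3            # physical link length: 3000 km, in meters
s = 2e8               # propagation speed: ~2x10^8 m/sec

transmission = L / R  # time to push all bits onto the link
propagation = d / s   # time for one bit to cross the link
print(transmission, propagation)  # two independent quantities
```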


Internet Protocol Stack

application: supporting network applications

FTP, SMTP, HTTP

transport: host-host data transfer

TCP, UDP

network: routing of datagrams from source to destination

IP, routing protocols

link: data transfer between neighboring network elements

PPP, Ethernet

physical: bits “on the wire”

(figure: the five-layer stack: application, transport, network, link, physical)


Application Layer

Principles of app-layer protocols

Web and HTTP

FTP

Electronic Mail: SMTP, POP3, IMAP

DNS

Socket Programming

Web Caching



HTTP Connections

Nonpersistent HTTP

At most one object is sent over a TCP connection.

HTTP/1.0 uses nonpersistent HTTP

Persistent HTTP

Multiple objects can be sent over single TCP connection between client and server.

HTTP/1.1 uses persistent connections in default mode


  • HTTP Message, Format, Response, Methods

  • HTTP cookies


Response Time of HTTP

Nonpersistent HTTP issues:

requires 2 RTTs per object

OS must allocate host resources for each TCP connection

but browsers often open parallel TCP connections to fetch referenced objects

Persistent HTTP

server leaves connection open after sending response

subsequent HTTP messages between same client/server are sent over connection

Persistent without pipelining:

client issues new request only when previous response has been received

one RTT for each referenced object

Persistent with pipelining:

default in HTTP/1.1

client sends requests as soon as it encounters a referenced object

as little as one RTT for all the referenced objects
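One way to count the RTTs for a base page plus n referenced objects, under the simplifying assumptions that TCP setup costs one RTT, the base page costs one more, and transmission times are ignored:

```python
def rtts(n_objects, mode):
    """RTTs to fetch a base page plus n referenced objects (toy model)."""
    if mode == "nonpersistent":   # 2 RTTs per object (setup + request)
        return 2 + 2 * n_objects
    if mode == "persistent":      # 1 RTT per referenced object after the base page
        return 2 + n_objects
    if mode == "pipelined":       # as little as 1 RTT for all referenced objects
        return 2 + 1
    raise ValueError(mode)

for m in ("nonpersistent", "persistent", "pipelined"):
    print(m, rtts(10, m))
```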



FTP: Separate Control, Data Connections

FTP client contacts FTP server at port 21, specifying TCP as transport protocol

Client obtains authorization over control connection

Client browses remote directory by sending commands over control connection.

When server receives a command for a file transfer, the server opens a TCP data connection to client

After transferring one file, server closes connection.

(figure: FTP client and server: TCP control connection on server port 21, TCP data connection on server port 20)

  • Server opens a second TCP data connection to transfer another file.

  • Control connection: “out of band”

  • FTP server maintains “state”: current directory, earlier authentication


Electronic Mail: SMTP [RFC 2821]

uses TCP to reliably transfer email message from client to server, port 25

direct transfer: sending server to receiving server

(figure: user agents hand messages to their mail servers; mail servers relay them to each other via SMTP; each server holds an outgoing message queue and user mailboxes)


DNS Name Servers

no server has all name-to-IP address mappings

local name servers:

each ISP, company has local (default) name server

host DNS query first goes to local name server

authoritative name server:

for a host: stores that host’s IP address, name

can perform name/address translation for that host’s name

Why not centralize DNS?

single point of failure

traffic volume

distant centralized database

maintenance

doesn’t scale!



DNS Example

Root name server:

may not know authoritative name server

may know intermediate name server: who to contact to find authoritative name server

(figure: requesting host surf.eurecom.fr resolves www.cs.nwu.edu; local name server dns.eurecom.fr queries the root name server, which refers it to intermediate name server dns.nwu.edu and then authoritative name server dns.cs.nwu.edu; steps 1–8)


DNS: Iterated Queries

recursive query:

puts burden of name resolution on contacted name server

heavy load?

iterated query:

contacted server replies with name of server to contact

“I don’t know this name, but ask this server”

(figure: requesting host surf.eurecom.fr resolves gaia.cs.umass.edu; local name server dns.eurecom.fr issues iterated queries to the root name server, intermediate name server dns.umass.edu, and authoritative name server dns.cs.umass.edu; steps 1–8)


Web Caches (Proxy Server)

user sets browser: Web accesses via cache

browser sends all HTTP requests to cache

object in cache: cache returns object

else cache requests object from origin server, then returns object to client

Why web caching?


Goal: satisfy client request without involving origin server

(figure: clients send HTTP requests to the proxy server; on a miss, the proxy forwards the request to the origin server and relays the response back)


Caching Example (3)

Install cache

suppose hit rate is .4

Consequence

40% requests will be satisfied almost immediately

60% requests satisfied by origin server

utilization of access link reduced to 60%, resulting in negligible delays (say 10 msec)

total delay = Internet delay + access delay + LAN delay

= .6*2 sec + .6*.01 secs + milliseconds < 1.3 secs
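The slide's estimate can be reproduced directly from the stated hit rate and delays (the LAN contribution, a few milliseconds, is left out of the sum):

```python
# Total delay with a cache: only the 60% of requests that miss
# pay the Internet delay and the (now lightly loaded) access-link delay.
hit_rate = 0.4
internet_delay = 2.0   # seconds, origin-server fetch
access_delay = 0.01    # seconds, once access-link utilization drops to 60%
total = (1 - hit_rate) * internet_delay + (1 - hit_rate) * access_delay
print(total)  # about 1.2 s, comfortably under 1.3 s
```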

(figure: institutional network with a 10 Mbps LAN and an institutional cache, connected by a 1.5 Mbps access link to the public Internet and origin servers)


Transport Layer

Transport-layer services

Multiplexing and demultiplexing

Connectionless transport: UDP

Principles of reliable data transfer

TCP

Segment structures

Flow control

Congestion control



Demultiplexing

UDP socket identified by two-tuple:

(dest IP address, dest port number)

When host receives UDP segment:

checks destination port number in segment

directs UDP segment to socket with that port number


  • TCP socket identified by 4-tuple:

    • source IP address

    • source port number

    • dest IP address

    • dest port number

  • recv host uses all four values to direct segment to appropriate socket
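The 2-tuple vs 4-tuple rule can be illustrated with dictionaries keyed by those tuples. The socket tables and names below are hypothetical, not a real OS API:

```python
# Demux tables: UDP keys on (dest IP, dest port); TCP keys on the full 4-tuple.
udp_sockets = {("10.0.0.5", 53): "udp_sock_A"}
tcp_sockets = {("1.2.3.4", 9157, "10.0.0.5", 80): "tcp_sock_B"}

def demux_udp(dst_ip, dst_port):
    return udp_sockets.get((dst_ip, dst_port))

def demux_tcp(src_ip, src_port, dst_ip, dst_port):
    return tcp_sockets.get((src_ip, src_port, dst_ip, dst_port))

print(demux_udp("10.0.0.5", 53))                   # found: source ignored
print(demux_tcp("1.2.3.4", 9157, "10.0.0.5", 80))  # found: all four match
print(demux_tcp("9.9.9.9", 9157, "10.0.0.5", 80))  # None: source IP differs
```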


UDP: User Datagram Protocol [RFC 768]

Why is there a UDP?

no connection establishment (which can add delay)

simple: no connection state at sender, receiver

small segment header

no congestion control: UDP can blast away as fast as desired


(figure: UDP segment format, 32 bits wide: source port #, dest port #, length, checksum, then application data (message))


UDP Checksum

Sender:

treat segment contents as sequence of 16-bit integers

checksum: addition (1’s complement sum) of segment contents

sender puts checksum value into UDP checksum field

Receiver:

addition of all segment contents + checksum

check if all bits are 1:

NO - error detected

YES - no error detected. But maybe errors nonetheless? More later ….


  • Goal: detect “errors” (e.g., flipped bits) in transmitted segment

Example (4-bit words for brevity):

Sender: addition of 0110 and 0101 gives 1011; the 1’s complement sum (checksum) is 0100

Receiver: addition of 0110, 0101 and 0100 gives 1111: all bits are 1, so no error detected
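The 16-bit one's complement checksum described above can be sketched as follows. Real UDP also sums a pseudo-header and the UDP header, which this sketch omits; the data words are arbitrary:

```python
def ones_complement_checksum(words):
    """One's complement of the one's complement sum of 16-bit words."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

data = [0x4500, 0x0073]
cksum = ones_complement_checksum(data)
# Receiver sums all words plus the checksum; a result of 0 here means
# the raw sum was all 1s, i.e. no error detected.
verify = ones_complement_checksum(data + [cksum])
print(hex(cksum), verify)
```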


Go-Back-N

Sender:

k-bit seq # in pkt header

“window” of up to N consecutive unACKed pkts allowed


  • ACK(n): ACKs all pkts up to, including seq # n - “cumulative ACK”

    • may receive duplicate ACKs (see receiver)

  • Single timer for all in-flight pkts

  • timeout(n): retransmit pkt n and all higher seq # pkts in window


Selective Repeat

receiver individually acknowledges all correctly received pkts

buffers pkts, as needed, for eventual in-order delivery to upper layer

sender only resends pkts for which ACK not received

sender timer for each unACKed pkt

sender window

N consecutive seq #’s

again limits seq #s of sent, unACKed pkts




TCP Segment Structure

(figure: TCP segment format, 32 bits wide: source port #, dest port #, sequence number, acknowledgement number, header length, unused bits, flags U/A/P/R/S/F, receive window, checksum, urgent data pointer, options (variable length), application data (variable length))

sequence and acknowledgement numbers count by bytes of data (not segments!)

URG: urgent data (generally not used)

ACK: ACK # valid

PSH: push data now (generally not used)

RST, SYN, FIN: connection estab (setup, teardown commands)

receive window: # bytes rcvr willing to accept

checksum: Internet checksum (as in UDP)

