
Network Simulator - 2

Source Analysis

Made by Min-Soo Kim and Kang-Yong Lee

Ajou University, Division of Information & Computer Engineering

Content
  • Development Environment
  • Structure of Source Directory
  • We Focus On
  • Event Scheduler
  • Layered View of NS-2 Internal
  • Node of NS-2
  • Link of NS-2
  • Agent
  • Overview of Packet Flow
  • TcpAgent
Development Environment

  • Operating system
    • Unix-like systems (FreeBSD, Linux, SunOS)
  • Program code
    • C++, OTcl
  • Version of our NS-2
    • ns-2.26
Structure of Source Directory (1)
  • ns-allinone-2.26 contains:
    • A version of WEB for documenting C/C++ (optional)
    • Georgia Tech Internetwork Topology Modeler (optional)
    • Network Animator (optional)
    • NS main component (required)
    • OTcl library source (required)
    • Stanford GraphBase package (optional)
    • TclCL library source (required)
    • Tcl/C++ interface [linkage] (required)
    • Tk library source (required)
    • Xgraph source (optional)
    • Data compression library (optional)

Structure of Source Directory (2)
  • ns-allinone-2.26/ns-2.26 contains:
    • AODV routing protocol
    • Application protocol classes (FTP, Ping, ...)
    • Common classes (node, agent, scheduler, timer-handler, bi-connector, packet, encapsulator, decapsulator)
    • Link-related classes
    • MAC-layer protocols (wired and wireless)
    • Multicast-related classes
    • Various queue model classes
    • Routing algorithms
    • TCP protocol-related classes
    • Trace and result-file related classes

We Focus on
  • Inside of “/ns-2.26” Directory
    • Event Scheduler
    • Basic Network Components :
      • Node, Link, Packet
    • Traffic models and applications :
      • Web, FTP, telnet, Constant-bit rate, real audio
    • Transport protocols :
      • Unicast: TCP, UDP
      • Multicast
    • Routing and queueing :
      • Wired Routing, Wireless Routing
      • Queuing protocols : RED, drop-tail
    • Physical media :
      • Wired (LANs, P-to-P), Wireless Channel, Satellite Channel
  • Inside of “/tclcl-1.0b13” Directory
    • Otcl/C++ Linkage Classes
Event Scheduler (1)

  • NS-2 is an event-driven simulator
  • Characteristics of the event scheduler
    • Single-threaded → only one event is in execution at any given time
    • Unit of time → seconds
    • Execution policy → first scheduled, first dispatched
  • Related classes and functions of the event scheduler
    • In "/ns-2.26/common/scheduler.h" and "/ns-2.26/common/scheduler.cc"
      • class Event {}
        • double time_ : time at which the event is ready
        • int uid_ : unique ID of the event
        • Event* next_ : event list
        • Handler* handler_ : handler to call when the event's scheduled time arrives
      • class Handler {}
        • virtual void handle(Event* event) : handles the event received as a parameter (dispatch)
      • class Scheduler {} (continued on the next slide; a minimal usage sketch follows it)
Event Scheduler (2)

  • class Scheduler {}
    • void schedule(Handler*, Event*, double delay) : schedule an event to be dispatched after a delay
    • void dispatch(Event*) : execute an event
    • void dispatch(Event*, double) : execute an event at a specific time
    • void cancel(Event*) : cancel an event
    • void insert(Event*) : schedule (enqueue) an event
    • Event* deque(void) : return the next event and remove it from the queue
    • Event* lookup(scheduler_uid_t) : look up an event by its unique ID
  • Classes derived from Scheduler {}
    • class ListScheduler {} : implements the scheduler using a simple linked-list structure
    • class HeapScheduler {} : implements the scheduler using a heap structure
    • class CalendarScheduler {} : implements the scheduler using a one-year calendar in which events falling on the same month/day of different years are stored in the same day
    • class RealTimeScheduler {} : implements the scheduler by synchronizing events with real time
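The snippet below is a minimal usage sketch, not taken from the slides, of how a network object typically uses this API; the handler class name and the 0.5 s delay are illustrative assumptions.

    // Minimal sketch, assuming the Scheduler/Handler interface listed above.
    #include "scheduler.h"

    class MyHandler : public Handler {
    public:
        void handle(Event* e) {
            // called by the scheduler's dispatch when e's scheduled time arrives;
            // the object may reschedule e here (see the diagram on the next slide)
        }
    };

    void schedule_example()
    {
        static MyHandler handler;
        static Event ev;                        // time_, uid_, handler_ are filled in by schedule()
        Scheduler& s = Scheduler::instance();   // the simulator's single scheduler
        s.schedule(&handler, &ev, 0.5);         // dispatch ev 0.5 simulated seconds from now
    }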
Event Scheduler (3)

[Diagram: the event queue (head_ → events carrying time_, uid_, next_, handler_) is filled by Scheduler.schedule(Handler* h, Event*, double delay) / Scheduler.insert(Event*); Scheduler.deque(void) removes the next event and Scheduler.dispatch(Event*, double time) invokes Handler.handle(Event*) on the target network object, which may reschedule further events.]
Layered View of NS-2 Internal (1)

  • A : Sender – packet flow on the sending side
  • Related files
    • Higher layer :
      • /ns-2.26/agent.cc
      • /ns-2.26/tcp.cc
      • user files (simulation scripts)
    • Link layer :
      • /ns-2.26/ll.cc
      • /ns-2.26/ll.h
    • MAC layer :
      • /ns-2.26/mac-802_3.cc
      • /ns-2.26/mac-802_3.h
    • Physical layer :
      • /ns-2.26/phy.cc
      • /ns-2.26/phy.h
      • /ns-2.26/channel.cc
      • /ns-2.26/channel.h

[Diagram: sender-side packet flow — (1) Application::send() → (2) transport agent (TCPAgent or UDPAgent) → (3) Network/Scheduler schedule() → (4) Link Layer (LL), which schedules the packet with a delay and hands it down via recv()/sendDown() toward the MAC layer (5)-(6).]

Layered View of NS-2 Internal (2)

[Diagram: continuation of the sender-side flow — (5)-(6) the MAC layer (802.3) receives the packet from the upper layer via recv() and calls sendDown(); (7)-(8) the physical layer (phy) passes it to the channel, which computes the propagation delay with get_pdelay() and schedules delivery; (9)-(10) the peer physical layer receives it via recv()/sendUp() and hands it to its upper layer.]

Layered View of NS-2 Internal (3)

[Diagram: B : Receiver – receiver-side packet flow — (10)-(11) the packet arrives from the lower layer at the MAC classifier (Classifier/Mac), (12)-(13) passes through the MAC layer (802.3) and the Link Layer (LL) via recv()/sendUp(), (14)-(15) is scheduled up to the transport agent (TCPAgent or UDPAgent), and (16) is delivered to Application::recv().]

Layered View of NS-2 Internal (4)

  • Connectivity within a LAN environment

[Diagram: each node stacks Agent → Queue → LL (Link Layer) → Mac, and all nodes' MAC layers attach to a shared Channel; a Classifier/Mac on the channel demultiplexes incoming frames back up to the correct node's stack.]
Network Components - Node (1)

  • Node basics
    • An NS node is essentially a collection of classifiers
    • Unicast node and multicast node
  • Classifier
    • A classifier performs the packet-forwarding work of a node
    • When it receives a packet, it examines the packet's fields, usually its destination address, and maps that value to an outgoing interface object that is the next downstream recipient of the packet.
    • Each classifier contains a table of simulation objects indexed by slot number.
    • The job of a classifier is to determine the slot number associated with a received packet and forward that packet to the object referenced by that slot.
    • The C++ class Classifier (/ns/classifier/classifier.cc) provides the base class from which other classifiers are derived (a small derived-classifier sketch follows the source listing on the next slide).
Network Components - Node (2)
  • Source of Classifier

    class Classifier : public NsObject {
    public:
        Classifier();
        virtual ~Classifier();
        void recv(Packet* p, Handler* h);
        virtual int classify(Packet*);
        virtual void clear(int slot);
        virtual void install(int slot, NsObject*);
        // function to set the routing-table size
        void set_table_size(int nn);
    protected:
        void alloc(int);
        NsObject** slot_;           // table that maps a slot number to an NsObject
        int nslot_;
        int maxslot_;
        int offset_;                // offset used when accessing the packet
        NsObject* default_target_;  // fallback target when no slot matches
        int nsize_;                 // what size nslot_ should be
    };

    void Classifier::recv(Packet* p, Handler* h)
    {
        NsObject* node = 0;
        int cl = classify(p);
        if (cl < 0 || cl >= nslot_ || (node = slot_[cl]) == 0) {
            if (default_target_) {
                // no matching slot: hand the packet to the default target
                default_target_->recv(p, h);
                return;
            }
            // let OTcl try to install the missing slot, then retry the lookup once
            Tcl::instance().evalf("%s no-slot %ld", name(), cl);
            if (cl == TWICE) {
                cl = classify(p);
                if (cl < 0 || cl >= nslot_ || (node = slot_[cl]) == 0) {
                    Packet::free(p);   // still no valid slot: drop the packet
                    return;
                }
            }
        }
        node->recv(p, h);   // forward to the object installed in the matching slot
    }
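As an illustration (not part of the original slides), a new classifier only has to override classify() to map a packet to a slot number. The ModuloClassifier below is a hypothetical example that picks a slot from the packet's IP destination address; the real ns-2 classifiers (address, port, multicast) live in ns-2.26/classifier/ and follow the same pattern.

    // Hypothetical sketch only: slot = destination address mod table size.
    class ModuloClassifier : public Classifier {
    protected:
        int classify(Packet* p) {
            hdr_ip* iph = hdr_ip::access(p);        // IP header of the packet
            if (nslot_ <= 0)
                return -1;                          // no slots installed yet
            return (int)(iph->daddr() % nslot_);    // map destination address to a slot
        }
    };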

Network Components - Node (3)

  • Unicast node
    • Port classifier : distributes incoming packets to the correct agent
    • Addr classifier : distributes incoming packets to the correct outgoing link

[Diagram: a unicast NODE — packets enter at the node entry (entry_) and reach the address classifier, which forwards them either to an outgoing link or to the port classifier (dmux_), which demultiplexes them to the attached agents (agents_).]

Network Components - Node (4)

  • Multicast node
    • Switch : determines whether a packet is unicast or multicast
    • Multicast classifier : classifies packets according to both source and destination group address
    • Replicator : produces n copies of a packet, delivered to all n objects referenced in its table

[Diagram: a MULTICAST NODE — packets enter at the node entry (entry_) and hit the switch (switch_); unicast packets go to the address classifier and on to dmux_/agents_ or the outgoing links, while multicast packets go to the multicast classifier (multicastclassifier_), which maps each <source, group> pair (e.g. <S1,G1>, <S2,G2>) to a replicator that copies the packet to the referenced agents and links.]

Network Components - Link (1)

  • Link basics
    • A link is built up from a sequence of connectors
    • The class Link is implemented entirely in OTcl; it is the base class of the other link classes.
    • The class SimpleLink implements a simple point-to-point link with an associated queue and delay. It is derived from the base OTcl class Link.
  • SimpleLink instance variables
    • head_ : entry point to the link; it points to the first object in the link.
    • queue_ : reference to the main queue element of the link.
    • link_ : reference to the element that simulates packet delivery delays.
    • ttl_ : reference to the element that manipulates the TTL in every packet.
    • drophead_ : reference to an object that is the head of a queue of elements that process link drops.
    • enqT_, deqT_, drpT_, rcvT_ : references to the elements that trace packets.
Network Components - Link (2)

  • Composite construction of a unidirectional link (SimpleLink)

[Diagram: head_ → enqT_ → queue_ → deqT_ → link_ → ttl_ → rcvT_, with dropped packets diverted to drophead_ → drpT_.]
Network Components - Link (3)

  • Connectors
    • Connectors, unlike classifiers, only generate data for one recipient:
    • either the packet is delivered to the neighbor (target_), or it is sent to the drop target.
    • A connector receives a packet, performs some function, and either delivers the packet to its neighbor or drops it.
    • There are a number of different types of connectors in ns; each connector performs a different function.
  • Different types of connectors (a small connector sketch follows this list)
    • NetworkInterface : labels packets with the incoming interface identifier.
    • DynaLink : decides whether or not a packet should be forwarded depending on whether the link is up or down.
    • DelayLink : models the link's delay and bandwidth characteristics.
    • Queues : model the output buffers attached to a link in a "real" router in a network.
    • TTLChecker : decrements the TTL in each packet that it receives.
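As an illustration of the connector pattern (not from the slides), the hypothetical sketch below behaves in the spirit of TTLChecker: it performs one function on the packet and then either forwards it to target_ or sends it to the drop target.

    // Hypothetical connector sketch: decrement the IP TTL and drop the packet
    // when it reaches zero, otherwise forward it to the downstream neighbor.
    class ToyTtlChecker : public Connector {
    public:
        void recv(Packet* p, Handler* h) {
            hdr_ip* iph = hdr_ip::access(p);
            if (--(iph->ttl()) <= 0)
                drop(p);        // send to the drop target
            else
                send(p, h);     // deliver to target_, the single neighbor
        }
    };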
Agent – What is the "Agent"?
  • Agent
    • Agents represent endpoints where network-layer packets are constructed or consumed.
    • Agents are used in the implementation of protocols at various layers.
    • They support packet generation and reception.
    • The related source files:
      • "~ns/agent.cc"
      • "~ns/agent.h"
      • "~ns/tcl/lib/ns-agent.tcl"

Agent – Protocol Agents
  • Several agents are supported in NS-2:
    • TCP
    • TCP/Reno
    • TCP/Fack
    • TCP/Vegas/RBP
    • TCP/Sack1
    • TCP/FullTcp
    • TCPSink : sink of a "one-way" TCP connection – the TCP source sends data packets and the TCP sink sends ACK packets
    • UDP : a basic UDP agent
    • SRM
    • RTP
    • RTCP
    • LossMonitor
Agent – Agent State
  • Agent state
    • Describes the simulated packets this agent produces
    • Used to assign the various header fields before a packet is sent
      1) here_ : node address of this agent (the source address in packets)
      2) dst_ : destination address
      3) size_ : packet size in bytes
      4) type_ : type of packet
      5) fid_ : the IP flow identifier (for IPv6)
      6) prio_ : the IP priority field (for IPv6)
      7) flags_ : packet flags
      8) defttl_ : default IP TTL value

Agent – Agent Class

    class Agent : public Connector {
    public:
        virtual ~Agent();
        virtual void attachApp(Application* app);
        // access functions
        inline nsaddr_t& addr() { return here_.addr_; }
        inline nsaddr_t& port() { return here_.port_; }
        inline nsaddr_t& daddr() { return dst_.addr_; }
        inline nsaddr_t& dport() { return dst_.port_; }
    protected:
        Packet* allocpkt() const;
        Packet* allocpkt(int) const;
        void initpkt(Packet*) const;
        // agent state
        ns_addr_t here_;       // address of this agent
        ns_addr_t dst_;        // destination address for the packet flow
        int size_;             // fixed packet size
        packet_t type_;        // type to place in the packet header
        int fid_;              // for the IPv6 flow id field
        int prio_;             // for the IPv6 priority field
        int flags_;            // for experiments (see ip.h)
        int defttl_;           // default ttl for outgoing packets
        // application pointer
        Application* app_;     // pointer to the application, for callbacks
    };

Agent – Agent Class Methods (1)
  • Packet* allocpkt (void)
  • Packet* allocpkt (int n)
    • Parameter : void, or a payload size n
    • allocpkt(void) creates a new packet and assigns its fields
    • allocpkt(int n) creates a new packet with a data payload of n bytes and assigns its fields

    Packet* Agent::allocpkt()
    {
        Packet* p = Packet::alloc();
        initpkt(p);              // initialize the packet's fields
        return p;
    }

    Packet* Agent::allocpkt(int n)
    {
        Packet* p = allocpkt();
        if (n > 0)
            p->allocdata(n);     // add a data payload of n bytes
        return p;
    }

Agent – Agent Class Methods (2)
  • void initpkt (Packet* p)
    • Parameter : pointer to a Packet struct
    • Fills in all fields of the packet from the agent state (addresses, port numbers, IPv6 options, ...)

    void Agent::initpkt(Packet* p) const
    {
        // common header: packet type, size, timestamp
        hdr_cmn* ch = hdr_cmn::access(p);
        ch->ptype() = type_;
        ch->size() = size_;
        ch->timestamp() = Scheduler::instance().clock();

        // IP header: addresses, ports, flow id, priority, ttl
        hdr_ip* iph = hdr_ip::access(p);
        iph->saddr() = here_.addr_;
        iph->sport() = here_.port_;
        iph->daddr() = dst_.addr_;
        iph->dport() = dst_.port_;
        iph->flowid() = fid_;
        iph->prio() = prio_;
        iph->ttl() = defttl_;
        // ........ (remainder omitted)
    }
Agent – Agent Class Methods (3)
  • void attachApp (Application *app)
    • Parameter : Application pointer
    • Associates an Application with the Agent

    void Agent::attachApp(Application* app)
    {
        app_ = app;   // remember the application for later callbacks
    }

Agent – Examples : TCP, TCP Sink

[Diagram: two nodes n0 and n1 — an Application/FTP sits on top of Agent/TCP (dst_=1.0) attached to n0's port classifier, and Agent/TCPSink (dst_=0.0) is attached to n1's port classifier; each node's entry_ leads to its address classifier, and the nodes are joined by Link n0-n1 and Link n1-n0.]

Agent – Examples : TCP, TCP Sink Agent
  • 1. Creating the agents (OTcl code)

    set tcp [new Agent/TCP]          ;# Create sender Agent
    set sink [new Agent/TCPSink]     ;# Create receiver Agent
    $ns attach-agent $n0 $tcp        ;# Put sender on node 0
    $ns attach-agent $n1 $sink       ;# Put receiver on node 1
    $ns connect $tcp $sink           ;# Establish TCP connection
    set ftp [new Application/FTP]    ;# Create an FTP source "Application"
    $ftp attach-agent $tcp           ;# Associate FTP with the TCP sender
    $ns at 1.5 "$ftp start"          ;# Start at time 1.5

Agent – Examples : TCP, TCP Sink Agent

2. Invoking the constructors of Agent and TcpAgent
  • The constructors bind OTcl variables to the corresponding C++ variables (the agent state)

    // Agent constructor
    Agent::Agent(int pkttype)
    {
        bind("addr_", (int*)&addr_);
        bind("dst_", (int*)&dst_);
        bind("fid_", (int*)&fid_);
        bind("prio_", (int*)&prio_);
        bind("flags_", (int*)&flags_);
    }

    // TcpAgent constructor
    TcpAgent::TcpAgent() : Agent()
    {
        bind("windowOption_", &wnd_option_);
        bind("windowConstant_", &wnd_const_);
        // .........
    }

Agent – Examples : TCP, TCP Sink Agent

3. Starting the agent → generating packets (TcpAgent)

    // TcpAgent: build a data packet and hand it to the downstream connector
    void TcpAgent::output(int seqno, int reason)
    {
        Packet* p = allocpkt();
        hdr_tcp* tcph = (hdr_tcp*)p->access(off_tcp_);
        // ............ (fill in the TCP header: seqno, timestamp, ...)
        Connector::send(p, 0);
    }

Agent – Examples : TCP, TCP Sink Agent

4. Processing input → receiving packets (TcpSink agent)

    void TcpSink::recv(Packet* pkt, Handler*)
    {
        hdr_tcp* th = (hdr_tcp*)pkt->access(off_tcp_);
        // ...
        ack(pkt);             // generate and send an ACK for the received segment
        Packet::free(pkt);
    }

    void TcpSink::ack(Packet* opkt)
    {
        Packet* npkt = allocpkt();                          // new ACK packet
        hdr_tcp* otcp = (hdr_tcp*)opkt->access(off_tcp_);   // received segment
        hdr_tcp* ntcp = (hdr_tcp*)npkt->access(off_tcp_);   // outgoing ACK
        // ................. (copy sequence number, timestamps, ...)
        send(npkt, 0);   // hand the ACK to the downstream target (Connector::send)
    }

Agent – Summary

[Diagram: the sender agent class calls allocpkt(int size) → initpkt(packet), then Connector::send() toward the lower layers; on the receiving side, Classifier::recv(packet) delivers the packet to the receiver agent class's recv(packet).]

  • initpkt(packet) fills in:
    • destination address
    • destination port number
    • source address
    • source port number
    • TTL
    • packet size
    • packet type
    • timestamp
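To tie the pieces together, here is a hedged sketch (not from the slides) of a hypothetical minimal agent built from the calls summarized above; the class name, the reuse of PT_UDP as packet type, and the method names are illustrative assumptions.

    // Illustrative sketch only: a toy agent that sends and sinks packets
    // using the Agent API (allocpkt/initpkt, Connector::send, recv).
    class ToyAgent : public Agent {
    public:
        ToyAgent() : Agent(PT_UDP) {}     // reuse an existing packet type for the common header
        void sendone() {
            Packet* p = allocpkt();       // allocpkt() calls initpkt() to fill addresses, size, ttl, ...
            send(p, 0);                   // hand the packet to the downstream target (Connector::send)
        }
        void recv(Packet* p, Handler*) {
            Packet::free(p);              // a real agent would process the packet before freeing it
        }
    };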

TcpAgent - Basic

  • TcpAgent
    • One-way TCP sender agent
    • Implementation of the basic TCP Tahoe version
    • Base class of the other TCP agent versions
      • RenoTcpAgent
      • NewRenoTcpAgent
      • VegasTcpAgent
      • Sack1TcpAgent
    • The related source files
      • "ns/tcp/tcp.cc"
      • "ns/tcp/tcp.h"
TcpAgent – Header Fields of TCP

  • struct hdr_tcp (in "ns/tcp/tcp.h")

    struct hdr_tcp {
        double ts_;                  // time the packet was generated (timestamp)
        double ts_echo_;             // echoed timestamp
        int seqno_;                  // sequence number
        int reason_;                 // reason for a retransmission
        int sack_area_[NSA+1][2];    // selective ACK (SACK) blocks
        int sa_length_;              // number of SACK blocks in this packet
        int ackno_;                  // ACK number
        int hlen_;                   // header length
        int tcp_flags_;              // TCP flags
        int last_rtt_;               // most recent RTT measurement, in ms
        static int offset_;          // offset of this header in the packet
    };
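As a small hedged illustration (not on the slide), this is the access pattern that later code in these slides uses to read those fields from a packet; off_tcp_ is the header offset used throughout the TCP sources, and the helper name is made up.

    // Illustrative helper only: locate the TCP header in a packet and read
    // its sequence number, as TcpAgent::recv()/TcpSink::recv() do below.
    int tcp_seqno_of(Packet* pkt, int off_tcp_)
    {
        hdr_tcp* tcph = (hdr_tcp*)pkt->access(off_tcp_);   // header at its bound offset
        return tcph->seqno();                              // accessor for seqno_
    }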

TcpAgent - Characteristics

  • Characteristics of TCP in TcpAgent (Tahoe version of TCP)
    • Sliding window
    • Several timers
      • Retransmission timer
        • Measurement of RTT (round-trip time)
        • Calculation of RTO (retransmission timeout)
        • Backoff strategy (Karn's algorithm)
      • Delayed-send timer
      • Burst-send timer
    • Congestion control mechanisms
      • Slow start
      • Congestion avoidance
      • Fast retransmit
      • Fast recovery (added in the TCP Reno version)
TcpAgent – Sliding Window (1)

  • Sliding window
    • Send all packets within the window without waiting for an acknowledgement.
    • Increases efficiency.
    • As acknowledgements for segments come in, the window is moved forward.

[Diagram: transmitter and receiver exchanging segments and window advertisements.]
TcpAgent – Sliding Window (2)

  • Related methods

    void TcpAgent::send_much(int force, int reason, int maxburst)
    {
        int win = window();        // current window size
        int npackets = 0;
        /* Save time when first packet was sent, for newreno --Allman */
        if (t_seqno_ == 0)
            firstsent_ = Scheduler::instance().clock();
        // send up to a window's worth of packets (calls output() for each one)
        while (t_seqno_ <= highest_ack_ + win && t_seqno_ < maxseq_) {
            if (overhead_ == 0 || force) {
                output(t_seqno_, reason);
                npackets++;
                t_seqno_++;
            } else if (!(delsnd_timer_.status() == TIMER_PENDING)) {
                delsnd_timer_.resched(Random::uniform(overhead_));
                return;
            }
            win = window();        // the window size may have changed
            // .....
        }
    }
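As a concrete illustration (numbers made up): if highest_ack_ is 10, window() returns 4 and t_seqno_ is 11, the loop calls output() for segments 11, 12, 13 and 14 and then stops, since 15 > highest_ack_ + win; transmission resumes only when a later ACK raises highest_ack_ or the window grows.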

TcpAgent – Retransmission Timer (1)

  • Retransmission timer
    • TCP uses a retransmission timer to ensure data delivery in the absence of any feedback from the remote data receiver.
    • The duration of this timer is referred to as the RTO (retransmission timeout).
    • To compute the current RTO, a TCP sender maintains two state variables, SRTT (smoothed round-trip time) and RTTVAR (round-trip time variation).
  • Measurement of RTT
    • RTT = (α * old_RTT) + ((1 – α) * new_RTT_sample)
      • 0 < α < 1
      • α close to 1 => the estimate changes little in a short time
      • α close to 0 => the estimate follows new samples very quickly
  • Calculation of RTO (timeout)
    • DIFF = sample – old_RTT
    • Smoothed_RTT = old_RTT + d * DIFF
    • DEV = old_DEV + p * (|DIFF| – old_DEV)
    • Timeout = Smoothed_RTT + g * DEV
    • Continued on the next slide =>
TcpAgent – Retransmission Timer (2)

  • Calculation of RTO (timeout)
    • DEV is the estimated mean deviation
    • d is a fraction between 0 and 1 that controls how quickly the new sample affects the weighted average
    • p is a fraction between 0 and 1 that controls how quickly the new sample affects the mean deviation
    • g is a factor that controls how much the deviation affects the round-trip timeout
    • Research suggests d = 1/8, p = 1/4 and g = 4 (a worked numeric example follows this slide)
  • Karn's algorithm (back-off strategy)
    • Definition : when computing the round-trip estimate, ignore samples that correspond to retransmitted segments; instead use a back-off strategy and retain the backed-off timeout value for subsequent packets until a valid sample is obtained.
    • Timer back-off strategy
      • New_timeout = γ * timeout (typically γ = 2)
      • Each time the timer expires (a retransmission happens), TCP increases the timeout value.
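As a worked example with the suggested constants (illustrative numbers only): suppose old_RTT (SRTT) = 100 ms, old_DEV = 10 ms, and a new sample of 140 ms arrives. Then DIFF = 40 ms, Smoothed_RTT = 100 + 40/8 = 105 ms, DEV = 10 + (|40| - 10)/4 = 17.5 ms, and Timeout = 105 + 4 * 17.5 = 175 ms. If that timeout later expires and the segment is retransmitted, Karn's back-off doubles the value to 350 ms and RTT samples from retransmitted segments are ignored until a clean sample arrives.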
TcpAgent – Retransmission Timer (3)

  • Related class & methods

    // Retransmission timer
    class RtxTimer : public TimerHandler {
    public:
        RtxTimer(TcpAgent* a) : TimerHandler() { a_ = a; }
    protected:
        virtual void expire(Event* e);
        TcpAgent* a_;
    };

    // Called on timeout
    void RtxTimer::expire(Event*)
    {
        a_->timeout(TCP_TIMER_RTX);
    }

    // Reset the retransmission timer
    void TcpAgent::reset_rtx_timer(int mild, int backoff)
    {
        if (backoff)
            rtt_backoff();        // back off using Karn's algorithm
        set_rtx_timer();
        // ......
        rtt_active_ = 0;
    }

    // Set the retransmission timer
    void TcpAgent::set_rtx_timer()
    {
        rtx_timer_.resched(rtt_timeout());
    }

    // Karn's algorithm (exponential back-off)
    void TcpAgent::rtt_backoff()
    {
        if (t_backoff_ < 64)
            t_backoff_ <<= 1;
        if (t_backoff_ > 8) {
            t_rttvar_ += (t_srtt_ >> T_SRTT_BITS);
            t_srtt_ = 0;
        }
    }

    // Calculate the timeout value
    double TcpAgent::rtt_timeout()
    {
        double timeout;
        if (rfc2988_) {           // use the updated RFC 2988 timers
            if (t_rtxcur_ < minrto_)
                timeout = minrto_ * t_backoff_;
            else
                timeout = t_rtxcur_ * t_backoff_;
        } else {
            timeout = t_rtxcur_ * t_backoff_;
            if (timeout < minrto_)
                timeout = minrto_;
        }
        if (timeout > maxrto_)
            timeout = maxrto_;
        // .......
        return (timeout);
    }
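RtxTimer above follows ns-2's general TimerHandler pattern: expire() does the work and sched()/resched() arm the timer, just as set_rtx_timer() does. The sketch below is a hypothetical example of the same pattern for an arbitrary periodic action; the class name and the 0.5 s period are illustrative assumptions.

    // Hypothetical TimerHandler example: re-arm the timer on every expiry.
    class ToyTicker : public TimerHandler {
    public:
        ToyTicker(double period) : TimerHandler(), period_(period) {}
    protected:
        void expire(Event*) {
            // ... periodic work would go here ...
            resched(period_);     // fire again period_ simulated seconds from now
        }
        double period_;
    };
    // usage: ToyTicker* t = new ToyTicker(0.5); t->sched(0.5);   // first expiry after 0.5 s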

TcpAgent – Retransmission Timer (4)

  • Related methods

    // This function measures the RTT and recalculates SRTT, RTTVAR
    // and RTO every time an ACK is received
    void TcpAgent::rtt_update(double tao)
    {
        double now = Scheduler::instance().clock();
        if (ts_option_)
            t_rtt_ = int(tao / tcp_tick_ + 0.5);
        else {
            double sendtime = now - tao;
            sendtime += boot_time_;
            double tickoff = fmod(sendtime, tcp_tick_);
            t_rtt_ = int((tao + tickoff) / tcp_tick_);
        }
        if (t_rtt_ < 1)
            t_rtt_ = 1;
        // Up to here: measurement of RTT

        // Calculation of the smoothed RTT (SRTT) and the RTT variance (RTTVAR)
        if (t_srtt_ != 0) {
            register short delta;
            // d = (m - a0)
            delta = t_rtt_ - (t_srtt_ >> T_SRTT_BITS);
            // a1 = 7/8 a0 + 1/8 m
            if ((t_srtt_ += delta) <= 0)
                t_srtt_ = 1;
            if (delta < 0)
                delta = -delta;
            delta -= (t_rttvar_ >> T_RTTVAR_BITS);
            // var1 = 3/4 var0 + 1/4 |d|
            if ((t_rttvar_ += delta) <= 0)
                t_rttvar_ = 1;
        } else {
            // srtt = rtt
            t_srtt_ = t_rtt_ << T_SRTT_BITS;
            // rttvar = rtt / 2
            t_rttvar_ = t_rtt_ << (T_RTTVAR_BITS - 1);
        }

        // Calculation of RTO using SRTT and RTTVAR
        t_rtxcur_ = (((t_rttvar_ << (rttvar_exp_ +
                (T_SRTT_BITS - T_RTTVAR_BITS))) +
                t_srtt_) >> T_SRTT_BITS) * tcp_tick_;
        return;
        // (remainder of the listing omitted)
    }

TcpAgent – Retransmission Timer (5)

  • Related methods

    void TcpAgent::timeout(int tno)
    {
        // retransmission timer
        if (tno == TCP_TIMER_RTX) {
            if (cwnd_ < 1)
                cwnd_ = 1;
            recover_ = curseq_;
            // .......
            if (highest_ack_ == maxseq_ && restart_bugfix_)
                // if there is no outstanding data, don't cut down ssthresh_
                slowdown(CLOSE_CWND_ONE);        // connection is idle
            else {
                // timeout caused by congestion
                ++nrexmit_;
                last_cwnd_action_ = CWND_ACTION_TIMEOUT;
                slowdown(CLOSE_SSTHRESH_HALF | CLOSE_CWND_RESTART);
            }
            reset_rtx_timer(0, 1);
            last_cwnd_action_ = CWND_ACTION_TIMEOUT;
            send_much(0, TCP_REASON_TIMEOUT, maxburst_);
        }
    }

TcpAgent – Retransmission Timer (6)

  • Overview

[Flow chart: (1) when RtxTimer::expire() fires, timeout(TCP_TIMER_RTX) calls slowdown() and reset_rtx_timer(); (2) when a new ACK arrives, newack() calls rtt_update() and reset_rtx_timer(); in both paths reset_rtx_timer() invokes rtt_backoff() and set_rtx_timer(), which re-arms the timer via RtxTimer::resched(timeout) with timeout = rtt_timeout().]

TcpAgent – Slow Start (1)

  • Slow start
    • It operates on the observation that the rate at which new packets should be injected into the network is the rate at which acknowledgments are returned by the other end.
    • Slow start adds another window to the sender's TCP : the congestion window (cwnd).
    • The congestion window is initialized to one segment.
    • Each time an ACK is received, the congestion window is increased by one segment.
    • If the congestion window grows beyond the ssthresh (slow-start threshold) value, TCP switches to congestion avoidance mode. (A short worked example follows.)
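Illustrative numbers only: with cwnd initialized to 1 segment and ssthresh at 16 segments, each returning ACK adds one segment, so cwnd grows roughly 1 → 2 → 4 → 8 → 16 over successive round-trip times; once cwnd reaches ssthresh, the agent switches to congestion avoidance.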
TcpAgent – Slow Start (2)

  • Related methods

    // Return the current window size
    int TcpAgent::window()
    {
        return (cwnd_ < wnd_ ? (int)cwnd_ : (int)wnd_);
    }

    // Set the initial window size ("1")
    void TcpAgent::set_initial_window()
    {
        if (syn_ && delay_growth_)
            cwnd_ = 1.0;
        else
            cwnd_ = initial_window();
    }

    // Called every time an ACK is received; increases the congestion window
    void TcpAgent::opencwnd()
    {
        double increment;
        if (cwnd_ < ssthresh_) {
            // slow start (exponential growth)
            cwnd_ += 1;
        }
        // ......
    }

TcpAgent – Congestion Avoidance (1)

  • Congestion avoidance
    • Congestion avoidance is a way to deal with lost packets.
    • There are two indications of packet loss : a timeout occurring and the receipt of duplicate ACKs.
    • When congestion occurs, TCP must slow down its transmission rate of packets into the network and then invoke slow start to get things going again.
    • The algorithm operates as follows (a short worked example follows this list):
      • 1. Initialization for a given connection sets cwnd to one segment and ssthresh to 65535 bytes.
      • 2. The TCP output routine never sends more than the minimum of cwnd and the receiver's advertised window.
      • 3. When congestion occurs (indicated by a timeout or the reception of duplicate ACKs), one-half of the current window size (the minimum of cwnd and the receiver's advertised window) is saved in ssthresh. Additionally, if the congestion is indicated by a timeout, cwnd is set to one segment.
      • 4. When new data is acknowledged by the other end, cwnd is increased, but how it increases depends on whether TCP is performing slow start or congestion avoidance. In slow start, cwnd is increased by one segment every time an ACK is received; in congestion avoidance, cwnd is increased by segsize * segsize / cwnd each time an ACK is received.
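Illustrative numbers only: if cwnd has reached 8 segments when a timeout occurs (and the receiver's advertised window is larger), ssthresh is set to 4 and cwnd drops back to 1. Slow start then grows cwnd 1 → 2 → 4; beyond ssthresh, congestion avoidance adds about segsize * segsize / cwnd bytes per ACK, i.e. roughly one segment per round trip (4 → 5 → 6 ...).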
TcpAgent – Congestion Avoidance (3)

  • Related methods

    void TcpAgent::opencwnd()
    {
        double increment;
        if (cwnd_ < ssthresh_) {
            // slow start (exponential growth)
            cwnd_ += 1;
        } else {
            // congestion avoidance (linear growth)
            double f;
            switch (wnd_option_) {
            // ......
            case 1:
                // This is the standard algorithm.
                increment = increase_num_ / cwnd_;
                // ......
                cwnd_ += increment;
                break;
            }
        }
        return;
    }

    // This function is called when a timeout (or loss indication) occurs
    void TcpAgent::slowdown(int how)
    {
        double decrease;
        double win, halfwin, decreasewin;
        int slowstart = 0;
        ++ncwndcuts_;
        // we are in slow start for sure if cwnd < ssthresh
        if (cwnd_ < ssthresh_)
            slowstart = 1;
        // ......
        if (how & CLOSE_SSTHRESH_HALF) {
            // for the first decrease, decrease by half
            if (first_decrease_ == 1 || slowstart ||
                last_cwnd_action_ == CWND_ACTION_TIMEOUT) {
                ssthresh_ = (int)halfwin;
            } else {
                ssthresh_ = (int)decreasewin;
            }
        }
        // ......
        if (how & CLOSE_CWND_ONE)
            cwnd_ = 1;
        if (ssthresh_ < 2)
            ssthresh_ = 2;
        // ......
    }

TcpAgent – Fast Retransmit (1)

  • Fast retransmit
    • When TCP receives duplicate ACKs, it does not know whether a duplicate ACK is caused by a lost segment or just by reordering of segments.
    • It therefore waits for a small number of duplicate ACKs to be received.
    • The assumption is that if segments are merely reordered, there will be only one or two duplicate ACKs before the reordered segment is processed, which will then generate a new ACK.
    • If three or more duplicate ACKs are received in a row, it is a strong indication that a segment has been lost. TCP then retransmits the missing segment immediately, without waiting for the retransmission timer to expire. (A short example follows.)
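For example, if segment 5 is lost but segments 6, 7 and 8 arrive, the receiver answers each of them with a duplicate ACK (still acknowledging only the data up to segment 4), so the sender sees three duplicate ACKs in a row and retransmits segment 5 immediately instead of waiting for the retransmission timer.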
TcpAgent – Fast Retransmit (2)

  • Related methods

    void TcpAgent::recv(Packet* pkt, Handler*)
    {
        // ......
        else if (tcph->seqno() == last_ack_) {
            // ......
            // a duplicate ACK: on the numdupacks_-th one, trigger fast retransmit
            if (++dupacks_ == numdupacks_ && !noFastRetrans_) {
                dupack_action();
            }
        }
        // ......
        send_much(0, 0, maxburst_);
    }

    void TcpAgent::dupack_action()
    {
        int recovered = (highest_ack_ > recover_);
        if (recovered || (!bug_fix_ && !ecn_)) {
            goto tahoe_action;
        }
        // ......
    tahoe_action:
        recover_ = maxseq_;
        last_cwnd_action_ = CWND_ACTION_DUPACK;
        // Tahoe: halve ssthresh and drop cwnd back to one segment
        slowdown(CLOSE_SSTHRESH_HALF | CLOSE_CWND_ONE);
        reset_rtx_timer(0, 0);
        return;
    }

TcpAgent – Overview

  • Flow chart

[Flow chart: the TcpAgent sends data packets via output()/send_much() to the receiver (TcpSink), which returns ACKs; on recv() of an ACK, (1.A) a new ACK is handled by recv_newack_helper() → newack() → rtt_update()/newtimer() and opencwnd(), while (1.B) a duplicate ACK leads to dupack_action(); (2) in either case send_much() is called to transmit more data, and finish() runs when the transfer completes.]


Next Presentation

  • Source Analysis