Introduction, Overview of BB Architecture, and Per-Flow Admission Control


- Introduction
- Overview of BB Architecture
- Admission control-per flow
- Class based Admission Control-Dynamic flow aggregation
- Simulation and investigation

Introduction

Features:

- Scalable support of guaranteed services: decouples the QoS control plane from the data plane
- Core routers do not maintain reservation states
- Admission control is done on an entire-path basis, reducing complexity
- Circumvents the problem caused by dynamic flow aggregation
- Relieves core routers of QoS control functions (admission control and state management)

- The RSVP-based approach, in contrast: each router maintains a QoS state database and runs an admission control test
- RSVP's soft QoS states require exchange of state between routers
- This adds communication and processing overheads

- Admission control and resource provisioning are centralized at the BB
- Uses the core-stateless framework (VTRS) with dynamic packet state
- Core routers perform packet scheduling and forwarding using dynamic packet states carried in headers
- Modules:
- routing: topology info, path selection, setup
- policy control: policy info base, network policy administration
- admission control: stores QoS states, admission control, reservation

Virtual Time Reference System

- Scheduling framework with per-hop and end-to-end property characterization
- 3 components:
- packet state, edge traffic conditioning, and a reference/update mechanism

- Packet virtual time stamps are updated in each core router using state carried in the packets, so the core stays stateless
- Edge traffic conditioning enforces the spacing rule a_1^{j,k+1} - a_1^{j,k} >= L^{j,k+1} / r^j,
which ensures packets are not injected into the core at a rate exceeding the reserved rate r^j.
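The spacing rule above can be sketched as a simple conditioner loop (a minimal illustration, not the paper's implementation; function and variable names are hypothetical):

```python
# Sketch of the VTRS edge traffic conditioner spacing rule: delay packet k+1
# so that release[k+1] - release[k] >= L[k+1] / r, i.e. the flow never enters
# the core faster than its reserved rate r.

def condition(arrivals, lengths, r):
    """Return release times that satisfy the edge spacing rule."""
    release = []
    for k, (t, L) in enumerate(zip(arrivals, lengths)):
        if k == 0:
            release.append(t)
        else:
            # hold the packet until spacing L/r has elapsed since the previous one
            release.append(max(t, release[-1] + L / r))
    return release

# Three 1000-bit packets on a 1000 bit/s reservation: the second packet,
# arriving only 0.1 s after the first, is held until t = 1.0 s.
print(condition([0.0, 0.1, 3.0], [1000, 1000, 1000], 1000.0))  # [0.0, 1.0, 3.0]
```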

- Packet state:
- each packet p^{j,k} carries three kinds of info: the rate-delay pair (r^j, d^j), the virtual time stamp w^{j,k} (initialized to the ingress arrival time a_1^{j,k}), and the adjustment term δ^{j,k}

- Reference/update mechanism:
- each core router updates the virtual time stamps, driving the progression of virtual time
- the virtual time stamp w_i^{j,k} has 2 properties:
- virtual spacing: w_i^{j,k+1} - w_i^{j,k} >= L^{j,k+1} / r^j
- reality check: a_i^{j,k} <= w_i^{j,k}
- the virtual delay depends on the scheduling algorithm (rate based or delay based)
- If S_i is rate based: virtual delay d_i^{j,k} = L^{j,k} / r^j + δ^{j,k},
virtual finish time V_i^{j,k} = w_i^{j,k} + d_i^{j,k}

- If S_i is delay based: virtual delay d_i^{j,k} = d^j,
virtual finish time V_i^{j,k} = w_i^{j,k} + d_i^{j,k}

- Per-hop behavior is characterised by an error term ψ_i
- S_i guarantees flow j its reserved rate r^j or delay parameter d^j with error term ψ_i if, for any k, the actual finish time satisfies f_i^{j,k} <= V_i^{j,k} + ψ_i

- A packet leaves S_i by time V_i^{j,k} + ψ_i = w_i^{j,k} + d_i^{j,k} + ψ_i
- Thus the next hop's timestamp is w_{i+1}^{j,k} = V_i^{j,k} + ψ_i + π_i = w_i^{j,k} + d_i^{j,k} + ψ_i + π_i,
where π_i is the propagation delay from the i-th router to the next
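The stamp update above can be sketched across a path of hops (an illustrative sketch only; the hop encoding and names are hypothetical):

```python
# Sketch of the VTRS virtual time stamp update across hops: at each hop the
# virtual finish time is v = w + d_i, and the stamp carried to the next hop
# is w' = v + psi_i + pi_i (error term plus propagation delay).

def update_timestamps(w1, hops, L, r, delta):
    """hops: list of ('rate', psi, pi) or ('delay', psi, pi, d_j) tuples."""
    w = w1
    stamps = [w]
    for hop in hops:
        if hop[0] == 'rate':
            _, psi, pi = hop
            d = L / r + delta          # rate-based virtual delay
        else:
            _, psi, pi, d_j = hop
            d = d_j                    # delay-based virtual delay
        v = w + d                      # virtual finish time at this hop
        w = v + psi + pi               # stamp carried to the next hop
        stamps.append(w)
    return stamps
```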

- The end-to-end delay is bounded in terms of the rate-delay pair and the error terms
- Consider h hops for flow j: q are rate based, (h-q) are delay based
- Traffic profile of flow j: (σ^j, ρ^j, P^j, L^{j,max})
- σ^j: max burst size, ρ^j: sustained rate, P^j: peak rate, L^{j,max}: max packet size of flow j

- Delay of packets of flow j through the core:
- f_h^{j,k} - a_1^{j,k} <= d_core^j = q * L^{j,max}/r^j + (h-q) * d^j + Σ_i (ψ_i + π_i)
- Edge delay: d_edge^j = T_on^j * (P^j - r^j)/r^j + L^{j,max}/r^j, where T_on^j = (σ^j - L^{j,max})/(P^j - ρ^j)
- End-to-end delay = d_edge^j + d_core^j
= T_on^j * (P^j - r^j)/r^j + (q+1) * L^{j,max}/r^j + (h-q) * d^j + Σ_i (ψ_i + π_i)
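The end-to-end bound can be checked numerically (a worked sketch; all parameter values are hypothetical):

```python
# Worked instance of the end-to-end delay bound:
# D_e2e = Ton*(P - r)/r + (q+1)*Lmax/r + (h-q)*d + sum_i(psi_i + pi_i)

def e2e_delay_bound(sigma, rho, P, Lmax, r, d, h, q, psi_pi_sum):
    t_on = (sigma - Lmax) / (P - rho)        # maximum on-period of the flow
    d_edge = t_on * (P - r) / r + Lmax / r   # edge conditioning delay
    d_core = q * Lmax / r + (h - q) * d + psi_pi_sum
    return d_edge + d_core

# Hypothetical flow: burst 5000 bits, sustained 1000 b/s, peak 2000 b/s,
# 1000-bit packets, reserved rate 1500 b/s, d = 0.5 s, 4 hops (2 rate based).
print(e2e_delay_bound(5000, 1000, 2000, 1000, 1500, 0.5, 4, 2, 0.3))
```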

- Core-stateless scheduling algorithms:
- Core Stateless Virtual Clock (CSVC): services packets in order of virtual finish time;
error term ψ = L^{*,max}/C, where L^{*,max} is the maximum packet size and C the total link capacity

- Virtual Time Earliest Deadline First (VT-EDF): guarantees the delay parameter d^j with the minimum error term
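Serving in order of virtual finish time amounts to a priority queue keyed on the state carried in each packet, which can be sketched with a heap (names hypothetical; a real CSVC scheduler works online as packets arrive, this only shows the ordering):

```python
import heapq

# Sketch of the CSVC service order: the router reads the virtual finish time
# from each packet's carried state and serves the smallest first -- no
# per-flow state is kept at the core.

def csvc_service_order(packets):
    """packets: list of (virtual_finish_time, packet_id) read from headers."""
    heap = list(packets)
    heapq.heapify(heap)
    order = []
    while heap:
        _, pid = heapq.heappop(heap)
        order.append(pid)
    return order

print(csvc_service_order([(2.0, 'b'), (1.5, 'a'), (3.0, 'c')]))  # ['a', 'b', 'c']
```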

QoS state information bases:

- flow info base: flow id, traffic profile, service profile, QoS reservation
- node QoS state info base: router bandwidth, buffer capacity, scheduler, error term, QoS reservation
- path QoS state info base: hop count of the path, propagation delay

Basic operation

Phases of admission control:

- admissibility test phase
- bookkeeping phase

Admission control for per-flow services

1. Only rate-based schedulers

The test is whether a reserved rate r^v can be found for the new flow v (no delay parameter d^v is needed).

Let F_i be the set of flows through S_i and C_i its total bandwidth; then each flow j is guaranteed its rate r^j if Σ_{j∈F_i} r^j <= C_i.

Residual bandwidth: C_res^{S_i} = C_i - Σ_{j∈F_i} r^j

To meet the delay requirement D^{v,req}, we need ρ^v <= r^v <= P^v and

D^{v,req} >= T_on^v * (P^v - r^v)/r^v + (h+1) * L^{v,max}/r^v + D_tot^P ----------(1)

where T_on^v = (σ^v - L^{v,max})/(P^v - ρ^v) and D_tot^P = Σ_{i∈P} (ψ_i + π_i)

We also have: C_res^P = min_{i∈P} C_res^{S_i}

Let r_min^v be the smallest r^v that satisfies (1); solving (1) for r^v gives

r_min^v = [T_on^v * P^v + (h+1) * L^{v,max}] / [D^{v,req} - D_tot^P + T_on^v]

Let r_fea^low = max{ρ^v, r_min^v} and r_fea^up = min{P^v, C_res^P}; then the feasible range is

R_fea* = [r_fea^low, r_fea^up], and the flow is admitted if r_fea^low <= r_fea^up.
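The admissibility test above can be sketched directly from equation (1) (a minimal sketch; argument names and the example numbers are hypothetical):

```python
# Per-flow admissibility test on a rate-based path: compute the smallest rate
# r_min meeting the delay requirement, intersect with [rho, P] and the path's
# residual bandwidth, and reject if the feasible range is empty.

def admit(rho, P, sigma, Lmax, D_req, D_tot, h, C_res_path):
    t_on = (sigma - Lmax) / (P - rho)
    if D_req - D_tot + t_on <= 0:
        return None                      # delay requirement cannot be met
    # smallest rate satisfying the delay requirement (eq. 1)
    r_min = (t_on * P + (h + 1) * Lmax) / (D_req - D_tot + t_on)
    r_low = max(rho, r_min)
    r_up = min(P, C_res_path)
    return (r_low, r_up) if r_low <= r_up else None  # None: reject the flow

# Hypothetical flow over a 4-hop path with 1800 b/s residual bandwidth:
print(admit(1000, 2000, 5000, 1000, 10.0, 1.0, 4, 1800))
```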

Class based services-Dynamic flow aggregation

- Each flow is placed in a delay service class; the aggregate is a macroflow, the individual flow a microflow
- A macroflow has an aggregate reserved rate and delay bound
- Dynamic flow aggregation: microflows can join and leave macroflows at any time
- So the reserved rate changes, with undesirable effects on the end-to-end delay

Impact of DFA on End to End Delay

- Aggregate traffic profile for macroflow α: (σ^α, ρ^α, P^α, L^{α,max})
- Rate-delay pair: (r^α, d^α); then the delay is

D_e2e^α = T_on^α * (P^α - r^α)/r^α + L^{α,max}/r^α + h * L^{P,max}/r^α + D_tot^P

where D_tot^P = Σ_{i∈P} (ψ_i + π_i)

- A new microflow with traffic profile (σ^v, ρ^v, P^v, L^{v,max}) joins at time t*
- The new macroflow α' has profile (σ^{α'}, ρ^{α'}, P^{α'}, L^{α',max}) and reserved rate r^{α'} from t*
- The worst-case delay at the edge conditioner can be larger than
d_edge^{α'} = T_on^{α'} * (P^{α'} - r^{α'})/r^{α'} + L^{α',max}/r^{α'}

- Suppose the new microflow joins the existing macroflow at time t* = T_on^α - T_on^v.
- At time t = T_on^α, the traffic queued at the edge conditioner is
Q(t) = (P^α - r^α) * T_on^α + (P^v + r^α - r^{α'}) * T_on^v + L^{α',max}

- Thus the delay in the edge conditioner at time T_on^α is at least Q(t)/r^{α'}, which is greater than d_edge^{α'}.
- This extra delay arises because when the new flow is aggregated into the macroflow, the buffer at the edge conditioner is not empty.
- Inside the core, macroflow α' has the higher rate r^{α'}, but the worst-case delay is bounded by d_core^α = h * L^{α,max}/r^α + D_tot^P, not by the smaller bound based on r^{α'}: packets from the new flow can catch up with the last packets of the old flow.

- The same situation arises when a microflow leaves.

End to End delay bounds under DFA

- Admission control decisions should use only the traffic profile and reserved rate
- Contingency bandwidth:
- eliminates the delay effect of the backlog in the queue
- contingency bandwidth δr^v is allocated temporarily to the new macroflow α' for a contingency period of τ^v time units
- chosen such that the max delay in the edge conditioner for macroflow α' after time t* is bounded by
d_edge^new <= max{d_edge^old, d_edge^{α'}}

- The above condition holds if:

δr^v >= P^v - Δr^v and τ^v >= Q(t*)/δr^v (microflow join)

and δr^v >= Δr^v and τ^v >= Q(t*)/δr^v (microflow leave)

where Δr^v = r^{α'} - r^α and Δr^v = r^α - r^{α'} respectively, and Q(t*) is the size of the backlog in the edge conditioner

We have Q(t*) <= d_edge^old * r(t*) = d_edge^old * (r^α + δr^α(t*))

Thus the contingency period is τ^v = d_edge^old * (r^α + δr^α(t*)) / δr^v

After τ^v, the BB can deallocate the bandwidth δr^v allocated at time t*, i.e. at time t* + τ^v
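The bookkeeping above can be sketched as follows (a minimal sketch under the stated bound on Q(t*); names and numbers are hypothetical):

```python
# Sketch of the BB's contingency-bandwidth bookkeeping for a microflow join:
# bound the edge-conditioner backlog, derive the contingency period tau after
# which the extra bandwidth delta_r can be returned to the residual pool.

def contingency_period(d_edge_old, r_alpha, delta_r_existing, delta_r_new):
    # backlog bound: Q(t*) <= d_edge_old * (r_alpha + delta_r(t*))
    q_star = d_edge_old * (r_alpha + delta_r_existing)
    tau = q_star / delta_r_new   # period after which delta_r_new is freed
    return tau

# Old edge delay 2.0 s, macroflow rate 1000 b/s, no prior contingency
# bandwidth, 500 b/s newly allocated:
print(contingency_period(2.0, 1000, 0, 500))  # 4.0
```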

Two approaches:

- Contingency period bounding method
- Contingency feedback method
- Extension to VTRS and core delay bound
- At time t*, the edge shaper adjusts the rate r to r'. Let p^{k*} be the last packet before the rate change and p^{k*+1} the first packet after it
- For k < k*: a_1^{k+1} - a_1^k >= L^{k+1}/r, and for k >= k*: a_1^{k+1} - a_1^k >= L^{k+1}/r'
- The delay is bounded by f_h^k - a_1^k <= q * max{L^{P,max}/r, L^{P,max}/r'} + (h-q) * d^α + D_tot^P

Topology:

S1, S2: sources; D1, D2: destinations; I1, I2: ingress routers; E1, E2: egress routers

Two settings:

1. All routers use CSVC schedulers

2. Schedulers for links I1-R2, I2-R2, R2-R3, and R5-E1 are CSVC, while R3-R4, R4-R5, and R5-E2 are VT-EDF

- Two values are used for the delay requirement (2.44 and 2.19)
- In BB/VTRS, a single delay service class is used and its delay is set to either 2.19 or 2.44; the flow delay parameters (0.10, 0.24, 0.50) are varied
- Objective: compare the maximum number of flows that can be admitted
- Results-
1. IntServ/GS and per-flow BB/VTRS admit the same number of flows; aggregate BB/VTRS performs slightly better or slightly worse. When the requirement is 2.44, the aggregate scheme accepts one fewer flow; the reason is the contingency bandwidth. When the requirement is 2.19, the aggregate scheme can accept one more flow than the others because: (a) each flow has exactly the same delay requirement, (b) the aggregate flow has a smaller core delay bound, and (c) the flows have infinite lifetimes.

The average reserved bandwidth in the aggregate BB scheme decreases as the number of flows increases, so there is enough residual bandwidth to admit one or two more flows into the network.

- In the case of per-flow VT-EDF, allocation starts low, but as the number of flows increases the reserved rate per flow grows, so not enough bandwidth remains for another flow to be admitted.
- Another set of simulations uses flows with finite holding times:
- two versions of the BB/VTRS scheme are used:

1. contingency period bounding method, 2. contingency feedback method

- flow inter-arrival times are varied

Results:

1. The per-flow BB/VTRS scheme has the lowest flow blocking rate.

2. The contingency period bounding method has the worst blocking rate, because link bandwidth is tied up as contingency bandwidth; it considers the worst-case bound.

3. Using the feedback method, the period τ is very small, so the contingency bandwidth is allocated and deallocated within a very short period of time.

4. In general, the aggregate BB/VTRS scheme has a higher flow blocking rate than the per-flow BB/VTRS scheme.

5. As the offered load increases, all the schemes converge, so the effect of contingency bandwidth allocation becomes less prominent.

Conclusion

- Scalable support of guaranteed services that decouples the QoS control plane from the packet forwarding plane
- Efficient admission control algorithms for per-flow and class-based guaranteed delay services
- Resolution of the dynamic flow aggregation problem using the BB
