
QoS I




  1. QoS I Do Hyeong Im 2002. 04. 30

  2. Outline • Controlling high-bandwidth flows at the congested router • Providing quality of service guarantees without per-flow state

  3. Controlling high-bandwidth flows at the congested router

  4. Contents • Introduction • Related work • RED-PD • Identifying high bandwidth flows • Preferential dropping • Evaluation • Conclusions

  5. Introduction • FIFO queuing at the router • Simple to implement and well-suited to the heterogeneity of the Internet • But it does not protect other flows from high-bandwidth flows • Per-flow scheduling mechanisms • Provide max-min fairness • But require keeping state for all flows

  6. Related work (1) • RED • To control the average queue length • Poor performance under changing traffic load • CSFQ (Core-Stateless Fair Queuing) • To achieve fair queuing without per-flow state in the core routers • Requires an extra field in the packet headers • FRED (Flow Random Early Detection) • The dropping probability of a flow depends on the number of buffered packets from that flow • SRED (Stabilized RED) • Uses a cache of recently seen flows to determine the high-bandwidth flows
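As a concrete illustration of the RED mechanism summarized above, a minimal sketch of its drop-probability ramp (the threshold and maximum-probability values here are illustrative, not from the slides):

```python
def red_drop_probability(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """Drop probability as a function of the average queue length."""
    if avg_q < min_th:
        return 0.0          # below the min threshold: never drop
    if avg_q >= max_th:
        return 1.0          # above the max threshold: always drop
    # linear ramp from 0 to max_p between the two thresholds
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

RED's weakness noted on the slide follows from this shape: the ramp is tuned for one traffic load, so under a changing load the average queue length oscillates around the thresholds.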

  7. Related work (2) • SFB (Stochastic Fair Blue) • Multiple levels of hashing to identify high-bandwidth flows • CHOKe • An incoming packet is matched against a random packet in the queue • Performs poorly when the number of flows is large and the high-bandwidth flows have only a few packets in the queue

  8. RED-PD (1) • Identifying high bandwidth flows • Preferential dropping

  9. RED-PD (2) • Difference from other schemes • Improves the performance of low-bandwidth flows using a small amount of state • Predictable effect on the traffic going through the router

  10. Identifying high bandwidth flows (1) • Using the RED drop history • To identify flows sending at more than f(r, p), the rate of a reference TCP flow with round-trip time r and packet drop rate p
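The slide does not spell out f(r, p); one common choice is the simplified TCP-friendly throughput formula, sketched here under that assumption:

```python
import math

def tcp_friendly_rate(mss_bytes, rtt_s, p):
    """Approximate steady-state TCP rate (bytes/s) for segment size
    mss_bytes, round-trip time rtt_s and packet drop rate p:
    rate ~= (MSS / RTT) * sqrt(3/2) / sqrt(p)."""
    return (mss_bytes / rtt_s) * math.sqrt(1.5 / p)
```

A flow sending persistently faster than this rate at the ambient drop rate is a candidate for monitoring; note that quadrupling the drop rate only halves the reference rate, because of the square root.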

  11. Identifying high bandwidth flows (2) • Congestion epoch length CL(r, p) • Maintaining the packet drop history over K x CL(r, p) seconds • Partitioning the history into M lists • RED-PD identifies flows with losses in at least K of the M lists • K = 3, M = 5, r = 40 ms and p = 1%
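The K-of-M identification rule above can be sketched directly (flow ids and list contents here are hypothetical):

```python
def identify_high_bandwidth(drop_lists, k=3):
    """drop_lists: M lists, each holding the ids of flows that had a
    drop in that interval of the history.  A flow is identified as
    high-bandwidth if it appears in at least k of the M lists."""
    counts = {}
    for interval in drop_lists:
        for flow in set(interval):          # count a flow once per interval
            counts[flow] = counts.get(flow, 0) + 1
    return {f for f, c in counts.items() if c >= k}
```

With K = 3 and M = 5, a flow must see drops in three of the five most recent intervals, which filters out flows that were merely unlucky in a single interval.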

  12. Preferential dropping (1) • Pseudo code for reducing a flow’s dropping probability

  13. Preferential dropping (2) • Pseudo code for increasing a flow’s dropping probability
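The pseudocode on slides 12 and 13 appears only as figures in the transcript; a hedged sketch of the increase/decrease logic, with an illustrative step size:

```python
def adjust_drop_probability(p, had_recent_drops, step=0.05, max_p=1.0):
    """One update step for a monitored flow's pre-filter drop
    probability: raise it while the flow still sees ambient RED drops,
    lower it once the drops stop (the step size is an assumption)."""
    if had_recent_drops:
        return min(max_p, p + step)
    # back off more gently; a flow reaching 0 can be released from monitoring
    return max(0.0, p - step / 2)
```

Raising quickly and lowering slowly keeps the monitored flow's throughput near the target without oscillating on every measurement interval.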

  14. Evaluation (1) • Probability of identification

  15. Evaluation (2) • Fairness • Multiple CBR flows: flow 1 at 0.1 Mbps, flow 2 at 0.5 Mbps, and every subsequent flow sending 0.5 Mbps more than the previous flow • Mix of TCP and CBR flows: flows 1-9 are TCP flows with RTTs of 30, 50 and 70 ms; flows 10-12 are CBR flows at 5, 3 and 1 Mbps respectively

  16. Evaluation (3) • Response time • The speed of RED-PD's reaction depends on the ambient drop rate and the arrival rate of the monitored flow • Setup: 1 CBR flow and 9 TCP flows; the CBR flow starts at 0.25 Mbps, increases to 4 Mbps at t = 50 s, and decreases back to 0.25 Mbps at t = 250 s; the RTTs of the TCP flows range from 30 to 70 ms

  17. Evaluation (4) • Effect of R, the target RTT • Increasing R • More flows are monitored • Decreasing the ambient drop rate • Increasing the bandwidth available to the unmonitored flows

  18. Conclusions • RED-PD • Using drop history to identify high-bandwidth flows and controlling their throughput in times of congestion • Applicable to the current Internet

  19. Providing quality of service guarantees without per-flow state

  20. Contents • Introduction • Related work • Quality of service model • Signaling protocol • Fault tolerance • Dynamic packet scheduling • Region aggregation • Conclusions

  21. Introduction • Improving the QoS provided by the Internet • Integrated services (Intserv) • QoS is based on a scheduling protocol • Each router maintains per-flow state • Scalability problem • Difficult to maintain in a distributed environment • Differentiated services (Diffserv) • A few bits are reserved in each packet to indicate its per-hop behavior (PHB) • At each router packets are classified and forwarded according to their PHB • High levels of QoS and network utilization cannot be accomplished

  22. Related work (1) • Some attempts to provide the QoS level of Intserv without any per-flow state at the core routers • The signaling protocol and the packet scheduling protocol must function without per-flow state • Dynamic packet state • Each packet carries enough information to reproduce its deadline at each router • Unable to compute the deadline accurately if a channel has a variable delay • Flow aggregation • Cannot be used across multiple domains

  23. Related work (2) • Signaling methods without per-flow state • Observation methods • To estimate the resource requirement by observing the traffic through the router • Inaccurate estimation • Bandwidth broker methods • Resource reservation is managed by a bandwidth broker • Centralized brokers are vulnerable to faults • Distributed brokers have the difficulty of maintaining their state synchronized

  24. Quality of service model (1) • Some notation • Rf : bandwidth reserved for flow f • pf,i : ith packet of f, i ≥ 1 • Lf,i : length of packet pf,i • Ls,f,i : maximum of Lf,j , where 1 ≤ j ≤ i • Ls : maximum packet length at s • As,f,i : arrival time of pf,i at scheduler s • Es,f,i : exit time of pf,i from s • Cs : bandwidth of the output channel of s • πs : upper delay bound of the output channel of s

  25. Quality of service model (2) • Ss,f,i : the time at which the first bit of pf,i is forwarded by s • Fs,f,i : the time at which the last bit of pf,i is forwarded by s • Rf : forwarding rate of flow f at s • Ss,f,1 = As,f,1 • Ss,f,i = max(As,f,i , Fs,f,i-1) , for every i, i > 1 • Fs,f,i = Ss,f,i + Ls,f,i / Rf , for every i, i ≥ 1 • Rate-guaranteed scheduler • Es,f,i ≤ Ss,f,i + δs,f,i , for every input flow f of s and every i, i ≥ 1 • δs,f,i : the delay of packet pf,i at scheduler s • Ss,f,i + δs,f,i : deadline of pf,i at s
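The start/finish-time recurrence above can be computed directly:

```python
def virtual_times(arrivals, lengths, rate):
    """Per the recurrence: S_1 = A_1, S_i = max(A_i, F_{i-1}),
    F_i = S_i + L_i / R_f.  Returns a list of (S_i, F_i) pairs."""
    times = []
    prev_finish = 0.0
    for i, (a, length) in enumerate(zip(arrivals, lengths)):
        s = a if i == 0 else max(a, prev_finish)   # wait for the previous packet
        f = s + length / rate                      # transmission at rate R_f
        times.append((s, f))
        prev_finish = f
    return times
```

Back-to-back arrivals are serialized at rate Rf, while a packet arriving after the previous one finished starts at its own arrival time.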

  26. Quality of service model (3) • The delay of a packet across a sequence of schedulers • Let t1, t2, …, tk be a sequence of k rate-guaranteed schedulers traversed by flow f • Then, for all i, Etk,f,i ≤ St1,f,i + Σj=1..k-1 (δtj,f,i + πtj) + δtk,f,i

  27. Quality of service model (4) • Scheduling test • To ensure packets exit by their deadline • Rate-dependent delay: Σf Rf ≤ Cs (1) • Rate-independent delay • For all t, t > 0, the traffic that must exit s by time t does not exceed Cs x t (2) • δs,f : the delay of flow f at scheduler s

  28. Signaling protocol (1) • How much information is needed? • In case (1), the total of the reserved rates of the flows and the rate of the output channel • In case (2), a count of input flows in each (rate, delay) pair • Objective • To keep the above information current at each node • Soft state • Each flow periodically sends Refresh messages along the path to its destination

  29. Signaling protocol (2) • We assume the scheduler uses rate-dependent delay and test (1) • Each scheduler s updates its state every T seconds in the following way:
      SumRatess := ShadowSumRatess;
      ShadowSumRatess := 0;
      SwapBitss := ¬SwapBitss
• Whenever s receives a Refresh message from flow f, the following is performed:
      if bf,s ≠ SwapBitss then
          ShadowSumRatess := ShadowSumRatess + Rf;
          bf,s := SwapBitss
      end if;
      forward Refresh towards the destination of f
• When the destination receives this message, it returns a RefreshAck message back to the source of f

  30. Signaling protocol (3) • When a new flow f is created • Upon receiving a Reserve message from f at s:
      if SumRatess + Rf ≤ Cs then
          ShadowSumRatess := ShadowSumRatess + Rf;
          SumRatess := SumRatess + Rf;
          bf,s := SwapBitss;
          forward Reserve towards the destination of f
      else
          return a Reject message towards the source of f
      end if
• Upon receiving a Reject message for flow f:
      if SwapBitss = bf,s then
          ShadowSumRatess := ShadowSumRatess - Rf
      end if;
      SumRatess := SumRatess - Rf;
      forward Reject towards the source of f
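The state updates on slides 29 and 30 can be combined into one sketch (class, method and attribute names are mine; message forwarding is elided):

```python
class SignalingState:
    """Per-scheduler soft state: SumRates, ShadowSumRates, SwapBits
    and the per-flow bits b_{f,s}, following the slides' names."""

    def __init__(self, capacity):
        self.capacity = capacity          # C_s, output channel rate
        self.sum_rates = 0.0              # SumRates_s
        self.shadow_sum_rates = 0.0       # ShadowSumRates_s
        self.swap_bit = False             # SwapBits_s
        self.flow_bits = {}               # b_{f,s} for each refreshed flow

    def tick(self):
        """Runs every T seconds: adopt the shadow total and restart it."""
        self.sum_rates = self.shadow_sum_rates
        self.shadow_sum_rates = 0.0
        self.swap_bit = not self.swap_bit

    def on_refresh(self, f, rate):
        """Count a refreshed flow's rate at most once per period."""
        if self.flow_bits.get(f) != self.swap_bit:
            self.shadow_sum_rates += rate
            self.flow_bits[f] = self.swap_bit

    def on_reserve(self, f, rate):
        """Admission test (1) for a new flow; True means forward Reserve."""
        if self.sum_rates + rate > self.capacity:
            return False                  # send Reject towards the source
        self.shadow_sum_rates += rate
        self.sum_rates += rate
        self.flow_bits[f] = self.swap_bit
        return True

    def on_reject(self, f, rate):
        """Undo a tentative reservation as the Reject travels back."""
        if self.flow_bits.get(f) == self.swap_bit:
            self.shadow_sum_rates -= rate  # also counted in the shadow sum
        self.sum_rates -= rate
        self.flow_bits.pop(f, None)
```

The swap bit is what makes the state soft: a flow that stops refreshing is simply never added to the next period's shadow sum, so its reservation expires after at most 2T seconds without any explicit teardown message.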

  31. Signaling protocol (4)

  32. Signaling protocol (5) • How often should the source of a flow send a Refresh message? • D : an upper bound on the time for a signaling message to traverse the network • The interval between successive transmissions of Refresh messages should be at most T - D

  33. Fault tolerance • Delayed or lost signaling messages • If a source does not receive a RefreshAck, then the source terminates the flow • This should occur rarely • Link failure & process failure • The path from source to destination may change before the flow is terminated • Routing changes • If the path of f changes, its messages are dropped where the change occurred, causing the termination of f

  34. Dynamic packet scheduling (1) • Consider two consecutive schedulers, s and t, of flow f • At,f,i ≤ St,f,i ≤ Ss,f,i + Δs,f,i + πs • Assume At,f,i = Ss,f,i + Δs,f,i + πs for all pf,i ; then St,f,i = At,f,i = Ss,f,i + Δs,f,i + πs • Before s forwards pf,i to t, s computes Ss,f,i and stores Ss,f,i in pf,i • If pf,i arrives earlier than its stamped time, it is kept in a buffer until that time; only then is it considered "arrived" and may be scheduled for transmission • But all schedulers must have a common clock
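The hold-until-arrival buffering described above can be sketched with a priority queue (class and method names are hypothetical):

```python
import heapq

class HoldBuffer:
    """Delay early packets until their stamped eligibility time, so the
    downstream scheduler sees the assumed arrival time regardless of
    how quickly the channel actually delivered the packet."""

    def __init__(self):
        self._heap = []   # (eligible_time, packet), ordered by time

    def insert(self, now, stamped_time, packet):
        # a packet arriving after its stamp is eligible immediately
        heapq.heappush(self._heap, (max(now, stamped_time), packet))

    def pop_eligible(self, now):
        """Return every packet whose hold time has passed, in order."""
        out = []
        while self._heap and self._heap[0][0] <= now:
            out.append(heapq.heappop(self._heap)[1])
        return out
```

Because eligibility times are computed from the upstream stamp rather than measured arrivals, this reconstructs per-packet deadlines without the downstream scheduler keeping per-flow state, which is why a common clock is required.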

  35. Dynamic packet scheduling (2) • Scheduler s computes the early departure of pf,i , denoted εs,f,i , as follows: εs,f,i = Ss,f,i + Δs,f,i - Es,f,i • Disadvantages • If the output channel has a variable delay, then the arrival time at the next scheduler is not computed accurately • Assume some schedulers have clocks which run fast and forward packets to a scheduler with a normal clock => this causes excessive delays to the other flows at the normal scheduler

  36. Region aggregation (1) • Taking advantage of the hierarchical structure of internetworks • The gateways are nodes in the network • The circuits between gateways are output channels with variable delay • Gateways have synchronized clocks using the NTP protocol

  37. Region aggregation (2) • The packets of all the flows sharing the same circuit are aggregated together to become a single flow g • The aggregation should be done in a fair manner • A lower per-hop delay is possible for the aggregated flow than for the individual flows

  38. Conclusions • Approach to provide QoS guarantees without per-flow state at each router • Signaling protocol • Maintaining a constant amount of state per router • Accurate and resilient to process and link failures • Packet scheduling technique • A combination of the dynamic packet state and flow aggregation
