

  1. Initial D-QoS Proposal
  • Maarten Hoeben - Intersil/NWN
  • Menzo Wentink - Intersil/NWN
  • Wim Diepstraten - Lucent Technologies WCND
  • Harold Teunissen - Lucent Technologies
  Prepared by: Wim Diepstraten

  2. QoS Requirements
  • The reigning network paradigm is IP - not ATM or circuit switching.
  • IP performance is inherently variable.
  • Multi-media applications for IP based networks include - or will include - pacing mechanisms to adapt to the available capacity.
    • e.g. the new MPEG4 standard allows continuous adaptation of the encoder to the available capacity.
  • Wireless QoS should take advantage of these developments and focus on:
    • maintaining [reasonable] performance in the presence of interference
    • maintaining delivery rates rather than tight per-packet delay constraints
    • accommodating rate adaptation

  3. Assumptions
  • The Enhanced DCF proposal is part of a total QoS proposal which is layered as follows:
    • QoS Level 0: No QoS, using DCF & PCF as in 802.11-1999
    • QoS Level 1: Priority based QoS delivery using enhanced DCF
    • QoS Level 2: Priority based QoS delivery using enhanced DCF & PCF
    • QoS Level 3: Parameterized QoS delivery using enhanced DCF & PCF
  • Each successive level supports the full functionality of the level below it.
  • Use a priority based interface supporting 8 priority classes according to 802.1Q.

  4. D-QoS objective
  • The objective is to use a low complexity approach.
  • Provide priority driven service differentiation between a number of different traffic priority classes.
  • Provide mechanisms to limit the medium load in order to keep the delay of higher priority classes under control, while avoiding starvation of low priority classes.
  • The mechanism should be adaptive, such that priority class differentiation mechanisms are only active when there is an active traffic load in higher priority classes.
    • If there is only "Best Effort" traffic load, then the throughput performance should be similar to the legacy throughput.
  • Priority differentiation should also be possible in the presence of legacy DCF devices, although its effectiveness can be reduced.
  • Can be applied in both ESS and IBSS.

  5. Approach
  • Up to 8 priority classes are distinguished, each having a separate queue (Q).
    • Mapping to a limited subset of about 4 may be more practical.
  • The objective is to control the total medium load such that the delay of the high priority classes is acceptable.
    • There are no absolute limits, only relative ones.
  • Medium load feedback is used to control the distinction between service classes:
    • by reducing the service rate of lower classes, to assure better service of higher priority classes when the load increases.
  • If this load feedback indicates the load per priority class on the medium, then the differentiation model can adaptively scale up the lower priorities when no traffic is present in any of the higher priority classes,
    • so that the performance of "Best Effort"-only traffic remains as it is today.
  • The same distinction in service within a station should work across the whole BSS.

  6. Basic model for example (diagram; only the block labels could be recovered)
  • Traffic classes: System & Network Management, Interactive, Stream, Best Effort.
  • Blocks: Multi-media Traffic Source, Medium Access Control, Drop Rate Control, Service Rate Control.

  7. Model discussion
  • A limited number of Access Priority Qs are defined; 4 is considered a good limit.
  • Two basic functions are provided:
    • A "Service Rate Control" mechanism that assures service differentiation between priority classes by using different access priorities.
    • A "Drop Rate Control" mechanism that regulates the "Offered Load" based on medium capacity limits.
  • Apart from this there can be a "Submission Control" function above the MAC.

  8. D-QoS mechanism approaches
  • The approach presented at the October 13 conference call has changed.
    • The previous (SR, CW based) submission rate mechanism suffered from non-linear effects, causing fairness issues and poor differentiation control.
    • It is replaced by a superior approach suggested by Maarten Hoeben, called Virtual-DCF (V-DCF).
  • The handling of legacy DCF devices is addressed separately on slide 17.
  • What is the definition of "Fairness"?
    • In the DCF, "fairness" means having statistically the same medium access opportunity to get a packet onto the medium, independent of packet size or Q-depth.

  9. Access mechanism
  • A "Medium monitor" function measures the "Load per Class" in terms of medium occupancy duration over a period of, for instance, a Beacon interval.
  • A "Virtual-DCF" (V-DCF) mechanism is used for each priority class within a station, each using a separate CWmin (CWx, a per-class parameter).
    • So in our example there are up to 4 DCF contenders within each station.
  • The "Medium monitor" function translates the "Load per Class" into a set of CWx parameters, which should be the same for all stations.
    • How this translation takes place preferably does not need to be standardized.
  • The "V-DCF" mechanism generates a separate backoff count per "Access Priority" level.
    • These counters individually decrement when the medium is not busy.
    • A transmission starts when one of the counters decrements to zero.
  • A collision between local accesses is resolved locally (see the sketch below).
    • The higher priority frame is sent first, while the lower priority frame can be sent after the post-backoff of the higher access priority frame.
    • Do we need to standardize this, or can it be a policy decision?
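  As a rough illustration of the per-class contention described above, the Python sketch below keeps one backoff counter per active class and resolves local collisions in favour of the higher class. The class numbering, the uniform backoff draw, and the simplified post-backoff handling are assumptions made for illustration only; the proposal itself does not mandate an implementation.

```python
import random

class VDCF:
    """Minimal sketch of a Virtual-DCF station: one queue, one CWmin and one
    backoff counter per access priority class (higher class number = higher
    priority). Retries and inter-frame spacing are not modelled here."""

    def __init__(self, cw_min_per_class):
        # e.g. cw_min_per_class = {3: 15, 2: 31, 1: 63, 0: 127}
        self.cw = dict(cw_min_per_class)
        self.queues = {c: [] for c in cw_min_per_class}
        self.backoff = {}                      # active backoff counter per class

    def enqueue(self, cls, frame):
        self.queues[cls].append(frame)
        if cls not in self.backoff:
            self.backoff[cls] = random.randint(0, self.cw[cls])

    def idle_slot(self):
        """Call once for every slot in which the medium is sensed idle.
        Returns (class, frame) when a transmission starts, else None."""
        for cls in self.backoff:
            self.backoff[cls] = max(0, self.backoff[cls] - 1)
        ready = [c for c, b in self.backoff.items() if b == 0 and self.queues[c]]
        if not ready:
            return None
        winner = max(ready)                    # local collision: higher class first
        frame = self.queues[winner].pop(0)
        if self.queues[winner]:
            # post-backoff for the next frame of the winning class
            self.backoff[winner] = random.randint(0, self.cw[winner])
        else:
            del self.backoff[winner]
        return winner, frame
```

  A station would typically instantiate this with the CWx values distributed in the Beacon and drive idle_slot() from its slot timing logic.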

  10. CW Strategy
  • CW use strategy:
    • The intent is to use a CWmin that is a good trade-off between collision probability and the delay increase due to excessive retries.
    • It is assumed that the nominal CWmin=31, as defined for legacy systems, is used for traffic with a number of contenders.
    • Never use a CWmin lower than 31 for "Best Effort" traffic.
      • It could be acceptable for low loads, but it affects the priority over legacy traffic.
  • A CWmin lower than 31 can be acceptable for higher priority classes that only have a few contenders.
    • Note that the "Load Monitor" can also gather information about how many contenders there are within one priority class.
    • But the retry probability will increase when doing this,
      • because of collisions with lower priority stations that are in backoff, or due to direct contenders in the same priority class.
    • A low CWmin will increase the retry rate of the high priority traffic, which may result in more net delay.
    • So a proper trade-off needs to be made (simulations needed).

  11. Example CW as a function of load
  • Assumption: C0 is the lowest priority ("Best Effort") and C3 is the highest priority.
  • Load <= 50%, all classes used (alternative if the C3 load is low):
    • C3: CW3=31 (CW3=15)
    • C2: CW2=63 (CW2=31)
    • C1: CW1=95 (CW1=47)
    • C0: CW0=127 (CW0=63)
  • Load > 50%, all classes used (alternative if C3 is empty, i.e. no load monitored):
    • C3: CW3=31 (CW3=15)
    • C2: CW2=63 (CW2=31)
    • C1: CW1=127 (CW1=63)
    • C0: CW0=255 (CW0=127)
  • Load > 50%, classes C3 and C1 empty:
    • C3: CW3=15 - no load monitored at this level, but good differentiation when traffic starts.
    • C2: CW2=31
    • C1: CW1=47 - no load monitored at this level, but some differentiation when traffic starts.
    • C0: CW0=63 - higher priority because C3 and C1 are empty.
  • The example shows that for loads above 50%, more differentiation is achieved by further throttling down the lower priority traffic, because it uses higher CWmin values.
  • But if no load is present within an access priority class, then the relative priority of C0 is increased.
  • The red/slanted case (load > 50%, C3 empty) is used for further explanation on the next pages; a translation sketch follows below.
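  To make the example concrete, the following sketch encodes the table above as a load-to-CWx translation function. The control flow and the 10% "low C3 load" threshold are assumptions made for illustration; the proposal deliberately leaves this translation policy based.

```python
LOW_C3_LOAD = 0.10   # hypothetical threshold for the "low C3 load" alternative

def example_cw_table(total_load, class_load):
    """Return {class: CWmin} following the example numbers on this slide.
    total_load and class_load values are fractions of medium occupancy."""
    c3_active = class_load.get(3, 0.0) > 0.0
    c1_active = class_load.get(1, 0.0) > 0.0

    if total_load <= 0.5:
        if c3_active and class_load[3] < LOW_C3_LOAD:
            return {3: 15, 2: 31, 1: 47, 0: 63}       # alternative: low C3 load
        return {3: 31, 2: 63, 1: 95, 0: 127}          # all classes used

    # load above 50%
    if not c3_active and not c1_active:
        return {3: 15, 2: 31, 1: 47, 0: 63}           # C3 and C1 empty
    if not c3_active:
        return {3: 15, 2: 31, 1: 63, 0: 127}          # C3 empty (red/slanted case)
    return {3: 31, 2: 63, 1: 127, 0: 255}             # all classes used
```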

  12. Differentiation example
  • >50% load, alternative if C3 is empty (no load monitored):
    • C3: CW3=15 - no load in this access priority class
    • C2: CW2=31
    • C1: CW1=63
    • C0: CW0=127
  • Evaluating the red/slanted case:
    • Note that if C3 is empty, then the higher priority classes use better (smaller) CW settings, so all the lower priorities are bumped up a level.
    • If in the meantime C3 traffic starts, it still has an access advantage, because it then uses CW=15 in that station as long as that Q is filled. This still provides class differentiation, while all CWs of the lower classes are bumped up a class.
  • A station that only has low priority class C0 traffic will access the medium using CW=127 in the above >50% load example, compared to CW=31 for the C2 class.
    • So, compared to other stations on the medium that generate C2 traffic, the C0 traffic has a factor 4 lower access priority,
    • because its average backoff is a factor 4 larger.

  13. Example cont'd
  • >50% load, alternative if C3 is empty (no load monitored):
    • C3: CW3=15 - no load in this access priority class
    • C2: CW2=31 - access on average after every 15 slots in which the medium is not busy.
    • C1: CW1=63 - access on average after every 31 slots in which the medium is not busy.
    • C0: CW0=127 - access on average after every 63 slots in which the medium is not busy.
  • If we monitor that there is no load in the higher classes at all, then the parameters could be set as follows:
    • C3: CW3=7 - no active traffic in this class
    • C2: CW2=7 - no active traffic in this class
    • C1: CW1=15 - no active traffic in this class
    • C0: CW0=31 - only Best Effort traffic active.
  • All stations now use CW=31 and send C0 traffic with the same CW as legacy devices,
    • so they will have throughput performance identical to today's legacy systems.
  • If in the above example high priority traffic is started at a station,
    • then within that station the associated lower CW is used, providing relative priority.
    • After the next monitoring period the CWx can be adapted to increase the CW in subsequent classes, to keep the collision probability at an acceptable level.

  14. Retry mechanism changes
  • The Retry mechanism should be such that high priority traffic does not suffer from lower priority traffic being exhaustively retried.
  • For stability reasons the CW needs to increase (double?) at retry, as in the current DCF.
  • Suggestion: all pending backoff counters are doubled, and the CW of the retrying frame is doubled as well, as is normal for a DCF (see the sketch below).
    • The goal of doubling the CW is that the station temporarily reduces its load on the medium, which is the main thing we want to achieve here.
    • So higher priority traffic does not have to wait until the lower priority retry is completed, but its access priority is reduced compared to other medium contenders due to the CW increase and the fact that pending backoffs are doubled.
    • The CW is changed back to the CWmin of that class after the frame is successfully transmitted.
  • Exhaustive retries no longer happen within a station, due to the inherently parallel nature of the V-DCF approach.
    • The first backoff counter reaching zero determines which frame to send.
  • CWmax may need to be extended to different values per class.
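  The suggested retry handling could look as follows, reusing the VDCF object from the earlier sketch. The CW doubling formula (2*CW+1) and the per-class CWmax cap are assumptions; the slide only states that the CW and all pending backoff counters double, and that the CW returns to CWmin after success.

```python
import random

def on_tx_failure(station, cls, cw_max_per_class):
    """Retry handling sketch: double the retrying class's CW (capped) and
    redraw its backoff, and double every other pending backoff counter so
    the whole station temporarily offers less load to the medium.
    (Re-queueing of the failed frame for the retry is not shown.)"""
    station.cw[cls] = min(2 * station.cw[cls] + 1, cw_max_per_class[cls])
    station.backoff[cls] = random.randint(0, station.cw[cls])
    for other in station.backoff:
        if other != cls:
            station.backoff[other] *= 2

def on_tx_success(station, cls, cw_min_per_class):
    # the CW of the class returns to its CWmin after a successful transmission
    station.cw[cls] = cw_min_per_class[cls]
```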

  15. V-DCF characteristics
  • Because there are multiple traffic sources within the same station doing Virtual-DCF, this has an effect on the medium load.
  • Example, assuming that CW=31 represents an equivalent load of 1:
    • For the case CWx = 31, 63, 127, 255 with all Qs active, the equivalent load is 1 + 1/2 + 1/4 + 1/8 = 1.875.
    • When C2 is empty, the load is 1 + 1/4 + 1/8 = 1.375.
    • When only C0 is filled, the load is effectively 1/8 = 0.125.
    • If the differentiation is smaller, CWx = 31, 47, 63, 127 with all Qs active, the equivalent load is 1 + 2/3 + 1/2 + 1/4 = 2.41.
  • So a V-DCF station with multiple classes accesses the medium more aggressively.
  • The actual load a station presents to the medium therefore depends on the CW differentiation and on the momentary Q situation (see the calculation sketch below).
  • Simulations must show us the effects of this.
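  The equivalent-load figures above can be reproduced with the small helper below. The linear 31/CWx weighting is the slide's own approximation for comparison against a single legacy contender; it is not a standardized formula.

```python
def equivalent_load(cw_per_class, active_classes=None, reference_cw=31):
    """Each active class contributes roughly reference_cw / CWx of the access
    attempts of a single legacy DCF contender with CW=31."""
    if active_classes is None:
        active_classes = cw_per_class.keys()
    return sum(reference_cw / cw_per_class[c] for c in active_classes)

cw = {3: 31, 2: 63, 1: 127, 0: 255}
print(equivalent_load(cw))                             # ~1.86 (slide rounds to 1.875)
print(equivalent_load(cw, active_classes=[3, 1, 0]))   # ~1.37 (C2 empty)
print(equivalent_load(cw, active_classes=[0]))         # ~0.12 (only C0 filled)
print(equivalent_load({3: 31, 2: 47, 1: 63, 0: 127}))  # ~2.40 (less differentiation)
```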

  16. Load monitoring
  • Monitoring characteristics:
    • How adaptive does this need to be?
    • To handle bursty traffic effectively we need a measurement granularity that is relatively short.
    • A Beacon interval granularity seems to be in the right ballpark,
      • given that for QoS the Beacon interval is expected to be in the 30-50 ms range (see the measurement sketch below).
  • Two approaches are possible:
    • Each station does its own monitoring, determining the load per class.
      • This affects its Power Management capability if measurements are done continuously.
      • Alternatively, the measurement is done only just prior to sending traffic; this may not be effective, and it increases latency.
      • Stations also take the visible overlapping-BSS load into account.
    • Only the AP does the measurements and distributes the CWx per class in every Beacon (new element).
      • No effect on station Power Management.
      • But the AP probably does not see any overlapping-BSS load.
      • Each station uses the same Service Rate scheduling parameters (CWx), assuring fair access differentiation.
  • Having the monitor function in the AP and distributing the CWx to the stations is considered the superior solution.
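  A possible shape for the per-Beacon-interval medium monitor is sketched below. The attribution of observed airtime to a class (e.g. taken from a priority field in the frame header) and the 50 ms default interval are assumptions made for illustration.

```python
from collections import defaultdict

class LoadMonitor:
    """Sketch of a per-Beacon-interval medium monitor: it accumulates the
    medium occupancy time observed per access class and reports it as a
    fraction of the Beacon interval."""

    def __init__(self, beacon_interval_us=50_000):
        self.beacon_interval_us = beacon_interval_us
        self.busy_us = defaultdict(float)

    def observe(self, cls, duration_us):
        # called for every frame exchange heard on the medium
        self.busy_us[cls] += duration_us

    def report_and_reset(self):
        load = {c: t / self.beacon_interval_us for c, t in self.busy_us.items()}
        self.busy_us.clear()
        return load   # e.g. {3: 0.05, 0: 0.40}, to feed into the CW translation
```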

  17. ESS approach
  • Monitoring is by default done in the AP.
  • The translation from "Load per Priority" to the CWx value per class can be policy based and does not need to be specified in the standard.
    • The policy management only needs to be done in the AP, and it applies identically in the whole BSS,
    • so that the whole BSS uses the same CWx values.
  • A list of CWx parameters is distributed to each station in the Beacon,
    • using a new element (a hypothetical encoding sketch follows below).
  • Only the rules for how to use the per-class CWx parameter in any station need to be standardized.
  • There can still be a "Monitor function" active in a station, which can further tune the CWx parameters to the locally monitored situation.
    • The AP-supplied parameters are considered the default.
    • But only unidirectional CW changes (getting larger) are allowed relative to the AP-supplied CWx values.
    • This can help to reduce BSS overlap issues.
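  A hypothetical on-air layout for the new Beacon element carrying the CWx list is sketched below, purely to show what would need to be carried. The element ID, the field widths and the ordering are not defined by the proposal and are assumptions here.

```python
import struct

DQOS_ELEMENT_ID = 0xDD      # placeholder value, not an assigned 802.11 element ID

def encode_dqos_element(cw_per_class):
    """cw_per_class: CWx values ordered C0..C3, e.g. [127, 63, 31, 15]."""
    body = struct.pack("<4H", *cw_per_class)
    return struct.pack("<BB", DQOS_ELEMENT_ID, len(body)) + body

def decode_dqos_element(data):
    element_id, length = struct.unpack_from("<BB", data, 0)
    if element_id != DQOS_ELEMENT_ID or length != 8:
        raise ValueError("not a D-QoS element")
    return list(struct.unpack_from("<4H", data, 2))
```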

  18. IBSS
  • How do we support IBSS?
  • For IBSS we can use a similar approach,
    • with the Beacon Master (the station winning the distributed Beacon contention) doing the monitoring,
    • and sending a Beacon with the QoS parameter element in the next Beacon with Backoff=0, resulting in one measurement per two Beacon intervals.
  • But this requires that each station implements the capability to do the Load Monitoring and the translation to the CWx parameters,
    • either in a standardized way, so that each station comes to a similar conclusion,
    • or in a proprietary way, so that a different "load translation" regime may occur per Beacon interval, due to the distributed Beacon approach.
      • This may not be too bad, converging to an average result.
    • Or use a default CW parameter table, which can be overwritten when a measurement function is present in any station.
  • Issue: Can we do without standardizing the Load-to-CW translation?
    • Is it acceptable to make the measurement function voluntary,
    • using a fixed default when no measurement function is implemented?

  19. AP access priority
  • The access efficiency of an AP can be significantly improved, which is desirable because most traffic goes via the AP, and the majority of it is downlink traffic.
  • Increasing the access priority by using a smaller CW for the AP translates into a higher collision probability, so this can be counterproductive.
  • It is better to allow more packet transmissions per single access for an AP.
    • This can be achieved by allowing an AP to "burst" its traffic per access, up to a certain limit,
    • generating either Directed or BC/MC frames with SIFS in between, as we do with fragmentation.
    • The burst is aborted when no Ack is received where expected, and continued after another DCF access.
    • APs can apply some hysteresis, adapting quickly to load increases and slowly to load decreases.
    • The burst content should take the "Access Priority" controlled scheduling mechanism into account, such that the service rate ratios per Q are followed.
    • This is also compatible with legacy devices.
  • Should the burst mechanism also be allowed for stations?
    • Perhaps we can allow a limited burst, up to a fixed maximum duration.
    • The maximum duration can be the medium time associated with a maximum-size (2304 byte) frame at the highest Basic rate (see the sketch below).
  • Do we need to standardize an algorithm?
    • We at least need to agree on limits.
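  The suggested station burst limit - the medium time of one maximum-size frame at the highest Basic rate - can be estimated as in the sketch below. The fixed PHY preamble/header overhead value and the omission of the MAC header are simplifying assumptions.

```python
def max_station_burst_us(basic_rates_mbps, max_msdu_bytes=2304,
                         phy_overhead_us=192):
    """Approximate airtime of one maximum-size frame at the highest Basic rate,
    used here as the station burst limit."""
    highest_rate = max(basic_rates_mbps)          # Mbit/s
    payload_us = max_msdu_bytes * 8 / highest_rate
    return phy_overhead_us + payload_us

# e.g. with Basic rates {1, 2} Mbit/s: 192 + 2304 * 8 / 2 = 9408 microseconds
print(max_station_burst_us({1, 2}))
```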

  20. Legacy device handling
  • Legacy devices are assumed to use CWmin=31 and to contend with the priority traffic in a way that breaks the differentiation model,
    • for those priority levels that do not use a CWmin lower than 31.
  • However, all traffic from legacy devices will go through the AP.
    • So an "Enhanced DCF" AP puts the downlink traffic for legacy devices in the lowest priority Q.
      • Management frame responses could be an exception.
    • In most higher layer protocol environments this will automatically reduce the traffic in the uplink direction as well.
    • APs could implement an even lower priority Q for their legacy traffic, so that its bandwidth is reduced further to balance this.
  • If we assume that the number of stations generating high priority traffic is very limited, then a CWmin lower than 31 can be used, which improves the relative priority compared to legacy traffic.

  21. Drop Rate Control
  • The intent is to apply mechanisms that reduce the offered load from the application when the average delay increases due to too high a medium load.
  • What mechanisms can we apply to control the traffic load coming from the higher layers?
    • The best method is to trigger the congestion control mechanisms of the higher layers.
  • How acceptable is "Drop control" in different environments?
    • Most higher layer protocols that use some kind of Ack mechanism are sensitive to dropped packets, which causes them to (temporarily) reduce their load.
      • It is the main congestion control mechanism in TCP/IP.
    • However, there are also protocols that are not sensitive to this,
      • like UDP, or RTP flows in some cases. Here you have to police the traffic stream itself by dropping frames (probably more aggressively).
    • For this reason drop control also needs to be distributed over the stations, because doing drop control only in the AP only affects one side.
    • Some applications have their own higher level approaches to deal with this,
      • reacting on lost packets or on end-to-end delay measurements.

  22. Drop Rate Control
  • What "Drop Control" needs to be applied?
  • Does every class have a specified Q-depth that needs to be controlled?
    • The Q-depth is felt to be a function of the burstiness of the stream, which causes a temporarily high load, while the average Q-depth should probably be low.
  • There probably needs to be a "Time to Live" driven drop control at the output of the Q.
  • What "Drop Control" mechanism can be applied at the input of the Q when the Q is full?
    • Or use a mechanism that reacts on a Q-average threshold.
  • Could a "Limited retry" (different per class?) be an automatic drop control mechanism (see the sketch below)?
    • For instance Retry C3,C2 = 2, Retry C1 = 3, and Retry C0 = 4,
    • assuming that C0 needs less load control because the best effort traffic can handle much more delay,
    • while the SR differentiation throttles down this traffic significantly anyway when the load increases.
  • Should we standardize this, or leave it to the implementations?
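  As an illustration of how the limited-retry and time-to-live ideas above could act as drop control, consider the sketch below. The retry limits are the slide's example numbers; the TTL values are invented placeholders and the decision function as a whole is an assumption, not part of the proposal.

```python
# Per-class retry limits from the slide's example; TTLs are illustrative only.
RETRY_LIMIT = {3: 2, 2: 2, 1: 3, 0: 4}
TTL_MS = {3: 20, 2: 50, 1: 100, 0: 500}      # hypothetical per-class lifetimes

def should_drop(cls, retries, enqueue_time_ms, now_ms):
    """Drop a frame that exceeded its per-class retry limit, or that outlived
    its per-class time-to-live at the output of the queue."""
    if retries > RETRY_LIMIT[cls]:
        return True
    if now_ms - enqueue_time_ms > TTL_MS[cls]:
        return True
    return False
```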

  23. Mechanisms needed
  • Priority in the frame header:
    • a new field needs to be defined that is also compatible with the PCF approach.
  • D-QoS Element in the Beacon,
    • containing the CWx list per access priority level,
    • which can also provide the "Total Load" info, usable for "Load Balancing" purposes.
  • Rules for the CW use and retry changes:
    • see elsewhere in this document.
  • Sequence number generation and duplicate detection rule changes (see the sketch below):
    • To support the non-exhaustive retry approach in a station, a sequence number must be maintained per SA.Class combination.
    • So the sequencing rules need to change, such that an individual sequence number is maintained and incremented per class.
    • APs and station receivers need to maintain the duplicate detection mechanism on a SA.Class basis, and expand their resources for that.
    • Issue: if we want to detect reception errors based on sequence numbers (for the cases where no Acks are used), then we may need to maintain a sequence number per SA.DA.Class.
  • Capability exchange mechanisms to determine service levels.
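  The per-SA.Class sequencing and duplicate detection change could be prototyped roughly as below. The 12-bit sequence space is taken from 802.11; the single-entry-per-key cache and the use of the retry bit are simplifying assumptions of this sketch.

```python
class SequenceCounters:
    """Transmit side: an individual modulo-4096 sequence counter per class."""
    def __init__(self):
        self.counters = {}

    def next_seq(self, cls):
        self.counters[cls] = (self.counters.get(cls, -1) + 1) % 4096
        return self.counters[cls]


class DuplicateDetector:
    """Receive side: duplicate detection keyed on (source address, class)
    rather than on the source address alone."""
    def __init__(self):
        self.last_seen = {}                  # (sa, cls) -> last sequence number

    def is_duplicate(self, sa, cls, seq, retry_bit):
        key = (sa, cls)
        duplicate = bool(retry_bit) and self.last_seen.get(key) == seq
        self.last_seen[key] = seq
        return duplicate
```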

  24. Mechanisms cont'd
  • In order to improve the efficiency of the AP, the AP should be allowed to concatenate multiple frames into one Tx-Access Opportunity.
    • A (MIB?) limited burst of frames is allowed per access opportunity, with SIFS in between.
    • This can be done in a similar way as specified for fragmentation,
      • with the "Duration" field containing the proper values for the next exchange,
      • so that the Ack contains the duration of the next data frame and its associated Ack.
  • A station can also be allowed to send a burst, but limited to a duration equivalent to a maximum-size (2304 byte) frame.
    • We probably need to limit this functionality to a 2304 byte duration equivalent at the highest Basic rate only,
    • to prevent excessive jitter.

  25. Other mechanisms
  • Orthogonal to this, D-QoS can also make use of any other efficiency improvements that are going to be defined in the standard,
    • like direct station-to-station traffic and the Bridge Portal concept.
  • Apply different Ack policies (Ack, No-Ack).
    • Delayed Ack is a complexity level that will not be supported at level 1.
  • Data aggregation (if any): multiple MPDUs in one PSDU to reduce PHY overhead.
  • This should however be "Conformance level" dependent.
    • The lowest "Enhanced DCF" level should probably not support the use of new Ack policies or data aggregation,
    • because that is an extra complexity step, which may only apply to the "Enhanced DCF" in QoS Level 2 or 3.
    • It will depend on how generic this functionality is.

  26. Behavior evaluation
  • Relative priority differentiation is achieved between classes.
  • If a class is empty within the whole BSS, then the distributed CW values are such that maximum throughput is achieved,
    • by using the next-level-up priority.
  • If only "Best Effort" is used, then the performance will be the same as today,
    • because the CW value for the traffic in use will be identical across the whole BSS and equal to that of legacy devices.
    • That means there will be transitional issues when higher priority traffic is started, because during that Beacon interval there will be no class differentiation yet.
    • There can however still be a CW differentiation for the empty classes, so that when they start they have a higher access priority.
    • APs can apply some hysteresis, adapting quickly to load increases and slowly to load decreases.
  • The DCF will defer for some of the overlapping traffic, but will not see the majority of its interferers, which will result in more retries and hence increased delay.
    • How acceptable is this for QoS?
    • This will in most cases result in more dropped frames, and so a reduced offered load.
    • Applying RTS/CTS to retried traffic may help in some cases.
  • Simulations must show the difference between the "Load measurement" methods (in the AP versus in the stations).

  27. Conclusion
  • A D-QoS mechanism is proposed that provides a minimum-complexity enhancement of the DCF only, through the implementation of a Virtual-DCF scheme.
  • The mechanism is dynamic, such that if not all traffic classes are active, the overall throughput performance of lower priority traffic is increased.
  • The translation of the load per class to the CWx parameters does not need to be standardized when the measurement function is implemented in the AP.
  • This mechanism can also be applied to an IBSS in a number of ways that need further discussion.
  • Interoperability with, and priority control over, legacy DCF is achieved in most cases.
  • Further efficiency improvements are suggested to give an AP more access priority, through the use of a traffic burst mechanism.
