
QoS Framework for Access Networks


Presentation Transcript


  1. QoS Framework for Access Networks Govinda Rajan Lucent Technologies rajan@lucent.com

  2. Agenda • QoS with resource based admission control • Tight & loose QoS models • QoS in carrier & application network models • Distribution and adaptation of QoS policies • Performance monitoring for QoS • Comparison of Muse QoS principles to other QoS proposals • Conclusion

  3. QoS in Access Networks • QoS is key in multi-service access • End-to-end QoS solutions (e.g. IntServ) failed commercially because of their complexity • Priority-based QoS (e.g. DiffServ) works in over-provisioned networks but is insufficient for access & aggregation => Simplified QoS control is needed in access & aggregation [Diagram: Access Node connected to Edge Node]

  4. Ethernet Network Model [Diagram: a bridged or routed CPN with a CPG connects via the Access Node (AN) to an Ethernet (MPLS) aggregation network of 802.1ad and S-VLAN-aware/802.1Q Ethernet switches; Access Edge Nodes (AEN) with a BRAS or edge router connect the NAP to NSP/ISP and ASP networks]

  5. Congestion in Ethernet Networks [Diagram: access nodes A (ports A1–A3), B (B1–B3) and C (C1–C2) connect via aggregation nodes Ag1, Ag2 and Ag4 to edge node En1; all links 1 Gb/s] • Assume all nodes use strict priority queuing and there are just 2 QoS priority classes. • Say there are 10 upstream flows of high QoS priority of 0.1 Gb/s each from the residential gateways, and 2 flows of lower QoS priority. • Link A–Ag1 is 1 Gb/s, so A admits all 10 high-priority flows and the lower-priority flows are dropped. • Assume B similarly admits 10 flows of 0.1 Gb/s each. • Ag1 is then congested: its incoming traffic is 2 Gb/s while the upstream link capacity is only 1 Gb/s, so eventually even high-priority packets must be dropped and QoS is not guaranteed.
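The arithmetic behind this slide can be sketched in a few lines. This is an illustrative bookkeeping snippet, not MUSE code; node names follow the slide, and rates are in Mb/s so the sums stay exact.

```python
# Per-link load bookkeeping for the slide-5 example (illustrative only).
LINK_MBPS = 1000  # every link in the example is 1 Gb/s

def offered_load(flows):
    """Total rate (Mb/s) of the flows admitted onto a link."""
    return sum(rate for rate, _prio in flows)

flows_a = [(100, "high")] * 10   # A locally admits 10 x 0.1 Gb/s high-priority
flows_b = [(100, "high")] * 10   # B does the same

# Each first-mile link (A-Ag1, B-Ag1) is exactly full, so local
# admission sees no problem:
assert offered_load(flows_a) <= LINK_MBPS

# But the shared upstream link Ag1-En1 carries both aggregates:
aggregate = offered_load(flows_a) + offered_load(flows_b)
print("Ag1 upstream:", aggregate, "Mb/s offered on a", LINK_MBPS, "Mb/s link")
# -> Ag1 upstream: 2000 Mb/s offered on a 1000 Mb/s link
```

Local, per-node admission keeps each first mile within capacity, yet the shared upstream link is overloaded by 1 Gb/s, which is exactly why a view beyond the single node is needed.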

  6. Potential Bottlenecks in Access Networks [Diagram: user gateways connect through the Access Node (bottleneck BN-AN), Aggregation Node (BN-AgN) and Edge Nodes EN1/EN2 (BN-EN) to service providers SP-A, SP-B and SP-C] • Statistical multiplexing: n_i flows of bandwidth b_i each share an egress port of capacity B_o, dimensioned so that the sum of n_i x b_i stays below B_o. • A bottleneck arises when the aggregate ingress rate B_i for an egress port exceeds the egress rate b_o: B_i > b_o. • Because connectivity resources are shared, potential bottlenecks exist for upstream & downstream traffic at access nodes, aggregation nodes & edge nodes, depending on the quantity & the required quality of service.
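The dimensioning rule on this slide reduces to a one-line check. The function and the traffic mix below are illustrative assumptions, not figures from the presentation.

```python
# Bottleneck check for statistical multiplexing: n_i flows of b_i Mb/s
# each share one egress port; the port is a potential bottleneck when
# the aggregate demand sum(n_i * b_i) exceeds the egress capacity.
def is_bottleneck(flow_groups, egress_mbps):
    """flow_groups: list of (n_i, b_i_mbps) pairs. True if demand > egress."""
    demand = sum(n * b for n, b in flow_groups)
    return demand > egress_mbps

# Hypothetical mix: 50 voice-like flows of 1 Mb/s + 8 video-like flows
# of 100 Mb/s, i.e. 850 Mb/s of aggregate demand:
groups = [(50, 1), (8, 100)]
print(is_bottleneck(groups, egress_mbps=1000))   # -> False (850 < 1000)
print(is_bottleneck(groups, egress_mbps=500))    # -> True  (850 > 500)
```

The same demand is safe or a bottleneck depending only on where in the network it converges, which is the point of the slide.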

  7. Call Admission Control for QoS [Diagram: access nodes A, B and C signal a Central Resource View; aggregation nodes Ag1, Ag2, Ag4 lead to edge node En1; all links 1 Gb/s] • For every flow, an allow/block decision is requested from the central resource view (CRV). • Assume strict priority queuing & 2 QoS classes; the CRV is provisioned with 0.8 Gb/s for the higher QoS class and 0.2 Gb/s for the lower one. • The first mile is handled locally by the ANs. • A gets requests for 4 high-QoS flows & 2 low-QoS flows; A signals the CRV, gets allow for the 4 high-QoS flows, and the CRV decreases its resources by 0.4. • A admits the 2 low-QoS flows without signalling. • B gets requests for 5 high-QoS flows; it gets allow for 4 & block for 1. The CRV resource count is now 0. • B gets requests for 2 low-QoS flows & admits them. • The low-QoS flows experience packet loss because of priority queuing at the nodes. • Additional requests for high-QoS flows are blocked by the CRV until current flows end and their release is signalled to the CRV.
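The CRV behaviour walked through above can be sketched as a simple reservation counter. The class and method names are hypothetical, chosen only to mirror the slide's allow/block/release vocabulary; rates are in Mb/s.

```python
# Minimal central-resource-view sketch for the slide-7 walkthrough.
# Only high-QoS flows are signalled here; low-QoS flows are admitted
# locally by the access nodes without touching the CRV.
class CentralResourceView:
    def __init__(self, high_mbps):
        self.free = high_mbps          # provisioned high-QoS budget

    def request(self, rate_mbps):
        """Allow the flow and reserve its capacity, or block it."""
        if rate_mbps <= self.free:
            self.free -= rate_mbps
            return "allow"
        return "block"

    def release(self, rate_mbps):
        """A flow ended: its capacity returns to the pool."""
        self.free += rate_mbps

crv = CentralResourceView(high_mbps=800)           # 0.8 Gb/s high-QoS class
a = [crv.request(100) for _ in range(4)]           # A: 4 high-QoS flows
b = [crv.request(100) for _ in range(5)]           # B: 5 high-QoS flows
print(a.count("allow"), b.count("allow"), b.count("block"))  # -> 4 4 1
```

As on the slide, B's fifth request is blocked because the shared 0.8 Gb/s budget is exhausted, and only a `release` frees capacity for new flows.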

  8. Virtual Pipes for Multi-service Provider QoS [Diagram: access nodes A, B and C connect via aggregation nodes Ag1, Ag2, Ag4 to edge node En1; all links 1 Gb/s] • At node A: for SP1, up to 0.2 Gb/s of flows is allowed & new flows above that are blocked; for SP2, up to 0.3 Gb/s is allowed & new flows above that are blocked; all other QoS flows are blocked; best-effort traffic is allowed. • At node B: for SP3, up to 0.3 Gb/s is allowed & new flows above that are blocked; all other QoS flows are blocked; best-effort traffic is allowed.
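The per-provider gating rule at a single node can be sketched as below. This is an illustrative reading of the slide, with rates in Mb/s; the `admit` function and its dict arguments are assumptions, not a MUSE interface.

```python
# Per-SP virtual-pipe admission at one node (slide-8 rules, sketched).
# Best-effort traffic bypasses admission control; QoS flows from an SP
# without a provisioned pipe are blocked.
def admit(pipes, used, sp, rate_mbps, best_effort=False):
    """pipes/used: Mb/s per SP. Returns True if the flow may enter."""
    if best_effort:
        return True                     # best effort is always admitted
    if sp not in pipes:
        return False                    # no virtual pipe for this SP
    if used.get(sp, 0) + rate_mbps > pipes[sp]:
        return False                    # pipe full: block the new flow
    used[sp] = used.get(sp, 0) + rate_mbps
    return True

pipes_at_a = {"SP1": 200, "SP2": 300}   # node A: 0.2 & 0.3 Gb/s pipes
used = {}
print(admit(pipes_at_a, used, "SP1", 150))                   # -> True
print(admit(pipes_at_a, used, "SP1", 100))                   # -> False (pipe full)
print(admit(pipes_at_a, used, "SP3", 50))                    # -> False (no pipe)
print(admit(pipes_at_a, used, "SP3", 50, best_effort=True))  # -> True
```

The virtual pipe thus acts as a hard cap per service provider while leaving best-effort traffic to the priority queues.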

  9. QoS Classes • Best-effort class (Background in 3GPP, Non-critical in ITU): non-interactive, elastic; data integrity dominates • Transactional class (Interactive in 3GPP, Responsive in ITU): interactive, elastic • Streaming class (Streaming in 3GPP, Timely in ITU): non-interactive, inelastic • Real-time class (Conversational in 3GPP, Interactive in ITU): interactive, inelastic; temporal integrity dominates

  10. Agenda • QoS with resource based admission control • Tight & loose QoS models • QoS in carrier & application network models • Distribution and adaptation of QoS policies • Performance monitoring for QoS • Comparison of Muse QoS principles to other QoS proposals • Conclusion

  11. Tight QoS in ‘Application’ Model [Diagram: CPN A and CPN B connect over Ethernet access networks, EDGE nodes and ABGs to the IP backbone; flow-based (FB) monitoring, gating & policing are applied at the edges of both access networks] • Control is applied per individual IP flow at every enforcement point.

  12. Loose QoS in ‘Application’ Model [Diagram: as in the tight model, with a soft switch attached to the IP backbone; flow-based (FB) and traffic-class-based (TCB) monitoring, gating & policing are applied at the access edges and ENs] • Traffic class ‘per service provider’ based control: all traffic flows that belong to the same traffic class and are served by the same SP are aggregated and treated as a whole in the ENs.

  13. Agenda • QoS with resource based admission control • Tight & loose QoS models • QoS in carrier & application network models • Distribution and adaptation of QoS policies • Performance monitoring for QoS • Comparison of Muse QoS principles to other QoS proposals • Conclusion

  14. IMS Models of Access Networks ‘Carrier’ model • The network just provides transport/connectivity services. • The network is not application/session aware. • This is the current best-effort Internet access model. • It can be enhanced with multiple QoS connectivity services, tailored and classified for the most prevalent groups of services. ‘Application’ model • The provisioning of an end service is controlled & guaranteed by the operator: the user requests a service (dynamically), and the network sets up the most appropriate transport/connectivity service (service-based policy push). • The network is application/session aware.

  15. Loose QoS in ‘Carrier’ Model [Diagram: CPN A and CPN B connect over access networks, EDGE nodes and ABGs to the IP backbone with a soft switch; traffic class ‘per user’ & ‘per SP’ based control and ‘per SP’ based control are applied via FB and TCB monitoring, gating & policing] • Intelligent RGW: it must arbitrate the applications’ contention for bandwidth according to the end user’s preferences and the available transport services.

  16. Converged QoS Model [Diagram: Intelligent RGWs in CPN A and CPN B connect over access networks, EDGE nodes and ABGs to the IP backbone with a soft switch] • Border nodes (AN & EN) must be able to perform these tasks at both the IP flow and the traffic class level. Network operator’s choice of policy: • Appropriate application of the ‘tight’ or ‘loose’ QoS model based on the specific requirements of individual access networks • Mix and match of the ‘carrier’ & ‘application’ models

  17. Agenda • QoS with resource based admission control • Tight & loose QoS models • QoS in carrier & application network models • Distribution and adaptation of QoS policies • Performance monitoring for QoS • Comparison of Muse QoS principles to other QoS proposals • Conclusion

  18. Need for a Central View of Resources [Diagram: access nodes A, B and C connect via aggregation nodes Ag1, Ag2, Ag4, Ag5 to edge nodes En1 & En2; all links 1 Gb/s] • The virtual pipe of SP1 is 0.4 Gb/s, while the total subscribed capacity at A & B is 1 Gb/s. • To ensure that no more than 0.4 Gb/s is admitted, each call request must be checked against the combined existing calls from both A & B. • A central view of the resource is therefore needed to share the same resource between subscribers at A & B. By resource is meant here a fraction of the physical resource for exclusive use by SP1 subscribers at A & B.

  19. Local Control of an Exclusive Fraction of the Resource [Diagram: same topology as slide 18; all links 1 Gb/s] Example for the SP1 virtual pipe of 0.4 Gb/s: • Node A is provisioned with 0.1 Gb/s as its local threshold. New calls are allowed at A as long as the total of active calls at A stays below 0.1 Gb/s; once it reaches 0.1 Gb/s, new call requests are signalled to a central resource control for the admission decision. • Node B is provisioned with 0.2 Gb/s as its local threshold. New calls are allowed at B as long as the total of active calls at B stays below 0.2 Gb/s; once it reaches 0.2 Gb/s, new call requests are signalled to the central resource control. • Node B has local control of a larger fraction of the total resource because more subscribers are assumed at B than at A. The central resource control is used to share the remaining fraction of the total resource between A & B.
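The hybrid scheme on this slide, local decisions up to a threshold with central fallback, can be sketched as follows. Class names and the in-process "central pool" are illustrative assumptions; in practice the fallback would be a signalling exchange. Rates are in Mb/s.

```python
# Slide-19 hybrid CAC sketch: a node admits calls on its own up to its
# provisioned local threshold; beyond it, requests go to a central
# resource control that shares the rest of the virtual pipe.
class CentralPool:
    def __init__(self, mbps):
        self.free = mbps

    def request(self, rate_mbps):
        if rate_mbps <= self.free:
            self.free -= rate_mbps
            return True
        return False

class NodeCAC:
    def __init__(self, local_mbps, central):
        self.local_free = local_mbps
        self.central = central

    def admit(self, rate_mbps):
        if rate_mbps <= self.local_free:
            self.local_free -= rate_mbps
            return True                  # decided locally, no signalling
        return self.central.request(rate_mbps)

# SP1 virtual pipe of 0.4 Gb/s: 0.1 local at A, 0.2 local at B,
# the remaining 0.1 shared centrally between A & B.
central = CentralPool(100)
node_a, node_b = NodeCAC(100, central), NodeCAC(200, central)

results = [node_a.admit(50) for _ in range(4)] + [node_b.admit(50) for _ in range(5)]
print(results.count(True), results.count(False))  # -> 8 1
```

Here A's third and fourth calls already spill into the central pool, so B's fifth call finds it empty: the 0.4 Gb/s pipe is enforced globally while most decisions stay local.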

  20. Distributed Call Admission Control [Diagram: CPE connects to an AN; ANs & ENs form a virtual control-plane network with Super Nodes, which are fed by the Subscriber Management Server, Policy Server & Network Management System; a central resource database is not needed. Each border node holds a policy agent, its allotted quota, the un-allotted quota, the subscriber’s policy & the IP addresses of the other subscribed BNs/SNs] • Step 1: the user requests a service (e.g. TV). • Step 2: if the subscriber policy is in agreement and the required QoS can be serviced within the allotted quota, the call is admitted. • Step 3: if the required QoS cannot be serviced within the allotted quota, bandwidth is reserved in the un-allotted quota and confirmation is requested from the BNs subscribed to that un-allotted quota; on confirmation from all subscribed BNs the call is admitted, otherwise it is not.

  21. Distributed Call Admission Control The centralized subscriber and QoS policy system is distributed over a few Super Nodes, which configure and push the necessary policy data to the Border Nodes (ANs & ENs). A virtual control network is created between the Super Nodes & the Border Nodes. The local resource quota of a distributed resource system comprises: • A portion of quota that is controlled exclusively by that Border Node. • A pool of unallocated quota and the IP addresses of the Border Nodes that are allowed to use the pool. • The IP addresses of Border Nodes (for requesting quota). • Information on quota that is reserved for future use (at a certain time & for a certain time period) and the IP addresses of the reserving Border Nodes. • Quotas have a validity period, in the sense that they have to be synchronized after that period. Unallocated resource is used only after requesting all relevant Border Nodes and receiving their confirmation, to prevent multiple use of the same quota. If confirmation does not arrive from all relevant Border Nodes, the request times out and the call is denied.
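The all-peers-must-confirm rule for the unallocated pool can be sketched as below. The peers here are in-process objects standing in for the signalling between border nodes; the class, method names, and the rollback step are illustrative assumptions, not the MUSE protocol.

```python
# Slide-21 rule, sketched: a border node may draw on the shared
# un-allotted quota pool only after every peer subscribed to that pool
# confirms; a missing confirmation means the request fails (time-out in
# the real system) and partial grants are rolled back.
class BorderNode:
    def __init__(self, name, pool_view_mbps):
        self.name = name
        self.pool_view = pool_view_mbps  # this node's view of the shared pool

    def confirm(self, rate_mbps):
        """Grant the reservation if this node's pool view still covers it."""
        if rate_mbps <= self.pool_view:
            self.pool_view -= rate_mbps
            return True
        return False

def reserve_from_pool(requester, peers, rate_mbps):
    """Admit only on confirmation from *all* subscribed peers."""
    granted = [p for p in peers if p.confirm(rate_mbps)]
    if len(granted) == len(peers) and requester.confirm(rate_mbps):
        return True
    for p in granted:                    # roll back partial grants
        p.pool_view += rate_mbps
    return False

a, b, c = BorderNode("A", 100), BorderNode("B", 100), BorderNode("C", 30)
print(reserve_from_pool(a, [b, c], 50))  # -> False (C cannot confirm)
print(reserve_from_pool(a, [b], 50))     # -> True
```

Requiring every subscribed node's confirmation is what prevents two border nodes from spending the same unallocated quota at once, at the cost of one round of signalling per request.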

  22. QoS Policy at the Access Node • Access nodes are the first point of contact for end-user premises equipment and the first point of multiplexing in access networks, so policy enforcement is best done at the access node. • In a central CAC system the policies are stored centrally: for each connection, the central CAC system makes the admission decision, signals the access node to allow or block the connection, and has the policy enforced for the duration of the connection. QoS policy is thus pushed from the central system, and adaptation is also done centrally. • In a distributed CAC, QoS policies are available locally, and adaptation is also done locally using the thresholds and the congestion state of the network. The QoS policies are based on individual IP flows or on aggregated flows. • Irrespective of whether the CAC decision is made centrally or locally, the control or gating is done at the access node.

  23. QoS Policy at Aggregation Nodes • In access networks, the network elements of the aggregation network are usually Ethernet based. • The QoS policy consists of setting the priorities of the different queues selected by the QoS identifiers, e.g. the ‘p’ bits. This is usually defined once during initial network planning and reflects the QoS classes in the network. • The QoS policy in the aggregation nodes is provisioned via the network or element management system.

  24. QoS Policy at Edge Node • The edge nodes in access networks usually connect to service nodes via transport connectivity through the regional network. • The QoS policy is usually applicable to aggregated flows and is a part of the SLA between the access network operator and the service provider. • The QoS policy is usually provisioned and is rather static, and can be updated with a new SLA.

  25. Policy Adaptation by Network Overload • The principle of adaptation by network overload is that the border nodes modify the QoS policy based on overload conditions in the network. • Overload conditions are determined per link by performance monitoring, which triggers the QoS policy system when they occur. • In case of calamities with extreme network overload, the QoS policies can be adapted so that resources are strictly reserved for emergency services (fire brigade, police, alarm, etc.).

  26. Policy Adaptation by Service Characteristics • QoS policy can also be adapted to prevent QoS degradation caused by the sudden uptake of a popular new service. • Assume a new broadband “buzz” service is introduced that attracts a large part of the population, with the service provider receiving hundreds of new subscription requests each day. • In such cases, QoS policy adaptation can be implemented so that, based on the service uptake pattern, new service connections are rejected more often. This is more acceptable than letting existing users experience a degradation of the service. • A typical implementation compares service-based traffic overload counters with the actual guaranteed load on the corresponding links. When a service exceeds its pre-defined maximum load, the policy for that service is adapted so that new calls for the service are rejected or assigned a lower QoS.
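The counter-versus-threshold comparison described in the last bullet can be sketched in a few lines. Service names, thresholds, and the policy strings are invented for illustration.

```python
# Slide-26 mechanism, sketched: per-service load counters are compared
# with a pre-defined maximum guaranteed load; once a service exceeds it,
# the policy for *new* calls of that service flips to rejection (or, as
# the slide notes, to assignment of a lower QoS class).
MAX_LOAD_MBPS = {"buzz-tv": 400, "voip": 100}   # hypothetical per-service caps

def policy_for(service, current_load_mbps):
    """Return the admission policy applied to new calls of a service."""
    if current_load_mbps >= MAX_LOAD_MBPS.get(service, 0):
        return "reject-new"          # alternative: downgrade QoS class
    return "admit"

counters = {"buzz-tv": 430, "voip": 60}         # measured guaranteed load
for svc, load in counters.items():
    print(svc, policy_for(svc, load))
# -> buzz-tv reject-new
# -> voip admit
```

Existing "buzz-tv" sessions keep their QoS; only new calls are turned away, which is the adaptation the slide argues is the more acceptable degradation mode.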

  27. Agenda • QoS with resource based admission control • Tight & loose QoS models • QoS in carrier & application network models • Distribution and adaptation of QoS policies • Performance monitoring for QoS • Comparison of Muse QoS principles to other QoS proposals • Conclusion

  28. Counters to Monitor Resources in Access Networks [Diagram: access nodes A, B and C connect via aggregation nodes Ag1, Ag2, Ag4 to edge node En1] Counters: • Queue lengths per QoS class • Dropped packets per QoS class • Upstream / downstream throughput for all ports • Counters can be used to monitor node & link resources. • The counters can count events (e.g. dropped packets), quantify traffic throughput, total dropped packets, queue lengths, etc., and provide averages per time period.

  29. Performance Monitoring for Synchronization of the Resource Database Performance counters: • Queue lengths per QoS class • Dropped packets per QoS class • Upstream / downstream throughput for all ports These feed the QoS resource database over a periodic / on-demand interface with: • Threshold-crossing events for counters • Available link capacities • Overload events • The performance monitoring system periodically updates the QoS resource system with the status of the link capacities. • The QoS resource system compares its view of the used link capacities with the view from performance monitoring and synchronizes its database if needed.
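The synchronization step in the last bullet can be sketched as a reconciliation pass. The link names, the tolerance, and the adopt-the-measured-value rule are illustrative assumptions about how such a comparison might work.

```python
# Slide-29 synchronization, sketched: the QoS resource database compares
# its own view of used link capacity (Mb/s) with the measured view from
# performance monitoring, and adopts the measured value when the two
# diverge beyond a tolerance.
def synchronize(db_view, measured, tolerance_mbps=10):
    """Return the corrected view and the list of links that were updated."""
    corrected, updated = dict(db_view), []
    for link, measured_mbps in measured.items():
        if abs(db_view.get(link, 0) - measured_mbps) > tolerance_mbps:
            corrected[link] = measured_mbps
            updated.append(link)
    return corrected, updated

db = {"A-Ag1": 400, "B-Ag1": 300}
monitoring = {"A-Ag1": 405, "B-Ag1": 520}   # B-Ag1 has drifted badly
db, updated = synchronize(db, monitoring)
print(updated, db["B-Ag1"])  # -> ['B-Ag1'] 520
```

Without this reconciliation, admission decisions would be made against stale capacity figures, e.g. after flows that ended without a release being signalled.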

  30. Agenda • QoS with resource based admission control • Tight & loose QoS models • QoS in carrier & application network models • Distribution and adaptation of QoS policies • Performance monitoring for QoS • Comparison of Muse QoS principles to other QoS proposals • Conclusion

  31. Overview of QoS Proposals • QoS frameworks have been proposed by international forums & standardization bodies such as ETSI-TISPAN, ITU, 3GPP, etc. • The common element in these proposals is that node-based priority queuing, combined with resource-based admission control, provides QoS assurance to flows according to their QoS class. • Detailed analysis shows that while the overall concept is shared, the MUSE proposal has several important differentiators from the other proposals.

  32. ETSI-TISPAN (R1) Resource Model for QoS [Diagram: reference points Gq’ (AF to SPDF), e4 (NASS to A-RACF), Rq (SPDF to A-RACF), Ia (SPDF to C-BGF), Re (A-RACF to RCEF) & Ra; RACS comprises the SPDF & A-RACF; transport-layer elements: CPE, Access Node, L2T point, IP Edge & Core Border Node, hosting the RCEF & C-BGF]

  33. MUSE QoS Differentiators The main differentiators of the MUSE proposal are: • IP awareness in Access Nodes. • Concepts of loose & tight QoS. • Local / distributed resource & admission control. • Policy enforcement at edges. • Flow or aggregate based enforcement based on location. • Scalability.

  34. Agenda • QoS with resource based admission control • Tight & loose QoS models • QoS in carrier & application network models • Distribution and adaptation of QoS policies • Performance monitoring for QoS • Comparison of Muse QoS principles to other QoS proposals • Conclusion

  35. Conclusion • Good traffic engineering combined with CAC is the key to providing QoS in access networks. • Judicious choice of ‘tight’ & ‘loose’ QoS. • Centralized or distributed CAC according to the size of the network domain. • Regular load monitoring of the network and corresponding QoS policy adaptation. • Policy enforcement based on individual flows as close as possible to the origin of a flow, and on aggregated flows further into the network. • An open architecture for multi-service / multi-provider access.

  36. Thank You!
