
End-to-End Performance with Traffic Aggregation


Presentation Transcript


1. End-to-End Performance with Traffic Aggregation
Tiziana Ferrari, Tiziana.Ferrari@cnaf.infn.it
TF-TANT Task Force
TNC 2000, Lisbon, 23 May 2000

2. Overview
• Diffserv and aggregation
• EF: arrival and departure rate configuration
• Test scenario
• Metrics
• End-to-end performance (PQ):
  • EF load
  • Number of EF streams
  • EF packet size
• WFQ and PQ
• Conclusions

3. Problem statement
• Support of end-to-end Quality of Service (QoS) for mission-critical applications in IP networks
• Solutions:
  • Per-flow → the Integrated Services architecture
    • Signalling (RSVP)
  • Per-class → the Differentiated Services architecture
    • Classification and marking (QoS policies); see the marking sketch after this slide
    • Scheduling
    • Traffic conditioning (policing and shaping)
    • DSCP
    • Aggregation
    • Expedited Forwarding and Assured Forwarding
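To make the classification-and-marking step concrete, here is a minimal sketch (not part of the original slides) of how a Linux application can mark its outgoing UDP datagrams with the EF code point (DSCP 46) by writing the DS field through the legacy TOS socket option. The destination address and port are placeholders.

```python
import socket

# EF is DSCP 46; the DS field is the top 6 bits of the old TOS byte,
# so the value passed to IP_TOS is 46 << 2 = 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Every datagram sent on this socket now carries the EF code point and
# will be classified into the EF aggregate by a diffserv-aware router.
sock.sendto(b"probe", ("192.0.2.1", 5001))  # placeholder destination
```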

4. Aggregation
• Benefit: greater scalability, no protocol overhead
• Problem: interaction between flows multiplexed in the same class
  • Jitter: distortion of the per-flow inter-packet gap
  • One-way delay: queuing delay in case of non-empty queues
• Requirement: max(arrival rate) < min(departure rate)

5. Arrival and departure rate configuration
• Maximum arrival rate is proportional to the number of input traffic bundles
• One-way delay: the maximum queuing delay depends on the number of EF streams and can be arbitrarily large. With priority queuing:
  Del = tx_MTU + n × MTU / Dep_rate
  where n is the number of input streams (a worked example follows this slide)
→ Experiments of aggregation without shaping and policing
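To see how quickly the bound grows, the sketch below evaluates it numerically. The 1500-byte MTU and 2 Mbps departure rate are illustrative assumptions, not values taken from the slides.

```python
# Worst-case EF queuing delay under priority queuing (slide 5):
#   Del = tx_MTU + n * MTU / Dep_rate
# One MTU-sized packet may already be in service (PQ does not preempt),
# plus up to one MTU-sized packet per input stream queued ahead.
MTU_BYTES = 1500          # assumed MTU
DEP_RATE_BPS = 2_000_000  # assumed departure rate

def max_ef_delay_ms(n_streams):
    tx_mtu_s = MTU_BYTES * 8 / DEP_RATE_BPS
    return (tx_mtu_s + n_streams * MTU_BYTES * 8 / DEP_RATE_BPS) * 1000

for n in (1, 10, 40, 100):
    print(f"n = {n:3d}  worst-case delay = {max_ef_delay_ms(n):6.0f} ms")
```

With these assumed numbers the bound is 12 ms for a single stream but 606 ms for 100 streams, which is why the slide calls it arbitrarily large.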

6. Test network

7. Test scenario

8. Metrics
• One-way delay (RFC 2679): difference between the wire time at which the last byte of a packet arrives at the destination and the wire time at which the first byte is sent out (absolute value)
• Jitter (Instantaneous Packet Delay Variation): for two consecutive packets i and i-1, IPDV = | Di – Di-1 | (a small computation sketch follows this slide)
• Max burstiness: minimum queue length at which no tail drop occurs
• Packet loss percentage
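A minimal sketch of how the two delay metrics relate, assuming synchronized sender and receiver clocks; the timestamp values are made up for illustration.

```python
# One-way delay (RFC 2679): arrival wire time minus sending wire time,
# taken as an absolute value.
def one_way_delays(send_times, recv_times):
    return [abs(r - s) for s, r in zip(send_times, recv_times)]

# IPDV: absolute difference between the delays of consecutive packets.
def ipdv(delays):
    return [abs(delays[i] - delays[i - 1]) for i in range(1, len(delays))]

# Hypothetical timestamps in seconds.
send = [0.000, 0.010, 0.020, 0.030]
recv = [0.105, 0.117, 0.124, 0.136]
d = one_way_delays(send, recv)
print([round(x, 3) for x in d])        # [0.105, 0.107, 0.104, 0.106]
print([round(x, 3) for x in ipdv(d)])  # [0.002, 0.003, 0.002]
```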

9. Traffic profile
• Expedited Forwarding:
  • SmartBits 200, UDP, CBR
  • UDP CBR streams injected from each site (a software stand-in is sketched after this slide)
• Background traffic:
  • UDP, CBR
  • Permanent congestion at each hop
  • Packet size according to a real (measured) distribution
• Scheduling: priority queuing
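The EF sources in the tests were hardware generators; purely as an illustration of what one CBR UDP stream does, a software stand-in might look like the sketch below. The destination, rate, and payload size are placeholders, and user-space pacing with time.sleep is far coarser than a SmartBits.

```python
import socket
import time

def cbr_stream(dst, payload_bytes=40, pps=720, seconds=1.0):
    """Send fixed-size UDP datagrams at a constant packet rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)  # EF DSCP
    gap = 1.0 / pps                    # constant inter-packet gap
    payload = b"\x00" * payload_bytes
    next_send = time.monotonic()
    deadline = next_send + seconds
    while next_send < deadline:
        sock.sendto(payload, dst)
        next_send += gap
        time.sleep(max(0.0, next_send - time.monotonic()))

cbr_stream(("192.0.2.1", 5001))  # placeholder destination
```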

10. Best-effort traffic packet size distribution

11. Tail drop

12. EF load
• Constant packet size (40 bytes of payload) and number of streams (40)
• Variable EF load: [10, 50]%
• Delay unit: 108.14 msec
→ Burstiness is a linear function of the packets/sec rate

13. EF load (2)
• One-way delay: both the average and the distribution are almost independent of the EF rate
• IPDV distribution: moderate improvement with load (tx unit: transmission time of one EF packet, 0.424 msec)

14. Number of EF streams
• Constant packet size (40 bytes of payload) and EF load (32%)
• Variable number of EF streams: [1, 100]
→ Asymptotic convergence

15. EF packet size
• Constant number of streams (40) and EF load (32%)
• Variable EF frame size: 40, 80, 120, 240 bytes (variable packets/sec rate)
• Delay unit: 113.89 msec
→ Moderate increase in burstiness: [1632, 1876] bytes
→ Delay increase, IPDV decrease

16. EF packet size (delay)
• Large packet size → smaller packet rate; the composition of the TX queue changes, and the time needed to empty the queue increases
• e.g. (a drain-time sketch follows this slide):
  • 240 bytes: 240 packets/sec → TX queue = BEBEB → queuing time = 16.2 msec
  • 40 bytes: 720 packets/sec → TX queue = BEEEB → queuing time = 11.747 msec
• The longer the transmission queue, the larger the effect of the packets/sec rate
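The queuing times above are drain times of the TX queue snapshot: the sum of the serialization times of the packets in it. The sketch below reproduces the direction of the effect only; the link rate and the B/E frame sizes are assumptions, so it does not reproduce the exact 16.2/11.747 msec figures.

```python
LINK_BPS = 2_000_000  # assumed bottleneck rate

def drain_time_ms(queue, sizes):
    """Time to serialize every packet in a TX queue snapshot."""
    return sum(sizes[p] * 8 for p in queue) / LINK_BPS * 1000

sizes = {"B": 1500, "E": 268}         # assumed BE MTU, 240-byte-payload EF frame
print(drain_time_ms("BEBEB", sizes))  # ~20.1 ms: few large EF packets, BE dominates
sizes["E"] = 68                       # assumed 40-byte-payload EF frame
print(drain_time_ms("BEEEB", sizes))  # ~12.8 ms: many small EF packets drain faster
```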

17. EF packet size (IPDV)
• IPDV inversely proportional to the burst size
• Tradeoff between one-way delay and IPDV

18. WFQ and PQ: comparison
• Constant number of streams (40)
• Variable EF frame size: 40, 512 bytes; variable rate: [10, 50]%
• WFQ is less prone to burstiness (interleaving of BE and EF; a toy dequeue trace follows this slide)
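A toy dequeue trace, not the routers' actual schedulers, illustrates the interleaving point: strict priority releases the whole EF backlog back-to-back, while a crude fair-queuing stand-in that serves a fixed EF share per BE packet mixes the two classes. The 2:1 share is an arbitrary assumption.

```python
from collections import deque

def pq(ef, be):
    """Strict priority: the EF queue is always served first."""
    out = []
    while ef or be:
        out.append(ef.popleft() if ef else be.popleft())
    return "".join(out)

def wfq_like(ef, be, ef_share=2):
    """Crude WFQ stand-in: ef_share EF packets per BE packet."""
    out = []
    while ef or be:
        for _ in range(ef_share):
            if ef:
                out.append(ef.popleft())
        if be:
            out.append(be.popleft())
    return "".join(out)

print(pq(deque("EEEE"), deque("BBBB")))        # EEEEBBBB: EF burst kept intact
print(wfq_like(deque("EEEE"), deque("BBBB")))  # EEBEEBBB: EF interleaved with BE
```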

19. Conclusions and future work
• Aggregation produces packet loss due to packet clustering and the consequent tail drop
• Load: primary factor for burstiness; minor effect on one-way delay
• Rate (packets/sec): great effect on one-way delay
• Number of EF streams: small dependency
• Tradeoff: shaping (at a few key aggregation points) and queue size tuning
• EF-based services: viable, but validation needed (future work)

20. References
• http://www.cnaf.infn.it/~ferrari/tfng/ds/
• http://www.cnaf.infn.it/~ferrari/tfng/qosmon/
• Report of activities (phase 2): http://www.cnaf.infn.it/~ferrari/tfng/ds/rep2-del.doc
• T. Ferrari, G. Pau, C. Raffaelli, "Priority Queuing Applied to Expedited Forwarding: a Measurement-Based Analysis", Mar 2000, http://www.cnaf.infn.it/~ferrari/tfng/ds/pqEFperf.pdf
• T. Ferrari, P. Chimento, "A Measurement-Based Analysis of Expedited Forwarding PHB Mechanisms", IWQoS 2000, Feb 2000, in print, http://www.cnaf.infn.it/~ferrari/tfng/ds/iwqos2ktftant.doc

21. Overview of diffserv experiments
• Policing: single- and multi-parameter token buckets with TCP traffic
• Traffic metering and packet marking (PHB class selectors)
• Scheduling: WFQ, SCFQ, PQ
  • Capacity allocation between queues, class isolation
  • Queue dimensioning (buffer depth and TCP burst tolerance, tx queue)
  • Per-class service rate configuration
  • One-way delay and instantaneous packet delay variation
• Assured Forwarding: PHB differentiation through WRED
  • Throughput performance: packet drop probability, number of TCP streams per AF PHB, minimum threshold
• Expedited Forwarding:
  • Multiple congestion points
  • Multiple EF aggregation points
  • Variable load, number of streams and packet size
