
NADA: A Unified Congestion Control Scheme for Real-Time Media

Xiaoqing Zhu and Rong Pan, Advanced Architecture & Research, Cisco Systems. August 2012.


Presentation Transcript


  1. NADA: A Unified Congestion Control Scheme for Real-Time Media. Xiaoqing Zhu and Rong Pan, Advanced Architecture & Research, Cisco Systems. August 2012.

  2. Agenda: design goals; system model; network node operations; sender/receiver behavior; evaluation results; open issues.

  3. Design Goal #1: Limit Self-Inflicted Delay. React early to congestion, before the network queue builds up and packets are lost. [Figure: rate over time for a congestion-window-based sender vs. an early-reacting sender, with the resulting network queue shown for each.]

  4. Design Goal #2: Leverage a Suite of Feedback Mechanisms. Work alongside existing loss-based schemes (no network support) as well as delay-based, ECN-based (existing network features), and PCN-based (new network feature) congestion signals. [Figure: performance vs. degree of network support, from none through existing features to new features.]

  5. Design Goal #3: Weighted Bandwidth Sharing. Bandwidth sharing among flows: each flow's relative bandwidth is proportional to its application-level priority.

  6. System Overview. Sender: optimal rate calculation sets a target rate that drives encoder rate control; video packets pass through a rate shaping buffer, whose buffer level updates feed back into the rate calculation. Network node: issues congestion notification. Receiver: measures delay/marking, returns RTCP reports to the sender, and handles video playout.

  7. Network Node Behavior • Queuing discipline: FIFO • Congestion notification via: • Delay: no special operation at the queue • ECN: queue-based random marking • PCN: token-bucket-based random marking

  8. Queue-Based ECN Marking. Marking probability is a function of the average queue occupancy, updated on each packet arrival: below a low threshold, no packets are marked; between the low and high thresholds, packets are marked with a probability that grows with the average queue; above the high threshold, all packets are marked.
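The marking rule on slide 8 can be sketched as below. This is a minimal illustration in the RED style the slide describes; the thresholds `Q_LO`/`Q_HI`, the ceiling probability `P_MAX`, and the averaging weight `W` are assumed values, not numbers from the deck.

```python
import random

Q_LO, Q_HI = 5.0, 15.0   # avg-queue thresholds (packets) -- assumed values
P_MAX = 0.1              # marking probability as q_avg reaches Q_HI -- assumed
W = 0.002                # EWMA weight for the average queue -- assumed

def update_avg_queue(q_avg, q_now, w=W):
    """Time-smoothed average queue, per the slide's per-arrival 'update' step."""
    return (1.0 - w) * q_avg + w * q_now

def should_mark(q_avg):
    """Decide whether the arriving packet gets an ECN mark."""
    if q_avg < Q_LO:                       # below low threshold: no marking
        return False
    if q_avg < Q_HI:                       # ramp probability with occupancy
        p = P_MAX * (q_avg - Q_LO) / (Q_HI - Q_LO)
        return random.random() < p
    return True                            # above high threshold: mark all
```

Marking on the *average* rather than the instantaneous queue keeps the signal stable across short traffic bursts.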

  9. Token-Bucket-Based PCN Marking. Marking probability is a function of the remaining token-bucket level. Upon packet arrival, the packet is metered against the token bucket and the token level is updated: with tokens above a high threshold, no packets are marked; between the thresholds, the marking probability grows as tokens run out; below the low threshold, all packets are marked.
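A sketch of slide 9's metering-and-marking step, under assumed parameters: the fill rate `RATE`, bucket depth `DEPTH`, and ramp thresholds `T_LO`/`T_HI` are illustrative, not values from the deck.

```python
import random, time

RATE = 1_000_000 / 8     # token fill rate, bytes/s (~1 Mbps meter) -- assumed
DEPTH = 10_000           # bucket depth in bytes -- assumed
T_LO, T_HI = 0.2, 0.8    # remaining-token fractions bounding the ramp -- assumed

class PcnMarker:
    def __init__(self):
        self.tokens = DEPTH
        self.last = time.monotonic()

    def arrive(self, pkt_bytes):
        """Meter the packet against the bucket; return True to PCN-mark it."""
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(DEPTH, self.tokens + RATE * (now - self.last))
        self.last = now
        # Meter the packet: draw its size from the bucket.
        self.tokens = max(0.0, self.tokens - pkt_bytes)
        frac = self.tokens / DEPTH
        if frac > T_HI:          # plenty of tokens left: no marking
            return False
        if frac > T_LO:          # ramp: mark more often as tokens run out
            p = (T_HI - frac) / (T_HI - T_LO)
            return random.random() < p
        return True              # bucket nearly empty: mark all packets
```

Because the bucket meters the arrival *rate* rather than the queue depth, marking can begin before any standing queue forms, which is what enables the zero-standing-queue results shown later for PCN.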

  10. Sender Structure. The RTCP report (loss/delay/ECN/PCN) feeds the target rate calculation; the target rate and the rate shaping buffer level drive both encoder rate control (setting the video rate) and the sending rate calculation; encoded video packets pass through the rate shaping buffer onto the network.

  11. Target Rate Calculation. Reacting to delay: the target rate is computed from the session priority, a scaling parameter, and the measured queuing delay. Reacting to ECN/PCN marking: the marking ratio, weighted by its own scaling parameter, takes the place of the queuing delay.
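The slide's equations did not survive transcription, so the sketch below only illustrates the stated structure: the target rate scales with session priority and falls as the congestion signal (queuing delay, or a delay-equivalent derived from the marking ratio) grows. `KAPPA` and `D_REF` are assumed parameters, and the inverse-proportional form is an assumption consistent with the weighted-sharing goal, not the deck's actual formula.

```python
KAPPA = 10e6     # scaling parameter, bits/s per unit priority -- assumed
D_REF = 0.010    # delay equivalent of a 100% marking ratio, s -- assumed

def target_rate_delay(priority, queuing_delay_s):
    """Delay mode: higher measured queuing delay -> lower target rate."""
    return KAPPA * priority / max(queuing_delay_s, 1e-4)

def target_rate_marking(priority, marking_ratio):
    """ECN/PCN mode: map the marking ratio to a delay-equivalent signal."""
    return target_rate_delay(priority, marking_ratio * D_REF)
```

With this form, two flows seeing the same bottleneck signal get rates in proportion to their priorities, which is exactly the weighted bandwidth sharing of Design Goal #3.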

  12. Sending Rate Calculation. The sending rate accommodates the lag in encoder reaction (the encoder reaction time) and, via a scaling parameter, trades off network queuing delay against rate shaping delay.

  13. Slow-Start. [Figure: rate ramping up from the start time over a fixed time horizon.]

  14. Receiver Behavior. Observe instantaneous end-to-end per-packet statistics: queuing delay and ECN/PCN marking. Obtain time-smoothed estimates of both, and send periodic RTCP reports (e.g., for ~3% of received packets).
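The receiver side of slide 14 can be sketched as an EWMA over the per-packet statistics with a report every ~33rd packet (matching the slide's ~3% figure). The smoothing weight `ALPHA` is an assumed value, and the class/field names are illustrative.

```python
ALPHA = 0.1          # EWMA weight for the smoothed estimates -- assumed
REPORT_EVERY = 33    # 1 report per ~33 packets, i.e. ~3% of received packets

class Receiver:
    def __init__(self):
        self.d_hat = 0.0     # time-smoothed queuing delay estimate
        self.m_hat = 0.0     # time-smoothed ECN/PCN marking ratio
        self.count = 0

    def on_packet(self, queuing_delay_s, marked):
        """Update smoothed stats; return an RTCP payload when one is due."""
        self.d_hat += ALPHA * (queuing_delay_s - self.d_hat)
        self.m_hat += ALPHA * ((1.0 if marked else 0.0) - self.m_hat)
        self.count += 1
        if self.count % REPORT_EVERY == 0:
            return (self.d_hat, self.m_hat)   # contents of the RTCP report
        return None
```

Smoothing at the receiver means the sender's target rate calculation reacts to a stable congestion estimate rather than to per-packet jitter.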

  15. Test Scenario • Bottleneck bandwidth: 30 Mbps • A random delay measurement error is injected for stream 6 at time t = 30 s

  16. Delay-Based Adaptation: Per-Flow Rate. Weighted bandwidth sharing holds despite the delay measurement error on stream 6.

  17. Delay-Based Adaptation: Total Rate Fast convergence

  18. Delay-Based Adaptation: Bottleneck Queue Low standing queue

  19. Delay-Based Adaptation: Packet Loss Ratio No persistent losses

  20. Delay vs. ECN: Per-Flow Rate Delay-Based ECN-Based

  21. Delay vs. ECN: Bottleneck Queue Delay ECN

  22. ECN vs. PCN: Per-Flow Rate ECN PCN Smoother streaming rates

  23. ECN vs. PCN: Total Rate ECN PCN Slight under-utilization No losses

  24. ECN vs. PCN: Bottleneck Queue ECN PCN Zero standing queue

  25. Conclusions and Next Steps • Key benefits of NADA: • Fast rate adaptation • Weighted bandwidth sharing • Works with a range of congestion signals • In the case of PCN: zero standing queue and smoother streaming rates • Next steps: • Further evaluation in Linux-based implementations • Graceful transition between different congestion signals • Competing robustly against loss-based schemes
