Previously, on CS5248..

Presentation Transcript


  1. Previously, on CS5248..

  2. Idea • Adjust sending rate based on network conditions

  3. Part 1: Detecting Congestion Bolot and Turletti: if median loss rate > threshold then decrease rate, else increase rate
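
As a quick refresher, the Bolot/Turletti rule above fits in a few lines of Python. This is only a sketch: the threshold, rate bounds, and increase/decrease amounts are assumptions for illustration, not values from the paper.

    def adapt_rate(rate, median_loss_rate,
                   loss_threshold=0.05,     # assumed loss threshold
                   decrease_factor=0.5,     # assumed multiplicative decrease
                   increase_step=50_000,    # assumed additive increase, bits/s
                   min_rate=32_000, max_rate=2_000_000):
        """One adaptation step: back off on high loss, otherwise probe upward."""
        if median_loss_rate > loss_threshold:
            return max(min_rate, rate * decrease_factor)
        return min(max_rate, rate + increase_step)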

  4. Part 1: Detecting Congestion Busse, Deffner and Schulzrinne: if a lot of congested receivers, decrease rate; if a lot of unloaded receivers, increase rate; else do nothing
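
The Busse/Deffner/Schulzrinne variant decides from receiver reports instead of a single loss rate. A minimal sketch, where the fractions that count as "a lot" of congested or unloaded receivers are assumed values:

    def adapt_rate_from_reports(rate, receiver_states,
                                congested_fraction=0.1,   # assumed
                                unloaded_fraction=0.5,    # assumed
                                decrease_factor=0.75,
                                increase_factor=1.25):
        """receiver_states: one of 'congested', 'loaded', 'unloaded' per receiver."""
        if not receiver_states:
            return rate
        n = len(receiver_states)
        if receiver_states.count('congested') / n > congested_fraction:
            return rate * decrease_factor      # a lot of congested receivers
        if receiver_states.count('unloaded') / n > unloaded_fraction:
            return rate * increase_factor      # a lot of unloaded receivers
        return rate                            # else do nothing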

  5. Multiple receivers • Heterogeneity Issues (THIS lecture) • Scalability Issues (Another Lecture..)

  6. Today’s Lecture [roadmap figure: Sender (Encoder) → Network (Middlebox) → Receiver (Decoder)]

  7. McCanne, Jacobson, and Vetterli, "Receiver-driven layered multicast," ACM SIGCOMM 96 • Wu, Sharma, and Smith, "Thin Streams: An architecture for multicasting layered video," NOSSDAV 97

  8. Today’s Lecture [roadmap figure: Sender (Encoder) → Network (Middlebox) → Receiver (Decoder)]

  9. Amir, McCanne, and Zhang. • "An application level video gateway." • ACM MM 95

  10. Today’s Lecture [roadmap figure: Sender (Encoder) → Network (Middlebox) → Receiver (Decoder)]

  11. Bolot, Turletti, and Wakeman. • "Scalable feedback control for multicast video distribution in the internet," • ACM SIGCOMM 94.

  12. Today’s Lecture [roadmap figure: Sender (Encoder) → Network (Middlebox) → Receiver (Decoder)]

  13. Receiver-Driven Layered Multicast McCanne, Jacobson & Vetterli SIGCOMM 96

  14. Internet Heterogeneity [figure: access links at 2 Mbps, 56 kbps, 40 kbps]

  15. Heterogeneous Clients • How to satisfy receivers with different requirements?

  16. Method 1: Simulcast • Send multiple streams

  17. Method 2: Rate Adaptation • Send one stream

  18. Method 3: Layered Multicast Layer 1 Layer 2 Layer 3

  19. Layering Scheme • Temporal Layering
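
One common way to do temporal layering is to assign frames to layers so that the base layer gives a low frame rate and each extra layer doubles it. The mapping below is a sketch of that general idea, not the specific scheme pictured on the slide.

    def temporal_layer(frame_index, num_layers=3):
        """Layer 0 keeps every 2^(num_layers-1)-th frame; each higher layer
        adds the frames halfway between those already covered, doubling the rate."""
        for layer in range(num_layers):
            if frame_index % (2 ** (num_layers - 1 - layer)) == 0:
                return layer
        return num_layers - 1

    # With 3 layers: layer 0 -> frames 0, 4, 8, ...; +layer 1 -> 0, 2, 4, ...;
    # +layer 2 -> every frame.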

  20. Layering Scheme • Spatial Layering

  21. Layering Scheme • DCT Layering [figure: an example block of DCT coefficients (30, 8, 2, -6, -1, 1, ...) split across layers]

  22. Layering Scheme • Fine Granularity Scalability (FGS) [figure: coefficient bit-planes forming the enhancement layer]
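
FGS codes the enhancement data bit-plane by bit-plane, so a receiver (or middlebox) can cut the stream after any plane and still decode a coarser version. The sketch below just splits a block of non-negative coefficient magnitudes into bit-planes; sign handling and the real MPEG-4 FGS syntax are omitted.

    def bit_planes(coefficients, num_planes=4):
        """Return bit-planes from most to least significant, one 0/1 bit per coefficient."""
        return [[(c >> p) & 1 for c in coefficients]
                for p in range(num_planes - 1, -1, -1)]

    # Example: truncating after the first plane keeps only the largest magnitudes.
    planes = bit_planes([9, 5, 3, 0, 1, 0, 0, 0])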

  23. Layered Multicast • 1 Layer : 1 Multicast Group • Receiver subscribes to as many layers as desired
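
Because each layer travels on its own multicast group, subscribing to a layer is just an IGMP join on the corresponding group. A minimal sketch with standard sockets; the group addresses and port are made-up placeholders.

    import socket
    import struct

    LAYER_GROUPS = ['224.2.0.1', '224.2.0.2', '224.2.0.3']  # hypothetical groups
    PORT = 5004                                             # hypothetical port

    def join_layer_group(layer):
        """Join the multicast group carrying this layer; return the socket."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(('', PORT))
        mreq = struct.pack('4s4s',
                           socket.inet_aton(LAYER_GROUPS[layer]),
                           socket.inet_aton('0.0.0.0'))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        return sock

    def leave_layer_group(sock, layer):
        """Leave the group (drop the layer) and close the socket."""
        mreq = struct.pack('4s4s',
                           socket.inet_aton(LAYER_GROUPS[layer]),
                           socket.inet_aton('0.0.0.0'))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
        sock.close()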

  24. RLM Example

  25. Question • How many layers is enough?

  26. Solution: Join Experiment
      highest_layer = 1
      join layer 1
      while no packet loss:
          highest_layer++
          join next layer
      leave highest layer        (the loop exits when packet loss is detected)
      highest_layer--
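
A sketch of the join-experiment loop in Python. The helpers join_layer, leave_layer, and wait_for_loss (returns True if loss is seen within the detection window) are hypothetical and passed in.

    def run_join_experiments(max_layer, t_detect,
                             join_layer, leave_layer, wait_for_loss):
        """Probe upward one layer at a time until a join causes packet loss."""
        highest = 1
        join_layer(highest)
        while highest < max_layer:
            join_layer(highest + 1)            # start one join experiment
            if wait_for_loss(t_detect):        # loss within Tdetect -> failure
                leave_layer(highest + 1)       # drop back to the old level
                break
            highest += 1                       # success: keep the new layer
        return highest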

  27. Details • Tjoin • Time between join experiments • Tdetect • Time taken to detect packet loss

  28. Effects of Tjoin • Need to converge to the right level quickly • Tjoin should be small • Repeated failed experiments congest networks • Tjoin should be large

  29. Adapting Tjoin • One Tjoin per layer • if join experiment for layer k fails: Tjoin(k) = Tjoin(k) * 2
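
A sketch of the per-layer backoff; the initial timer and the cap are assumed values.

    T_JOIN_INITIAL = 5.0      # seconds, assumed starting value
    T_JOIN_MAX = 600.0        # seconds, assumed cap on the backoff

    t_join = {}               # one Tjoin per layer

    def t_join_for(layer):
        return t_join.setdefault(layer, T_JOIN_INITIAL)

    def on_failed_experiment(layer):
        """Double the join timer of the layer whose experiment just failed."""
        t_join[layer] = min(T_JOIN_MAX, t_join_for(layer) * 2)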

  30. Example [figure: join experiments across layers 1–4]

  31. Adapting Tdetect • Set Tdetect to a large initial value • Estimate Tdetect with a mean and deviation • Measure the time between a join and when packet loss occurs
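
A sketch of the mean-plus-deviation estimator, in the style of an exponentially weighted moving average; the gain and the deviation multiplier are assumed constants, not values from the paper.

    class TDetectEstimator:
        """Estimate Tdetect from samples of the time from a join until loss appears."""

        def __init__(self, initial=10.0, gain=0.25, dev_weight=4.0):
            self.mean = initial        # large initial value (assumed: 10 s)
            self.dev = 0.0
            self.gain = gain           # assumed EWMA gain
            self.dev_weight = dev_weight

        def update(self, sample):
            error = sample - self.mean
            self.mean += self.gain * error
            self.dev += self.gain * (abs(error) - self.dev)

        def t_detect(self):
            return self.mean + self.dev_weight * self.dev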

  32. Two Problems • Interference • Scalability

  33. Problem 1: Interference

  34. Problem 1: Interference I see, layer 2 is bad for me..

  35. Problem 2: Scalability • Lots of receivers • Lots of experiments • Lots of congestion

  36. Solution: Shared Learning I am joining layer 2, do not disturb!

  37. Solution: Shared Learning

  38. Solution: Shared Learning I am joining layer 3, do not disturb!

  39. Solution: Shared Learning I see, layer 3 is bad for me..

  40. Shared Learning • Conservative: learn from failure, not success • Improves convergence time • Gives priority to low-layer experiments
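
A sketch of how shared learning might be wired up: a receiver advertises its experiment to the group, others hold off on their own experiments unless theirs targets a lower layer, and everyone treats an overheard failure as if it were their own. The message handling and helper names here are illustrative assumptions.

    class SharedLearner:
        def __init__(self, backoff):
            self.backoff = backoff        # e.g. on_failed_experiment above
            self.current = None           # (layer, expiry_time) of the advertised experiment

        def on_advertised_experiment(self, layer, now, duration):
            self.current = (layer, now + duration)

        def may_start_experiment(self, my_target_layer, now):
            """Defer to an in-progress experiment unless ours targets a lower layer."""
            if self.current and now < self.current[1]:
                return my_target_layer < self.current[0]   # low layers get priority
            return True

        def on_observed_loss(self, now):
            """Conservative: count an overheard failure as our own failure."""
            if self.current and now < self.current[1]:
                self.backoff(self.current[0])
                self.current = None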

  41. Evaluation

  42. Evaluation

  43. Problems • Failed join experiments are bad • Interference across sessions?

  44. Thin Streams Linda Wu, Rosen Sharma and Brian Smith NOSSDAV ‘97

  45. Problems • Failed join experiments are bad • Interference across sessions?

  46. How bad are failed experiments? R : sending rate of a layer Tj : IGMP join latency Tl : IGMP leave latency Buffer space needed at the router = R * (Tj + Tdetect + Tl)
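
A worked example with assumed numbers (illustrative only, not from the paper):

    # Buffer space a failed experiment can occupy at the router:
    R = 256_000          # layer sending rate, bits per second (assumed)
    T_join = 0.5         # IGMP join latency, seconds (assumed)
    T_detect = 2.0       # time to detect the loss, seconds (assumed)
    T_leave = 3.0        # IGMP leave latency, seconds (assumed)

    buffer_bits = R * (T_join + T_detect + T_leave)   # 1,408,000 bits
    buffer_bytes = buffer_bits / 8                    # 176,000 bytes (~172 KiB)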

  47. Reduce Buffer Space • Reducing R • Many layers, each with small bandwidth • Reducing Tl • Sharma designed IGMP v2.0 • Reducing Tdetect • Rely on throughput measurements rather than packet loss

  48. Reducing Tdetect • Detect congestion before packet drops • E : Expected Throughput • A : Actual Throughput

  49. Calculating A & E • R : bandwidth of one layer • I : measurement interval • N : number of bytes received in I • G : number of layers joined
      A = a*A + (1-a)*N/I
      E = a*E + (1-a)*G*R
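
A sketch of the two moving averages; the smoothing factor a and the units are assumptions for illustration.

    class ThroughputEstimator:
        def __init__(self, layer_rate, smoothing=0.9):
            self.R = layer_rate       # bandwidth of one layer (bytes per second here)
            self.a = smoothing        # assumed smoothing factor
            self.A = 0.0              # actual throughput estimate
            self.E = 0.0              # expected throughput estimate

        def update(self, bytes_received, interval, layers_joined):
            """One measurement interval: N bytes over I seconds with G layers joined."""
            self.A = self.a * self.A + (1 - self.a) * (bytes_received / interval)
            self.E = self.a * self.E + (1 - self.a) * (layers_joined * self.R)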

  50. Thin Streams Join-Leave Algo
      do every I seconds:
          if (E - A) > Cleave:
              leave
          else if (E - A) < Cjoin and time since last leave > Twait:
              join
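
A sketch of the decision step, using the estimator above; the thresholds Cleave and Cjoin, the hold-off Twait, and the units are assumed example values.

    import time

    def join_leave_step(est, last_leave_time, join, leave,
                        c_leave=20_000, c_join=5_000, t_wait=30.0):
        """Run once every measurement interval I; returns the updated last-leave time."""
        gap = est.E - est.A                   # expected minus actual throughput
        now = time.monotonic()
        if gap > c_leave:                     # falling well short: drop a layer
            leave()
            return now
        if gap < c_join and now - last_leave_time > t_wait:
            join()                            # keeping up: try one more layer
        return last_leave_time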
