
  1. Multicast Congestion Control in the Internet: Fairness and Scalability Sponsored by Tektronix and the Schlumberger Foundation technical merit award Chin-Ying Wang Advisor: Sonia Fahmy Department of Computer Sciences Purdue University http://www.cs.purdue.edu/homes/fahmy/

  2. Overview • What is Multicasting? • PGM • PGMCC • Feedback Aggregation • Fairness • Conclusions and Ongoing Work

  3. What is Multicasting? • Multicasting allows information exchange among multiple senders and multiple receivers • Popular applications include: audio/video conferencing, distributed games, distance learning, searching, server and database synchronization, and many more

  4. How does Multicasting Work? [Figure: sender S sends a datagram through routers to receivers R; receivers return feedback to S] • A single datagram is transmitted from the sending host • This datagram is replicated at network routers and forwarded to interested receivers via multiple outgoing links • Using multicast, traffic and management overhead grow with the number of multicast connections, not with the number of participants • If reliability is required, receivers provide feedback to notify the sender whether the data is received
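The replication step above is what makes multicast scale. The following minimal Python sketch (a hypothetical MulticastRouter class, not part of any PGM implementation) illustrates how a router forwards one copy per interested outgoing link rather than one copy per receiver:

```python
# Minimal sketch (hypothetical classes, not a PGM implementation): a router
# replicates one incoming multicast datagram onto every outgoing link that has
# at least one interested receiver downstream, so traffic grows with the number
# of links, not with the number of participants.

class MulticastRouter:
    def __init__(self):
        self.membership = {}   # group address -> outgoing links with receivers

    def join(self, group, link):
        """Record that a receiver reachable via `link` joined `group`."""
        self.membership.setdefault(group, set()).add(link)

    def forward(self, group, datagram):
        """Return one (link, datagram) copy per interested outgoing link."""
        return [(link, datagram) for link in self.membership.get(group, ())]

r = MulticastRouter()
r.join("239.1.1.1", "link_A")   # two receivers behind link_A...
r.join("239.1.1.1", "link_A")   # ...still count as one outgoing copy
r.join("239.1.1.1", "link_B")
print(len(r.forward("239.1.1.1", b"payload")))   # 2 copies, not 3
```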

  5. The Feedback Implosion Problem [Figure: sender S and receivers R connected through routers; every receiver returns ACK/NAK feedback for the data, and all of it converges on the sender, causing feedback implosion]

  6. The Congestion Control Problem • How should the sender determine the sending rate? [Figure: sender S and receivers R behind links of 500 Kb/s, 1000 Kb/s, 300 Kb/s, and 750 Kb/s]

  7. Our Goals • To study the impact of feedback aggregation on a promising protocol, the PGMCC multicast congestion control protocol • To evaluate PGMCC performance when competing with bursty traffic in a realistic Internet-like scenario • Ultimately, to design more scalable and more fair multicast congestion control techniques

  8. Multicast Congestion Control • Single-rate schemes: • Sender adapts to the slowest receiver • TCP-like service: one window/rate for all the receivers • Limitations: • Underutilization on some links • Selects the slowest receiver in the group (“crying baby syndrome”) [Figure: sender S and receivers R1–R3 behind links of 500, 1000, 300, and 750 Kb/s; the single-rate sender is limited to 300 Kb/s by the slowest receiver]
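As a small illustration of the single-rate behavior described on this slide, the sketch below (with illustrative receiver rates loosely based on the figure) picks the minimum receiver rate as the group's sending rate:

```python
# Minimal sketch of a single-rate scheme: one rate for the whole group,
# dictated by the slowest receiver ("crying baby syndrome").
# The per-receiver rates below are illustrative values, in Kb/s.

receiver_rates = {"R1": 750, "R2": 300, "R3": 500}

def single_rate(rates):
    """Return the slowest receiver and its rate; the sender uses this rate."""
    slowest = min(rates, key=rates.get)
    return slowest, rates[slowest]

slowest, rate = single_rate(receiver_rates)
print(slowest, rate)   # R2 300 -> links toward R1 and R3 are underutilized
```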

  9. The PGM Multicast Protocol • PGM: Pragmatic General Multicast • Single-sender, multiple-receiver multicast protocol • Reliability: NAK-based retransmission requests • Scalability: feedback aggregation and selective repair forwarding • Duplicate NAKs from the same sub-tree are suppressed at each router
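The suppression rule can be sketched as follows. This is an assumed simplification of the router behavior described above; the full PGM NAK/NCF dialog on the next slide also involves NAK confirmations (NCFs) and repair forwarding.

```python
# Assumed simplification of PGM NAK suppression at a router: forward the first
# NAK for a given sequence number upstream, suppress duplicates for the same
# sequence number from other receivers in the sub-tree, and clear the state
# once the repair (RDATA) is seen.

class NakSuppressingRouter:
    def __init__(self):
        self.pending = set()          # sequence numbers with an outstanding NAK

    def on_nak(self, seq):
        if seq in self.pending:
            return None               # duplicate NAK from the same sub-tree: suppress
        self.pending.add(seq)
        return ("NAK", seq)           # first NAK: forward toward the sender

    def on_rdata(self, seq):
        self.pending.discard(seq)     # repair received: NAK state can be cleared

router = NakSuppressingRouter()
print(router.on_nak(42))   # ('NAK', 42) -> forwarded upstream
print(router.on_nak(42))   # None       -> suppressed
router.on_rdata(42)
```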

  10. PGM NAK/NCF Dialog [Figure: the PGM sender, routers, and PGM receivers on several subnets; ODATA and RDATA flow downstream while NAKs flow upstream, and each router answers NAKs with NCFs] See [Miller1999] and the PGM RFC for more details.

  11. PGMCC [Rizzo2000] • Uses a TCP throughput approximation to choose the group representative, called the “acker” • Switches the acker to receiver I when T(I) < c·T(J), where J is the current acker [Figure: current acker RJ behind a 300 Kb/s link and a newly joined receiver RI whose throughput T(I) < c × the current acker’s throughput T(J), so RI becomes the new acker]
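A hedged sketch of the acker-switch rule quoted on this slide: the sender switches the acker from the current receiver J to a receiver I when T(I) < c·T(J). The value of c and the throughput numbers below are illustrative assumptions; the slide does not give them.

```python
# Sketch of the acker-switch rule: receiver I becomes the new acker when its
# estimated throughput T(I) falls below c times the current acker J's T(J).
# The value c = 0.75 is purely an assumed illustration (the slide does not
# give it); c < 1 adds hysteresis so noisy estimates do not cause constant
# switching.

def should_switch_acker(t_candidate, t_current_acker, c=0.75):
    """Return True if the candidate receiver should replace the current acker."""
    return t_candidate < c * t_current_acker

# Illustrative throughput estimates (arbitrary units, see the next slide):
print(should_switch_acker(t_candidate=300.0, t_current_acker=500.0))  # True: switch
print(should_switch_acker(t_candidate=450.0, t_current_acker=500.0))  # False: keep J
```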

  12. PGMCC (cont’d) • Attempts to be TCP-friendly, i.e., on average no more aggressive than TCP • ACKs are exchanged between the sender and the acker • TCP-like window increase and decrease • The throughput of each receiver is computed as a function of fields carried in NAK packets: • Round-trip time (RTT) • Packet loss
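The per-receiver throughput estimate can be sketched with a simplified TCP throughput model of the form T ≈ 1/(RTT·√p), computed from the RTT and loss fields reported in NAKs. This particular expression is an assumption for illustration; the exact formula PGMCC uses is given in [Rizzo2000].

```python
import math

# Sketch of a per-receiver throughput estimate computed from the RTT and
# loss-rate fields carried in NAKs.  The simplified TCP model T ~ 1/(RTT*sqrt(p))
# is an illustrative assumption here; only the relative ordering of receivers
# matters for acker selection.  The exact expression is in [Rizzo2000].

def estimated_throughput(rtt_seconds, loss_rate):
    """Relative TCP-equivalent throughput estimate (arbitrary units)."""
    if loss_rate <= 0.0:
        return float("inf")    # no observed loss: effectively unconstrained
    return 1.0 / (rtt_seconds * math.sqrt(loss_rate))

# Illustrative NAK reports (RTT in seconds, loss rate as a fraction):
print(estimated_throughput(0.100, 0.20))   # ~22.4
print(estimated_throughput(0.100, 0.25))   # 20.0 -> lossier receiver, lower estimate
```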

  13. Feedback Aggregation Experimental Topology • Goal: to determine whether feedback aggregation causes unnecessary or missing acker switches [Figure: PGM sender PS connected through two routers to receivers PR1–PR4; one receiver path has 20% loss and another 25% loss] • The ns-2 simulator is used; all links are 10 Mb/s with 50 ms delay
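To see why aggregation can distort acker selection, the toy model below lets two receivers behind the same router lose packets independently; when both lose the same packet, only one NAK, and hence only one receiver's RTT/loss report, reaches the sender. This is an illustrative assumption, not the ns-2 experiment itself, and it assumes for concreteness that PR1 sits behind the 20% loss link and PR2 behind the 25% loss link.

```python
import random

# Toy model of NAK aggregation (illustrative assumption, not the ns-2 setup):
# PR1 loses 20% of packets, PR2 loses 25%.  For every lost packet the router
# forwards only the first NAK it receives, so the sender sees only that
# receiver's RTT/loss report for that packet.

random.seed(1)
reports_seen = {"PR1": 0, "PR2": 0}
for seq in range(10_000):
    lost_at = [r for r, p in (("PR1", 0.20), ("PR2", 0.25)) if random.random() < p]
    if lost_at:
        first = random.choice(lost_at)   # whichever NAK reaches the router first
        reports_seen[first] += 1         # only this receiver's report goes upstream

# Without aggregation the sender would see ~2000 reports from PR1 and ~2500
# from PR2; with aggregation part of PR2's reports are masked by PR1's, which
# can delay or prevent switching the acker to the lossier receiver.
print(reports_seen)
```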

  14. Feedback Aggregation Experimental Result

  15. PGMCC Fairness • Simulate PGMCC in a realistic scenario similar to the current Internet • The objective is to determine whether PGMCC remains TCP friendly in this scenario • Different bottleneck link bandwidths are used in the simulation: • Highly congested network • Medium congestion • Non-congested network

  16. General Fairness (GFC-2) Experimental Topology [Figure: the GFC-2 topology with routers router0–router6 connected by Link0–Link5; TCP source nodes S0–S21 and destination nodes D0–D21, together with the PGM sender PS and receivers PR1–PR5, attach to the different routers]

  17. Topology (cont’d) • 22 source nodes (S*) and 22 destination nodes (D*) • A NewReno TCP connection runs between each source–destination pair • One UDP flow sending Pareto on/off traffic with a 500 ms on/off interval runs across Link4 • All simulations run for 900 seconds • The traced TCP connection runs from S4 to D4

  18. Topology (cont’d) • Link bandwidth between each node and router is 150 kbps with 1 ms delay • Link bandwidths and delays between routers are:

  19. Highly Congested Network • PGM has a higher throughput in the first 50 seconds • Afterwards, PGM has very low throughput due to time-outs

  20. Medium Congestion • All simulation parameters are unchanged except that the link bandwidth between routers is increased to 2.5 and 3.5 times the bandwidth in the “highly congested” network • The PGM flow outperforms TCP during the initial acker switching • TCP has higher throughput when the timeout interval at the PGM sender does not adapt to the increase in the acker RTT

  21. Medium Congestion (cont’d) Bandwidth = 2.5 × “Congested”

  22. Medium Congestion (cont’d) Bandwidth = 3.5 × “Congested”

  23. Non-congested Network • All simulation parameters are unchanged except that the link bandwidth between routers is increased to 10 and 80 times the bandwidth in the “highly congested” network • The PGM flow increasingly outperforms the TCP flow as the bandwidth increases • Frequent acker switches cause the PGMCC sender’s window to increase • The RTT of the PGMCC acker is often shorter than the TCP flow’s RTT

  24. Non-congested Network (cont’d) Bandwidth = 10 × “Congested”

  25. Non-congested Network (cont’d) Bandwidth = 80 × “Congested”

  26. Main Results • Feedback aggregation: • Results in incorrect acker selection with PGMCC • The problem is difficult to remedy without router assistance • PGMCC fairness in realistic scenarios: • Initial acker switches cause the PGM flow to outperform the TCP flow due to the steep increase of the PGM sending window • A TCP-like retransmission timeout (sketched below) is needed to avoid the PGM performance degradation caused by using a fixed timeout interval
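A sketch of the TCP-like retransmission timeout suggested in the last bullet, using standard SRTT/RTTVAR smoothing in the style of RFC 6298. This is an illustration of the idea, not the PGMCC sender's actual code.

```python
# Sketch of an adaptive retransmission timeout that tracks the acker RTT,
# using SRTT/RTTVAR smoothing as in RFC 6298 (illustration only).

class AdaptiveTimeout:
    def __init__(self, alpha=0.125, beta=0.25, min_rto=0.2):
        self.alpha, self.beta, self.min_rto = alpha, beta, min_rto
        self.srtt = None
        self.rttvar = None

    def update(self, rtt_sample):
        """Feed a new acker RTT sample (seconds); return the current timeout."""
        if self.srtt is None:
            self.srtt, self.rttvar = rtt_sample, rtt_sample / 2.0
        else:
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - rtt_sample)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt_sample
        return max(self.min_rto, self.srtt + 4.0 * self.rttvar)

rto = AdaptiveTimeout()
for sample in (0.05, 0.06, 0.25, 0.30):     # acker RTT grows after an acker switch
    print(round(rto.update(sample), 3))     # timeout tracks the new, larger RTT
```

With a fixed timeout, the same RTT growth after an acker switch would trigger spurious timeouts and the throughput collapse seen in the medium-congestion experiments.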

  27. Ongoing Work • Conduct Internet experiments with various reliability semantics (e.g., unreliable and semi-reliable transmission) and examine their effect on PGMCC, especially on acker selection with insufficient NAKs • Exploit Internet tomography in multicast and geo-cast application-layer overlays [NOSSDAV2002, ICNP2002]
