On Multicast


Presentation Transcript


  1. On Multicast CS614 - March 7, 2000 Tibor Jánosi

  2. What to Expect • Motivation / Why is it difficult? • IP Multicast Routing • Reliable Multicast Transport Protocol • Scalable Reliable Multicast • Light-Weight Multicast Services • Pragmatic General Multicast • PGM/LMS Comparison

  3. Motivation • Multimedia streams: live and recorded. • Financial data distribution. • Distributed fault tolerance. All of these share the same basic communication pattern, but have widely different requirements.

  4. Why Difficult? • Very different application needs. • Limited bandwidth and processing power. • Poorly known/changing network topology. • Hard to deploy changes in routers. • Large/unequal/changing propagation delays. • Unclear what “best” policy means in various contexts. • Etc...

  5. Ideal Multicast • Senders (S) and Receivers (R) not aware of each other’s position in the network. • Scalable. • Low latency (join, data propagation). • Low bandwidth and processing overhead. • “Reliable”, if this is cheap (“end-to-end”?) • Easy to join/leave.

  6. IP Multicast: Basic Idea • Multicast groups: abstract “rendez-vous” points. • Set up an optimal spanning tree over the participants of each group. • Make it cheap by not providing strong guarantees: send out packets and hope for the best. Not that bad, in fact.

  7. Big question: who gets which packets? Send everything to everybody and you get … Invent Multicast Routing and (try to) forward only what’s needed, when needed!

  8. LAN Multicast: IGMP • Queries/Replies • Random delay before reply. • Don’t report multicast groups already reported. • Router will know groups with members on its LAN.
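
A minimal sketch of the IGMP report-suppression idea on this slide: on a membership query, each host waits a random delay and reports a group only if no other host on the LAN has reported it first, so the router still learns which groups have local members. The class and method names are illustrative, not real IGMP packet handling.

```python
import random

# Illustrative sketch of IGMP-style report suppression (not real packet handling).
class Host:
    def __init__(self, groups):
        self.groups = set(groups)   # multicast groups this host has joined
        self.pending = {}           # group -> scheduled report time

    def on_query(self, now, max_delay=10.0):
        # Schedule a report for each joined group after a random delay,
        # so hosts do not all answer the query at once.
        for g in self.groups:
            self.pending[g] = now + random.uniform(0, max_delay)

    def on_report_heard(self, group):
        # Another host already reported this group on the LAN:
        # suppress our own report, the router only needs one.
        self.pending.pop(group, None)

    def due_reports(self, now):
        # Reports whose timers expired without being suppressed.
        due = [g for g, t in self.pending.items() if t <= now]
        for g in due:
            del self.pending[g]
        return due
```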

  9. Reverse Path Forwarding (figure: packets arriving on the shortest path to S are forwarded; packets arriving on other paths are not) How do we determine the shortest path to the source? One possibility: routers exchange distance info. Also, duplicate packets possible.
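
A sketch of the reverse-path-forwarding check just described, assuming each router already has a unicast routing table that names the interface on the shortest path back to any source; the table layout and function name are assumptions for illustration.

```python
# Illustrative reverse-path forwarding check.
# unicast_next_hop: source address -> interface on the shortest path back to that source.
def rpf_forward(packet_source, arrival_iface, unicast_next_hop, all_ifaces):
    """Return the interfaces to forward a multicast packet on, or [] to drop it."""
    shortest_path_iface = unicast_next_hop.get(packet_source)
    if arrival_iface != shortest_path_iface:
        # Packet did not arrive on the shortest path back to the source: drop it
        # (this is also where duplicates from other paths get discarded).
        return []
    # Otherwise flood on every other interface; pruning (next slide) trims this further.
    return [i for i in all_ifaces if i != arrival_iface]
```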

  10. Idea: Pruning (figure: a sender, receivers, and a prune message sent back up a branch with no receivers)

  11. Core Based Trees

  12. CBT(2) • All senders could be sending to the Core. • Single point of failure. • Core address must be known; fallbacks also. • Each router has to know only which interfaces to send packets on. Cheap. • Join/leave explicit. No need to wait. • No pruning.
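
A sketch of how a CBT router might handle the explicit join/leave mentioned above, under the slide's assumptions that the core's address is known and each router only records which interfaces lead to group members. Message names and the send helper are invented for illustration.

```python
# Illustrative CBT-style join/leave handling in one router (message names are invented).
class CbtRouter:
    def __init__(self, iface_toward_core):
        self.iface_toward_core = iface_toward_core   # next hop toward the known core
        self.member_ifaces = {}                      # group -> set of downstream interfaces

    def on_join(self, group, arrival_iface):
        already_on_tree = group in self.member_ifaces
        self.member_ifaces.setdefault(group, set()).add(arrival_iface)
        if not already_on_tree:
            # First member below us: extend the tree by forwarding the join toward the core.
            self.send(self.iface_toward_core, ("JOIN", group))

    def on_leave(self, group, arrival_iface):
        ifaces = self.member_ifaces.get(group, set())
        ifaces.discard(arrival_iface)
        if not ifaces:
            # No members left below us: tell the upstream router we are leaving the tree.
            self.member_ifaces.pop(group, None)
            self.send(self.iface_toward_core, ("LEAVE", group))

    def send(self, iface, msg):
        print(f"send {msg} on {iface}")   # stand-in for real I/O
```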

  13. Protocol Independent Mcast • Two styles: sparse and dense. • Dense: flood and prune. • Sparse: much like CBT: join a “rendez-vous” point (RP). Receivers’ routers can identify “shortcuts”. • No need for data to pass through the RP. • RP’s send “alive” packets; receivers switch to an alternative if the RP is dead.

  14. PIM (2)

  15. Reliable Mcast Transport Protocol • S, R use windows. • Designated Receivers (DR’s) eliminate ACK implosion. • ACK’s sent to DR’s. • DR’s and S cache data and retransmit it when needed. A smart “session manager” elects DR’s and sets parameters. How? Just like that...

  16. RMTP (2) • After set-up, S starts sending data. Receivers send periodic ACK’s after the first packet is received. • If no ACK’s arrive for a long time, the connection terminates. • DR’s or S retransmit data using unicast or multicast, depending on the number of errors. • Receivers that join a session in progress send an immediate TX request to their DR’s.

  17. RMTP (3) • Sender window advance is determined by the slowest receiver. • ACK’s must not be repeated too often: measure RTT to the AP. • S adjusts (decreases) its send window to 1 if there are many errors; then increases it linearly. • DR’s are fixed, but each R chooses its DR (the DR sends SND_ACK_TOME with TTL fixed to a known value).
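
A sketch of the sender-window behavior described above: the window advances only as far as the slowest receiver has ACKed, collapses to 1 when many losses are seen, and then grows linearly. The loss threshold and class layout are assumptions, not RMTP constants.

```python
# Illustrative RMTP-style sender window adjustment (constants are invented).
class RmtpSender:
    def __init__(self, max_window=32, loss_threshold=0.2):
        self.window = max_window
        self.max_window = max_window
        self.loss_threshold = loss_threshold
        self.lowest_acked = 0    # lowest sequence number ACKed by all receivers/DRs

    def on_ack_round(self, acks, losses, packets_sent):
        # The window can only advance past data the slowest receiver has ACKed.
        self.lowest_acked = min(acks)
        loss_rate = losses / max(packets_sent, 1)
        if loss_rate > self.loss_threshold:
            # Many errors: back off hard, then recover linearly on later rounds.
            self.window = 1
        else:
            self.window = min(self.window + 1, self.max_window)

    def sendable_range(self):
        # Sequence numbers the sender may transmit in the next round.
        return range(self.lowest_acked + 1, self.lowest_acked + 1 + self.window)
```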

  18. Scalable Reliable Multicast • ALF (Application Level Framing): explicitly include the application’s semantics in protocol design. No single solution will work for all. • Data identified by unique, persistent names. • Source id’s are persistent. • IP Multicast is available. • Data conventionally grouped in “pages.” • No distinction between receivers and senders. • These assumptions fit wb’s semantics.

  19. SRM (2) “Session messages” (SM) are multicast periodically. Used to: • advertise the sequence number of the active page for active sources (data is grouped in “pages” to limit history) • determine the set of participants • estimate one-way distances between nodes

  20. SRM (3) • Loss is signalled by multicast NACK’s carrying the data’s persistent, unique name. • A NACK is preceded by a randomized wait. • If waiting for data: the wait timer is reset with its interval doubled if a NACK for the same data is received or the timer expires. • If a host has the data when a NACK is received: randomized wait, then multicast the repair data, unless somebody else did so during the wait.

  21. SRM (4) • Wait periods are drawn from uniform distributions on intervals whose length depends on the distances between hosts and on (almost) arbitrary constants. • These constants depend on topology and network conditions; they should be adaptive. • Leaving the group is indistinguishable from being in an inaccessible partition. • No partial/total ordering is provided, but these could be built on top of SRM.
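
The randomized waits of the last two slides can be made concrete with a small sketch: request timers are drawn from an interval proportional to the estimated distance to the source, the interval doubles on each further loss indication, and a scheduled request or repair is cancelled when an equivalent one is heard first. The constants C1, C2, D1, D2 follow the SRM paper's notation, but the values and class layout here are placeholders.

```python
import random

# Illustrative SRM-style suppression timers (constant values are placeholders).
C1, C2 = 2.0, 2.0   # request-timer constants
D1, D2 = 1.0, 1.0   # repair-timer constants

def request_timer(dist_to_source, backoff=0):
    # Wait uniformly in [C1*d, (C1+C2)*d], doubling the interval on each backoff.
    lo, hi = C1 * dist_to_source, (C1 + C2) * dist_to_source
    return (2 ** backoff) * random.uniform(lo, hi)

def repair_timer(dist_to_requester):
    # A host holding the data waits before multicasting the repair,
    # so that a nearer host usually answers first and suppresses the rest.
    return random.uniform(D1 * dist_to_requester, (D1 + D2) * dist_to_requester)

class PendingRequest:
    def __init__(self, name, dist_to_source):
        self.name = name            # unique, persistent data name
        self.backoff = 0
        self.fire_at = request_timer(dist_to_source)

    def on_duplicate_nack(self, now, dist_to_source):
        # Someone else asked for the same data: suppress our NACK and back off.
        self.backoff += 1
        self.fire_at = now + request_timer(dist_to_source, self.backoff)
```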

  22. SRM(5) • Performance much better if local recovery is possible (no need to multicast to everybody). • Solutions: • TTL-based scoping (one step and two step); • Separate multicast group for recovery; • Administrative scoping.

  23. SRM(6): Extreme Topologies • Deterministic suppression: • exactly one NACK; • exactly one repair. • Probabilistic suppression: • at most g-1 requests, always expect more than one; • the longer the interval, the fewer the requests, but the bigger the latency.

  24. Light-Weight Multicast Service • LMS modifies standard router forwarding. It achieves minimal overhead, no pathological behavior, and minimal latency. • Three new functions are needed in the router: • Select a replier for each subtree. • Send requests to the replier corresponding to the subtree. • Multicast the replier’s repairs to the loss subtree (subcast).

  25. LMS (2) • Routers pick a “replier link” from the list of available links. • Router state added: • upstream link; • list of downstream links; • replier link. • Problem: the same replier could be picked many times.

  26. LMS (3) • All receivers detecting a loss send repair requests. • Routers forward requests to the replier. • If the replier does not have the data, it sends a request also. • The replier’s request is forwarded uplink.
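
A sketch of the per-router behavior on the last two slides: requests from non-replier links are steered to the replier link, requests coming from the replier itself travel upstream, and repairs are subcast only onto the subtree where the loss occurred. The link names and message handling are illustrative, not the actual LMS forwarding code.

```python
# Illustrative LMS router forwarding (assumes at least one downstream link).
class LmsRouter:
    def __init__(self, upstream, downstream):
        self.upstream = upstream               # link toward the source
        self.downstream = list(downstream)     # links toward receivers
        self.replier = self.downstream[0]      # one downstream link chosen as replier

    def on_repair_request(self, arrival_link):
        if arrival_link == self.replier:
            # The replier itself is missing the data: escalate the request upstream.
            return [self.upstream]
        # Steer requests from other subtrees to this node's replier.
        return [self.replier]

    def on_repair(self, loss_link):
        # Subcast: the repair is sent only down the subtree that reported the loss,
        # not to the whole group.
        return [loss_link]
```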

  27. LMS (4)

  28. LMS (5) Duplicate sends are possible: select the replier with the least amount of loss. Repliers can fail: receivers will time out waiting for the replier and ask for a new replier to be elected.

  29. LMS (6) (figure: many receivers R) Duplicate requests are possible (if everybody picks the same replier). But we can protect against this.

  30. LMS (7): Bad Replier Choice

  31. Pragmatic General Multicast • Somewhat similar to LMS. • All retransmissions originate from the source. Designated Local Retransmitters (DLR’s) might help, but they must be on the path. • Receivers send NACK’s back to the source, and repeat them until they get a confirmation (NCF). • NCF’s inhibit NACK’s from other receivers.

  32. PGM (2) • Only one NACK per (source, packet) is propagated upward. • S (or a DLR) sends RDATA when it gets a NACK. RDATA retraces the path of the NACK’s. • In routers: no NACK, no pass! • “Dangling NAK state”: RDATA is lost, the first NACK remains in the router, and subsequent NACK’s are rejected. • A retransmission only reaches those that have requested it, but not necessarily all of them.
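
A sketch of the per-router NAK state implied by “no NACK, no pass”: the first NAK for a (source, packet) pair is forwarded toward the source and remembered per interface, duplicates are absorbed, and RDATA is then forwarded only on interfaces holding that state. Field names and the class layout are assumptions, not the PGM wire formats.

```python
# Illustrative PGM-style router NAK state (not the real wire protocol).
class PgmRouter:
    def __init__(self, upstream):
        self.upstream = upstream
        self.nak_state = {}    # (source, seq) -> set of interfaces that NAKed

    def on_nak(self, source, seq, arrival_iface):
        key = (source, seq)
        first = key not in self.nak_state
        self.nak_state.setdefault(key, set()).add(arrival_iface)
        # Only the first NAK per (source, packet) is propagated toward the source;
        # later NAKs just add interface state.
        return [self.upstream] if first else []

    def on_rdata(self, source, seq):
        # "No NACK, no pass": the retransmission retraces the NAKs' path and
        # is forwarded only on interfaces that asked for it.
        ifaces = self.nak_state.pop((source, seq), set())
        return sorted(ifaces)
```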

  33. PGM (3)

  34. PGM (4) • “Repeated Retransmission”

  35. LMS Latency Loss at source, random topology.

  36. LMS Latency (2) Loss at source, random topology.

  37. PGM Latency Loss at source, random topology.

  38. PGM Latency (2) Loss at source, random topology.

  39. PGM Latency (3) Loss at source, random topology.

  40. Random Loss: LMS vs. PGM LMS picks a replier that is farther away than the source! Topology!!

  41. LMS Duplicates

  42. PGM Retransmissions

  43. Questions? Comments? Thank you!
