
Free Riding Multicast


Presentation Transcript


  1. Free Riding Multicast Berkeley SysLunch (10/10/06) Sylvia Ratnasamy (Intel Research) Andrey Ermolinskiy (U.C. Berkeley) Scott Shenker (U.C. Berkeley and ICSI) ACM SIGCOMM 2006

  2. Talk Outline • Introduction • Overview of the IP Multicast service model • Challenges of Multicast routing • Free Riding Multicast (FRM) • Approach overview • Overhead evaluation • Design tradeoffs • Implementation

  4. Internet Routing – a High-Level View • The Internet is a packet-switched network • Each routable entity is assigned an IP address: C1: Send(Packet, C2Addr); • Routers forward packets towards their recipients • Routing protocols (BGP, OSPF) establish forwarding state in routers [Diagram: hosts C1, C2, C3, C4 attached to the Internet]

  5. Internet Routing – a High-Level View • Traditionally, the Internet routing infrastructure offers a one-to-one (unicast) packet delivery service • Problem: some applications require one-to-many packet delivery • Streaming media delivery • Digital conferencing • Online multiplayer games [Diagram: source S delivering to group members G]

  6. IP Multicast Service Model • In 1990, Steve Deering proposed IP Multicast, an extension to the IP service model for efficient one-to-many packet delivery • Group-based communication: Join (IPAddr, GrpAddr); Leave (IPAddr, GrpAddr); Send (Packet, GrpAddr); • Multicast routing problem: set up a dissemination tree rooted at the source with group members as leaves [Diagram: source S delivering to group members G]
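
Deering's Join/Leave/Send primitives survive in today's standard sockets API. A minimal host-side sketch; the group address and port are made up for illustration:

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 5007   # hypothetical group address and port

# Join(IPAddr, GrpAddr): bind a UDP socket and subscribe this host to the
# group on the default interface (0.0.0.0).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Send(Packet, GrpAddr): any host may send to the group; senders need not join.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello, group", (GROUP, PORT))
print(rx.recv(1024))   # b'hello, group' (if multicast loopback is enabled)

# Leave(IPAddr, GrpAddr):
rx.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
```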

  7-12. IP Multicast Routing • New members must find the tree • The tree changes with new members and sources • The tree changes with network failures • Administrative boundaries and policies matter • Forwarding state grows with the number of groups and sources [Animation: a prospective member asks "join G?" while the tree rooted at S adapts]

  13. IP Multicast – a Brief History • Extensively researched, limited deployment • Implemented in routers, supported by OS vendors • Some intra-domain/enterprise usage • Virtually no inter-domain deployment • Why? • Too complex? PIM-SM, PIM-DM, MBGP, MSDP, BGMP, IGMP, etc. • FRM goal: make inter-domain multicast simple

  14. Talk Outline • Introduction • Overview of the IP Multicast service model • Challenges of Multicast routing • Free Riding Multicast (FRM) • Approach overview • Overhead evaluation • Design tradeoffs • Implementation

  15. FRM Overview • Free Riding Multicast: a radical restructuring of inter-domain multicast • Key design choice: decouple group membership discovery from multicast route construction • Principal trade-off: avoids distributed route computation at the expense of optimally efficient trees

  16-17. FRM Approach • Group membership discovery: extension to BGP - augment route advertisements with group membership information • Multicast route construction: centralized computation at the origin border router - exploit knowledge of unicast BGP routes - eliminate the need for a separate routing algorithm

  18-20. Group Membership Discovery • Augment BGP with per-prefix group membership information • Domain X (prefix a.b.*.*) joins G1 • The border router at X re-advertises its prefix and attaches an encoding of its active groups • BGP UPDATE: Dest = a.b.*.*, AS Path = X, FRM group membership = {G1} [Diagram: member domains AS X (a.b.*.*), AS Y (c.d.e.*), AS Z (f.g.*.*), AS T (h.i.*.*) and transit domains AS P, AS Q, AS R, AS V]
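
A rough sketch of the augmented advertisement. The field names and layout are illustrative, not the real BGP wire format, and a Python set stands in for the Bloom-filter encoding detailed on slide 44:

```python
# Illustrative only: dict fields are not the real BGP attribute encoding.
def frm_update(prefix, as_path, active_groups):
    """Build the re-advertisement X sends after a local host joins G1."""
    return {
        "dest": prefix,
        "as_path": as_path,
        "frm_membership": set(active_groups),   # the new path attribute
    }

update = frm_update("a.b.0.0/16", ["X"], {"G1"})
print(update)   # {'dest': 'a.b.0.0/16', 'as_path': ['X'], 'frm_membership': {'G1'}}
```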

  21-25. Group Membership Discovery • BGP disseminates the membership change • Border routers maintain membership information as part of per-prefix state in the BGP RIB [Animation: the a.b.*.* {G1} advertisement propagates from AS X to every border router]
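
What the resulting per-prefix state might look like at AS V's border router once the advertisement has propagated. The AS paths are invented to match the diagram, and sets again stand in for the per-prefix Bloom filters:

```python
# Hypothetical BGP RIB at AS V after X's re-advertisement arrives.
rib_at_V = {
    "a.b.0.0/16": {"as_path": ["Q", "P", "X"], "frm_membership": {"G1"}},
    "c.d.e.0/24": {"as_path": ["Q", "P", "Y"], "frm_membership": set()},
    "f.g.0.0/16": {"as_path": ["R", "Z"],      "frm_membership": set()},
    "h.i.0.0/16": {"as_path": ["R", "T"],      "frm_membership": set()},
}
```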

  26-28. Group Membership Discovery • Domains Y and Z join G1 • Their border routers re-advertise c.d.e.* and f.g.*.* with {G1} attached [Animation: the new advertisements propagate]

  29-34. Packet Forwarding • Domain V: Send(G1, Pkt) • V's border router looks up G1 against the per-prefix membership state in its RIB and computes the dissemination tree from the matching unicast BGP routes [Animation: the tree grows edge by edge to span V, Q, P, R and the member domains X, Y, Z]
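
A sketch of the source-side computation, assuming the RIB state after X, Y, and Z have all joined G1; the AS paths are invented to match the diagram, and sets stand in for the Bloom filters:

```python
# Hypothetical RIB at V after all three member domains have joined G1.
rib_at_V = {
    "a.b.0.0/16": {"as_path": ["Q", "P", "X"], "frm_membership": {"G1"}},
    "c.d.e.0/24": {"as_path": ["Q", "P", "Y"], "frm_membership": {"G1"}},
    "f.g.0.0/16": {"as_path": ["R", "Z"],      "frm_membership": {"G1"}},
    "h.i.0.0/16": {"as_path": ["R", "T"],      "frm_membership": set()},
}

def build_tree(rib, group, local_as="V"):
    """Union the unicast BGP paths to every prefix whose membership
    encoding matches `group`; the union is the dissemination tree."""
    edges = set()
    for entry in rib.values():
        if group in entry["frm_membership"]:   # Bloom-filter test in real FRM
            path = [local_as] + entry["as_path"]
            edges.update(zip(path, path[1:]))  # consecutive ASes form an edge
    return edges

print(sorted(build_tree(rib_at_V, "G1")))
# [('P', 'X'), ('P', 'Y'), ('Q', 'P'), ('R', 'Z'), ('V', 'Q'), ('V', 'R')]
```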

  35-36. Packet Forwarding • V forwards the packet to its children on the tree and attaches an encoding of each child's subtree in a "shim" header [Diagram: the copy toward Q carries SubtreeQ, the copy toward R carries SubtreeR]
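
A sketch of how V might derive the per-child shim headers from the tree computed above; a set of AS-level edges stands in for the Bloom filter carried in the header:

```python
tree = {("V", "Q"), ("Q", "P"), ("P", "X"), ("P", "Y"),
        ("V", "R"), ("R", "Z")}                 # from the previous sketch

def subtree(edges, root):
    """Collect the edges of `edges` reachable from `root` (BFS over a tree)."""
    found, frontier = set(), {root}
    while frontier:
        layer = {(a, b) for (a, b) in edges if a in frontier}
        found |= layer
        frontier = {b for (_, b) in layer}
    return found

shim_Q = subtree(tree, "Q")   # {('Q','P'), ('P','X'), ('P','Y')}
shim_R = subtree(tree, "R")   # {('R','Z')}
# V sends one copy to Q carrying shim_Q and one copy to R carrying shim_R.
```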

  37-43. Packet Forwarding • Transit routers inspect the FRM shim header (TREE_BF) and forward the packet to their children on the tree [Animation: each router tests its candidate edges against the encoding ("No" for edges off the tree, "Yes" for edges on it) as the packet travels toward X, Y, and Z]
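
The per-hop decision then reduces to membership tests on the shim header. A sketch; with the set stand-in used here the occasional spurious "Yes" of a real Bloom filter never occurs:

```python
shim_Q = {("Q", "P"), ("P", "X"), ("P", "Y")}   # header on the copy Q received

def forward(shim, my_as, neighbors, relay):
    # Test the edge to each neighbor against the shim header; relay on "Yes".
    for n in neighbors:
        if (my_as, n) in shim:   # "is the edge my_as -> n on the tree?"
            relay(n)

forward(shim_Q, "Q", ["V", "P", "R"], lambda n: print("relay to", n))
# relay to P   (the edges toward V and R test "No")
```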

  44. FRM Details • Encoding group membership • Simple enumeration does not scale • Border routers encode locally active groups using a Bloom filter • Transmit the encoding in a new path attribute of the BGP UPDATE message • Encoding the dissemination tree • Encode the tree's edges into a shim header using a Bloom filter • Tree computation is expensive, so border routers maintain a shim-header cache
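
Both encodings (the group-membership attribute and the TREE_BF shim header) can use the same structure. A generic Bloom-filter sketch with arbitrary parameters; the paper's actual sizes and hash choices are not reproduced here:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array probed by k hash functions."""

    def __init__(self, m=8192, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        # Derive k bit positions by hashing the item with k different salts.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        # May return a false positive, never a false negative.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

groups = BloomFilter()
groups.add("G1")
print("G1" in groups, "G2" in groups)   # True False (G2 could rarely test True)
```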

  45. Talk Outline • Introduction • Free Riding Multicast (FRM) • Approach overview • Overhead evaluation • Router storage requirements • Forwarding bandwidth overhead (in paper) • Design tradeoffs • Implementation

  46-47. FRM Overhead – Router Storage • Transit router: transit forwarding state (per-neighbor, line-card memory) • Origin border router: 1. source forwarding state (per-group, line-card memory) 2. group membership state (per-prefix, BGP RIB) [Diagram: the AS topology with the origin border router at V and a transit router at P]

  48. Forwarding State (Source Border Router) • A - number of groups with sources in the local domain • Zipfian group popularity with a minimum of 8 domains per group • 25 groups have members in every domain (global broadcast) • 256 MB of line-card memory enables fast-path forwarding for ~200,000 active groups
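
A back-of-the-envelope check of the per-group budget this claim implies; the slide does not state the entry size, so the division below is only ours:

```python
line_card_mem = 256 * 2**20    # 256 MB of line-card memory
active_groups = 200_000        # groups served on the fast path
print(line_card_mem // active_groups)   # ~1342 bytes of cached shim/forwarding state per group
```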

  49. FRM Overhead – Router Storage • Transit router: transit forwarding state (per-neighbor, line-card memory) • Origin border router: 1. source forwarding state (per-group, line-card memory) 2. group membership state (per-prefix, BGP RIB)

  50. Group Membership State Requirements • Total of A multicast groups • Domains of prefix length p have 2^(32-p) users • Each user chooses and joins k distinct groups from A • 10 false positives per prefix allowed • 1M simultaneously active groups and 10 groups per user require ~3 GB of route-processor memory (not on the fast path)
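
A hedged sizing sketch using the textbook Bloom-filter formula m = -n ln(p) / (ln 2)^2, applied to one example prefix; the paper's exact accounting may differ:

```python
import math

A = 1_000_000        # simultaneously active groups
fp_rate = 10 / A     # target: ~10 expected false positives per prefix

def bits_needed(n, p):
    """Optimal Bloom-filter size in bits for n elements at FP rate p."""
    return -n * math.log(p) / math.log(2) ** 2

p_len = 16                                # an example /16 prefix
n = min(2 ** (32 - p_len) * 10, A)        # users x k=10 groups, capped at A
print(bits_needed(n, fp_rate) / 8 / 2**20)   # ~1.9 MB for this one prefix's filter
```

Summed over the prefixes of a full BGP routing table, filters at this scale land in the multi-gigabyte range the slide quotes.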
