Multicast Redux: A First Look at Enterprise Multicast Traffic


Presentation Transcript


  1. Multicast Redux: A First Look at Enterprise Multicast Traffic. Elliott Karpilovsky (Princeton University), Lee Breslau, Alexandre Gerber, Subhabrata Sen (AT&T Labs - Research)

  2. A brief history of Multicast • Efficient one-to-many distribution at the network layer • Significant interest in the 1990s • Experimentally deployed on the Mbone • Extensive protocol research (DVMRP, PIM, CBT, MOSPF, etc.) • Failed to achieve significant deployment on the Internet • Problems with economic incentives, inter-provider dependencies, security and management

  3. Resurgence of Multicast • Seeing deployment in two contexts: • IPTV: video distribution in a walled-garden environment managed by Internet Service Providers • Enterprise networks: billing, dependency, security and management problems are mitigated; supports a multitude of applications (file distribution, conferencing, financial trading, etc.)

  4. Our focus: Multicast in the VPN Environment, i.e. MVPN (Multicast VPN) • Enterprises are typically supported by MPLS-based VPNs • There is clear demand for service providers to support multicast within a VPN • But little is currently known about multicast behavior and characteristics in the VPN setting: number of receivers? bitrates? durations? etc. • Let's study VPN multicast from the vantage point of a large ISP, now that it has been deployed: • Pros: scale, diversity • Cons: incomplete visibility

  5. MVPN Introduction. First challenge: "MPLS Multicast" doesn't exist
[Figure: unicast vs. multicast VPN topologies; Src and receivers Rcv1-Rcv3 sit behind customer edges CE1-CE4, which attach to provider edges PE1-PE4 across an MPLS backbone; the open question is what encapsulation to use for backbone transport]
• Unicast VPN • Customer unicast between sites • Packet delivered from one PE to another PE • MPLS used as a point-to-point encapsulation mechanism
• Multicast VPN • Customer multicast between sites (receivers at multiple sites) • Packets delivered from one PE to one or more other PEs • MPLS does not currently support one-to-many distribution • Needed: an encapsulation mechanism across the provider backbone

  6. Basic Solution: GRE encapsulation at the PE + IP Multicast in the core
[Figure: Src behind CE2/PE2 sends to customer group 224.5.6.7; the backbone carries it on provider group 239.1.1.1 to PE1, PE3 and PE4, reaching receivers Rcv1-Rcv3 behind CE1, CE3 and CE4]
• Src sends a multicast packet to customer group 224.5.6.7
• CE2 forwards the customer packet to PE2
• PE2 encapsulates the customer packet in provider group 239.1.1.1
• PE2 transmits the provider packet across the backbone multicast tree
• PE1, PE3 and PE4 decapsulate the customer packet and forward it to CE1, CE3 and CE4, which then forward it to the receivers (Rcv1, Rcv2, Rcv3)
• Packet formats: customer pkt [224.5.6.7 | payload]; after encapsulation, provider pkt [239.1.1.1 | 224.5.6.7 | payload]; after decapsulation, customer pkt [224.5.6.7 | payload]
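What this encapsulation looks like on the wire can be sketched with Scapy; the two group addresses are the ones from the slide, but the host/PE addresses, ports and payload below are made-up placeholders, and a real PE does this in the forwarding plane rather than in Python.

```python
# Sketch: GRE encapsulation of a customer multicast packet into a provider group.
# Addresses other than the two group addresses are hypothetical.
from scapy.all import GRE, IP, Raw, UDP

# Customer packet as it arrives at PE2 from CE2, addressed to customer group 224.5.6.7.
customer_pkt = IP(src="192.168.2.10", dst="224.5.6.7") / UDP(sport=4000, dport=4000) / Raw(b"app data")

# PE2 wraps it in GRE; the outer IP header is addressed to provider group 239.1.1.1.
provider_pkt = IP(src="10.0.0.2", dst="239.1.1.1") / GRE(proto=0x0800) / customer_pkt

# An egress PE (PE1, PE3 or PE4) strips the outer IP + GRE headers and forwards
# the original customer packet toward its CE.
decapsulated = provider_pkt[GRE].payload
assert decapsulated[IP].dst == "224.5.6.7"
```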

  7. But it gets more complicated: MVPN Design Issue
[Figure: Src1 and Src2 behind CE2/PE2 send to customer groups CG1 and CG2 respectively]
• Customer group CG1 has receivers behind PE1 and PE3 • CG2 has receivers behind PE3 and PE4
• How many multicast groups should the provider use to encapsulate traffic from CE2 to CG1 and CG2?
=> 1 Provider Group per Customer Group: not scalable
=> 1 Provider Group for all Customer Groups: bandwidth wasted (e.g. CG1 traffic delivered to CE4, which has no CG1 receivers)

  8. MVPN Solution: Default MDT, Data MDT • Industry practice, initially defined by Cisco • A single default Provider Group per VPN carries low-bandwidth customer traffic (and inter-PE control traffic) • Referred to as the "Default MDT" • All PEs in a VPN join this group • "Broadcast" to all of the VPN's PEs • High-bandwidth customer traffic is carried on special "data" groups in the provider backbone • Referred to as "Data MDTs" • Only the relevant PEs join these groups • A Customer Group is moved to a Data MDT if it exceeds a throughput/duration threshold • N data groups per VPN (configured)
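The switchover rule in the last bullets can be paraphrased as in the sketch below; the threshold values, names and policy details are assumptions for illustration (in practice they come from router configuration), not the operator's actual settings.

```python
# Sketch of the Default MDT vs. Data MDT decision for one customer group.
# Threshold values and the policy details are hypothetical.
RATE_THRESHOLD_BPS = 1_000_000   # hypothetical "high bandwidth" cutoff
HOLD_TIME_SEC = 60               # hypothetical time the rate must be sustained

def choose_mdt(rate_bps, secs_above_threshold, free_data_mdts, default_mdt):
    """Return the provider group that should carry this customer group."""
    if rate_bps > RATE_THRESHOLD_BPS and secs_above_threshold >= HOLD_TIME_SEC and free_data_mdts:
        # High-bandwidth group: move it to a dedicated Data MDT that only the
        # interested PEs join.
        return free_data_mdts.pop()
    # Otherwise it stays on the per-VPN Default MDT, which every PE joins.
    return default_mdt
```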

  9. MVPN Example
[Figure: Src behind CE2/PE2 sends to CG1, CG2 and CG3; CG1 and CG2 ride the Default MDT to all PEs, CG3 rides a Data MDT to PE1 and PE4]
• CG1 and CG2 • Low-rate groups • Encapsulated on the Default MDT
• CG3 • High-data-rate group • Encapsulated on a Data MDT
• All PEs join the Default MDT • They receive traffic for CG1 and CG2 • CE1 drops traffic for CG2 • CE4 drops traffic for CG1
• PE1 and PE4 join the Data MDT • CG3 traffic is delivered only to PE1 and PE4
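To make the delivery pattern above concrete, here is a tiny sketch replaying it; the interest sets follow the slides, and the variable names are only for illustration.

```python
# Sketch: which PEs receive which customer groups in the example above.
EGRESS_PES = {"PE1", "PE3", "PE4"}          # PE2 is the ingress PE in this example

interested = {                              # receiver interest per customer group (from the slides)
    "CG1": {"PE1", "PE3"},
    "CG2": {"PE3", "PE4"},
    "CG3": {"PE1", "PE4"},
}

# The Default MDT reaches every PE in the VPN; a Data MDT only the PEs that joined it.
delivered = {"CG1": EGRESS_PES, "CG2": EGRESS_PES, "CG3": interested["CG3"]}

for cg in sorted(delivered):
    dropped = delivered[cg] - interested[cg]
    print(f"{cg}: delivered to {sorted(delivered[cg])}, dropped at {sorted(dropped)}")
# CG1 is dropped on the PE4/CE4 side, CG2 on the PE1/CE1 side,
# and CG3 reaches only PE1 and PE4.
```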

  10. Data Collection • SNMP poller • Approximately 5-minute intervals • Polls each PE's Default and Data MDT byte-count MIBs • About the data: • Jan. 4, 2009 to Feb. 8, 2009 • 88M records collected • Data MDT sessions ((S,G), start time, end time, receivers): 25K • Challenges: • We only see behavior at the backbone level (Provider Groups); Customer Groups and applications are hidden from this measurement methodology
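The polls yield byte counters per MDT roughly every five minutes, and rates come from the differences between successive polls; a minimal sketch of that conversion is below, with the record layout and the 32-bit counter-wrap handling as assumptions (the specific MIB objects polled are not shown here).

```python
# Sketch: turning successive SNMP byte-counter polls into average bit rates.
# Each sample is (unix_time_sec, byte_counter) for one provider group (MDT) at one PE.
COUNTER_MAX = 2**32   # assumption: 32-bit octet counters that may wrap between polls

def rates_kbps(samples):
    """Yield (interval_end_time, kbps) for consecutive ~5-minute samples."""
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        delta = (c1 - c0) % COUNTER_MAX      # tolerates a single counter wrap
        seconds = t1 - t0
        if seconds > 0:
            yield t1, (8 * delta / seconds) / 1000

# Made-up samples roughly 300 s apart:
samples = [(0, 0), (300, 150_000), (600, 450_000)]
print(list(rates_kbps(samples)))             # [(300, 4.0), (600, 8.0)]
```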

  11. Results – Data MDT • Wide variations but some initial observations: • 70% of sessions send less than 5 kbps • 70% of sessions last less than 10 minutes • 50% of sessions have only 1 receiver (PE)
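Fractions like these are straightforward to compute from the Data MDT session records; a minimal sketch follows, with a hypothetical record layout and made-up example records rather than the real dataset.

```python
# Sketch: fraction of Data MDT sessions below a threshold, as in the bullets above.
def fraction_below(values, threshold):
    return sum(v < threshold for v in values) / len(values)

# Hypothetical session records: (avg_kbps, duration_min, num_receiver_pes)
sessions = [(1.2, 4, 1), (3.0, 8, 1), (40.0, 90, 3), (0.5, 2, 1)]

print(fraction_below([s[0] for s in sessions], 5))    # share of sessions under 5 kbps
print(fraction_below([s[1] for s in sessions], 10))   # share of sessions under 10 minutes
print(fraction_below([s[2] for s in sessions], 2))    # share with a single receiver PE
```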

  12. Let's correlate some of these variables! K-means Clustering • Cluster based on: • Duration • Throughput • Peak rate • Maximum number of receivers • Average number of receivers • Normalize to z-scores • Use k-means with a simulated annealing / local-search heuristic • Pick a small k such that significant variance is explained
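A plain version of this step is sketched below, assuming NumPy and scikit-learn; note that the study uses k-means with a simulated annealing / local-search heuristic, while the sketch just uses scikit-learn's standard k-means, and the feature matrix is random placeholder data rather than the real sessions.

```python
# Sketch: z-score normalization + k-means over the five per-session features
# (duration, throughput, peak rate, max receivers, avg receivers).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((25_000, 5))                 # placeholder for the real session features

Xz = StandardScaler().fit_transform(X)      # normalize each feature to z-scores

# Pick a small k at which the explained (between-cluster) variance levels off.
total_ss = ((Xz - Xz.mean(axis=0)) ** 2).sum()
for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xz)
    print(k, round(1 - km.inertia_ / total_ss, 3))   # fraction of variance explained

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xz)
```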

  13. Clustering • We pick k=4 [Figure: explained variance vs. number of clusters; k=4 appears to be the right number of clusters]

  14. Clustering results: the 4 categories [Table of per-cluster characteristics omitted; some variables removed]

  15. Conclusion • First study of Enterprise multicast traffic in the wild: • Variety of traffic types • Significant number of flows have very few receivers • Moreover, some of these flows have moderate throughput • This raises more questions! • Why are there such different Multicast sessions? • What are the customer applications behind these Multicast sessions? • What are the Multicast Customer Groups behind these Multicast Provider Groups? • Future work will drill down into these Multicast sessions • By actively joining groups and/or by using DPI monitoring

  16. Questions? Alexandre Gerber gerber@research.att.com

  17. Backup

  18. MVPN Strategy • Observation • Need to transport customer packets from a single ingress PE to one or more egress PEs • We already have a mechanism to do this: IP Multicast • Solution • Compute multicast distribution trees between PEs • Use GRE to encapsulate customer multicast packet in a multicast packet for transport across the provider backbone • GRE: Generic Routing Encapsulation (think IP-in-IP) • Receiving PE decapsulates packet and forwards it to CE • Basic idea is simple! • Many design details and engineering tradeoffs to actually pull this off

  19. MVPN Example
[Figure: Src behind CE2/PE2 sends to customer group 224.5.6.7; the backbone carries it on provider group 239.1.1.1 to PE1, PE3 and PE4, reaching receivers Rcv1-Rcv3 behind CE1, CE3 and CE4]
• Src sends a multicast packet to customer group 224.5.6.7
• CE2 forwards the customer packet to PE2
• PE2 encapsulates the customer packet in provider group 239.1.1.1
• PE2 transmits the provider packet across the backbone multicast tree
• PE1, PE3 and PE4 decapsulate the customer packet and forward it to CE1, CE3 and CE4, which then forward it to the receivers (Rcv1, Rcv2, Rcv3)
• Packet formats: customer pkt [224.5.6.7 | payload]; after encapsulation, provider pkt [239.1.1.1 | 224.5.6.7 | payload]; after decapsulation, customer pkt [224.5.6.7 | payload]

  20. MVPN Strawman Solution #1
[Figure: Src behind CE2/PE2; CG1 is carried in PG1 toward PE1 and PE3, CG2 is carried in PG2 toward PE3 and PE4]
• Dedicated provider group per customer group
• CG1 is encapsulated in PG1; CG2 is encapsulated in PG2
• PE1 and PE3 join PG1; PE3 and PE4 join PG2
• Multicast routing trees reach the appropriate PEs
• Advantage: customer multicast is only delivered to PEs that have interested receivers behind attached CEs
• Disadvantage: per-customer-group state in the backbone violates the scalability requirement

  21. MVPN Strawman Solution #2
[Figure: Src behind CE2/PE2 sends to CG1 and CG2, both carried in a single provider group PG1 to PE1, PE3 and PE4]
• Single provider group per VPN
• CG1 and CG2 are both encapsulated in PG1
• PE1, PE3 and PE4 all join PG1
• Both customer groups are delivered to all PEs
• PEs with no interested receivers behind attached CEs will drop the packets
• Advantage: only a single multicast routing table entry per VPN in the backbone
• Disadvantage: inefficient use of bandwidth, since some traffic is dropped at the PE • E.g., CE4 drops CG1 packets; CE1 drops CG2

  22. MVPN Scalability and Performance Issue • High bandwidth customer data groups are encapsulated in non-default provider groups (data MDTs) • N groups per VPN • What happens when there are more than N high bandwidth customer groups in a VPN? • Solution: map multiple high bandwidth groups onto a single provider data group • E.g., CGx and CGy are both encapsulated in PGa • Implication: if CGx and CGy cover a different set of PEs (which is likely), some high bandwidth traffic reaches PEs where it is unwanted (and dropped) • Wastes bandwidth • Open question: to what degree will this be a problem? • Unknown; potentially big; will depend on usage patterns; will need data
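One way to build intuition for this open question is to simulate the mapping; the sketch below assigns high-bandwidth customer groups to N data MDTs round-robin (the actual assignment policy is not specified in the slides) and counts deliveries to PEs that do not want the traffic. The group names and interest sets are made up.

```python
# Sketch: map high-bandwidth customer groups onto N provider data MDTs and
# count unwanted PE deliveries. Assignment policy and interest sets are hypothetical.
from itertools import cycle

N = 2
interested = {                       # receiver PEs per high-bandwidth customer group
    "CGx": {"PE1", "PE2"},
    "CGy": {"PE3", "PE4"},
    "CGz": {"PE1", "PE4"},
}

assignment = dict(zip(interested, cycle(range(N))))   # customer group -> data MDT index
mdt_members = {i: set() for i in range(N)}            # PEs that must join each data MDT
for cg, mdt in assignment.items():
    mdt_members[mdt] |= interested[cg]

wasted = sum(len(mdt_members[assignment[cg]] - interested[cg]) for cg in interested)
print(assignment, mdt_members, "unwanted PE deliveries:", wasted)
# With N=2, CGx and CGz share a data MDT, so PE4 receives CGx traffic it drops
# and PE2 receives CGz traffic it drops.
```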

  23. MVPN Characteristics • Isolation • Each customer VPN is assigned its own set of Provider Groups, isolating traffic from different customers • Scalability • Backbone multicast routing state is a function of the number of VPNs, not the number of customer groups • N data groups + 1 default group per VPN • Flexibility • No constraints on customer use of multicast applications or group addresses
